11 July 2012 21:54 [Source: ICIS news] HOUSTON (ICIS)--US caustic soda exports in May were 360,433 tonnes, down by 2.6% from the 370,140 tonnes shipped in the same month a year earlier, according to US International Trade Commission (ITC) data released on Wednesday. May exports were also 30.4% higher than the 276,392 tonnes shipped in April. Brazil was the top destination for US caustic soda exports in May at 170,588 tonnes, followed by Canada, Japan and Jamaica. Major producers of US caustic soda include Georgia Gulf, Formosa Plastics, OxyChem, Shintech, Dow, Olin and PPG Industries. Follow Ken Fountain (@ICIS_Ken) on Twitter
http://www.icis.com/Articles/2012/07/11/9577586/us-caustic-soda-exports-down-by-2.6-year-on-year.html
Learn to write PAM (Pluggable Authentication Modules) service modules for authentication and security services, and see an example module. In the first three articles in this series (Part 1, Part 2, and Part 3) we covered the basics of password-based user authentication, concentrating on the use of PAM (Pluggable Authentication Modules). We described the PAM API that applications (called PAM consumers) use for authentication, and showed how to write PAM conversation functions. In this fourth, and final, article we'll describe PAM service modules and show an example of how to write them. Service modules are selected via the PAM configuration file, /etc/pam.conf. We described PAM configuration files in Part 2 of this series.

We mentioned in Part 2 of this series that PAM consumers call one or more of the following functions to perform user authentication and related tasks: pam_authenticate, pam_acct_mgmt, pam_setcred, pam_open_session, pam_close_session, and pam_chauthtok. Each of these functions is implemented in service modules by a function of the same name, except that the pam_ prefix is replaced with pam_sm_. So, pam_authenticate is implemented by pam_sm_authenticate, pam_acct_mgmt by pam_sm_acct_mgmt, and so on. Service modules that we write must provide one or more of these functions.

To communicate with PAM consumer applications, service modules use the pam_get_item and pam_set_item functions, shown in the following prototypes. (We should also point out that PAM consumers can use these functions to communicate with service modules.)

#include <security/pam_appl.h>

int pam_get_item(const pam_handle_t *pamh, int item_type, void **item);
int pam_set_item(pam_handle_t *pamh, int item_type, const void *item);

The pam_set_item function enables service modules to update information for the PAM transaction specified by the handle, pamh.
The information type, specified by item_type, can be one of about a dozen listed in the pam_set_item man page; examples of item types include PAM_AUTHTOK, PAM_CONV, PAM_USER, and PAM_USER_PROMPT. The value to which we want to set the PAM information is specified by item. Similarly, the information for a PAM transaction can be accessed by calling pam_get_item; in this case, a pointer to the PAM information of the specified type is placed into item.

Service modules can access and update module-specific information using the pam_get_data and pam_set_data functions. We won't discuss these functions further because we're focusing on communications between PAM service modules and their consumers; interested readers are referred to these functions' man pages for more details.

PAM service modules must provide a PAM return code to their consumer. This return code must be one of three types: PAM_SUCCESS, PAM_IGNORE, or a PAM_<error> code such as PAM_USER_UNKNOWN or PAM_PERM_DENIED.

To prevent the display of unwanted messages, all service modules must honour the PAM_SILENT flag. We recommend supporting the debug flag to enable the logging of diagnostic debugging information via the syslog facility. Debugging messages logged using syslog should use the LOG_AUTH facility and LOG_DEBUG severity level; any other messages logged using syslog should use the LOG_AUTH facility with an appropriate priority level.

Important: The syslog-related functions openlog, closelog, and setlogmask must not be used in service modules because they interfere with the application's settings.

Now that we've described service modules and what they must do, let's have a look at one. The service module we write provides a mechanism by which named users in a certain group are denied access.
An example of where this could be useful would be a web hosting company: customers are allowed to connect via ftp and sftp, but login shells are forbidden. This access policy can be enforced by using this module and naming the customers in the forbidden group.

This type of account access policy is applied to users who have successfully authenticated, so it could be characterized as account management. PAM-aware applications call pam_acct_mgmt to perform this task, so our example module implements pam_sm_acct_mgmt, which has the following prototype:

#include <security/pam_appl.h>
#include <security/pam_modules.h>

int pam_sm_acct_mgmt (pam_handle_t *pamh, int flags, int argc, const char **argv);

The PAM handle returned by pam_start is referenced by pamh, flags contains any flags passed to the module by the application, and argc and argv contain the number of module options specified in pam.conf and the list of options, respectively.

Here's the source code to our example module.
 1  #include <stdio.h>
 2  #include <stdlib.h>
 3  #include <grp.h>
 4  #include <string.h>
 5  #include <syslog.h>
 6  #include <security/pam_appl.h>
 7  int pam_sm_acct_mgmt (pam_handle_t *ph, int flags, int argc, char **argv)
 8  {
 9      char *user = NULL;
10      char *host = NULL;
11      char *service = NULL;
12      char *denied_group = "";
13      char group_buf[8192];
14      struct group grp;
15      struct pam_conv *conversation;
16      struct pam_message msg;
17      struct pam_message *msgp = &msg;
18      struct pam_response *resp = NULL;
19      int i;
20      int err;
21      int no_warn = 0;
22      int debug = 0;
23      int ret_val;
24      for (i = 0; i < argc; i++) {
25          if (strcasecmp (argv[i], "nowarn") == 0)
26              no_warn = 1;
27          else if (strcasecmp (argv[i], "debug") == 0)
28              debug = 1;
29          else if (strncmp (argv[i], "group=", 6) == 0)
30              denied_group = &argv[i][6];
31      }
32      if (flags & PAM_SILENT)
33          no_warn = 1;
34      pam_get_user (ph, &user, NULL);
35      pam_get_item (ph, PAM_SERVICE, (void **)&service);
36      pam_get_item (ph, PAM_RHOST, (void **)&host);
37      if (user == NULL) {
38          syslog (LOG_AUTH | LOG_DEBUG, "%s: denied_group: user not set", service);
39          ret_val = PAM_USER_UNKNOWN;
40          goto out;
41      }
42      if (host == NULL)
43          host = "unknown";
44      if (getgrnam_r (denied_group, &grp, group_buf, sizeof (group_buf)) == NULL) {
45          syslog (LOG_AUTH | LOG_NOTICE, "%s: denied_group: group \"%s\" not defined",
46              service, denied_group);
47          ret_val = PAM_SYSTEM_ERR;
48          goto out;
49      }
50      if (grp.gr_mem[0] == '\0') {
51          if (debug)
52              syslog (LOG_AUTH | LOG_DEBUG, "%s: denied_group: group \"%s\" is empty: "
53                  "all users allowed", service, grp.gr_name);
54          ret_val = PAM_IGNORE;
55          goto out;
56      }
57      for (; grp.gr_mem[0]; grp.gr_mem++) {
58          if (strcmp (grp.gr_mem[0], user) == 0) {
59              msg.msg_style = PAM_ERROR_MSG;
60              msg.msg = "Access denied: you are not on the access list for this host.";
61              pam_get_item (ph, PAM_CONV, (void **)&conversation);
62              if ((no_warn == 0) && (conversation != NULL)) {
63                  err = conversation->conv (1, &msgp, &resp, conversation->appdata_ptr);
64                  if (debug && err != PAM_SUCCESS) {
65                      syslog (LOG_AUTH | LOG_DEBUG, "%s: denied_group: conversation returned %s",
66                          service, pam_strerror (ph, err));
67                  }
68                  if (resp != NULL) {
69                      if (resp->resp)
70                          free (resp->resp);
71                      free (resp);
72                  }
73              }
74              syslog (LOG_AUTH | LOG_NOTICE, "%s: denied_group: Connection for %s "
75                  "not allowed from %s", service, user, host);
76              ret_val = PAM_PERM_DENIED;
77              goto out;
78          }
79      }
80      if (debug)
81          syslog (LOG_AUTH | LOG_DEBUG, "%s: denied_group: user %s is not a member of "
82              "group %s. Access granted.", service, user, grp.gr_name);
83
84      ret_val = PAM_SUCCESS;
85  out:
86      return (ret_val);
87  }

Let's take a closer look at this 80-line function. Note that for the sake of this example, we've arbitrarily limited the buffer group_buf (defined on line 13) to 8K characters. In a real program we'd probably size this buffer dynamically, based on the maximum system value as determined by calling sysconf.

1-6: Include the required header files.
24-33: Interpret the module options, setting the debug and no-warnings flags as appropriate.
34-36: Get the user, service, and remote host names.
37-41: Deny access if the user isn't specified.
44-49: Deny access if the specified group isn't defined.
50-56: Allow access for all users if the specified group has no named members.
57-79: Check to see if the user is a member of the group. If so, deny access, and (if warnings aren't disabled) call the conversation function to pass the appropriate error message back to the user. Note that the denial is always reported to syslog. Note also that for brevity we've used strcmp to compare the user names; in a real application we'd probably use strncmp to avoid buffer overflows. Notice our use of pam_strerror on line 66: this function returns the error message associated with its second argument, in much the same manner as strerror does for regular error messages.
80-84: If we get here, the user is not a member of the specified group, so access is granted.
85-87: Return to the caller.

Service modules are shared objects, so we use the following command to build our example with Sun's Studio compiler:

rich@ultra20# cc -c -Kpic -o pam_service_module.so pam_service_module.c

(Users of gcc should replace -Kpic with -fpic.)

The PAM infrastructure performs various security checks, so our shared object must be owned by root, as shown in the following example:

rich@ultra20# su -
root@ultra20# chown root:root /home/rich/pam_service_module.so

When we've finished testing our service module and are ready to deploy it, we would normally place the shared object in /usr/lib/security/$ISA, where $ISA represents the instruction set of the target machine. Another place we might install our own modules (if we provide them in package format) is /opt/lib/security/$ISA.

Finally, we need to add an entry for our new module to pam.conf, as shown here:

other account required /home/rich/pam_service_module.so group=staff debug

All being well, named members of the group staff will be denied access. With the user rich named as a member of the group staff, trying to log in using ssh fails, as shown in the following example. (Using telnet will also fail, but security-conscious people shouldn't use telnet.)

rich@sunblade1000# ssh ultra20
Connection closed by 192.168.0.2

Changing the denied group (to root, for example) allows the user rich to log in, as shown here:

rich@sunblade1000# ssh ultra20
Last login: Sun Dec 2 16:43:05 2007 from sunblade1000
Sun Microsystems Inc.   SunOS 5.11      snv_70  October 2007
rich@ultra20#

Removing the user rich from the member list of the group named staff has the same effect.
In this article we described what PAM service modules are and what they must do (that is, what is expected of a service module). We stated that service modules must implement one or more of the following functions, depending on what the service module is to do: pam_sm_authenticate, pam_sm_acct_mgmt, pam_sm_setcred, pam_sm_open_session, pam_sm_close_session, and pam_sm_chauthtok. We also briefly described the two functions that service modules and PAM consumers use to communicate with each other.

We then described the three types of values that service modules must return to their caller, which indicate that the request was successful, ignored, or resulted in an error (including failure). After discussing the types of logging a service module is expected to do, and when not to log certain messages, we showed an example service module that implements a policy of denying access to named members of a specified group. Finally, we showed how to build and install it.
http://developers.sun.com/solaris/articles/user_auth_solaris4/index.html
Program to find all permutations of a string in Java In this Java tutorial, we will learn how to find all permutations of a string in Java. We will solve the problem using recursion. Recursion is a process where a function calls itself repeatedly; such a function is called a recursive function. Problem Statement Given a string, we have to find all the permutations of that string. A permutation is a reordered arrangement of the elements or characters of a string. For example, the string “abc” has six permutations: [“abc”, “acb”, “bac”, “bca”, “cab”, “cba”]. Input We will be given a single string as input. Output We have to print all the permutations of the given string in lexicographical order. Lexicographical order means the order in which words or strings are arranged in a dictionary. How to sort a String? - First, convert the string to a character array using the toCharArray() method. - Sort the array using the Arrays.sort() method. - Finally, obtain a string from the sorted array. String str = "adcb"; char[] arr = str.toCharArray(); Arrays.sort(arr); System.out.println(new String(arr)); Output: abcd String Functions used in the program - charAt(int index): It returns the character at the specified index. - substring(int begin, int end): It returns the part of the string from index begin to index end-1. - length(): It returns the length of a string. - Collections.sort(): It sorts the elements of the specified List. We use it to sort the ArrayList of strings.
Program to find all permutations of a string in Java

import java.util.ArrayList;
import java.util.Collections;
import java.util.Scanner;

public class permutations {
    public static void main(String[] args) {
        Scanner s = new Scanner(System.in);
        String str = s.next();
        ArrayList<String> answer = allPermutation(str);
        Collections.sort(answer);
        System.out.println(answer);
    }

    public static ArrayList<String> allPermutation(String str) {
        if (str.length() == 0) {
            ArrayList<String> baseResult = new ArrayList<>();
            baseResult.add("");
            return baseResult;
        }
        char ch = str.charAt(0);
        String rest = str.substring(1);
        ArrayList<String> recResult = allPermutation(rest);
        ArrayList<String> myResult = new ArrayList<>();
        for (int i = 0; i < recResult.size(); i++) {
            String s = recResult.get(i);
            for (int j = 0; j <= s.length(); j++) {
                String newString = s.substring(0, j) + ch + s.substring(j);
                myResult.add(newString);
            }
        }
        return myResult;
    }
}

Input: adcb

Output: [abcd, abdc, acbd, acdb, adbc, adcb, bacd, badc, bcad, bcda, bdac, bdca, cabd, cadb, cbad, cbda, cdab, cdba, dabc, dacb, dbac, dbca]

Explanation: We pass the inputted string to the recursive allPermutation() function. We store the first character of the string in the variable ch, and the string rest contains the rest of the string, which is passed to the recursive call. When the length of the string becomes 0, we create an ArrayList containing just the empty string and return it; this list is received in recResult. Then we iterate over each string in recResult and place the character ch at every position in that string, adding each resulting string to a new ArrayList, myResult, which we return. Finally, we get all the permutations of the string, sort the answer ArrayList using Collections.sort(), and print it.
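The heart of the recursion is the inner loop, which splices ch into every gap of an already-computed permutation of the rest of the string. This small sketch (the class and method names are our own, not from the tutorial) isolates that one step:

```java
import java.util.ArrayList;

public class InsertDemo {
    // Insert ch at every position of s, mirroring the inner loop of allPermutation()
    public static ArrayList<String> insertEverywhere(char ch, String s) {
        ArrayList<String> result = new ArrayList<>();
        for (int j = 0; j <= s.length(); j++) {
            result.add(s.substring(0, j) + ch + s.substring(j));
        }
        return result;
    }

    public static void main(String[] args) {
        // "bc" is one permutation of the rest of "abc"; 'a' is the first character
        System.out.println(insertEverywhere('a', "bc")); // [abc, bac, bca]
    }
}
```

Each permutation of the (n-1)-character remainder yields n strings here, which is why the recursion produces all n! permutations.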
https://www.codespeedy.com/find-all-permutations-of-a-string-in-java/
remove() prototype

int remove(const char* filename);

The remove() function takes a single argument, filename, and returns an integer value. It deletes the file pointed to by the parameter. In case the file to be deleted is open in a process, the behaviour of the remove() function is implementation-defined. In POSIX systems, if the name was the last link to a file but some processes still have the file open, the file will remain in existence until the last running process closes it. In Windows, the file cannot be deleted while it remains open in any process. It is defined in the <cstdio> header file.

remove() Parameters

filename: Pointer to the string containing the name of the file to delete, along with its path.

remove() Return value

The remove() function returns:
- Zero if the file is successfully deleted.
- Nonzero if an error occurs.

Example: How remove() function works

#include <iostream>
#include <cstdio>
using namespace std;

int main()
{
    char filename[] = "C:\\Users\\file.txt";

    /* Deletes the file if it exists */
    if (remove(filename) != 0)
        perror("File deletion failed");
    else
        cout << "File deleted successfully";

    return 0;
}

When you run the program, the output will be:

If the file is deleted successfully:
File deleted successfully

If the file is not present:
File deletion failed: No such file or directory
https://www.programiz.com/cpp-programming/library-function/cstdio/remove
std::cout

This would tell you that the output stream instance cout belongs to the namespace std.

18.09.01: Thanks to Wicker808 for the factual corrections!

In Haskell, the :: operator stands for "has type", and is used in the type signatures of functions. For example, the signature of length looks like this:

length :: [a] -> Integer

A lesser-known smiley or emoticon, which represents a person flaring his/her nostrils very wide, while keeping his/her mouth very tightly closed. Rarely used, as nobody is sure what emotion it is supposed to signify.

:: also refers to "a whole heap" of zeros in an IPv6 address. Due to the massive address space available with the IPv6 addressing scheme, it's inevitable that many addresses will contain long sequences of zeros. In order to make writing these addresses easier, a special syntax is available to compress the zeros: "::" stands for one or more consecutive groups of zeros, and may appear only once in an address. For example:

1080::8:800:200C:417A (a unicast address)
FF01::101 (a multicast address)
::1 (the loopback address)
:: (the unspecified address)

And remember kids, if you don't have access to it already, hassle your ISP, as the end of IPv4 address space is nigh! Rules and examples shamelessly copied from RFC 2373, "IPv6 Addressing Architecture".
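Python's standard ipaddress module applies exactly this compression rule, which makes a handy way to check a hand-written address (a small sketch; any IPv6 literal will do):

```python
import ipaddress

# The longest run of zero groups is collapsed to "::", once per address
full = ipaddress.ip_address("1080:0:0:0:8:800:200C:417A")
print(full.compressed)  # 1080::8:800:200c:417a

loopback = ipaddress.ip_address("0:0:0:0:0:0:0:1")
print(loopback.compressed)  # ::1
```

Note that the module also canonicalizes hex digits to lowercase, per RFC 5952's recommended text representation.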
http://everything2.com/title/%253A%253A
What about the Java virtual machine? I have the JRE already. I can't find the JVM on the Oracle website. How would I go about installing it onto my computer?

I want to be able to add a new feature that lets me control the paddle to move left and right in a straight line. I already included the event library for mouse listeners but I don't know what to...

I have added the code. The sprite is basically an object of a GImage class. How do I make it so the character sprite blends with the background? Very much appreciate any help given. :) Here's the code import...

Thanks Chris. Apparently the book did not even tell me about this. Makes perfect sense nevertheless: set before you draw. I will remember this! Thanks a bunch. Now I can move on to chapter 3 \:D/

All I see is a white oval and a black oval. Why is my "set color red" code not in effect? I want a red oval on screen :)

public class Guy extends java.applet.Applet {
    /** initialization method that will be called after
     * applet is loaded into the browser */
    public void init() {
        //
    }
}

Hey JPF members, I'm a beginner in Java trying to get a red oval to appear on screen. I'm also using a Java book but I'm stuck with this problem. Any solution to this problem? :) As you may know...
http://www.javaprogrammingforums.com/search.php?s=11b59083dd8eeed8501495b3fb845d62&searchid=1272817
Issue

I am using pygsheets to make a budget. I want to be able to store all the negative cells in some sort of dictionary (I'm not great with Python yet). I've been able to select a DataRange of cells, but how do I add a filter to that? For example,

drange = pygsheets.DataRange(start='A1', worksheet=wks)

this is one of my ranges. How would I add a filter to this to only select negative numbers?

Solution

This is a simple solution.

import pygsheets

client = pygsheets.authorize(service_file="cred.json", local=True)
sh = client.open('Testing Excel')
wks = sh.sheet1

# This will pull the cell data from the range A1:A10 and convert all the strings to floats
data_unfiltered = [float(*i) for i in wks.get_values(start="A1", end="A10")]

# This will filter out all the negative values and return them as a list
data_filtered = list(filter(lambda money: money < 0, data_unfiltered))
print(data_filtered)

This answer, collected from Stack Overflow, is licensed under CC BY-SA 2.5, CC BY-SA 3.0 and CC BY-SA 4.0.
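If you want to try the filtering step without credentials or a live spreadsheet, the same comprehension and filter() can be run against a stand-in for get_values() (the sample rows below are made up; pygsheets returns each row as a list of cell strings):

```python
# Hypothetical stand-in for wks.get_values("A1", "A10"): one cell per row
rows = [["12.50"], ["-3.20"], ["7.00"], ["-15.75"]]

# float(*row) unpacks the single cell of each row, as in the answer above
data_unfiltered = [float(*row) for row in rows]

# Keep only the negative values
data_filtered = list(filter(lambda money: money < 0, data_unfiltered))
print(data_filtered)  # [-3.2, -15.75]
```

Note that float(*row) assumes exactly one cell per row; a multi-column range would need explicit indexing instead.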
https://errorsfixing.com/in-pygsheets-what-is-the-proper-way-of-selecting-negative-numbers/
sphinx-apidoc discovers packages and modules via os.walk(). By default, os.walk() skips symbolic links, and so does sphinx-apidoc.

This pull request introduces a --followlinks (alias -l) option to make sphinx-apidoc follow symbolic links. By default, the former behaviour of sphinx-apidoc is preserved: symbolic links are skipped, so that the command remains backward-compatible.

Here is a use case... I am using an "omelette" to ease navigation in source code. In this omelette, I install some projects using namespace packages. Let's call them "foo.bar" and "foo.baz", so I get "omelette/foo/bar/" and "omelette/foo/baz/" folders. I'd like to run sphinx-apidoc on the foo/ folder to generate API documentation for both foo.bar and foo.baz with one command. Since the omelette uses symbolic links, I need sphinx-apidoc to care about symbolic links too. Otherwise I only get documentation for the (empty) "foo" package.
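The underlying os.walk() behaviour is easy to verify with a throwaway directory tree (the names below are arbitrary); this is exactly the difference the --followlinks option exposes:

```python
import os
import tempfile

# os.walk() lists a symlinked directory in dirnames but does not descend
# into it unless followlinks=True is passed.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "real_pkg")
os.mkdir(pkg)
open(os.path.join(pkg, "mod.py"), "w").close()
os.symlink(pkg, os.path.join(root, "linked_pkg"))

def count_files(follow):
    return sum(len(files) for _, _, files in os.walk(root, followlinks=follow))

print(count_files(False), count_files(True))  # 1 2
```

With followlinks=True the module file is reached twice, once under each path, which is why sphinx-apidoc can then document packages installed behind symlinks.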
https://bitbucket.org/birkenfeld/sphinx/pull-requests/75/add-followlinks-or-l-option-to-sphinx/diff
Up to Design Issues

The management of disk space for long-term high-availability items

This article discusses the merging of the concepts of a decent installation manager and a persistent cache. It is a plea for web caching to be advanced to the power of current installation systems, and a plea for XML catalogs to be transparent and integrated with the web.

In the history of computing, various forms of remote storage have been used, with different characteristics. Local disks have (on a good day) reliable access and high speed; applications are written to assume their availability. Running on a local area network, remote-mounted disks using RPC-based access protocols (NFS, etc.) provide quite high speed and quite high functionality, and as they are mounted as virtual disks, the application has to treat them as though they are reliable: there is no application access to network outages, for example. Web pages accessed using HTTP over a wide area network are still less reliable. In this case, applications and users are both aware of the many forms of errors which can occur, some, such as packet loss, arising from the distances involved, and some, such as authentication errors, deriving from the fact that the system crosses organizational boundaries.

Yet another form of remote storage, though it might not often be considered as such, is installed software. This information is accessed remotely, but is copied completely locally, as the reliability is extremely important. Typically the installation process is managed explicitly by the user, who is aware of the reason for wanting it, security issues (DNS address and signatures), and resource commitment (disk space). Unlike disk, NFS, and HTTP access, there is no URI for most installed things. You can find them on the web, typically, and the Debian packaging system has a global (non-URI) package name space, but the integration with the web is weak. This article proposes to strengthen that connection.
It is driven by the need not only for conventional software installation to be handled more powerfully, but also for documents, and in particular namespace documents, to be handled much more like installations.

The web architecture is that things should be identified by URIs. The functionality is a sort of amalgam of Debian's packaging system, Microsoft's "Add/Remove Programs", Apple's Software Update system, persistent web caches, and various SGML DTD catalog systems being proposed.

I like hierarchical ways of organizing things like local resources, so I would like the ability, as a user, to make a dummy module in order to build a dependency. Let's call it a project. If I have asked the system over time for many things, I need to be able to group them.

In some kind of ideal world, it would be nice to be unaware of limitations of disk space, to be unhindered by the time to access any data, and for all information to be available at all times. In reality, of course, compromises are made, and an individual's priorities have to be matched against real costs. Here we imagine that this process is centralized in one place and made as painless as possible. We need to match sets of data (say, projects) against management policies. Examples of projects may be: everything I have; everything I need for work; all the recorded music I own; all the family photos. Dimensions of management policies may include:

Traditionally, the directory system has been used to categorize information for the purposes above. However, the conflicting needs of different dimensions

Currently many applications allow users to categorize data such as photos and music. To summarize: track expiry of modules, so as to know when to check for new versions; subscription to update notification streams by email, web service, polling (RSS), etc. The task of organizing data is shared between the user and the system.

User: "I would like to install Amaya."
System: "This will cost you 15MB and require you to trust yyy and zzz. OK?"
User: "Yes."
System: "Why?"
User: "For a new project, 'Interactive editing tools'."

User: "Why do I have zzz installed?"
System: "Because it is used [by foo which is used by bar which is used] by Amaya, which you asked for on 18th June, for your Interactive Editing Tools project."

User: "Please show me the still images I have installed as a function of project."
System: "Ok, here is the list, showing disk space used by project."
User: "How much do I gain if I delete Interactive Editing Tools?"
System: "Well, they take 2.3GB, but you would gain only 1.4GB due to common modules shared with other things."

User: "Please install my usual working environment on this machine."
System: "Ok."

User: "Please make everything under project Family Personal Media mirrored up on machines alpha, beta and gamma, and part of the weekly incremental DVD burn backup."
System: "No, sorry, you wouldn't have enough disk space on gamma."
User: "Please save space by deleting my Classical Music, but make a DVD for me to restore it later."
System: "Ok."

User: "Always keep any ontologies of RDF documents you process as 'local, forever'."
User: "Anything remote that I access from now until I say 'stop' is to be deemed part of My Presentation project."

User: "I would like to install SuperDVDPlayer3."
System: "Sorry, that needs Super3dShound, which needs access to your sound card, which is not sharable and is currently used by AudioDriver84. Here is a list of the projects you would get with one module but not the other. [...] Please make a choice. You can reverse it at any future time."
User: "Thanks, I'll stick with what I have."
System: "Ok. I'll leave the switch in the set of options in your control panel."

Conflicts occur when different applications, or projects here, need different versions of the same module. These don't have to be conflicts when a smart system can load both versions of the module in question. They do turn out to be a problem when that isn't possible. There is a class of conflict which happens when something needs a unique or limited resource. A hardware driver needs a device, for example. That is when your system has to make some choice between them.
Then it is useful to have a very good GUI to give you the options in terms of the things you asked for.

make and CVS

These Unix tools have been the mainstay of huge amounts of software development over the years, and have many clones and spinoffs with similar functionality, that is, dependency management and source code control. They meet a surprising number of the requirements above, in that CVS tracks where each file came from (effectively a URI, from which it can be verified, updated, etc.), and make keeps track of the dependencies between files. Mainly, though, make is limited to working within a directory. To work in a large space managed using make and CVS typically requires a recursive invocation of make to find dependencies on external directories, CVS to get an updated version, and make to build new versions of files which may have changed.

Catalog systems are designed to deal with the problem that SGML used FPIs, and some XML systems use non-dereferenceable identifiers, for things like DTDs and other external entities. If one then acquires copies of the resources in question, a catalog format allows one to tell the system where they are and what they correspond to.

Persistent web caches come in two forms: typically integrated with a browser, or independent, operating typically as a proxy. In the first class, Internet Explorer has had in various versions the ability to ask for a bookmark to be "available offline", with the option of keeping resources linked to a given nesting depth. These resources would be available offline, could be automatically resynchronized at requested intervals (not driven by the HTTP expiry date), and would be deleted after a while, on an algorithm which was not clear to me at the time. As an example of the second class, wwwoffle allows the user to cache files permanently and, by running locally as a proxy, allows any browser to be used offline as though it were online.
A danger in software design is to make modules which require to be "king of the castle", which must own all of a certain space, which must be the only one of their kind. (See Test of Independent Invention.) This is clearly a trap here. A key requirement is that the user should be aware of the storage space being used, and balance that, as a user value judgement, against the quality of availability for different resources. One way to design this would, then, be to make the installation function for all resources of all kinds calls to the operating system. This would make it difficult to make applications portable between operating systems, unless a POSIX-like standard API were deployed evenly. At best, the abuse of such an API would be a constant temptation on the part of the designers of monopoly operating systems.

On the other hand, it is clear that each application needs to be involved with the installation or deinstallation of its own resources. The operating system needs to be involved with shared program modules. However, when application code itself is deinstalled, the application itself often has to be called to cleanly remove itself. A photo application needs to be aware of the arrival of new photos, the music application of new music, and so on. Sometimes such applications in fact require the media to be stored within their own directory hierarchy, though sometimes not, and sometimes they are flexible about it. Here are some possible architectures:

It also has a snag in that the application becomes bound to a particular way of getting resources. For now, one might easily get pictures from a connected camera and music from a CD, but in the future one may want to do the reverse, or get Python modules via peer-to-peer file sharing, and so on. A fundamental flexibility point of the web is to separate the ways of getting stuff (URI schemes, basically) from the sorts of stuff you can get (Internet content types, basically).
Possibly a final architecture is likely to be a bit of a mixture. The safest way to build any software will be to make few assumptions about how resources arrived or where they are stored on the local disk. If you need to get something from the web, then put it somewhere obvious and allow it to be deleted, rather than hiding it. (Anecdote: a user lost all her music because her music application hid it where it didn't get backed up.)

This means that any system which needs to know about a resource is going to have to be notified that it exists, or must be able to search for it. It must then be able to pick up what information it needs about it. This probably includes where it came from, and for anything executable a security trail of why it should be trusted, if at all. For media or software, it may include licensing information. It may include expiry dates. A solution is for data to come with this metadata expressed in a form which anyone can read, a common metadata format such as RDF.

Moving data on disk is bad. A bit like changing URIs but on the local scale, it is bad for similar reasons. Why does one move data on disk (actually, rename it)? For different conflicting reasons. The conflict is the problem. Because it belongs to someone else, because it needs to be kept, because it needs to be backed up, because it needs to be managed by XXX application. Let's look at architectures in which files are rarely moved, if ever. What should determine where something is stored?

The current (2004 -- I hope this will date the article!) rash of viruses may be largely based on some mail and web applications' inability to distinguish between safe and unsafe files, between passive media and executable code. This binary distinction should be core to the design of a system. Most users can do all their normal business without ever having to execute something sent by email or picked up by following a link, except under very controlled software installation conditions.
Where does such responsibility fall in the resource management scenario? Primarily on any application which passes control to, executes or interprets code. It is unreasonable to make applications which deal with passive media jump through security hoops.

For security purposes, I am very much in favor of keeping files from a given source (e.g. signed by a given manufacturer) within a given subtree in the file system. The copying of files from application space into shared areas which often happens alas leaves no trace of where they came from, and can lead to undiagnosable configuration errors. There is a certain safety, then, in keeping files in place under the source. This is also consistent with installing them from a compressed zip or tar file which typically unzips into a single directory tree. When that has been done, connections have to be made. Both the disk space management and the application itself typically need to index what is there. The interpreter needs to build a list of available modules. The loader needs to pre-link runtime images. The photo application indexes photos and builds thumbnails. The music application indexes music. So, the rule we end up with is: it is the provenance of data which determines where it is stored in the local file system.

On the web, metadata may be published by anyone about anything. It is then up to diverse systems to find it and use it or not use it as they choose. Global indexes like Google make finding metadata (such as, in a formal way, movie reviews) easy. Some metadata, of course, comes with the resource itself. This can clearly be stored with that resource. It is worth converting it from an application-specific standard such as EXIF for digital photos to RDF. Other metadata, such as the HTTP response message headers when something is fetched from the web, is generated when the resource is acquired and can also be stored with that resource. Meanwhile, applications produce their own metadata.
For example, if I rate my pictures and music, that is very important information from the resource management point of view. I run scripts which combine GPS data and photo data to produce metadata about where photos were taken. The source of this metadata is the application, or the user with the application as his agent. Under the policy above, it would be inappropriate for this metadata to be stored with the original data, as it has different provenance. Therefore the rule for applications is to save application-generated data in an obvious place and in an accessible file format, such as RDF. The rule we end up with for metadata is just the same as for data, which is not surprising as metadata is data: it is the provenance of metadata which determines where it is stored in the local file system.

This leaves us with the question of how to merge the data from the different sources. This is the inter-application communication problem. Many operating systems, and many RDF-based systems, use central repositories, such as the NeXTStep defaults database, or the now infamous Windows registry. Why does the Windows registry figure in so many problems? Because it is a common repository shared by applications, and it gets out of step with the file system? Because it is hidden from the user? Because it is used as a way of communicating between different applications, but it is not clear who wrote what when? The latter makes it a security hole, a hook-point for viruses.

A sounder scheme, it seems, is to leave the definitive data where it is, or always be aware of where it came from. RDF systems are now being developed which are more aware of the provenance of data. But if we stick with the model that files are unpacked into a directory tree under the source from which they came, and an index is built over them using trust rules, then at least the index can be kept in sync with reality, by resyncing at any time.
Remembering the csh "rehash" command, we know it is better to have a notification-based system for such regeneration than a polling-based system, if the infrastructure supports it. File systems which allow notification to be delivered on changes will allow indexes to be updated in real time. Systems which use make (or equivalent) will also be able to propagate changes, but in pull mode rather than push mode.

The rules above can guide application and operating system designers. In the meantime, can we usefully build better resource management systems on top of existing operating systems and applications? One way is to build consistent databases of the data which is available, and to make rules which will check or determine policy as regards backups, availability, etc. The author played with the extraction of metadata from make and from fink (essentially Debian) dependency data. Another way is to infiltrate existing installation systems with hooks to allow the extraction of metadata but also the installation and deinstallation of modules.

The system described is a metadata-driven system to allow the owner of a computer system to manage the availability and reliability of information resources as a function of the reason for which they are needed, and the amount of storage space needed. In practice projects need information which is installed in many different ways.

Up to Design Issues

Tim BL
http://www.w3.org/DesignIssues/Installation.html
I was trying to figure out how to check if input given by the user is an integer. I tried for a while and failed, until today my teacher showed me a cool trick. In C, you can get a return value from the scanf function like this (it returns 0 if the conversion failed, and 1 if it succeeded). Hope this helps someone!!:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int inputStatus;
    int input;   /* must be int, not char, to match the %d specifier */

    printf("Please enter an integer\n");
    inputStatus = scanf("%d", &input);
    printf("%d\n", inputStatus);

    system("pause");
    return 0;
}

Edited by ApprentiC, 10 February 2011 - 06:59 PM.
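An editorial addition, not part of the original post: the same return-value trick extends to classifying a whole token as an int or a float, which is what the topic title asks about. The helper names below are mine; the only extra ingredient is the %n specifier, which records how many characters sscanf consumed, so we can reject tokens with trailing junk.

```c
#include <stdio.h>
#include <string.h>

/* Returns 1 if the whole string is a valid integer, 0 otherwise. */
int is_int(const char *s)
{
    int value, consumed = 0;
    /* %n stores the number of characters consumed so far; a full
       match means the token is an integer with nothing left over. */
    return sscanf(s, "%d%n", &value, &consumed) == 1
        && consumed == (int)strlen(s);
}

/* Returns 1 if the whole string is a valid floating-point number. */
int is_float(const char *s)
{
    double value;
    int consumed = 0;
    return sscanf(s, "%lf%n", &value, &consumed) == 1
        && consumed == (int)strlen(s);
}
```

With this, is_int("42") is true, while is_int("4.2") and is_int("42x") are not, because the %n count stops short of the end of the string.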
http://forum.codecall.net/topic/61962-re-check-if-input-is-int-float-or-char/
I am building a BSP tree using a recursive function, however, I seem to be stuck in getting out of the function. I'll paste my code and explain along the way:

module tree
  type treenode
    type (treenode), pointer :: left, right
    integer :: n
    real :: data
  endtype

  type (treenode), pointer :: root
  real :: tree_xmin, tree_xmax

  ! Counters
  integer :: nodes, work

contains

  function create_node( ) result (node)
    type (treenode), pointer :: node
    integer :: ierr
    allocate( node, stat=ierr )
    if (ierr .ne. 0) then
      print *, "Allocate Failed"
      stop
    endif
    nullify( node%right )
    nullify( node%left )
    node%n = 0
  end function create_node

This code was given to us. Just the definition for the treenode and a create_node function.

  integer function partition_data( xdata, xmiddle )
    implicit none
    real, INTENT(INOUT), dimension(:) :: xdata   ! Data Array
    real, INTENT(IN) :: xmiddle
    integer :: l = 1, i
    integer :: r = 16
    real :: temp
    do
      if (xdata(l) < xmiddle) then   ! Data value less than middle
        if (l == r) then
          partition_data = l
          return
        end if
        l = l + 1   ! Increment counter
      else
        do
          if (xdata(r) >= xmiddle) then
            if (l == r) then
              partition_data = (l - 1)
              return
            end if
            r = r - 1
          else
            temp = xdata(l)
            xdata(l) = xdata(r)
            xdata(r) = temp
            exit
          end if
        end do
      end if
    end do
  end function partition_data

This code I wrote. Essentially, it takes in a set array (given) and partitions the array so that any values lower than the x middle value are put to the left and values greater are put to the right. Then the function returns the array index of the element to the left of the xmiddle value. I have tested this function by itself, and it works fine and gives the expected results.
  recursive function make_node( xdata, xmin, xmax ) result (node)
    implicit none
    type (treenode), pointer :: node
    real, intent(inout) :: xdata(:)
    real, intent(inout) :: xmin, xmax
    real :: xmiddle
    integer :: p, i, arraySize
    REAL, ALLOCATABLE, DIMENSION(:) :: newArray1, newArray2

    node => create_node()
    xmiddle = ((xmin + xmax) / 2)
    arraySize = size(xdata)
    print *, "The arraysize is ", arraysize
    if (arraySize == 1 ) then
      node%data = xdata(1)
      node%n = 1
    else
      p = partition_data(xdata, xmiddle)
      print *, "The partition value is", p
      print *, "The xmiddle value is ", xmiddle
      ALLOCATE(newArray1(p))              ! Allocate memory for the LEFT array
      do i = 1,p                          ! Copy data from xdata to newArray1
        newArray1(i) = xdata(i)
      enddo
      print *, newArray1(1:p)
      node%left => make_node(newArray1, xmin, xmiddle)
!      ALLOCATE(newArray2((p+1):arraySize))  ! Allocate memory for the RIGHT array
!      do i = (p+1),arraySize                ! Copy data from xdata to newArray2
!        newArray2(i) = xdata(i)
!      enddo
!      node%right => make_node(newArray2, xmiddle, xmax)
    end if
!
  end function make_node

This part I also wrote myself. (There are some print statements and commented-out code for debugging purposes.) This recursive function is used to make the nodes. The problem I am having is that this function never seems to hit the case where arraySize is equal to one. The partition value always stays constant and the new array that is supposed to be sent in does not get changed. Tracing it out on paper, the function seems like it should work.

  subroutine build_tree( xdata, xmin, xmax )
    implicit none
    real, intent(inout) :: xdata(:)
    real, intent(in) :: xmin, xmax
    if (size(xdata) == 0) then
      print *, "No data provided"
      return
    endif
    tree_xmin = xmin
    tree_xmax = xmax
    root => make_node( xdata, tree_xmin, tree_xmax )
  end subroutine build_tree

end module tree

This was the build_tree subroutine given to us.

program treetest
  use tree
  implicit none
  integer, parameter :: ndata = 16
  real, target :: xdata(ndata)
  integer :: partition_value, i
  real :: xpartition = 0.7
!
  call random_number( xdata )
  xdata = (/ 0.00392, 0.0251, 0.453, 0.667, 0.963, 0.838, 0.335, 0.975, &
             0.796, 0.833, 0.345, 0.871, 0.0899, 0.888, 0.701, 0.735 /)

  nodes = 0
!
  call build_tree( xdata, 0.0, 1.0 )
  print *, "Tree Built: nodes ", nodes
  print *, xdata(1:16)

  partition_value = partition_data(xdata, xpartition)
  print *, "The partition value is ", partition_value
  print *, "The partitioned array is "
  do i = 1,16
    print *, xdata(i)
  end do

end program treetest

This is the main driver program. I apologize for the length of this post, however, any help at all would be appreciated. This is the only forum I can find offering Fortran help. Thanks!

This post has been edited by bobby19: 25 November 2007 - 02:27 PM
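An editorial aside, not part of the original post: one thing worth knowing when reading partition_data is that a Fortran local initialized in its declaration (integer :: l = 1) implicitly gets the SAVE attribute, so it is initialized only once and keeps its value across calls, including recursive ones. For comparison, here is the same partition step sketched in C, where the locals l and r really are reset on every call; the function name and the choice to return the count of elements below the pivot are mine.

```c
#include <stddef.h>

/* Partition x[0..n-1] so that values < mid come first; returns the count
   of elements below mid. Unlike Fortran locals initialized in their
   declarations (which behave like C statics), l and r here are
   re-initialized on every call, so recursion works as expected. */
size_t partition_below(double *x, size_t n, double mid)
{
    size_t l = 0, r = n;
    while (l < r) {
        if (x[l] < mid) {
            l++;                 /* already on the correct side */
        } else {
            r--;                 /* swap it to the tail and shrink */
            double tmp = x[l];
            x[l] = x[r];
            x[r] = tmp;
        }
    }
    return l;
}
```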
http://www.dreamincode.net/forums/topic/38138-building-a-tree-with-a-recursive-function/
Gecko 1.8 products, Thunderbird 2 and SeaMonkey 1.1 in particular, need patches contained in NSS 3.12.3 to fix several SSL bugs that will be revealed at BlackHat at the end of July. We could backport individual fixes perhaps, but it would be safer to take the entire, tested NSS 3.12.3 release. Either way the fixes will be changing the FIPS-certified parts of NSS, but I'm not sure we really care if Thunderbird or SeaMonkey is FIPS-certified. If we do, maybe cherry-picking the patches would be more acceptable since it's a smaller change. This may also affect Linux distros still shipping Firefox 2, but I believe they already ship using "system NSS" and they'll be able to upgrade NSS when these issues come out. Bugs this will fix include bug 159483, bug 480509, and bug 504456, also bug 471539 and bug 484111.

I'm actually building and running TB2 against 3.12.3 since it's available and haven't noticed any regression. Are there known compat issues with PSM on 1.8.1 and NSS 3.12.3? Regarding FIPS-140, my main interest is seeing TB3+ use FIPS-140 validated versions of NSS. TB2 would be nice, but I don't think I can argue that it's critical.

We need the not-yet-created 3.12.3.1 in order to pick up some fixes that went into Firefox 3.5.1.

Created attachment 391227 [details] [diff] [review]
Upgrade NSS to the FF3.5.1 version

This upgrades the 1.8 branch to use NSS 3.12.3.1 as used by FF3.5.1, which required undoing the franken-build cruft required to preserve the fips-certified bits. Also required upgrading to NSPR 4.7.4 or higher, so I went with the 4.7.5 as used and tested in Firefox 3.0.x.

Does Thunderbird 2 need the fix for bug 506407? (I think it does) The fix to that is NOT in 3.12.3.1.

Comment on attachment 391227 [details] [diff] [review]
Upgrade NSS to the FF3.5.1 version

This has build issues, working on it.

It would be very nice to pick up the fix for bug 506407, but it seemed pretty clear that the NSS team doesn't have the resources to produce another 3.12.3-based tag.
We'll pick up 3.12.4 when the active Firefox branches do, that'll have to be soon enough.

Created attachment 392619 [details] [diff] [review]
need to link the new nssutil3, too

Checked in and everything went up in flames.

Mac:

/builds/tinderbox/Tb-Mozilla1.8-Nightly/Darwin_8.7.0_Depend/mozilla/nsprpub/pr/src/md/unix/unix.c: In function `_MD_unix_get_nonblocking_connect_error':
/builds/tinderbox/Tb-Mozilla1.8-Nightly/Darwin_8.7.0_Depend/mozilla/nsprpub/pr/src/md/unix/unix.c:3295: error: `socklen_t' undeclared (first use in this function)
/builds/tinderbox/Tb-Mozilla1.8-Nightly/Darwin_8.7.0_Depend/mozilla/nsprpub/pr/src/md/unix/unix.c:3295: error: (Each undeclared identifier is reported only once
/builds/tinderbox/Tb-Mozilla1.8-Nightly/Darwin_8.7.0_Depend/mozilla/nsprpub/pr/src/md/unix/unix.c:3295: error: for each function it appears in.)
/builds/tinderbox/Tb-Mozilla1.8-Nightly/Darwin_8.7.0_Depend/mozilla/nsprpub/pr/src/md/unix/unix.c:3295: error: parse error before "optlen"
/builds/tinderbox/Tb-Mozilla1.8-Nightly/Darwin_8.7.0_Depend/mozilla/nsprpub/pr/src/md/unix/unix.c:3296: error: `optlen' undeclared (first use in this function)

Linux:

/usr/bin/gmake -j1: *** No rule to make target /builds/tinderbox/Tb-Mozilla1.8-Nightly/Linux_2.4.18-14_Depend/mozilla/dist/lib/libsoftokn3.chk. Stop.

Windows:

nsNSSIOLayer.cpp
e:/builds/tinderbox/Tb-Mozilla1.8-Nightly/WINNT_5.0_Depend/mozilla/security/manager/ssl/src/nsNSSIOLayer.cpp(1817) : error C2664: 'DER_Lengths' : cannot convert parameter 3 from 'unsigned long *' to 'unsigned int *' Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast

Maybe wtc has some insight into the build bustage?

What version of NSPR is in the TB2 tree? Maybe it's not new enough.

Dan bumped it ... from 4.6.8 to 4.7.5 already.

On Mac, what version of XCode are you using?

On Linux: need to see a little more context for that error. In what directory was make running when it output that error?
This looks like a dependency problem. I'm guessing some parts of NSS didn't get rebuilt. Need to do FULL rebuild of NSS.

On Windows, compare that source file with Please make that change to the 1.8 tree, too.

XCode 2.2.1 building on PPC XServe 10.4

Linux clobber build:
Windows clobber build:
Mac clobber build:

OK, made the nsNSSIOLayer.cpp change.

The linux build error is here Apparently I grabbed the wrong (second) error, the first problem is in nss's copy of sqlite

gmake[5]: Entering directory `/builds/tinderbox/Tb-Mozilla1.8-Nightly/Linux_2.4.18-14_Depend/mozilla/security/nss/lib/softoken'
gcc -o Linux2.4_x86_glibc_PTH_OPT.OBJ/sdb.o -c -O2 -fPIC -DLINUX1_2 -Di386 -D_XOPEN_SOURCE -DLINUX2_1 -ansi -Wall -Werror-implicit-function-declaration -Wno-switch -pipe -DLINUX -Dlinux -D_POSIX_SOURCE -D_BSD_SOURCE -DHAVE_STRERROR -DXP_UNIX -DSHLIB_SUFFIX=\"so\" -DSHLIB_PREFIX=\"lib\" -DSOFTOKEN_LIB_NAME=\"libsoftokn3.so\" -DSHLIB_VERSION=\"3\" -UDEBUG -DNDEBUG -D_REENTRANT -DNSS_ENABLE_ECC -DUSE_UTIL_DIRECTLY -I/builds/tinderbox/Tb-Mozilla1.8-Nightly/Linux_2.4.18-14_Depend/mozilla/dist/include/sqlite3 -I/builds/tinderbox/Tb-Mozilla1.8-Nightly/Linux_2.4.18-14_Depend/mozilla/dist/include/nspr -I/builds/tinderbox/Tb-Mozilla1.8-Nightly/Linux_2.4.18-14_Depend/mozilla/dist/include -I../../../../dist/public/nss -I../../../../dist/private/nss -I../../../../dist/include -I/builds/tinderbox/Tb-Mozilla1.8-Nightly/Linux_2.4.18-14_Depend/mozilla/dist/include/dbm -I../../../../dist/public/dbm sdb.c
sdb.c: In function `sdb_FindObjectsInit':
sdb.c:707: error: implicit declaration of function `sqlite3_prepare_v2'
NEXT ERROR
gmake[5]: *** [Linux2.4_x86_glibc_PTH_OPT.OBJ/sdb.o] Error 1
gmake[5]: Leaving directory `/builds/tinderbox/Tb-Mozilla1.8-Nightly/Linux_2.4.18-14_Depend/mozilla/security/nss/lib/softoken'

I don't know why that's "implicit", sqlite3_prepare_v2 is in sqlite3.h which is included by that file.
I don't want to muddy the water too much, given that there are three different errors already, but Camino's generating a different error than Mac Tb (and Sm). We're using Xcode 2.5 on 10.4.10 Intel (also note we have higher system requirements than the stock 1_8 branch, namely 10.3.9 SDK and gcc4.0 for PPC); this is a clobber build:

Note that we get past where the Tb and Sm Mac builds are failing (unix.o) and fail when compiling prsystem.o. When did NSPR and NSS drop support for 10.2/10.3 and those SDKs? I have a vague, maybe incorrect, memory of seeing comments to that effect in a couple of bugs over the past year or so.

When MOZILLA_CLIENT is defined (it is), NSS uses the sqlite from the mozilla client rather than its own copy. The NSS copy is version 3.3.17 and has sqlite3_prepare_v2(); the 1.8-branch client copy is version 3.3.5 and does not.

The nsNSSIOLayer.cpp change was from bug 397296 -- making this bug depend on that one.

The windows failure is because it's trying to use sqlite3 as a shared lib, but on 1.8 it was built as a static lib. That was changed in bug 306907.

I'm getting a build failure for Firefox 2 on Mac at least:

/usr/bin/make -j1: *** No rule to make target /work/mozilla/builds/1.8.1/mozilla/firefox-opt/dist/lib/libsoftokn3.chk.

I know Firefox 2 isn't supported, but I use it to test the JS engine on 1.8.

Re sqlite3_prepare_v2(). Executive summary: it should be OK to map sqlite3_prepare_v2 to the old sqlite3_prepare. The older interface will not return as specific error codes on failure, and will not automatically recompile the sqlite request on database change. The signatures are exactly the same.

NOTE: the 1.8 branch will not use the sqlite database by default, so only early adopters are likely to run into problems with any issues with the sqlite NSS databases. Those people are likely already on some form of Thunderbird 3.

Here's the documentation from the sqlite website:

1..
2..

(In reply to comment #22) >.
The prsystem.c failure on the Camino tinderbox, PPC building with 10.3.9 SDK and gcc4.0 ( in NSPR_4_7_5_RTM, or 304 in cvs HEAD) points back to bug 454878; see bug 454878 comment 12 and bug 454878 comment 13. That 'max_mem' item is clearly not in the 10.3.9 SDK version of the host_basic_info structure. I don't understand the error in unix.c on the Tb/Sm boxen (PPC builds with gcc3.3 and 10.2.8 SDK), though; the function where the error is occurring is unchanged since 1999: The actual source of the problem must be "upstream" from that yet, but I don't know where to go. (In reply to comment . That implies that we can't upgrade NSPR on the 1.8 branch, which likely means we can't upgrade NSS on the 1.8 branch. Dare I ask how much work it'd be to backport the necessary fixes from one NSS branch to another? ... You can't upgrade the PPC SDKs? I declare failure. NSS code says to use client's sqlite, but requires a function not in the version of sqlite on the 1.8 branch. We could fix nss to use its own version, but that's not using the tagged version anymore (though it's a build change rather than a code change) and doesn't solve the NSPR/Mac issues. The other tack is to upgrade sqlite on the 1.8 branch. First, gotta switch it from a static lib to a shared lib. Then have to replace it with the version used by NSS (or later, but hopefully the NSS version is tested to work with NSS). That's introducing some change there. Then we have to reapply the patches we've made to sqlite because mozStorage calls functions that aren't in the NSS version of sqlite. And it's still not building -- what's the next roadblock? And when we're done will we trust that Thunderbird or SeaMonkey or Camino is going to work? How hard would it be to backport the NSS fixes to NSS 3.11.x ? at least changes in those areas are fresh in nelson's mind compared to trying to get info on 3 year old mozStorage issues. (In reply to comment #26) > You can't upgrade the PPC SDKs? 
That will make Thunderbird no longer run on 10.2 -- not many people, to be sure, but security fixes shouldn't require folks to upgrade their OS. backed out, someone else's turn to untoast our Tbird2 users Checking in Makefile.in; /cvsroot/mozilla/Makefile.in,v <-- Makefile.in new revision: 1.299.2.23; previous revision: 1.299.2.22 done Checking in client.mk; /cvsroot/mozilla/client.mk,v <-- client.mk new revision: 1.245.2.44; previous revision: 1.245.2.43 done Checking in configure; /cvsroot/mozilla/configure,v <-- configure new revision: 1.1492.2.133; previous revision: 1.1492.2.132 done Checking in configure.in; /cvsroot/mozilla/configure.in,v <-- configure.in new revision: 1.1503.2.114; previous revision: 1.1503.2.113 done Checking in config/Makefile.in; /cvsroot/mozilla/config/Makefile.in,v <-- Makefile.in new revision: 3.113.2.5; previous revision: 3.113.2.4 done Checking in security/manager/Makefile.in; /cvsroot/mozilla/security/manager/Makefile.in,v <-- Makefile.in new revision: 1.57.4.8; previous revision: 1.57.4.7 done Checking in security/manager/ssl/src/nsNSSIOLayer.cpp; /cvsroot/mozilla/security/manager/ssl/src/nsNSSIOLayer.cpp,v <-- nsNSSIOLayer.cpp new revision: 1.97.2.23; previous revision: 1.97.2.22 done The Mac SDKs aren't backward compatible? That's surprising. NSS claims to be drop-in backward binary compatible. That means you can take NSS 3.12.3 shared libs and drop them in as replacements for the older 3.11.x libs without recompiling, and they work (provided their dependencies are new enough, such as sqlite and NSPR).. If it works, it's all software that you (Mozilla) built and for which you have sources. (In reply to comment #24) Smokey, the unix.c socklen_t compilation error is most likely caused by this change to mozilla/nsprpub/configure.in in rev. 1.240: (In reply to comment #30) >. We used to change SQLite, but we don't anymore, and haven't since before Firefox 3.0 shipped (1.9.0). 
> We used to change SQLite, but we don't anymore OK, then I don't know why FF won't run with NSS's copy. But it won't. does TB 2 use mozStorage? Iirc, we ship with it but that may have been for extensions to use. Though we don't want to break Lightning users, for example... How complicated are the NSS fixes? Nelson, is it realistic to backport them? > How complicated are the NSS fixes? Nelson, is it realistic to backport them? certainly more complicated than including a #define for sdb.c (or even better, in your sqlite3.h) to remap back to using the old prepare interface.:) > We used to change SQLite, but we don't anymore, and haven't since before > Firefox 3.0 shipped (1.9.0). 1.9 is on a new enough version. I'm not sure, however, that not updating is a good idea. sqlite continues to develop, and it's been my experience that they are *very* careful with the api (creating a new _v2 api which has improved semantics that could break old apps is an example, the old prepare still exists so old apps continue to work). That being said, for this release I would suggest simply updating your sqlite3 header to map the two functions to the same value. bob (In reply to comment #33) > > We used to change SQLite, but we don't anymore > OK, then I don't know why FF won't run with NSS's copy. But it won't. Because we used to change SQLite (in Firefox 2). (In reply to comment #34) > does TB 2 use mozStorage? Iirc, we ship with it but that may have been for > extensions to use. Though we don't want to break Lightning users, for > example... I'm pretty sure satchel uses it. (In reply to comment #36) > I'm pretty sure satchel uses it. I think TB 2 was still using wallet internally. But we probably shipped with satchel as well... 
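(An editorial illustration, not the actual patch: the remapping Bob suggests above amounts to a one-line shim, since, per the earlier executive summary, the two entry points share a signature and the _v2 variant differs only in error-code detail and automatic re-preparation.)

```c
/* Hypothetical shim for sqlite3.h (or the top of sdb.c) when building
   against the 1.8-branch sqlite 3.3.5, which predates the _v2 interface:
   redirect calls back to the original prepare entry point. */
#define sqlite3_prepare_v2 sqlite3_prepare
```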
Created attachment 392939 [details] [diff] [review] shared sqlite3 builds on windows (PSM/NSS doesn't load) I tried making the 1.8-branch sqlite a shared library and faking the missing sqlite3_prepare_v2 entry point (hoping NSS didn't actually depend on the one-bit difference). It builds on windows, but the resulting Thunderbird can't load NSS. Tried with a fresh profile, so it's not some migration issue -- I'm guessing NSS just can't use the older sqlite. I should also note that I built using VC7.1 (msvs 2003) and results might be different using the official VC6 (Learned my lesson after relying on my personal Mac build results). This patch also adds a new file to the build--sqlite3.dll/libsqlite3.so--that would have to be added to the packaging/installer files. But I'm back to thinking we need to backport the NSS patches to 3.11, even if just on the MOZILLA_1_8_BRANCH > It builds on windows, but the resulting Thunderbird can't load NSS. Tried with > a fresh profile, so it's not some migration issue -- I'm guessing NSS just > can't use the older sqlite. More likely some sort of dll load issue on windows. Unless you explicitly ask for it, NSS will not even try to use the sqlite database (and 1.8 doesn't ask for it). So if you didn't set the NSS_DEFAULT_DB_TYPE environment variable, there must be some windows DLL/OS issue that's preventing NSS for loading softoken. Is there an equivalent of ldd for windows which you can run on softoken? bob You can try dumpbin /dependents softokn3.dll Dan, NSS 3.12.x has three new DLLs. You discovered nssutil3.dll, whose import library nssutil3.lib is needed at build time. You already know about sqlite3.dll. The third one is nssdbm3.dll, which is only used at run time. Make sure you update mozilla/security/manager/Makefile.in (and perhaps some other makefiles) to add nssdbm3.dll. Please diff mozilla/security/manager/Makefile.in against the revision on the CVS trunk (for Mozilla 1.9.0) to see the required changes. 
nssdbm3.dll is loaded using LoadLibrary() at run time, so dumpbin /dependents won't report it. Created attachment 393010 [details] [diff] [review] apply on top of attachment 392939 [details] [diff] [review] to get nssdbm3 adding nssdbm3 to the build got TB2 on windows running. Extremely minimal testing shows it can talk SSL to mail.mozilla.com To maintain FIPS compatibility you'll also need a nssdbm3.chk file (from a comment in bug 489961 and email with Nelson). On mozilla-central (with NSS 3.2.4.4) it doesn't seem to be copied to dist/bin like the other check files. Still trying to figure that out. Has anyone else tried this? Would love a sanity check. I'm also supposed to have left on vacation so someone else is probably going to have to land this. We met this morning and discussed what we're going to do here. * Dan's going to checkin what he has here. ** That doesn't solve Mac (and maybe Linux), which will remain red while we work on the next item ** Need *extra* verification that nothing SSL-related is broken on any platform. * We'll need to modify NSPR (probably on a new branch) to remove the changes that cause Mac to not build / run on 10.2/10.3. * Mozilla Messaging is going to put some effort into getting everything working and ultimately tested since it's unclear what these changes will break. * QA (between MoCo and MeMo) will test this release, but more focused testing should happen by developers/QA which know Thunderbird best I think that about sums it up, but let me know if I missed anything. Wan-Teh: Do you have any idea of what NSPR changes we took in the latest NSPR that cause it to not be 10.2/10.3 compatible? Created attachment 393293 [details] [diff] [review] package list changes for the new NSS libraries I've checked in again, and included these packaging list changes. 
Note that the checkins didn't address comment 42, the reasoning being that since Firefox has shipped a couple of major versions without this and noone's screamed, it's not worth slowing down the effort to defuse this ticking bomb for. Oh, right. FIPS. Somehow I missed comment 42. If we want Thunderbird users in the military (and likely other places) to update to this version of Thunderbird, we'd have to preserve FIPS. Otherwise they can't and are potentially vulnerable. Not sure how many users we're talking about though... An examination of mozilla-central shows that there are MANY files outside of NSS that have lists of file names including .chk files. I'll bet that one of them needs to be changed in order for the new .chk file to get propagated properly. Samuel, the NSPR changes in question can be found in comment 24 and comment 31. I will be happy to produce an official NSPR_4_7_6_RTM release for you if you come up with the fixes. If you have a Mac with the right build environment in your Mountain View office, I can come over to fix these two bugs for you. Sam, if that were true, wouldn't we have heard about them refusing to upgrade to Firefox 3 and Firefox 3.5, since it would appear to have the same problem? Dan: We've already heard of that problem. It's been reported quite a few times to us and it's why the NSS team is working on FIPS validation for a newer NSS (the branch in 3.0.x and 3.5.x, aiui). Wan-Teh: If we don't figure it out over the weekend, I might take you up on that. I'm already home for the weekend though. Thanks for the offer. :) Sam: I was under the impression that while they weren't excited about the lack of FIPS certification, they had not refused to upgrade, despite the new .chk files not being packaged. One thing that's not clear to me is what the exact implications of the lack of the .chk file are. Do they just mean we can't claim that it's "certified"? Or is there specific functionality that will break? If so, what functionality is it? 
If the .chk file is missing, you can't put NSS in FIPS mode. Whether the specific version of NSS is FIPS validated is another question. Yeah, if the .chk files are missing, NSS won't start in FIPS mode. It will fail until you take it out of FIPS mode. As far as I can tell, Firefox 3.0 contains nssdbm3 and nssutil and has never contained .chk files for those. Has anyone filed a bug that Firefox 3 doesn't work in FIPS mode, or have people simply not tried since it wasn't certified anyway? Do we need a .chk file for sqlite3 as well? Created attachment 393378 [details] [diff] [review] Try to make it build with static rather than shared libs One cause of build bustage on Firefox Windows is that default dev builds use shared libs, so I got it working for that (which make seamonkey go green) but static builds were still busted. I think this fixes it. Could the Mac problems be connected with prebinding? I just by chance stumbled over the patches that removed prebinding support from mozilla-central, but on 1.8 I think we routinely enabled this for the PPC platform at least. Dan, please try NSPR_4_7_1_RTM. It doesn't have the changes that break Mac OS X 10.2 and 10.3. We should add all the necessary .chk files. If nobody reports missing .chk files, it means nobody is using the product in FIPS mode. We don't need a .chk file for sqlite3. (In reply to comment #58) > Dan, please try NSPR_4_7_1_RTM. It doesn't have the changes > that break Mac OS X 10.2 and 10.3. Will that "work"? Dan said in comment 5 > Also required upgrading to NSPR 4.7.4 or higher, so I went > with the 4.7.5 as used and tested in Firefox 3.0.x I didn't see any obvious dependencies between NSPR upgrades and NSS upgrades in the bugs for each on the 1.9.0 branch (but I also don't pretend to understand how NSS build-config tracks its dependencies). (In reply to comment #58) > Dan, please try NSPR_4_7_1_RTM. It doesn't have the changes > that break Mac OS X 10.2 and 10.3. 
I tried that on the SeaMonkey tinderbox, and it compiles and the Ts/Txul/Tdhtml tests run fine, but none of them uses NSS, AFAIK. Could someone with a Mac try and tell if it works with https and whatever NSS checks can be done on a casual test run? Smokey: Your question in comment 59 prompted me to look into the NSPR upgrade more closely. Here are my findings. 1. The upgrade from NSPR_4_6_8_RTM to NSPR_4_7_RTM is a strict improvement -- you won't lose any bug fixes. Note: I was worried that we continued to patch NSPR 4.6.x after NSPR 4.7 was released, but that wasn't the case. 2. NSS 3.12.x requires a new function added NSPR_4_7_RTM. So NSPR_4_7_RTM is the absolute minimum requirement. NSS release notes are more conservative and document the version of NSPR that was used during QA certification. 3. NSPR_4_7_1_RTM doesn't have the _darwin.cfg change that broke JavaScript's (incorrect) x86<->ppc cross-compilation. See bug 466531. So NSPR_4_7_1_RTM doesn't require the JS patch in bug 466531. If we upgrade to NSPR_4_7_2_RTM or later, we need to apply the JS patch in bug 466531. 4. I still think it's better to fix the Mac OS X 10.2/10.3 breakage that was inadvertently introduced. But let's use NSPR_4_7_1_RTM as a backup plan. I'll run the NSS test suite on the NSS 3.12.3.1/NSPR 4.7.1 combination on Linux, Mac, and Windows. I will also review the diffs between NSPR 4.7.1 and NSPR 4.7.4 to see if there are any bug fixes that NSS 3.12.3.1 requires. Wan-Teh: So you confirm that using NSPR_4_7_1_RTM for the moment should be OK and my build from comment #60 should be good? In that case, I think we should check in the switch to that version at least for now so that we can get testing of this whole NSS upgrade on Thunderbird builds as well, which should go green with that. We still can investigate if we should or need move up to a fixed variant of NSPR 4.7.4, but I think having nightlies for people to test is better in the mean time than not having those. 
Checked in a new client.mk that uses NSPR_4_7_1_RTM instead of 4.7.5 If Mac builds we need to test the heck out of the PPC version, and make sure the various NSS functionality works on all versions. IMAP over SSL (e.g. mail.mozilla.com) and S/MIME ought to be enough of a sanity check, although I'm sure this is already all covered in the QA testplan. (In reply to comment #63) > Checked in a new client.mk that uses NSPR_4_7_1_RTM instead of 4.7.5 How does that mesh with this? (In reply to comment #61) > 4. I still think it's better to fix the Mac OS X 10.2/10.3 > breakage that was inadvertently introduced. But let's use > NSPR_4_7_1_RTM as a backup plan. At this point, I expected all I'd need to do for Camino was fix packaging (and maybe linking the app); unfortunately, instead builds are (still) failing with missing sqlite3 symbols: In static builds on the official release tinderbox (10.4, Xcode 2.5), linking libsoftokn3.dylib fails: (note: this is fresh build; both the srcdir and objdir were deleted before this build began, after the previous build failed the same way) In static builds on my own Mac (10.5, Xcode 3), the error is different: (same error on the i386 half as well) In (Intel-only) dev builds on my own Mac, we fail much earlier, building libstoragecomps: (after a full distclean) Do all of these symbols have different (wrong) visibilities in a shared lib as opposed to the former static lib, or are they getting stripped now, or? Ah, yes, it looks like there were a host of visibility-related changes in bug 306907 that didn't get ported here. 
I'm starting a build with the "VISIBILITY_FLAGS= " change in the sqlite Makefile, and the 2 headers in system-headers, but since sqlite on 1.9 is "just" one file, I suspect more changes will be required :-( Created attachment 393471 [details] [diff] [review] Backport visibility fix to unbreak Camino Turns out that the "VISIBILITY_FLAGS= " change in the sqlite Makefile was the only change required to unbreak libstoragecomps and NSS in the Camino build (sqlite{3|3file}.h aren't system headers on the Mac; I misread that part of bug 306907). Since only Camino is building with hidden-visibility on 1_8, I believe this change is a no-op for everyone else (and we won't need to port any of the rest of the visibility fixes), but I don't have any way of building other platforms for any sort of sanity-check. This patch, plus my project changes in bug 509342, should make Camino go green. Mac now builds and is green, but it's not clear if it's okay to take NSPR 4.7.1 or if we need to upgrade to make everything work. Wan-Teh: Thoughts on the above? Regardless of what we do, we need to test this "setup" really well. Smokey, if that has no affect on anything but Camino, feel free to check it in at any time. kaie walked me through the steps of figuring out whether FIPS is actually working (thanks Kai!), and I tried it with Firefox 3.5.2 on OS X. It only ships with libfreebl3.chk and libsoftokn3.chk. Nonetheless, it allowed itself to be configured into FIPS mode and make SSL connections after that. This would seem to suggest that there's an NSS bug around the .chk mechanism, and that Thunderbird 2.0.0.23 should benefit from the existence of that bug in the sense that we won't have to get the packaging mechanism working before we ship. We only need .chk files for freebl3, softoken3, and in certain cases nssdbm3 (we never need the .chk file for nssutil). The checking of nssdbm3 was a late addition to NSS 3.12.4 and may not be in the particular release candidate you have. 
It is also only checked if you are opening an old NSS database (though that is usually the case in FF). bob

Comment on attachment 393471 [details] [diff] [review] Backport visibility fix to unbreak Camino

I've checked this in and will keep an eye on the trees to make sure Camino goes green and no-one else notices a thing ;)

Samuel: it is fine to use NSPR 4.7.1. Yesterday I inspected the diffs between NSPR 4.7.1 and NSPR 4.7.4 and went through all the bug reports. I didn't find any bug fix that NSS 3.12.x requires. Since bugzilla.mozilla.org was down last night, I couldn't post a status update until today.

I still want to run the NSS test suite on the NSS 3.12.3.1/NSPR 4.7.1 combination. If you have the Mac OS X 10.2/10.3 build environment in your Mountain View office, I can come over this afternoon around 4 pm to fix the Mac problems with NSPR 4.7.5. But it is perfectly fine for you to use NSPR 4.7.1, unless I discover some problem when running the NSS test suite.

(In reply to comment #70)
> (From update of attachment 393471 [details] [diff] [review])
> I've checked this in and will keep an eye on the trees to make sure Camino goes green and no-one else notices a thing ;)

Other than the unhappy coincidence of KaiRo's Windows box losing the ability to talk to cvs-mirror, all of the boxen are green.

Wan-Teh: I'll get a 10.3 build environment set up. If you have preferences on static/shared debug/opt, let me know so I can have it ready for you. My extension is 284 when you get here.

Question: Is there still concern at this point that customers who run TB2 in FIPS-mode will have trouble with this upgrade? We're not likely to hear from people who run into trouble directly since their normal internal escalation paths probably don't include contacting us. Also, it's not practical to find out how many people are using TB2 (let alone in FIPS-mode) because we're talking about very large departments. It's a common problem with large, decentralized organizations.
[Off topic: My main ask is that TB3 ship with a FIPS-validated crypto module that works in FIPS-mode.]

Samuel: I will build NSPR and NSS standalone, so I just need the 10.3 SDK installed. Thanks.

(In reply to comment #69)
>.
Filed bug 509558 for packaging nssdbm3.chk for 3.12.4.

I now have patches to restore Mac OS X 10.2 and 10.3 support in NSPR 4.7.x. Thanks to Samuel for his help. But the patches require code review and more testing, so I suggest that we use NSPR_4_7_1_RTM for now.

I'd also like to correct myself -- I found comments in two bug reports (bug 454878 comment 13 and bug 469744 comment 0) that showed I assumed 10.4 was the minimum Mac OS X version. So it was wrong for me to say I broke 10.2 and 10.3 inadvertently.

I have certified NSPR_4_7_1_RTM for use with NSS_3_12_3_1_RTM on Linux, Mac, and Windows.

This bug is now fixed for 1.8.1.23.

Will this bug stay closed or will it be opened now that TB 2.0.0.23 is released?
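The .chk mechanism discussed throughout this thread is an integrity check: the softoken refuses to enter FIPS mode unless a detached checksum file next to the shared library exists and verifies. As a rough illustration of that pattern only (not NSS's actual file format, key handling, or algorithm — the key and file names below are made up), here is a sketch of a detached-checksum check that behaves the way the comments describe: missing .chk means FIPS mode is refused.

```python
import hashlib
import hmac
import os
import tempfile

# Hypothetical key; real NSS derives and embeds its verification data differently.
KEY = b"integrity-check-key"

def write_chk(lib_path):
    """Write a detached <lib>.chk file holding an HMAC of the library bytes."""
    with open(lib_path, "rb") as f:
        digest = hmac.new(KEY, f.read(), hashlib.sha256).hexdigest()
    with open(lib_path + ".chk", "w") as f:
        f.write(digest)

def can_enter_fips_mode(lib_path):
    """FIPS mode is allowed only when the .chk file exists and matches."""
    chk_path = lib_path + ".chk"
    if not os.path.exists(chk_path):
        return False  # missing .chk: refuse FIPS mode, as described above
    with open(lib_path, "rb") as f:
        digest = hmac.new(KEY, f.read(), hashlib.sha256).hexdigest()
    with open(chk_path) as f:
        return hmac.compare_digest(digest, f.read().strip())

tmp = tempfile.mkdtemp()
lib = os.path.join(tmp, "libsoftokn3.so")
with open(lib, "wb") as f:
    f.write(b"pretend shared library contents")

assert not can_enter_fips_mode(lib)  # no .chk shipped: FIPS mode refused
write_chk(lib)
assert can_enter_fips_mode(lib)      # .chk present and valid
```

The same check also explains why a tampered or stale library fails even when the .chk file is packaged.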
https://bugzilla.mozilla.org/show_bug.cgi?id=504523
NAME
bdflush - start, flush, or tune buffer-dirty-flush daemon

SYNOPSIS
#include <sys/kdaemon.h>

int bdflush(int func, long *address);
int bdflush(int func, long data);

DESCRIPTION

CONFORMING TO
bdflush() is Linux-specific and should not be used in programs intended to be portable.

SEE ALSO
fsync(2), sync(2), sync(8), update(8)

COLOPHON
This page is part of release 3.24 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.

This document was created by man2html, using the manual pages. Time: 15:26:26 GMT, June 11, 2010
http://linux.co.uk/documentation/man-pages/system-calls-2/man-page/?section=2&page=bdflush
flasker 0.1.45

Flask, SQLAlchemy, and Celery integration

Flasker is now deprecated. Consider using Kit instead, which allows YAML configuration files, running multiple projects side by side and more.

A configurable, lightweight framework that integrates Flask, SQLAlchemy and Celery.

What Flasker is!

- A one stop .cfg configuration file for Flask, Celery and SQLAlchemy.
- A simple pattern to organize your project via the flasker.current_project proxy (cf. Quickstart).
- A command line tool from where you can is under development. You can find the latest version on GitHub and read the documentation on GitHub pages.

Installation

Using pip:

    $ pip install flasker

Using easy_install:

    $ easy_install flasker

Quickstart

This short guide will show you how to get an application combining Flask, Celery and SQLAlchemy running in moments (the code is available on GitHub in examples/basic/).

The basic folder hierarchy for a Flasker project looks something like this:

    project/
      conf.cfg   # configuration
      app.py     # code

Where conf.cfg is:

    [PROJECT]
    MODULES = app

The MODULES option contains the list of python modules which belong to the project. Inside each of these modules we can use the flasker.current_project proxy to get access to the current project instance (which gives access to the configured Flask application, the Celery application and the SQLAlchemy database session registry). This is the only option required in a Flasker project configuration file.

Here is a sample app.py:

    from flasker import current_project

    flask_app = current_project.flask    # Flask app
    celery_app = current_project.celery  # Celery app
    session = current_project.session    # SQLAlchemy scoped session maker

    # for this simple example we will only use flask_app

    @flask_app.route('/')
    def index():
        return 'Hello World!'

Once these two files are in place, we can already start the server!
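The conf.cfg file shown in this guide is a standard INI file, so outside of flasker it can be inspected with Python's own configparser. A small sketch (section and option names are the ones from this guide; the file is read from a string here rather than from disk):

```python
import configparser

conf_text = """
[PROJECT]
MODULES = app

[FLASK]
DEBUG = true
"""

parser = configparser.ConfigParser()
parser.read_string(conf_text)

# MODULES holds the python modules belonging to the project;
# splitting on commas covers the multi-module case.
modules = [m.strip() for m in parser["PROJECT"]["MODULES"].split(",")]

# INI values arrive as strings; booleans need explicit coercion.
debug = parser.getboolean("FLASK", "DEBUG")
```

This only demonstrates the file format; flasker itself additionally imports each listed module on project startup.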
We simply run (from the command line in the project/ directory):

    $ flasker server
    * Running on

We can check that our server is running for example using Requests (if we navigate to the same URL in the browser, we would get similarly exciting results):

    In [1]: import requests
    In [2]: print requests.get('').text
    Hello World!

Configuring your project

In the previous example, the project was using the default configuration, this can easily be changed by adding options to the conf.cfg file. Here is an example of a customized configuration file:

    [PROJECT]
    MODULES = app

    [ENGINE]
    URL = sqlite:///db.sqlite   # the engine to bind the session on

    [FLASK]
    DEBUG = true                # generic Flask options
    TESTING = true

For an exhaustive list of all the options available, please refer to the documentation on GitHub Pages.

Finally, of course, all your code doesn't have to be in a single file. You can specify a list of modules to import in the MODULES option, which will all be imported on project startup. For an example of a more complex application, you can check out the code in examples/flisker.

Next steps

Under the hood, on project startup, Flasker configures Flask, Celery and the database engine and imports all the modules declared in MODULES (the configuration file's directory is appended to the python path, so any module in our project/ directory will be accessible).

There are two ways to start the project. The simplest is to use the flasker console tool:

    $ flasker -h

This will list all commands now available for that project:

- server to run the Werkzeug app server
- worker to start a worker for the Celery backend
- flower to run the Flower worker management app
- shell to start a shell in the current project context (using IPython if it is available)

Extra help is available for each command by typing:

    $ flasker <command> -h

Or you can load the project manually; this is useful for example if you are using a separate WSGI server or working from an IPython Notebook:
    from flasker import Project

    project = Project('path/to/default.cfg')

To read more on how to use Flasker and configure your Flasker project, refer to the documentation on GitHub pages.

Extensions

Flasker also comes with extensions for commonly needed functionalities:

- Expanded SQLAlchemy base and queries
- ReSTful API
- Authentication via OpenID (still alpha)

Author: Matthieu Monsch
License: MIT
Categories - Package Index Owner: mtth
DOAP record: flasker-0.1.45.xml
https://pypi.python.org/pypi/flasker
statprof 0.1.2

Statistical profiling for Python

This package provides a simple statistical profiler for Python.

Python's default profiler has been lsprof for several years. This is an instrumenting profiler, which means that it saves data on every action of interest. In the case of lsprof, it runs at function entry and exit. This has problems: it can be expensive due to frequent sampling, and it is blind to hot spots within a function.

In contrast, statprof samples the call stack periodically (by default, 1000 times per second), and it correctly tracks line numbers inside a function. This means that if you have a 50-line function that contains two hot loops, statprof is likely to report them both accurately.

Note: This package does not yet work on Windows! See the implementation and portability notes below for details.

Basic usage

It's easy to get started with statprof:

    import statprof

    statprof.start()
    try:
        my_questionable_function()
    finally:
        statprof.stop()
        statprof.display()

For more comprehensive help, run pydoc statprof.

Portability

Because statprof uses the Unix itimer signal facility, it does not currently work on Windows. (Patches to improve portability would be most welcome.)

Implementation notes

The statprof profiler works by setting the Unix profiling signal ITIMER_PROF to go off after the interval you define in the call to reset(). When the signal fires, a sampling routine is run which looks at the current procedure that's executing, and then crawls up the stack, and for each frame encountered, increments that frame's code object's sample count. Note that if a procedure is encountered multiple times on a given stack, it is only counted once. After the sampling is complete, the profiler resets the profiling timer to fire again after the appropriate interval.
Meanwhile, the profiler keeps track, via os.times(), of how much CPU time (system and user – which is also what ITIMER_PROF tracks) has elapsed while code has been executing within a start()/stop() block. The profiler also tries (as much as possible) to avoid counting or timing its own code.

History

This package was originally [written and released by Andy Wingo](). It was ported to modern Python by Alex Frazer, and posted to github by Jeff Muizelaar. The current maintainer is Bryan O'Sullivan <bos@serpentine.com>.

Reporting bugs, contributing patches

The current maintainer of this package is Bryan O'Sullivan <bos@serpentine.com>. Please report bugs using the [github issue tracker](). If you'd like to contribute patches, please do - the source is on github, so please just issue a pull request.

    $ git clone git://github.com/bos/statprof.py

Author: Bryan O'Sullivan
Keywords: profiling
License: LGPL
Categories - Package Index Owner: bos
DOAP record: statprof-0.1.2.xml
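The implementation notes above translate almost directly into standard-library Python. The following is a bare-bones sketch of the same ITIMER_PROF sampling loop (Unix only, function names made up, and nothing like as careful as statprof itself): a signal handler crawls up the stack and counts each code object at most once per sample.

```python
import collections
import signal
import time

samples = collections.Counter()

def _sample(signum, frame):
    # Crawl up the stack; count each code object at most once per sample.
    seen = set()
    while frame is not None:
        key = (frame.f_code.co_filename, frame.f_code.co_name)
        if key not in seen:
            seen.add(key)
            samples[key] += 1
        frame = frame.f_back

def start(interval=0.005):
    signal.signal(signal.SIGPROF, _sample)
    # ITIMER_PROF counts CPU time (user + system), like statprof.
    signal.setitimer(signal.ITIMER_PROF, interval, interval)

def stop():
    signal.setitimer(signal.ITIMER_PROF, 0.0, 0.0)

def busy():
    t0 = time.process_time()
    total = 0
    while time.process_time() - t0 < 0.3:  # burn ~0.3s of CPU time
        total += 1
    return total

start()
busy()
stop()
```

With a 5 ms interval and 0.3 s of CPU work, the counter ends up with dozens of samples, and the hot function dominates them, which is the whole idea behind statistical profiling.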
https://pypi.python.org/pypi/statprof/
Opened 5 years ago. Closed 5 years ago. Last modified 5 years ago.

#16648 closed Bug (invalid): Router unable to delete relationships spanning multiple DBs

Description

The following method in django.db.models.Model has a deficiency such that if you define a router that uses _meta info to determine where to write, and your model objects have Fkey dependencies spanning the multiple DBs, then you get an integrity error because this method assumes a single DB when collecting dependent objects to delete:

    def delete(self, using=None):
        using = using or router.db_for_write(self.__class__, instance=self)
        assert self._get_pk_val() is not None, "%s object can't be deleted because its %s attribute is set to None." % (self._meta.object_name, self._meta.pk.attname)

        # Find all the objects than need to be deleted.
        seen_objs = CollectedObjects()
        self._collect_sub_objects(seen_objs)

        # Actually delete the objects.
        delete_objects(seen_objs, using)

Basically, router.db_for_write(…) should be called inside delete_objects(…) for each of the seen_objs. You can work around this by overriding delete(...) for your models, but I think this should be fixed in base.

Change History (2)

comment:1 Changed 5 years ago by

comment:2 Changed 5 years ago by

Now wait a second, there is still a problem with this method in that it calls delete_objects(...), which descends into all children and deletes them by constructing SQL (FWIU) to perform the delete. This circumvents the delete(self, using=None) routine on the Child objects. Meaning if I have overridden delete() then my overridden routine doesn't get called for my Children, only for the Parent. This seems to be inconsistent behavior. Don't you think delete() should be called on all seen_objs?

Django doesn't support cross-database FK relationships. Closing for the same reason and using the same values as #16240.
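A router of the kind the reporter describes is just a plain class that inspects model _meta; a minimal sketch (the app label, database aliases, and stub models below are made up for illustration, and the SimpleNamespace objects stand in for real Django model classes, which carry the same _meta attribute):

```python
from types import SimpleNamespace

class AppLabelRouter:
    """Send writes for the 'analytics' app to a second database alias."""

    def db_for_read(self, model, **hints):
        return self.db_for_write(model, **hints)

    def db_for_write(self, model, **hints):
        if model._meta.app_label == "analytics":
            return "analytics_db"
        return "default"

# Stand-ins for Django model classes.
Event = SimpleNamespace(_meta=SimpleNamespace(app_label="analytics"))
User = SimpleNamespace(_meta=SimpleNamespace(app_label="auth"))

router = AppLabelRouter()
```

With such a router installed, a delete() that collects FK dependents living in the other alias but issues all DELETEs against a single `using` alias hits exactly the integrity error described in this ticket.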
https://code.djangoproject.com/ticket/16648
In this blog, I am going to discuss:

- Feature-Flags.
- How to start working with Feature Flags.
- LaunchDarkly.
- Optimizing Feature-Flag full potential and power with LaunchDarkly.

So let's start with an overview of what a feature flag is and its uses.

1). Feature-Flags.

(a) Introduction:

Feature-Flags is a software development best practice of gating functionality. Functionality can be turned on/off via the feature flags, separate from deployment. Feature flags can help manage the entire life-cycle of a feature, where a feature is an increment in program functionality.

(b) Getting Started with Feature-Flags:

Feature Flags give a software organization the power to reduce risk, iterate quicker, and get more control, allowing it to move fast without breaking things. Feature flags allow us to decouple feature roll-out from code deployment, which gives us unprecedented control over who sees what, when, independent of release. That control over a release unlocks the true power of our software.

Feature Flagging is a method by which developers wrap a new feature in an if/then statement to gain control over its release. By wrapping a feature with a flag it's possible to isolate its effect on the system and to turn that flag on/off independent from deployment. Feature Flagging is a core component of continuous delivery that empowers software organizations to release quickly and reliably.

2). How to start working with Feature Flags.

The process of feature flagging is fairly straightforward: we wrap our features in conditionals for full control over that feature's visibility. At an enterprise scale, organizations must confront the complexities of mitigating technical debt, managing developer workflows, compliance, and controlling the life-cycle of feature flags. To meet these challenges, an enterprise-grade feature flag platform built specifically for development teams is required; LaunchDarkly provides such a platform.

3). LaunchDarkly.
LaunchDarkly is a continuous delivery and feature-flags-as-a-service platform that integrates into a company's current development cycle. The platform allows companies to continuously deliver and deploy software to their users in a faster, more reliable way. We wrap our features in LaunchDarkly flags, which allows us to have full control over who sees the feature (users or user groups). It helps companies perform gradual feature rollouts, disable buggy features without re-deploying, and reduce development costs with a more streamlined development cycle.

So moving ahead we'll explore LaunchDarkly and how it supports Feature Flags.

4). Optimizing Feature-Flag full potential and power with LaunchDarkly.

LaunchDarkly provides several benefits in Feature-Flag-Driven development, starting off with its general uses.

Usage1: Early Access.

It allows us to run beta programs on our live application by explicitly including the people we want to see a new feature. It is a common best practice to roll out to a specific group of people before others. LaunchDarkly lets a feature be available to specific people before its actual delivery.

Usage2: Kill Switch.

A well-wrapped feature means we can quickly turn it off if it's performing poorly. This can be the difference between a public relations disaster and minimal impact. When we combine this with percentage rollouts, it's even more powerful. We can roll out to, say, 1% of our users and get feedback. If something goes wrong we've disrupted the smallest possible audience. If we're doing this right, we can essentially take risk off the table.

Usage3: Incremental Roll Outs.

We can do phased rollouts to percentages of our users to verify there are no scalability issues. We can roll out to a random percentage of users to get feedback. This is particularly good for infrastructure projects, for example if we're bringing on a new back-end and want to make sure it can handle the load in the real world.
Instead of releasing the new functionality at 4 AM in case it didn't work well, we can do this during normal business hours: ramp up to 2% at 9 AM to ensure it's all working properly and keep ramping up during the day. We can even turn it completely off to leave for lunch when we aren't around to monitor it.

Usage4: Block Users.

While a feature flag can enable a specific user or user group to access some element of our site, there is also the capability to protect features from users by excluding them from ever seeing them. Sometimes it's someone in our own organization who we don't want to see a certain feature until it's more baked. We may also want to block an IP or anyone in a certain domain, which we can easily do via the LaunchDarkly UI.

The LaunchDarkly UI offers great control over our feature flags in a way that is easy and simple in terms of understanding and flag manipulation. It provides us with a flag management dashboard to manage the lifecycle of our features from local development, to QA, to production.

Let's start off by creating our own feature flag 🙂 ,

Step1: Go to the signup page of LaunchDarkly, and sign up for your 30-day free trial account. Now log in to your freshly created LaunchDarkly account.

Ps:- LaunchDarkly is not a free platform service; after your trial has ended you are required to select a plan as per your requirements. 😛

Step2: Create your first feature flag.

After you log in, first select the environment. LaunchDarkly provides us with two default environments (Production and Test); select the test environment. You will land on the Dashboard page when you have selected the test environment. Click on the "New+" button to add a new feature flag to your environment. A panel opens up on the right side of our window; fill in the details as per the requirements. For help, see the below image:

Press Save Flag to save your newly created feature flag.
In the above image, I have generated a sample flag just for demonstration purposes and I have set it to be a Boolean flag (you can have a multivariate flag too) so that I can use it in my application as a conditional constraint.

Step3: Set properties for your feature flag.

After completing step2 you can see your flag has been created and can be accessed through the Dashboard. Just click on your feature flag and you will see a new window that allows you to set properties for your flag, see below:

Property1 – Targeting: From the targeting properties we can specify the individuals we want our feature to be available to, or who should not have access to our feature (specify this in the TargetIndividualUsers and UsersWhoMatchTheseRules areas). Also, we can just pass on a default rule in case none of the conditions are satisfied; in our case let's just set our boolean flag to "Serve False" as the default rule. We also can turn targeting on/off and make our flag behave differently in both cases, for example, in our case we can specify `If targeting is off, serve = False`.

Property2 – Variations: In the Variations tab next to the Targeting tab we can set the possible variations of our flag and also name them so that we can use them in the DefaultRule section under the Targeting tab.

Property3 – History: In the History tab next to the Variations tab we can see a complete history of any modifications or any details of access to our feature flag.

Property4 – Settings: Under the Settings tab, one can see details of the feature flag, like who created that feature flag (Maintainer), and the tags associated with the flag. We can also delete the flag from here so that it gets deleted from all the environments in the project.
Now then we are done with creating our feature flag 🙂 , it's time for us to start accessing this feature flag through our application, so why wait :-/ let's start off with the implementation:

Hands-On with the LaunchDarkly client:

When we create an account at LaunchDarkly we are by default provided with two environments; both environments (Test and Production) have their own SDK key, which is used to connect the LaunchDarkly SDK to a specific environment. Each feature flag that we create has its own set of targeting rules for each environment. This means that we can change our flag rollout rules in a development or staging environment for QA testing before rolling out to production.

Here we will be discussing flag maintenance only with reference to the Java SDK; however, LaunchDarkly provides support for `Python, Ruby, Go, Node.js, PHP, .Net, iOS and Android` too.

1) The first step we need to do is to add the SDK to our project.

    <dependency>
      <groupId>com.launchdarkly</groupId>
      <artifactId>launchdarkly-client</artifactId>
      <version>2.3.2</version>
    </dependency>

Next thing to do is to import the LaunchDarkly client in our application.

2) Import LDClient.

    import com.launchdarkly.client.*;
For example, here we created a feature flag named `ShowEnabledFeature` and correspondingly its feature key was `show-enabled-feature`, so I would write something like: boolean showFeature = ldClient.boolVariation("show-enabled-feature", user, false); There, we are all done with setting up the necessary connections to the LaunchDarkly. For more details on how the code is working or how we are fetching from LaunchDarkly or how FeatureFlags are working, one can see how I have implemented the whole, I have pushed my code here- LaunchDarklyImplementationCode . Hope this helps in understanding feature flags and Launchdarkly, in case any doubts please feel free to put them in the comment section. Happy Reading. 🙂 1 thought on ““Feature Flags and LaunchDarkly”- what, why and how?” Reblogged this on Coding, Unix & Other Hackeresque Things.
https://blog.knoldus.com/feature-flags-and-launchdarkly-what-why-and-how/
CC-MAIN-2019-22
refinedweb
1,693
52.6
Redux is a library used to contain the state of your application in one single location. Redux is language-agnostic and can be used with React, Vue, Angular, or even vanilla JS. I really love using Redux with React. As our React apps become bigger and more complicated the state can get unwieldy and hard to pass around to different components. This is where Redux can help us. It's generally recommended to start with Redux at the beginning of creating your project but it's not impossible to convert a project to using Redux. We can keep all of the data we need in a store that is separate from our React components.

One of the biggest benefits of React is how fast and performant it is, but unnecessary re-renders can slow your project down. One of React's core features is that whenever a component's state or props are updated the component will re-render. This tends to slow down our app when we have long component trees and a complex state that needs to be passed to multiple children. The general flow of Redux is you send an action to the reducer which then updates the state. Only the components that rely on that state will then be re-rendered, which saves us on performance. The first step to setting up Redux is creating the store.

Redux Store

To save our state we'll want to set up our store and hook it up to our app. Luckily Redux comes with the createStore() method when we install the package. The createStore method accepts a reducer. A reducer is a function that updates our state. The Provider is used to connect our app with the store we created. In the index.js file, we can create the store and connect it with our App component so that all our child components have access to the store.
    import { createStore } from 'redux'
    import { Provider } from 'react-redux'
    import reducer from './reducers'

    const store = createStore(reducer, initialState)

    ReactDOM.render(
      <Provider store={store}>
        <App/>
      </Provider>,
      document.getElementById('root'))

Reducers

Now that we've created our store, which takes a reducer, we'll have to make the reducer. Reducers take in actions and return a new state based on the type of action. Reducers rely on pure functions to return a new state and not mutate the current state.

    const reducer = (state, action) => {
      switch (action.type) {
        case "INCREASE_VOTE":
          return {
            animes: {
              ...state.animes,
              [action.payload.id]: {
                ...state.animes[action.payload.id],
                votes: state.animes[action.payload.id].votes + 1,
              },
            },
          }
        default:
          return state
      }
    }

The code above is an example of a reducer that is typically written with a switch statement. You'll notice that we use the ES6 spread operator, which is very important for creating pure functions. The INCREASE_VOTE case will return a new object with all of the animes in the previous state (...state.animes) except the particular id of the anime we send in the payload. All the information about that particular anime will stay the same (...state.animes[action.payload.id]), except we will increment the number of votes it has. The default case of the switch statement is very important because if we send an action that doesn't have a case in the switch statement we don't want to affect the original state.

Actions

The only way to change the state of the store is to dispatch an action. An action is just a plain JavaScript object. Actions normally contain a type key which describes the state change, and a payload of any data needed to change the state.

    export const increaseVote = id => {
      return {
        type: "INCREASE_VOTE",
        payload: { id },
      }
    }

This action will be dispatched or sent to the reducer which will read the type on the object and do something based on that. It is standard to define your types in SNAKE_CASE with all capitals.
As the function's name suggests, if you look at the reducer we wrote above, this increaseVote function will increment the vote of a specific anime, based on the id, by 1.

Connect

To connect our components to the store, we need to use the connect function provided by react-redux. In our export statement, we can use the connect function to wrap the component we want to have access to the store.

```jsx
import { connect } from "react-redux"

export default connect()(Animes)
```

Now our component is connected to the store, but we need to do one more thing to use the state that is held in our store. We need to map state to props.

```jsx
const mapStateToProps = state => {
  return {
    animes: state.animes,
  }
}

export default connect(mapStateToProps)(Animes)
```

We pass mapStateToProps into the connect function, and now we can access the state in the store as props (props.animes). Before adding Redux to our app, if we wanted to update the state we had to call setState, but with Redux we will need to dispatch our actions to the reducer. And this is done through a function mapDispatchToProps. Similar to our mapStateToProps, we will create another function that returns an object of all of our actions.

```jsx
import { increaseVote, decreaseVote } from "../actions"

const mapDispatchToProps = dispatch => {
  return {
    increaseVote: id => dispatch(increaseVote(id)),
    decreaseVote: id => dispatch(decreaseVote(id)),
  }
}

export default connect(null, mapDispatchToProps)(AnimeCard)
```

You'll notice the connect now features a null, because the first argument accepted by connect is always mapStateToProps, and in this component we only need mapDispatchToProps. And with that, our app should be connected to the Redux store and be able to read and update from the store. If you'd like to see more of the code, I made a small demo app!
https://www.jenkens.dev/blog/beginners-guide-to-react-redux/
----- Original Message -----
From: "Sean D'Epagnier" <address@hidden>
To: "Anatoly Sokolov" <address@hidden>
Cc: "Weddington, Eric" <address@hidden>; <address@hidden>
Sent: Wednesday, May 13, 2009 6:35 PM
Subject: Re: [avr-gcc-list] mcall-prologues completely broken for >128k.

avr-libc/crt1/gcrt1.S:

    .section .init0,"ax",@progbits
    .weak __init
    ; .func __init
__init:
    .......

#ifdef __AVR_3_BYTE_PC__
    ldi r16, hh8(pm(__vectors))
    out _SFR_IO_ADDR(EIND), r16
#endif /* __AVR_3_BYTE_PC__ */

This code sets EIND to the third byte of the '__vectors' address.

1. This code is not clear; it is better to use the start address of the *(.trampolines) section instead of '__vectors'.
2. It is more logical to move this code from crt1/gcrt1.S to gcc\config\avr\libgcc.S. Because GCC uses EIND internally, it must initialize it itself.

Anatoly.
http://lists.gnu.org/archive/html/avr-gcc-list/2009-05/msg00095.html
Dear friends,

I want to create a program that reads 10 numbers from the user and stores them in a file named Numbers.txt. After that, I have to read each number; if it is an even number, it should be stored in a file "Even.txt", otherwise it should be stored in a file "Odd.txt". I created the program, and while executing, the files are created correctly, but when it reads and displays the odd and even numbers from their files, the last number is printed twice. I used the eof() and good() functions but there is no change in the output. So please tell me why it is printed twice and also how to rectify it. The program is shown below:

```cpp
#include<iostream>
#include<conio.h>
#include<fstream>
using namespace std;

int main()
{
    int x, i, n;
    fstream fout("Numbers.txt", ios::out);
    cout << "Enter the Numbers: \n";
    for (i = 0; i < 10; i++)
    {
        cin >> x;
        fout << x << "\n";
    }
    fout.close();

    fstream fin("Numbers.txt", ios::in);
    fstream outfile2("Even.txt", ios::out);
    fstream outfile3("Odd.txt", ios::out);
    for (i = 0; i < 10; i++)
    {
        fin >> n;
        if (n % 2 == 0)
            outfile2 << n << "\n";
        else
            outfile3 << n << "\n";
    }
    fin.close();
    outfile2.close();
    outfile3.close();

    fstream infile2("Even.txt", ios::in);
    cout << "\nEven File Contains:";
    while (infile2.good())
    {
        infile2 >> n;
        cout << "\n" << n;
    }

    fstream infile3("Odd.txt", ios::in);
    cout << "\nOdd File Contains:";
    while (infile3.good())
    {
        infile3 >> n;
        cout << "\n" << n;
    }
    return 0;
}
```
https://www.daniweb.com/software-development/cpp/threads/440422/writing-from-one-file-to-another
SabreDAV 0.12

I just released a new version of SabreDAV, 0.12. I've skipped posting for the last few versions, because I didn't want to get too spammy on this blog. These were mostly bugfixes, and a few added features. SabreDAV is also a PEAR package again, so installing is as simple as 'pear install SabreDAV-0.12.0.tgz'.

Full list of changes

- Added: Experimental PDO backend for Locks Manager.
- Fixed: Sending Content-Length: 0 for every empty response. This improves NGinx compatibility.
- Fixed: Last modification time is reported in UTC timezone. This improves Finder compatibility.
- Fixed: Issue 13.
- Added: now a PEAR-compatible package again, thanks to Michael Gauthier.
- Added: Plugin to automatically map GET requests to non-files to PROPFIND (Sabre_DAV_Browser_MapGetToPropFind). This should allow easier debugging of complicated WebDAV setups.
- Added: Ability to choose to use auth-int, auth or both for HTTP Digest authentication. (Issue 11)
- Fixed: TemporaryFileFilter plugin now intercepts HTTP LOCK requests to non-existent files. (Issue 12)
- Updated: Browser plugin now shows multiple {DAV:}resourcetype values if available.
- Added: generatePropfindResponse now takes a baseUri argument.
- Added: ResourceType property can now contain multiple resourcetypes.
- Added: Sabre_DAV_Property_Href class. For future use.
- Changed: Made more methods in Sabre_DAV_Server public.
- Added: Central list of defined xml namespace prefixes. This can reduce bandwidth and improve legibility for xml bodies with user-defined namespaces.
- Changed: Moved default copy and move logic from ObjectTree to Tree class.
https://evertpot.com/249/
Changelog for package tf2_geometry_msgs 0.4.12 (2014-09-18) 0.4.11 (2014-06-04) 0.4.10 (2013-12-26) 0.4.9 (2013-11-06) 0.4.8 (2013-11-06) 0.4.7 (2013-08-28) 0.4.6 (2013-08-28) 0.4.5 (2013-07-11) 0.4.4 (2013-07-09) making repo use CATKIN_ENABLE_TESTING correctly and switching rostest to be a test_depend with that change. 0.4.3 (2013-07-05) 0.4.2 (2013-07-05) 0.4.1 (2013-07-05) 0.4.0 (2013-06-27) moving convert methods back into tf2 because it does not have any ros dependencies beyond ros::Time which is already a dependency of tf2 Cleaning up unnecessary dependency on roscpp converting contents of tf2_ros to be properly namespaced in the tf2_ros namespace Cleaning up packaging of tf2 including: removing unused nodehandle cleaning up a few dependencies and linking removing old backup of package.xml making diff minimally different from tf version of library Restoring test packages and bullet packages. reverting 3570e8c42f9b394ecbfd9db076b920b41300ad55 to get back more of the packages previously implemented reverting 04cf29d1b58c660fdc999ab83563a5d4b76ab331 to fix #7 0.3.6 (2013-03-03) 0.3.5 (2013-02-15 14:46) 0.3.4 -> 0.3.5 0.3.4 (2013-02-15 13:14) 0.3.3 -> 0.3.4 0.3.3 (2013-02-15 11:30) 0.3.2 -> 0.3.3 0.3.2 (2013-02-15 00:42) 0.3.1 -> 0.3.2 0.3.1 (2013-02-14) 0.3.0 -> 0.3.1 0.3.0 (2013-02-13) switching to version 0.3.0 add setup.py added setup.py etc to tf2_geometry_msgs adding tf2 dependency to tf2_geometry_msgs adding tf2_geometry_msgs to groovy-devel (unit tests disabled) fixing groovy-devel removing bullet and kdl related packages disabling tf2_geometry_msgs due to missing kdl dependency catkinizing geometry-experimental catkinizing tf2_geometry_msgs add twist, wrench and pose conversion to kdl, fix message to message conversion by adding specific conversion functions merge tf2_cpp and tf2_py into tf2_ros Got transform with types working in python A working first version of transforming and converting between different types Moving from camelCase to undescores to 
be in line with python style guides Fixing tests now that Buffer creates a NodeHandle add posestamped import vector3stamped add support for Vector3Stamped and PoseStamped add support for PointStamped geometry_msgs add regression tests for geometry_msgs point, vector and pose Fixing missing export, compiling version of buffer_client test add bullet transforms, and create tests for bullet and kdl working transformations of messages add support for PoseStamped message test for pointstamped add PointStamped message transform methods transform for vector3stamped message
http://docs.ros.org/hydro/changelogs/tf2_geometry_msgs/changelog.html
Date: 06/29/2004 at 23:35:35 From: Adrian Subject: Coin Toss What is the expected number of times a person must toss a fair coin to get 2 consecutive heads? I'm having difficulty in finding the probabilty when the number of tosses gets bigger. Here's my thinking: 1) You will only stop when the last two tosses are heads. 2) The random variable should be # of tosses # of tosses 1 2 3 4 5 Prob 0 P(HH) P(THH) P(TTHH)+P(HTHH) and so on.... However I get 2(1/4)+ 3(1/8) + 4(2/16) + 5(4/32) + .... Am I doing something wrong here? Date: 07/01/2004 at 19:23:53 From: Doctor Mitteldorf Subject: Re: Coin Toss Hi Adrian - Your thinking is very good and clear. But the calculation you have set up is potentially infinite. Sometimes that's ok, and you can get very close with just the first few terms; in this case, you'll have to go out past 20, taking 2^20 possibilities into account in order to get a reasonably accurate number. So we might look for a trick or shortcut. In this case, the standard trick is "recursion". Try to write the probability after a certain number of flips in terms of itself. Let's let X = the expected count (the thing we're looking for). Start the way you started: X = [2(1/4) for HH] + [something for HT] + [something for TT] + [something for TH] Here's how we can evaluate the "something" for HT. There's a 1/4 probability of getting there. And once you've gotten there, you're exactly where you were when you started, except you've wasted 2 flips. So for that "something" I'm going to write (1/4)(X + 2). The 1/4 says that you have a 1/4 chance of flipping HT. The (X + 2) says that we have the same expectation now as we did when we started (X) except that we've wasted two flips. Similarly, the [something for TT] becomes another (1/4)(X + 2), just the same as HT. The last term, TH, is a little trickier, because we're NOT in the same position we started in, because we've got one H "in the bank". 
We have a 1/2 chance of getting another head on the 3rd flip, so that gives a contribution of (1/4)(1/2)(3). We also have a 1/2 chance of getting a tail on the 3d flip, leaving us in the same position we started, except that we've wasted 3 flips now. So that is represented by (1/4)(1/2)(X + 3). Put all this together now to make an equation for X: X = 2(1/4) + (1/4)(X+2) + (1/4)(X+2) + (1/4)(1/2)3 + (1/4)(1/2)(X+3) What I would do at this point is first to solve for X, and see if I got a reasonable answer. A reasonable answer is something bigger than 4 and smaller than 10. Next, I wouldn't trust my abstract reasoning - I'd go write a computer program to check the value that I got from the algebra. Will you let me know if this works out? - Doctor Mitteldorf, The Math Forum Date: 07/01/2004 at 20:22:46 From: Doctor Anthony Subject: Re: Coin Toss Hi Adrian - A difference equation is often useful here. Let a = expected number of throws to first head. We must make 1 throw at least and we have probability 1/2 of a head and probability 1/2 of returning to a, so a = (1/2)1 + (1/2)(1 + a) (1/2)a = 1 a = 2. Let E = expected number of throws to 2 consecutive heads. Consider that we have just thrown a head and what happens on the next throw. We are dealing with the (a + 1)th throw, with probability 1/2 this is not a head and we return to E. So E = (1/2)(a + 1) + (1/2)(a + 1 + E) (1/2)E = a + 1 E = 2(a + 1) and now putting in the value a = 2 we get E = 2(3) = 6 Expected throws to 2 consecutive heads is 6. - Doctor Anthony, The Math Forum Date: 07/03/2004 at 01:20:24 From: Adrian Subject: Thank you (Coin Toss) Thanks. The answer is indeed 6. I've got a simple C++ program to show this... 
//------------------------------------
#include <iostream>
#include <cstdlib>
#include <cmath>
#include <ctime>
using namespace std;

int flips();

int main()
{
    int result = 0;
    int i;
    srand(time(NULL));
    for (i = 1; i < 999999; i++)
        result += flips();
    cout << "Expected value = " << result / i << endl;
    return 0;
}

int flips()
{
    int i, j, counter;
    i = 0;
    j = rand() % 2;
    counter = 1;
    while ((i + j) != 2)
    {
        i = j;
        j = rand() % 2;
        counter++;
    }
    return counter;
}
//---------------------------

Date: 07/03/2004 at 06:29:56
From: Doctor Mitteldorf
Subject: Re: Thank you (Coin Toss)

Dear Adrian,

Yes, 6 is the answer. I'm glad to see that your instincts are like mine--you'll believe it when it comes out of a computer program! But the abstraction is worth something, too. Infinite calculations are inconvenient, so we often look for tricks that use recursion to cut the calculation short, and make it finite. You're probably familiar with the simplest, most standard trick for evaluating the sum

  S = 1 + 1/2 + 1/4 + 1/8 + 1/16 + ...

The trick is to multiply the whole sum by 2. You find that

  2S = 2 + 1 + 1/2 + 1/4 + 1/8 + 1/16 + ...

But this sum is identical to the original one, except for the 2 in front. So you can write 2S = S + 2, and solve for S = 2, exactly. We're using a different trick, but the same idea, to evaluate the infinite sum for the expectation value that you asked about.

- Doctor Mitteldorf, The Math Forum

Ask Dr. Math™ © 1994-2015 The Math Forum
http://mathforum.org/library/drmath/view/65495.html
Simple Lopy to Lopy via WiFi

Hi, I have some trouble connecting my two LoPy4s via WiFi. Here is the receiver code:

```python
import socket
import pycom

pycom.heartbeat(False)

s = socket.socket()
port = 80
s.bind(('0.0.0.0', port))
print("Bind")
s.listen(5)
print("Listen")

while True:
    pycom.rgbled(0xFFFFFF)  # Blue
    c, addr = s.accept()
    print('Connection from :', addr)
    pycom.rgbled(0x00FF00)  # Green
    data = c.recv(64)
    print(data)
    c.send(b'Hello')
```

And this is the sender code:

```python
from network import WLAN
import pycom
import machine
import socket

# Connects to a network with a certain ssid and a certain password
ssid = "****"       # ssid of the other lopy
password = "*****"  # password of the other lopy

pycom.heartbeat(False)
pycom.rgbled(0xFF0000)  # Red led at start

wlan = WLAN(mode=WLAN.STA)
nets = wlan.scan()
s = socket.socket()

for net in nets:
    if net.ssid == ssid:
        print("Found network: " + net.ssid)
        wlan.connect(net.ssid, auth=(net.sec, password), timeout=5000)
        while not wlan.isconnected():
            machine.idle()  # save energy

if wlan.isconnected() == True:
    print('WLAN connection succeeded!')
    print(wlan.ifconfig())
    while True:
        b = bytes("Hello", 'utf-8')
        s.sendall(b)         # send bytes
        pycom.rgbled(0xFFFF00)
        buffer = s.recv(64)  # receive bytes
        print(buffer)        # print bytes received
else:
    print("WLAN connection error")
```

When the sender code arrives at the line s.sendall(b), it gives me OS ERROR: 128, but I don't understand why. Can anyone help me?

Thanks @crumble and @timh for your answers! I've changed my code, because I want to implement WiFi Direct between my two LoPy4s (so one acts like AP, and the other like STA).
I have changed my code as you advised me:

Server Main

```python
import usocket as socket
import machine
from network import WLAN

wlan = WLAN()  # get current object, without changing the mode
if machine.reset_cause() != machine.SOFT_RESET:
    wlan.init(mode=WLAN.AP, ssid='Lopy-wlan', auth=(WLAN.WPA2, ''),
              channel=10, antenna=WLAN.INT_ANT)

myconfig = wlan.ifconfig()
lst_myconfig = list(myconfig)
lst_myconfig[0] = '192.168.0.20'
myconfig = tuple(lst_myconfig)
wlan.ifconfig(config=myconfig)
print("Wlan configuration", myconfig)

s = socket.socket()
s.bind((lst_myconfig[0], 2000))
s.listen(5)
c, addr = s.accept()  # Establish connection with client.
print('Got connection from', addr)
c.send(b'Thank you for connecting')  # Send bytes
c.close()
```

Client Main

```python
from network import WLAN
import pycom
import machine
import socket

ssid1 = "Lopy-wlan"
password = ""

wlan = WLAN(mode=WLAN.STA)  # station
nets = wlan.scan()
s = socket.socket()  # create socket

for net in nets:
    if net.ssid == ssid1:
        print("Found network: " + net.ssid)
        wlan.connect(net.ssid, auth=(net.sec, password), timeout=5000)  # connect to lopy AP
        while not wlan.isconnected():
            machine.idle()  # save energy

if wlan.isconnected() == True:
    print('WLAN connection succeeded!')
    s.connect(socket.getaddrinfo('192.168.0.20', 2000)[0][-1])
    b = bytes("Hello", 'utf-8')
    s.sendall(b)         # send bytes
    buffer = s.recv(64)  # receive bytes
    print(buffer)        # print bytes received
    s.close()
```

I've uploaded the Server code on my Lopy server, then I run the Client code on my Lopy client, but when I try to connect the socket (with s.connect(...)) I have OSError: -1. I've searched on Google but did not find anything. What can it be?

The socket is not connected. WiFi communication is point2point by default. So you have to provide a target address and port. Set up your own network with fixed IP numbers. This will be the easiest way to start. You can switch to broadcast after you have managed the p2p communication.
@p-m-97 Maybe you haven't supplied all of your code, but a few things stand out. In your sender code you don't actually connect to anything with the socket. Somewhere you need to call sock.connect(addr) In addition in your listener code, you don't connect to the wifi network and that should occur before you bind.
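The call ordering this answer describes — bind/listen/accept on the listener before connect/sendall/recv on the client — can be sketched with standard CPython sockets on one machine. The port choice and messages below are arbitrary, and the LoPy's usocket module exposes the same basic calls, but this is an illustration of the ordering, not board-specific code:

```python
import socket
import threading

# Listener side: bind and listen BEFORE any client tries to connect.
srv = socket.socket()
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))    # port 0: let the OS pick a free port
port = srv.getsockname()[1]
srv.listen(1)

def server():
    conn, addr = srv.accept()  # blocks until a client connects
    conn.send(b"Thank you for connecting")
    conn.close()

t = threading.Thread(target=server)
t.start()

# Client side: connect FIRST, then sendall/recv on the connected socket.
cli = socket.socket()
cli.connect(("127.0.0.1", port))
cli.sendall(b"Hello")
data = cli.recv(64)
print(data)                    # b'Thank you for connecting'
cli.close()
t.join()
srv.close()
```

On the LoPy sender above, the missing step is exactly that connect call before s.sendall(b).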
https://forum.pycom.io/topic/4508/simple-lopy-to-lopy-via-wifi/1
Create the RespondToSlackCommand Azure function

You've created the Azure Function MojifyImage, which returns a mojified image. You need a second endpoint that Slack calls whenever someone executes the /mojify command. This second endpoint needs to return the URL to the MojifyImage function.

When you run a Slack command like /mojify <some-image-url>, it makes a POST request to the endpoint you've configured, and passes in <some-image-url> as a text query parameter embedded in the body of the message. You're going to create this function, which will coordinate and respond to the Slack commands, as RespondToSlackCommand. You'll create the HTTP endpoint that uses the format Slack expects for the request, and see how to get the function to respond with an image.

Create the Function Trigger and convert to TypeScript

You need to create another HTTP-triggered Azure Function. These instructions are basically the same as before. What's different is that you're calling this function RespondToSlackCommand instead of MojifyImage.

1. Click on View then Command Palette, then search for and select Azure Functions: Create Function...
2. Select the folder where you originally created the function project.
3. Select the HTTP Trigger option.
4. Type RespondToSlackCommand as the name of your function.
5. Choose Anonymous as the authentication level.

Note: By choosing Anonymous, the function is open to the world and insecure. If you create other functions in the future, this isn't the recommended default behavior. Since this is a low-risk exercise with free Azure learning resources, it's not a problem for now.

If it was successful, then you should now have the folder RespondToSlackCommand in the root directory. Now you can convert the file from JavaScript to TypeScript.

1. Create a file called index.ts in the RespondToSlackCommand folder.
2. Make sure that the TypeScript build process is still running and that it automatically compiled it into the index.js and index.js.map files.
Replace the code in index.ts with the following code:

```typescript
export function index(context, req) {
    context.log("RespondToSlackCommand HTTP trigger");
    context.res.body = "Hello!";
    context.done();
}
```

Try it out

Make sure that everything is working by visiting the function URL in a browser. It should print our Hello!.

Write the index function

This index(context, req) TypeScript function is a lot quicker to write than the previous function.

Set up your context.res object at the top of the function.

```typescript
context.res = {
    headers: { "Content-Type": "application/json" },
    body: null
};
```

This time, you don't need to set the isRaw property since this defaults to false. However, you do need to set the content type to be application/json.

Add the useful library querystring. Since Slack sends the image URL, you want to process it as a query string with the key of text. It embeds this in the body of the request, so you need to work a little harder to get the right information. It's not too hard though! Make life easier for yourself by importing the Node.js querystring package. This is part of Node.js, so there's no need to install anything extra. Add this import statement to the top of the file.

```typescript
import * as querystring from "querystring";
```

In your index(context, req) function, convert the req.body into an object from which you'll extract the text property.

```typescript
const { text } = querystring.parse(req.body);
```

If a user types the command properly, then the text should contain the URL of an image. Add some basic validation to it to make sure it works.

```typescript
let message = "Your mojified image my liege...";
if (!text) {
    message = "You must provide an image to mojify";
}
```

The Slack command is calling this function, and it needs to respond with the MojifyImage URL, which we assume will be on the same domain. Extract the domain from the request URL to build the MojifyImage URL.
```typescript
const mojifyUrl =
    req.originalUrl.substr(0, req.originalUrl.lastIndexOf("/")) + "/MojifyImage";
```

Finally, set the body of the response to the Slack-specific format at the bottom of the index(context, req) function.

Important: The image_url in the attachments property needs to be set to return the mojifyUrl, passing in the URL the user supplied in the command as the imageUrl query parameter.

```typescript
context.res.body = {
    response_type: "in_channel",
    text: message,
    attachments: [
        {
            image_url: `${mojifyUrl}?imageUrl=${text}`
        }
    ]
};
context.done();
```

Try it out

Make sure that the local Azure function application is running using one of the methods you previously used: either use func host start or run it from the debug menu using the Attach to JavaScript Function task.

If the function app started correctly, then the output window should show something like this:

Http Functions:

    MojifyImage:

    RespondToSlackCommand:

Make sure that everything is working by visiting the function URL in your browser. It should now print some json.

```json
{
    "response_type": "in_channel",
    "text": "You must provide an image to mojify",
    "attachments": [
        {
            "image_url": ""
        }
    ]
}
```
https://docs.microsoft.com/en-us/learn/modules/replace-faces-with-emojis-matching-emotion/6-create-the-respondtoslackcommand-function
numpy tile vs repeat

In this post, we will learn what numpy tile is and what the difference is between numpy tile and repeat.

Numpy tile - numpy.tile()

The numpy tile function is used for repeating an array any number of times in any dimension of the array. Here is the formula for using numpy tile:

```python
numpy.tile(A, reps)
```

It repeats the array A the number of times given by the reps parameter. If reps has length d, the result will have dimension of max(d, A.ndim). Let's understand this with the help of the examples below.

Using Numpy tile on a one dimensional array

Let's create a 1D array, and we want to repeat this array two times along axis=0:

```python
import numpy as np

A = np.array([1, 2, 3, 4, 5])
```

Pass the reps parameter as 2 and check the result:

```python
np.tile(A, 2)
# Out: array([1, 2, 3, 4, 5, 1, 2, 3, 4, 5])
```

So you can see here the output is a 1D array, and the elements of the original array are repeated twice.

Using Numpy tile to build a 2D array

Let's use the same 1D array, and we will build a 2D array out of it by repeating the array A. The reps param in the tile function is given as (5, 2), which means repeat the array 5 times along axis=0 and 2 times along axis=1:

```python
np.tile(A, (5, 2))
# Output: array([[1, 2, 3, 4, 5, 1, 2, 3, 4, 5],
#                [1, 2, 3, 4, 5, 1, 2, 3, 4, 5],
#                [1, 2, 3, 4, 5, 1, 2, 3, 4, 5],
#                [1, 2, 3, 4, 5, 1, 2, 3, 4, 5],
#                [1, 2, 3, 4, 5, 1, 2, 3, 4, 5]])
```

In the above output you can see the initial array A is repeated 5 times along axis=0 and 2 times along axis=1.

Using Numpy tile to build a 3D array

Let's use the same 1D array, and we will build a 3D array out of it by repeating the array A. The reps param in the tile function is given as (2, 5, 3), which means repeat the array 2 times along axis=0, 5 times along axis=1 and 3 times along axis=2:

```python
np.tile(A, (2, 5, 3))
# Output: an array of shape (2, 5, 15)
```

In the above output you can see the initial array A is repeated along each of the dimensions as mentioned in the passed reps parameter value.

Numpy repeat - numpy.repeat()

Unlike numpy tile, numpy repeat helps to repeat the elements of an array. Here is the formula for numpy repeat:

```python
np.repeat(a, repeats, axis=None)
```

where,

- a: Input array
- repeats: the number of
repetitions for each element, broadcasted to fit the shape of the given axis
- axis: Along which axis to repeat values

numpy repeat 1D array

First create a 1D array:

```python
a = np.array([1, 2, 3, 4, 5])
# Output: array([1, 2, 3, 4, 5])
```

Repeat each element of the array 4 times:

```python
np.repeat(a, 4)
# Output: array([1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5])
```

numpy repeat 2D array

First create a 2D array:

```python
x = np.array([[1, 2], [3, 4]])
# Output: array([[1, 2],
#                [3, 4]])
```

If no axis is mentioned, then a flattened array is returned. Let's understand this with an example:

```python
np.repeat(x, 2)
# Output: array([1, 1, 2, 2, 3, 3, 4, 4])
```

Let's repeat each element of array x 2 times along axis 1; we will pass the axis parameter value as 1:

```python
np.repeat(x, 2, axis=1)
# Output: array([[1, 1, 2, 2],
#                [3, 3, 4, 4]])
```

So you can see, each element is repeated 2 times along axis=1 in the 2D array x.

numpy repeat elements of 2D array along axis=0 and axis=1

We will repeat the elements of the 2D array x along axis=0 and pass the repeats parameter as [3, 4], which means repeat the first row 3 times and the second row 4 times:

```python
np.repeat(x, [3, 4], axis=0)
# Output: array([[1, 2],
#                [1, 2],
#                [1, 2],
#                [3, 4],
#                [3, 4],
#                [3, 4],
#                [3, 4]])
```

Next, we will repeat the elements of x along axis=1 with the repeats parameter as [3, 4], which means repeat the first element in each row 3 times and the second element 4 times:

```python
np.repeat(x, [3, 4], axis=1)
# Output: array([[1, 1, 1, 2, 2, 2, 2],
#                [3, 3, 3, 4, 4, 4, 4]])
```
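Putting the two functions side by side makes the distinction concrete — tile repeats the whole array end to end, while repeat duplicates each element in place (the sample arrays here are arbitrary):

```python
import numpy as np

a = np.array([1, 2, 3])

# np.tile repeats the whole array, end to end
print(np.tile(a, 2))    # [1 2 3 1 2 3]

# np.repeat repeats each element in place
print(np.repeat(a, 2))  # [1 1 2 2 3 3]

# In 2D the same distinction holds along an axis
x = np.array([[1, 2], [3, 4]])
print(np.tile(x, (2, 1)))       # stacks the whole block twice along axis 0
print(np.repeat(x, 2, axis=0))  # duplicates each row in place
```

A useful rule of thumb: reach for tile when you want copies of the block pattern, and repeat when you want runs of identical elements.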
https://kanoki.org/2022/01/03/numpy-tile-vs-repeat/
Re: why are overloaded methods statically bound? From: Chris Smith (cdsmith_at_twu.net) Date: 11/01/03 - ] Date: Fri, 31 Oct 2003 16:01:50 -0700 Chris Riesbeck wrote: > My post said "This can be handled if the language designers so > choose." and "Again, if the language designers want to support > this,..." > > How does that turn into "the language designers didn't care enough?" Okay, let's put this in context. Original poster asks why Java doesn't base polymorphism on parameter types. Someone responds saying performance. You respond saying "this can be handled if the language designers so choose." So, what you said is that the performance impact is solvable, and there's some *other* reason for the choice instead. You may not have meant to say that, but you clearly did. I objected, and you made some statement about "different trade-offs" and "fast enough" to imply that you were minimizing the importance of performance. I agreed that this is sometimes the case. The "didn't care enough" was hyperbole on my part. You haven't said what other reasons you believed the language designers to have in omitting the feature, if indeed you believe it to be the case. > > There are valid reasons why even a good > > implementation of a language with parameter-based polymorphism would > > have weaknesses compared to a language without it; > > and strengths. If it was all weaknesses, no one would do it. If it was > all strengths, no one would not do it. Of course! That's not the issue. The issue is that when someone pointed out the weaknesses, you jumped in and and said "that can be handled" as if it weren't an issue. > > and both the > > difficulty of predicting dispatch with that many factors involved > > What's your evidence for this? It's messy in C++ because namespaces > are involved. But I've not seen any results that show programmers in > the Common Lisp Object System (CLOS) have any trouble here with normal > programming. Messy in C++? It doesn't even exist in C++! 
The difficulty in predicting method dispatch on so many factors is explained merely by the observation that there are so many more factors involved. That implicitly makes it harder to predict. Frankly, I think the amount of difficulty people have with just static resolution of overloaded parameter types indicates great trouble in that direction. I don't know anything about CLOS programmers, so I can't speak one way or the other about whether they are confused. I can only point out that there is a much longer and more complex process involved than is generally the case. > No one gave any performance metrics here or cited any "very real > performance measures." Googling for "multimethod dispatch performance" > finds a number of results such as constant-time lookup for Cecil, and > modest to no penalty for an multimethod extension to Java, and that > was in the first few search results. It would take a good bit more effort than USENET is worth for someone to put together a comparative benchmark of performance between a real language and a hypothetical language. You provide a pseudo-comparison in which the authors compare the techniques using a worst case scenario piece of code, and even then conspicuously avoid quoting results of the original code running on an unaltered modern virtual machine using standard JVM performance techniques. Instead, they represent "normal" with an interpreted VM; a meaningless number, as we all know. Furthermore, even this paper making the case for such method dispatch in Java doesn't dare apply it universally. Instead, they mark a few very specific places to use the technique using a marker interface, recognizing that it's not universally applicable. (As for constant time, I should certainly hope that method dispatch is constant time! That's generally considered a basic assumption in programming languages, not an accomplishment. 
The challenges in practical optimization of language implementations relate to minimizing the cache touches of the process and thus chance of a page fault, the plain processor cycles involved, etc.) I'm all for giving something a fair shot, but I think you're misrepresenting the issue by claiming that general polymorphism on parameters in Java might possibly not have a performance impact. -- The Easiest Way to Train Anyone... Anywhere. Chris Smith - Lead Software Developer/Technical Trainer MindIQ Corporation - ]
http://coding.derkeiler.com/Archive/Java/comp.lang.java.programmer/2003-11/0010.html
Python Dataclasses from Scratch This post assumes a general understanding of common Python structures, but most everything else will be explained. I've always been fascinated by Python's dataclasses module. It provides a wrapper that dynamically adds methods to a class based on its type annotations. Coming from the Typescript world (where type annotations are compiled away), being able to make runtime changes based on type information. was intriguing. After reading through the surprisingly-approachable implementation, I thought it would be fun to write about the techniques it uses and walk through a simple implementation. Let's get to it! What We're Building Let's start with the ending: what code should our code generate? While the actual dataclass implementation has a bunch of code that handles inheritance and obscure methods, our toy implementation can be much simpler. All we want is an __init__ function that takes, as arguments, each of the properties we want to store. So if we write: @custom_dataclass class Pet: name: str age: int sound: str = 'woof' it should generate the following: class Pet: def __init__(self, name: str, age: int, sound: str = "woof"): self.name = name self.age = age self.sound = sound It may not look like much, but the techniques used to generate that __init__ method also power the rest of the dataclass methods. Keep that Pet in mind - it'll be our example throughout this post. How We Build It Getting from point A to B above takes a few, generally unrelated Python techniques, so they'll each get their own sections. After that, we'll put them all together and, voila! Starting With a Test Knowing exactly how our code should behave is an important step to actually writing it. 
Our code should do 3 main things:

- Create an __init__ function on a decorated class (with the correct type hints)
- Add properties to that class during __init__
- Handle defaults correctly (optional, but allowed)

Here's a simple test that covers those three areas:

    from typing import get_type_hints
    from unittest import TestCase, main

    from custom_dataclass import custom_dataclass

    class TestCustomDataclass(TestCase):
        def test_functionality(self):
            @custom_dataclass
            class Pet:
                name: str
                age: int
                sound: str = "woof"

            # check that the __init__ function was
            # created and has the right types
            self.assertTrue(hasattr(Pet, "__init__"))
            self.assertEqual(
                get_type_hints(Pet.__init__),
                {"age": int, "name": str, "sound": str},
            )

            # check that properties were assigned
            # and that the default works
            fido = Pet("fido", 3)
            self.assertEqual(fido.name, "fido")
            self.assertEqual(fido.age, 3)
            self.assertEqual(fido.sound, "woof")

            # check that the default can be overridden
            rover = Pet("rover", 5, "bark")
            self.assertEqual(rover.name, "rover")
            self.assertEqual(rover.age, 5)
            self.assertEqual(rover.sound, "bark")

    if __name__ == "__main__":
        main()

Save that test in a tests.py file. Next to it, create an empty custom_dataclass.py. Unsurprisingly, python tests.py blows up spectacularly; we haven't written anything yet. Let's fix that!

Our "main" function will be custom_dataclass, which takes a class object, does something to it, and returns it:

    def custom_dataclass(cls):
        # ???
        return cls

Replacing that ??? takes a few steps, starting with...

Type Introspection

The first thing we need to do is read the properties defined on the class. In the Pet example above, this is name, age, and sound. Each has a type and one of them has a default. At runtime, type information lives on the .__annotations__ attribute. It's a dict mapping the variable name to the type. Values will either be type objects (like <class 'int'>) or values from the typing package (such as typing.List[int]).
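Before building on that, it helps to see the raw material in isolation. Here's a standalone sketch (using the same Pet shape as above) showing where annotations and assigned defaults actually live at runtime:

```python
class Pet:
    name: str
    age: int
    sound: str = "woof"

# __annotations__ is a plain dict, in declaration order
print(Pet.__annotations__)

# an annotation with an assigned value is also an ordinary class attribute...
print(Pet.sound)

# ...while a bare annotation is not
print(hasattr(Pet, "age"))  # False
```

This is why the code that follows reads types from __annotations__ but has to fetch defaults separately, with getattr on the class itself.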
By iterating over that dict, we can build a list of fields that our dataclass will have. Let's store that field definition in a simple class:

    class Field:
        def __init__(self, name: str, type_: type) -> None:
            self.name = name
            self.type_ = type_

Python style tip: if you want to call a variable the same name as a reserved word, the official style guide recommends adding a trailing underscore to the name. That's why type_ is named as such above. As with any good guideline, exceptions are made for common alternatives (such as cls below).

We can wrap our field-building logic in a simple function:

    def get_fields(cls):
        annotations = getattr(cls, "__annotations__", {})

        fields = []
        for name, type_ in annotations.items():
            fields.append(Field(name, type_))

        return fields

This is a good start, but it's missing one of our promised features: defaults. When a class annotation is assigned a value, it's accessible on the class itself. So, we can look at the class itself to see if the field should have a default:

    default = getattr(cls, name, None)

This works in theory, but there's a bug: how do we differentiate between a value defaulting to None and a value without a default? Situations like this are exactly what sentinel objects are for! Because Python can create globally unique objects, we can always determine if we're seeing a pre-defined object. Here's the updated code:

    MISSING = object()

    class Field:
        def __init__(self, name: str, type_: type, default) -> None:
            ...
            self.default = default

        @property
        def has_default(self) -> bool:
            return self.default is not MISSING

    def get_fields(cls):
        ...
        for name, type_ in annotations.items():
            default = getattr(cls, name, MISSING)
            fields.append(Field(name, type_, default))
        ...

One last thing! These default values will eventually be plugged into a function. As such, we don't want to allow users to pass in mutable defaults (because they get reused across calls). Our get_fields is the perfect place for that check:

    def get_fields(cls):
        ...
        for name, type_ in annotations.items():
            default = getattr(cls, name, MISSING)

            if isinstance(default, (list, dict, set)):
                raise ValueError(
                    f"Don't use mutable type {type(default)} as a default for field {name}"
                )
            ...
        ...

Now we've got an ordered array of Field objects, which hold all the details we'll need to build the text of the __init__ function. Let's add a call in our main function and move on:

    def custom_dataclass(cls):
        ...
        fields = get_fields(cls)
        ...

Writing a Dynamic Function

In this section, we need to build a string that's a valid Python function declaration. It should be called __init__ and should list all of our fields as arguments. Let's start with the easiest part:

    init_fn = 'def __init__(self, ???)'

We also basically know what should replace the ???. For each field, we need:

- the name
- the type annotation
- a default, if it exists

By adding a method to our Field class, this can be done fairly cleanly:

    class Field:
        ...
        @property
        def init_arg(self) -> str:
            return f'{self.name}: {self.type_}{f"={self.default}" if self.has_default else ""}'

It looks like it should work, doesn't it? Unfortunately the stringification of objects isn't necessarily valid Python:

    print(f'name: {str} = {"cool"}')
    # "name: <class 'str'> = cool"

<class isn't how we declare classes and our cool lost its quotes; not cool at all.

Because we can reliably stringify variable names, we should use variables in place of any Python objects. Then we can build an outer function that we'll call with the actual Python values. The wrapper returns a valid __init__ function string.

To ensure we have consistent variable names, we should add some helpers to our Field class. We can then tweak our init_arg function to call them and we're good to go!

    class Field:
        ...
        @property
        def init_arg(self) -> str:
            return f'{self.name}: {self.type_name}{f"={self.default_name}" if self.has_default else ""}'

        @property
        def default_name(self) -> str:
            return f"_dflt_{self.name}"

        @property
        def type_name(self) -> str:
            return f"_type_{self.name}"

This gives us a bulletproof function declaration:

    def custom_dataclass(cls):
        fields = get_fields(cls)

        init_fn_def = f"def __init__(self, {', '.join(f.init_arg for f in fields)}):"
        # "def __init__(self, name: _type_name, age: _type_age, sound: _type_sound = _dflt_sound):"

The last thing in this section is the body of the function. For each property, we need to assign it to self. Let's add one more helper method:

    class Field:
        ...
        @property
        def assginment(self) -> str:
            return f"self.{self.name} = {self.name}"

    def custom_dataclass(cls):
        ...
        assignments = "\n".join([f" {f.assginment}" for f in fields]) or ' pass'

Note the leading space in " {f.assginment}". Because we're now inside a function, we have to indent our code. We also handle the case where there are no properties at all by adding pass to the body.

If we print our init_fn and assignments, we get:

    def __init__(self, name: _type_name, age: _type_age, sound: _type_sound = _dflt_sound):
     self.name = name
     self.age = age
     self.sound = sound

Looks perfect! Now we have to actually fill it with values. If we wanted to write a function to return a function by hand, it would look something like:

    def wrapper(some_type, some_default):
        def inner_func(arg1: some_type = some_default):
            return arg1

        return inner_func

    init_fn = wrapper(int, 4)

Type hints in the editor confirm the function works as expected.

We want to use that same logic to write a wrapper function for our dataclass. Its args should be all the defaults and types that __init__ expects.

Let's work backwards for a sec. When we call this wrapper, we'll need to use the actual values (int, 3, etc). Those currently live in our fields list, but there's a better way to store them.

Since the types will be eventually referenced by their type_name, we can store them that way in a dict:

    def custom_dataclass(cls):
        ...
        locals_ = {}

        for field in fields:
            locals_[field.type_name] = field.type_

            if field.has_default:
                locals_[field.default_name] = field.default
        ...

Now we have all our types and defaults in one place. Even better, they're stored under the keys that __init__ already knows to use. It looks like this:

    {
        '_type_name': <class 'str'>,
        '_type_age': <class 'int'>,
        '_type_sound': <class 'str'>,
        '_dflt_sound': 'woof'
    }

All that remains is to generate the wrapper string that returns the __init__ string we made before:

    def custom_dataclass(cls):
        ...
        wrapper_fn = "\n".join([
            f"def wrapper({', '.join(locals_.keys())}):",
            f" def __init__(self, {', '.join(init_args)}):",
            "\n".join([f"  {f.assginment}" for f in fields]) or "  pass",
            " return __init__",
        ])
        ...

which produces:

    def wrapper(_type_name, _type_age, _type_sound, _dflt_sound):
     def __init__(self, name: _type_name, age: _type_age, sound: _type_sound=_dflt_sound):
      self.name = name
      self.age = age
      self.sound = sound
     return __init__

Note the specific leading spaces as we get further into the functions: def wrapper has none, def __init__ has one, and {f.assginment} has two. It's up to us to tell Python how these functions are nested.

Next, we want to actually run that string as Python code.

Executing a Dynamic Function

The notion of telling Python to execute a string might seem a little odd. But, that's actually what Python always does. python my_file.py basically says "get a string from that file and run it". We're doing that same thing, but we built our string from scratch instead of writing it in a file.

Ultimately, we want to be able to write the line init_fn = wrapper(...) in our custom_dataclass function. To do that, we have to turn our string into an actual callable function. We're most of the way there: it's already got the def keyword that tells Python it should define a function.
How do we bridge the gap between a string and our actual Python code? The exec function to the rescue! Just like its name suggests, the function executes code. There's a slight hitch though - the docs indicate that exec returns None. Where does our function go when it's evaluated?

It goes the same place that any Python declarations go: into the ✨namespace✨. Whenever a piece of Python code runs, it maintains a dict of all the variables that have been declared. Each function (including the root __main__) maintains its own list. That's why you can repeat variable names in separate functions without them overwriting each other.

In addition to a string, the exec function can take a dict that it'll use as a namespace (and modify as needed):

    def custom_dataclass(cls):
        ...
        namespace = {}
        exec(wrapper_fn, None, namespace)
        print(namespace)
        # {'wrapper': <function wrapper at 0x10673dee0>}

That 0x... is a memory address, where Python has stored our newly created function. Now all that's left is to call it:

    def custom_dataclass(cls):
        ...
        namespace = {}
        exec(wrapper_fn, None, namespace)
        init_fn = namespace['wrapper']()
        # TypeError: wrapper() missing 4 required positional arguments:
        # '_type_name', '_type_age', '_type_sound', and '_dflt_sound'

Despite being an error, that's exactly what we want to see! We're (correctly) being told that we haven't provided the args our function expects. Luckily, we've already got the arguments in locals_. Because Python lets us use a dictionary for function args [1], we can call wrapper with our locals_ dictionary:

    def custom_dataclass(cls):
        ...
        init_fn = namespace["wrapper"](**locals_)
        # <function wrapper.<locals>.__init__ at 0x1035f0f70>

Setting an Attribute

The final step is also the easiest. Now that we've got a live function, we add it to the class and return the whole thing.
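Before assembling everything, the exec-into-a-dict move from the previous section is easy to try in isolation. Here's a minimal sketch of my own (the function names are invented for illustration) showing that the executed def lands in the supplied namespace and that the result is an ordinary callable:

```python
# build a function definition as a plain string
src = "\n".join([
    "def make_adder(n):",
    "    def add(x):",
    "        return x + n",
    "    return add",
])

namespace = {}
exec(src, None, namespace)  # defines make_adder inside `namespace`

add_three = namespace["make_adder"](3)
print(add_three(4))  # 7
```

The same pattern - exec the string, then pull the freshly created function out of the dict - is exactly what custom_dataclass does with its wrapper.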
Our entire function (after a little re-organizing) is as follows:

    def custom_dataclass(cls):
        fields = get_fields(cls)

        locals_ = {}
        init_args = []

        for field in fields:
            locals_[field.type_name] = field.type_
            init_args.append(field.init_arg)

            if field.has_default:
                locals_[field.default_name] = field.default

        wrapper_fn = "\n".join(
            [
                f"def wrapper({', '.join(locals_.keys())}):",
                f" def __init__(self, {', '.join(init_args)}):",
                "\n".join([f"  {f.assginment}" for f in fields]) or "  pass",
                " return __init__",
            ]
        )

        namespace = {}
        exec(wrapper_fn, None, namespace)
        init_fn = namespace["wrapper"](**locals_)

        setattr(cls, "__init__", init_fn)

        return cls

Tada! We've got basic functionality working and our test should now pass, so I think this is a good place to stop. The actual implementation does a lot more work, but is also much more complex, internally. We didn't have to worry about inheritance, for instance. If you're curious to learn more, I've linked a bunch of great resources below. Thanks for reading!

Further Resources

- The complete code from this post, including a more extensive test suite.
- The Python docs for the dataclasses package, which cover some neat features we didn't touch on here.
- The original PEP that proposed dataclasses. It explains a lot of the rationale behind them. It's a very approachable read.
- The actual implementation, which handles many more edge cases and adds more functions. In particular, you should recognize parts of the _process_class function and _get_field.
- This video from the author of the dataclass package, who talks a lot about the rationale and implementation.
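As a final recap, here is the whole pipeline compressed into a miniature sketch of my own (no type hints or defaults in the generated signature, so it is deliberately simpler than the article's full implementation):

```python
def mini_dataclass(cls):
    # introspect: field names come from the class annotations
    names = list(getattr(cls, "__annotations__", {}))
    # write: build the source of __init__ as a string
    args = ", ".join(names)
    body = "\n".join(f"    self.{n} = {n}" for n in names) or "    pass"
    src = f"def __init__(self, {args}):\n{body}"
    # execute: run the string and fish the function out of the namespace
    ns = {}
    exec(src, None, ns)
    # attach: bolt the live function onto the class
    setattr(cls, "__init__", ns["__init__"])
    return cls

@mini_dataclass
class Point:
    x: int
    y: int

p = Point(3, 4)
print(p.x, p.y)  # 3 4
```

Introspect, write, execute, attach - the same four steps, a dozen lines.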
https://xavd.id/blog/post/python-dataclasses-from-scratch/
strstr man page

Prolog

This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.

strstr — find a substring

Synopsis

    #include <string.h>

    char *strstr(const char *s1, const char *s2);

Description

The functionality described on this reference page is aligned with the ISO C standard. Any conflict between the requirements described here and the ISO C standard is unintentional. This volume of POSIX.1‐2008 defers to the ISO C standard.

The strstr() function shall locate the first occurrence in the string pointed to by s1 of the sequence of bytes (excluding the terminating NUL character) in the string pointed to by s2.

Return Value

Upon successful completion, strstr() shall return a pointer to the located string or a null pointer if the string is not found. If s2 points to a string with zero length, the function shall return s1.

See Also

The Base Definitions volume of POSIX.1‐2008, <string.h>.
https://www.mankier.com/3p/strstr
class SuffixTree(object):

    class Node(object):
        def __init__(self, lab):
            self.lab = lab  # label on path leading to this node
            self.out = {}   # outgoing edges; maps characters to nodes

    def __init__(self, s):
        """ Make suffix tree, without suffix links, from s in quadratic time
            and linear space """
        s += '$'
        self.root = self.Node(None)
        self.root.out[s[0]] = self.Node(s)  # trie for just longest suf
        # add the rest of the suffixes, from longest to shortest
        for i in range(1, len(s)):
            # start at root; we'll walk down as far as we can go
            cur = self.root
            j = i
            while j < len(s):
                if s[j] in cur.out:
                    child = cur.out[s[j]]
                    lab = child.lab
                    # Walk along edge until we exhaust edge label or
                    # until we mismatch
                    k = j + 1
                    while k - j < len(lab) and s[k] == lab[k - j]:
                        k += 1
                    if k - j == len(lab):
                        cur = child  # we exhausted the edge
                        j = k
                    else:
                        # we fell off in middle of edge
                        cExist, cNew = lab[k - j], s[k]
                        # create "mid": new node bisecting edge
                        mid = self.Node(lab[:k - j])
                        mid.out[cNew] = self.Node(s[k:])
                        # original child becomes mid's child
                        mid.out[cExist] = child
                        # original child's label is curtailed
                        child.lab = lab[k - j:]
                        # mid becomes new child of original parent
                        cur.out[s[j]] = mid
                else:
                    # Fell off tree at a node: make new edge hanging off it
                    cur.out[s[j]] = self.Node(s[j:])

    def followPath(self, s):
        """ Follow path given by s.  If we fall off tree, return None.  If we
            finish mid-edge, return (node, offset) where 'node' is child and
            'offset' is label offset.  If we finish on a node, return (node,
            None). """
        cur = self.root
        i = 0
        while i < len(s):
            c = s[i]
            if c not in cur.out:
                return (None, None)  # fell off at a node
            child = cur.out[s[i]]
            lab = child.lab
            j = i + 1
            while j - i < len(lab) and j < len(s) and s[j] == lab[j - i]:
                j += 1
            if j - i == len(lab):
                cur = child  # exhausted edge
                i = j
            elif j == len(s):
                return (child, j - i)  # exhausted query string in middle of edge
            else:
                return (None, None)  # fell off in the middle of the edge
        return (cur, None)  # exhausted query string at internal node

    def hasSubstring(self, s):
        """ Return true iff s appears as a substring """
        node, off = self.followPath(s)
        return node is not None

    def hasSuffix(self, s):
        """ Return true iff s is a suffix """
        node, off = self.followPath(s)
        if node is None:
            return False  # fell off the tree
        if off is None:
            # finished on top of a node
            return '$' in node.out
        else:
            # finished at offset 'off' within an edge leading to 'node'
            return node.lab[off] == '$'

stree = SuffixTree('there would have been a time for such a word')

stree.hasSubstring('nope')
False

stree.hasSubstring('would have been')
True

stree.hasSubstring('such a word')
True

stree.hasSuffix('would have been')
False

stree.hasSuffix('such a word')
True
http://nbviewer.jupyter.org/github/BenLangmead/comp-genomics-class/blob/master/notebooks/CG_SuffixTree.ipynb
Agenda See also: IRC log nm: Welcome Peter. Let's go around doing brief introductions. jar: (intro) ht: (intro) sgml, xml , more recently status of uris in webarch dka: (intro) mobile web, privacy, social web timbl: (intro) DIG, privacy, policy, semweb UI ashok: (intro) standards, oasis, etc, rdb to rdf plinss: (intro) css, gecko, print as 1st-class citizen on web ... pre-css: object based editor on nextstep, design model. Digital Style Websuite noah: agenda review ... norm w is planning to spend all of wed. with us noah: (re priorities session on thu) we had identified 3 areas, larry has created a 4th area of core technologies (mime, sniffing, etc) ... please think about tradeoffs <DKA> DAP privacy requirements: dka: Looking at DAP group's document on requirements ... javascript apis that access things containing sensitive information - just about anyting ... camera, address book, calendar, orientation, velocity (pointing at table 'how each element is covered' with notice, consent, minimization, etc. rows) dka: what might the tag do to help promote privacy [control] on web? ... set of small, targeted docs that build on work of others (DAP, UCB, others)? ... look at existing docs, amplifying, put in specific web contexts. e.g. (for instance) Hannis (sp?) doc is general, DAP specific to DAP, connect them. projecting the API minimization note dka: come up with several examples of this idea in action ... want to sidestep Ashok's issue - about the Abelson et al. paper pointing out that user dialogs are silly, since they can't assess consequences Ashok: Abelson et al suggests to consider legal accountability as alternative dka: Vodafone privacy counsel said (at workshop) things are coming together on that front ... Minimization is not about this. timbl: Need global change in ethos regarding data use, independent of how they got it ... All these [tactics] need to be in the list dka: Looking for technical [tactics] that TAG might be able to say something about. ... 
image metadata capturing privacy intent? ... If you keep asking people about this, good results are unlikely timbl: What if you say: I want my friends to see my pictures. would be nice if software kept track of how/why friend got them, as reminder dka: Problem - technical jargon in dialog boxes ('GPS coordinates' ...) noah: You're saying the apps should be able to say: I don't need more info than xxx. ... What about malicious apps. dka: Remember this philosophical approach. We tend to get distracted. Need to find particular points to focus on. ... [Solve one problem at a time.] noah: ... But my experience is that most of the problems have to do with attackers ... and exploiters dka: Problem comes with attacker exploiting well-intended app. What to do to well-intended to make it less vulnerable to exploitation ... We need to be clear that even if you do [any particular thing], you won't have a privacy solution noah: Problem is interacting with untrusted services that I need to use. dka: The aggregate amount of info open to abuse is lower if you minimize. So several docs to chip away at specific things, not to provide comprehensive solution <ht> <ht> is what JR is talking about <ht> LM and JK join the meeting at this point jar: security is just one way to support privacy... and need to do lots to get security. least privilege just one. ht: Dan's answer did address Noah's point. By specifying an approach that the platforms subscribe, you bound the damage that the bad guys can do. If they have less info, they can do less. ... You can reduce the bandwidth of any particular API call. This raises the barrier. dka: If the app only needs city location, but has to request fine grained location, ... is the right question being asked [or user, developer, app...?] 
noah: Document needs an intro that sets expectations masinter: Framing = it's warfare, we're minimizing the attack surface <ht> There is a HF/UI design/human engineering issue here which won't go away, but micro-capabilities do create a real opportunity to reduce your exposure, much as they make me tear my hair out as an implementor masinter: To say there's a way around a defense, is not an argument against the defense ) <Zakim> ht, you wanted to support DKA wrt NM's use case and to give the Mark Logic API parallel ht: I use two different xml database systems... the 'open' one has unix style object protection - file x RW ... the commercial one has about 60-70 capabilities. almost 1-1 on API calls, file x cap ... bigger effort to manage for both users and developers. ... you get high degree of control. Compare minimization. You have to get informed consent, but if it's granular enough you get questions that are specific enough to make sense dka: Resistance to normative requirements for UI design, esp. re privacy ... The minimization approach doesn't impose specific UI requirements. This might enable creative UI design johnk: There's always a useability tradeoff in security. E.g. facebook has tons of knobs ... but underneath there's a simple set of access control privs ... e.g. app needs to do something special to get email address ... This is a usability issue, a tradeoff dka: Re minimization, the approach stands, since it says nothing about the user interaction. [API and UI needn't slavishly correspond] <Zakim> johnk, you wanted to mention the FB API model noah: Proposal? <noah> I'm asking: what do you propose we do that will have real, useful impact for the community? dka: Useful output might be: Umbrella document. Privacy and webarch. Subdocuments, e.g. minimization. masinter: Big discussion on privacy in larger community. 
Our schedule should coordinate with external events <Zakim> masinter, you wanted to talk about participating in larger discussion masinter: What does API minimization have to do with HTTP? jar (under breath): there are HTTP APIs noah: DKA, can we get together and make a straw-man product proposal? masinter: E.g. can be a problem sending info in Accept: headers when it's not needed in order for server to do its job ... Trying to suggest how to expand this from a DAP point to a TAG point timbl: (masinter, you missed the beginning of the session) (break) jk: I was asked to frame section 7 of the webarch report on apps ... Wanted to echo [style of] Larry's MIME writeup ... If you start with browser/server/protocol, and trace history of the three with a security focus... ... start with just getting a doc. ... then more support in http. history in doc is well known but worth reviewing ... NN2 introduced cookies, and cookies needed origin <masinter> jk: Related to lots of security issues. State in protocol. Origin and document not linked securely. Why should you trust the DNS? timbl: It assumes there's a social connection between - and -. There was a trust model, it just wasn't cryptographically secure. jk: These are layered protocols, that makes security harder. eg. DNSsec isn't bound to higher protocols ht: scripts?? jk: Dynamically loaded scripts not subject to SOP noah: XML and JSON is good example - the weaker language was subject to tighter security controls - dumb ht: script with a source tag predates JSON. it was never subject to SOP ?? <masinter> timbl: Suddenly all these APIs have this extra parameter, the calling function ... <timbl> the function to be called by the injkected script tag jk: Cookies were easiest way to do session indicator. shopping carts and so on. ... AJAX was other driver ... XHR does use SOP, but using JSONP you can circumvent it ... apps send cookies from one place to another ... 
Trying to abstract away, to find security issues as opposed to implementation bugs. What issues are architectural in these examples ... One is, when doc contains multiple parts, contributed from different security domains noah: (When did we stop using the term 'representation'?) jk: If you don't mediate the interaction, e.g. using sandbox, bad things happen. ... e.g. runaway cpu time ... Silent redirects. Malicious site forwards, cookies sent to 2nd site -> clickjacking ... Authentication based on Referer: (i.e. referrer) header ... Servers depend on client to do the right thing, in particular proper origin processing ... Specs are difficult read, so there can be broken user agents. ... My advice: Server should not trust user agents. What are circumstances in which you can server can align with user timbl: We need to preserve the role of the user-agent as the agent of the (human) user. johnk: Yes, but we need to be a bit more nuanced. There shouldn't be inordinate trust in a class of agents. One should only need to trust an agent to a certain degree. noah: Users don't understand UAs well enough to be able to discriminate.. <masinter> somehow I want to bring in' timbl: That doesn't diminish the responsibility of UAs ... One of the the things the TAG does is to ascribe blame johnk: Who's responsible for a clickjacking attack? Software was behaving per spec masinter: Users are presented choices that they don't understand johnk: Not much you can do about that - masinter: don't require users to make decisions that they don't understand. design principle. ... optimize a match between what user wants and what happens. doesn't matter whether choices are simple or complex pl: You said simplicity might be better - maybe so at user level, not nec. across the system <masinter> complex choices are less likely to be understood, but simple choices might be a problem (scribe notes that henry suggested just the opposite. 
see above) jk: Cache poisoning might mean no link between IP and domain name... in fact no way to guarantee domain name ownership <masinter> want to talk about TAG work in context with <masinter> Oct 2010 Submit 'HTTP Application Security Problem Statement and Requirements' as initial WG item. -- don't see that document jk: ssl... data not encrypted on hotspot timbl: Firefox 'get me out of here' jk: When you run web content, the content starts being rendered immediately - there is no install step. It just starts running ht: I've been manually virus checking every downloaded app. Can't do this with pages masinter: some antivirus sw modifies the HTTP stack noah: Also you lose the ability to make sticky decisions. Nextbus is an example of non-installed app but that you come back to repeatedly ... you keep getting asked for permssion to use location. annoying timbl: But most browsers do this well ? jk: Lack of tie-in between host naming and where you access the doc (where published) ... who is responsible for the content of the document? Nonrepudiation. timbl: You can sign the document until you're blue in the face ... noah: Doc is written by an expert, would be helpful if some of the examples were spelled out in more detail masinter: Security WG calls for a [...] document. Is what we're doing related to their work item? ... They have a bunch of specific documents, but nothing at this level jk: Their docs are very narrow masinter: No, look at their charter Oct 2010 Submit 'HTTP Application Security Problem Statement and Requirements' as initial WG item. masinter: Isn't this what we're doing? jk: The issue of mime sniffing. It became a good idea for the browser to ignore media type... problem is guessing user intent (slight aside) jk: So what would be desirable properties of security webarch? (reviewing doc) noah: please clarify use of 'web agent' ... 'tie' isn't evocative - what constitutes success? what system properties are we after? timbl: E.g. 
maybe avoid separation of authentication and authorization jk: App layer with signed piece of content, same key should be used in both levels of protocol stack (or at least related) timbl: WebID people have expereienced this need - converting keys between apps / layers - PGP to log in using ssh etc. ht: I'm having to use Kerberos - very inconvenient - when I ssh from laptop home I need a kerberos principal... way too much work... [so unification cuts both ways?] timbl: but kerberos isn't public-key ... The thing about connecting the two parts together is valuable jk: WebID is a case where it can't be done. User generates a cert, puts it in foaf file. Impossible to tie foaf description of me with me the person. masinter: can show 1 person wrote 2 things noah: Same issue as in PGP - you have to be careful when first picking up the key jk: what's the purpose of encrypting the assertion (in webid)?... ... 3rd bullet in properties section: We should be able to do what the original web design wanted us to do timbl: But doesn't CORS do this for us? jar: Controversial. <masinter> W3C TAG should be a participant in overall work on web security, including other work in IETF and W3C <noah> ACTION-417? <masinter> action-417? <trackbot> ACTION-417 -- John Kemp to frame section 7, security -- due 2011-01-25 -- OPEN <trackbot> <masinter> ACTION-417? <trackbot> ACTION-417 -- John Kemp to frame section 7, security -- due 2011-01-25 -- OPEN <trackbot> masinter: There's ongoing work. We should review it regularly and be seen as a participant. The way to do that is to publish a note, and announce, repeat. But be clear that we're not trying to take the lead. noah: But the action was to frame a section of our document... 
<masinter> The W3C chapter on security on the web could identify that there are some issues and point at other groups that are working on the problems <masinter> W3C TAG should have input on W3C activities decisions, and this should be a W3C activity, on "security and privacy" ashok: Let's close 417, start another one to write a note. If that becomes bigger/better, fine. masinter: In general the TAG should be more involved in setting up W3C activities. timbl: So far it's just been a series of workshops, not an activity ashok: Privacy at w3 is morphing masinter: Would like to see a note out before Prague meeting (end of March) <noah> noah: any objection to a proposal to close ACTION-417, and have John publish what he's got, slightly cleaned up, as a note with no formal status, but at a stable URI. Noah will help. <noah> Larry will help too, and would like this done in time for IETF in Prague. <noah> PROPOSAL: close ACTION-417, and have John publish what he's got, slightly cleaned up, as a note with no formal status, but at a stable URI. Noah will help. <noah> No objections. <noah> close ACTION-417 <trackbot> ACTION-417 Frame section 7, security closed <noah> action John to publish, slightly cleaned up, with help from Noah and Larry Due: 2011-03-07 <trackbot> Sorry, couldn't find user - John <noah> action Larry (as trackbot proxy for John) who will publish, slightly cleaned up, with help from Noah and Larry Due: 2011-03-07 <trackbot> Created ACTION-515 - (as trackbot proxy for John) who will publish, slightly cleaned up, with help from Noah and Larry Due: 2011-03-07 [on Larry Masinter - due 2011-02-15]. <noah> ACTION: Noah to talk with Thomas Roessler about organizing W3C architecture work on security [recorded in] <trackbot> Created ACTION-516 - Talk with Thomas Roessler about organizing W3C architecture work on security [on Noah Mendelsohn - due 2011-02-15]. <DKA> Scribe: Dan <DKA> ScribeNick: DKA [roll call] Noah: Ted is joining us. 
Noah: [background] there are certain resources w3c publishes on its website - e.g. dtds... ... certain organizations were [fetching] these resources a lot. <ted> summary Yves wrote of actions taken by W3C Noah: practical question: what can be done? Architectural question: what can be fixed in the architecture? <ted> article on DTD traffic Noah: one angle proposed is : what would be the role of a catalog? You could tell people that certain resources won't change or won't change any time soon so they could build [their products] not to fetch these resources. ... Anything else from Ted? Ted: We've employed some different techniques - for certain patterns we've given http 503 after reaching a threashold. At peaks, we see half a billion a day. Starts to become a problem. Sometimes this has resulted in blocking organizations. ... if it's an organization that is a member then we pursue through the AC rep... ... this doesn't scale well. ... there are several big libraries - eg. msxml - they've put a fix in which has led to a sharp decline. ... Norm Walsh came up with a URI resolver in Java that would implement a caching catalog solution but this never made its way into Sun JDK. ... Sun has been bought by Oracle so now we are talking to Oracle engineers and they have been responsive. Trying to see if we can get something into next JDK. ... We had a fast response from Python. Noah: Do you ask these people to implement caching or a catalog? Ted: We suggest either. I like the caching catalog solution [from Norm]. ... we educate, we block, we have a high-volume proxy front-end that distinguishes traffic... ... when we explain to people that this is not good architecture - receiving the same thing over the network 100000's times a day - they agree. ... we probably should be in the business of packaging and promoting the catalog. Henry has done some work on this. ... 
the idea we came up with - find the most popular ones based on traffic and we routinely package these up, have RSS feeds to alert to catalog changes, talk to Oracle, Microsoft, Python, etc... get some of the bigger customers out there to adopt the catalog. ... meta-topic (that the TAG is concerned with) is the scalability of URIs in general. There is a lack of directives to do rate limiting, to set boundaries, how to scale URIs... Could be useful in dealing with DDOS attacks. <Zakim> timbl, you wanted to wonder about RSS feeds for updates to things with distant expiry dates. <masinter> who's here? Tim: We don't have real push technology available (apart from Email) but supposing we make a package [a catalog] and we send them out. Then an erratum comes in for something that has a 12 month expiry date. Do we need a revocation mechanism? Henry: I think there's an 80/20 point. Speaking as a user, I'm grateful for the shift from the 503s to the tarpitting. ... the delay of 30 seconds helps to remind people to get the catalog. ... if we get system administrators to install the catalog in a way that would cause the tools to find them, then I don't think there's an expiry problem. Tim: We have to consider the new and the old separately. <scribe> ... new systems could be designed differently. The total load on the server from the HTML dtd will go down over time. ... so that the chance of finding a copy locally (of a DTD) would be quite high. ... after the Egypt situation, there's been a lot of interest in this. ... I'd love to have the TAG push that forward. <Zakim> timbl, you wanted to mention HTTP automatically morphing to P2P when under stress <Zakim> noah, you wanted to talk about what's required vs. what's desirable Noah: I think the role for the TAG is to talk about the broader problem that is not specific to particular resources like the HTML dtd.
When an organization I worked with came across this problem, the response from some was "well you should be running a proxy" - that sounds good, but for that organization running a proxy would have meant paying for and maintaining racks of proxy servers, and in the many cases where caches miss, adding overhead. As it turned out, buying more T1 lines or whatever was far cheaper than running the proxies. Most importantly, I don't believe that failure to run a proxy violates any normative specifications. ... so: we could clarify the responsibilities that people have to cache or to not cache. ... should we change the normative specs? ... [some will push back] ... for long term - we could break open this protocol http version 2. Ted: Looking over the rfc-2616, the language is "should" around caching of http. ... it's optional and treated as such. ... lighter-weight implementations tend to be very barebones. ... I think promoting catalogs is the way to go - and we should work to get major libraries to include it, ship it, and have it enabled by default. ... I think the focus for the TAG should be in the meta problem. How to make URIs and web sites scale. ... Sites do get overwhelmed. There is no way to let consumers of this data know what is acceptable behaviour besides sending back a 503. <masinter> should also note that HTML itself has gotten rid of DTDs. But isn't main problem giving out "http:" URIs in the first place? Ted: we see lots of sites experiencing similar problems. <Zakim> ht, you wanted to speak up for the user Noah: I read it as a MAY in rfc-2616 <noah> From RFC-2616 section 13.2.1: <noah> "The primary mechanism for avoiding <noah> requests is for an origin server to provide an explicit expiration <noah> time in the future, indicating that a response MAY be used to satisfy <noah> subsequent requests." <noah> So, it's a MAY not a SHOULD. Henry: I'm concerned about the message we're sending to students "you should produce valid html, valid XML, etc..."
and yet when they try to validate their documents they have to wait 30 seconds, because the web page has the public identifier. Tim: Why does the validator not cache it? Henry: Because the number of validators out there is quite large, and the free ones (while they support catalogs) don't distribute the catalog of DTDs as part of their install. <ted> [libxml from the beginning shipped w a catalog] <masinter> valid HTML no longer has a doctype Tim: That can be fixed relatively easily - the DTDs can be wired into the code for things that aren't going to change any more. Henry: The crucial people you need to convince are the open source implementers. Noah: in many cases, when you dig into what needs to be fixed, it is not straightforward to change all the implementations... Henry: I am more worried about the people [students] who are the future of the Web. The people who use off-the-shelf free validator tools and get burned. <Zakim> masinter, you wanted to give strawman: specs were wrong, so asking people to run a proxy is really only to compensate for our failures Noah: should we undertake any work to help Ted and/or ongoing work. Larry: I think it was a serious design mistake to put a URL in a document that you didn't want anyone to retrieve and not tell them that. ... all of these proxies are compensating for someone else's mistake. ... <Zakim> johnk, you wanted to note that waiting 30 seconds should be to encourage alternate behaviour Larry: We should think of the architectural design flaw here and make sure we don't do this again. <masinter> "there are no cool URLs, everything changes eventually" John: Pragmatically, tarpitting requests that are overwhelming your server seems like the right way to deal with it [counterpoint to Henry's statement]. They should learn that they are doing something wrong. <masinter> "the URL is already broken" <Zakim> noah, you wanted to ask: is it a mistake?
John: I'm worried we're going to overthink this, when education plus pragmatic tarpitting could be the right response. Noah: My inclination is close to John's. This is a big distributed file system. The system should cope with this, or else the specs should be changed to make it an error to [request at a certain rate, not cache, whatever, TBD]. <Zakim> timbl, you wanted to long tail Tim: There are lots of DTD-like things out there. We need to be able to cope with various different scaling. We could provide some specific tailored response for these w3c issues. There may be similar things with some libraries... Noah: Let's say there are 100,000 ontologies, getting a lot of traffic. Let's say I work my way through 100,000 ontologies in a loop. Should I also be tarpitted? Tim: No. ... I wouldn't want to mess up the fact that in general you should be able to dereference a dtd if you want to. ... Publishers can take care of it. Tim: For the case of harry potter, the book industry operates differently, because it's a different scale of usage. Jonathan: Transaction costs [on the web] are so much lower. Inexpensive social expectations. Jonathan: it's a question of economics in relation to social expectations. ... who pays for what. <masinter> One downside of using URIs for things other than href@a and img@src is that these scale issues arise. This has been an architectural principle, to use http: URIs for things that you don't really intend to be referenced. it's not the only downside <noah> I guess I just disagree that they should not be dereferenced <noah> On the contrary, we've said that when you make things like namespaces, we want you to use http-scheme URIs precisely so that you CAN dereference them. <noah> Larry, these DTD references are like img src -- each of the references is from an HTML document. ... The mismatch has led to a couple of problems. <Zakim> noah, you wanted to disagree with Larry Larry: let's acknowledge the problem.
Noah: [disagreeing on the different scaling model between DTDs and IMG SRC...] Ted: To Tim's point: a software engineer comes up with a brand new ontology, puts it on his web site, it becomes popular - he will have the same headaches and hassles as we do. Noah: If the Apache server came pre-configured to handle the load would you be happy? Ted: Yes, for example, if Apache told search engines "I'm busy right now please come back later" then that would be good. You can't express in HTTP your pain threshold. Tim: TCP works really well because you stuff in as much as you can. It was designed at 300 baud times and it works at 300 gigabit times. ... You want to have negotiated quality of service. Noah: That didn't come easily. Van Jacobson did a lot of hard and rather subtle work to get that scalability into TCP. <masinter> speaking of Van Jacobson, <Zakim> masinter, you wanted to say that the problem was the W3C published a STANDARD that pointed to a http URI rather than something more permanent and to Larry: Van Jacobson - has an interesting project on content-centric networks that we might want to look into. [debate on whether DTDs are intended to be retrieved or not] Noah: Next steps... <noah> ACTION-390? <trackbot> ACTION-390 -- Daniel Appelquist to review ISSUE-58 and suggest next steps -- due 2011-03-01 -- OPEN <trackbot> Dan: I don't have an answer... <ted> ted: the # (2-3?) of connection limit per ip gets in the way of user experiences as well, making CDN more popular. as administrator i would like to improve a user's browser experience (faster load time) and allow in some cases more concurrent connections Noah: The simple answer is to [keep this on the back burner]. I need a proposal on what we should do and who does it. <ted> ted: I also want to encourage search engines to crawl me and do so efficiently when convenient for me Noah: I think we need a short finding on what people's responsibilities are regarding caching. 
Henry: I will reach out to [authors of XML parsers]. Tim: We should write what we want clients to do. <masinter> wonder if Henry could write up what he's asking and what they say or do? Henry: A good idea is - what Ted mentioned - an adaptive caching mechanism. Noah: We could talk about turning the MAY in rfc-2616 to a SHOULD. Larry: I am against that. I think it's the wrong place. Noah: When you have a piece of software that is in a position to detect repeated requests, you should cache. <ted> [if caching was less optional and more widely deployed on net popular resources would scale better and performance would be better] John: [supporting tarpitting] ... I think it should be cached in the open source code level... <masinter> (a) I don't think we can quickly come to a conclusion, but (b) Henry has agreed to ask tool authors to do something, (c) think we could endorse what Henry asks if the tool authors are willing to go along with it <johnk> Norm has written about this; [discussion of caching catalog and whether or not it's a catalog] <masinter> for example, "clear my cache" for privacy reasons might not clear the catalog Henry: the OASIS catalog is just a string-to-string matcher, matching HTTP URIs to local disk copies. Larry: for privacy reasons you might want to say "clear my cache" but that wouldn't clear my catalog. Noah: What's implicit in John's proposal: separation of concerns. Tim: I hope you wouldn't expect clients to spot that the tcp connection is going slowly... Noah: the server is creating a network that is robust against traffic access pattern. Different clients will make different choices. A client might not need to change anything [in the case of e.g. tarpitting]. [if you are not time sensitive] Larry: Henry - I would like you to document what you tell [the implementors] and report back what they say. Dan: on the p2p topic - should we be doing something here? Henry: I don't know enough about the next gen internet...
Tim: I don't think that internet2 is reinventing http. <ted> [p2p has too much overhead (startup time to connect to peers) imho to be worthwhile for small resources. yves makes that point as well in his email]. ... but we need people who want to put time into that. ... web) is going to survive. ... [clarifying] as the future becomes clearer, we need to start tracking it ... <Zakim> ted, you wanted to put that on rec Noah: I want to focus this on next steps. <ted> ted: ^^ comment on merits of caching. in practice as we've heard from noah the costs of maintaining caching proxies too high compared to bandwidth. <ted> ted: glad to hear larry's comment. get library developers to implement what ht suggests. i heard ht (and others) liked norm's caching catalog. would oracle implement it in jdk? <Zakim> masinter, you wanted to suggest Henry write that up Ted: [ speaking in support of the caching catalog approach ] <Zakim> DKA, you wanted to remind people that just because there is a next-gen or internet2 activity doesn't mean that will be the future of the internet. :) NM: Ted, anything high priority you want the TAG to do? TG: Day by day, we're getting by. The catalog work would be helpful. What seems really useful is for the TAG to tackle the meta-issue. TG: Directives are potentially useful; peer-to-peer seems most applicable for large things. NM: Large or high volume? TG: P2P startup times are typically significant, so large resources. <ted> [and p2p could be interesting failover for http] NM: Floor is open for volunteers <noah> ACTION: Larry to help us figure out whether to say anything about scalability of access at IETF panel [recorded in] <trackbot> Created ACTION-517 - Help us figure out whether to say anything about scalability of access at IETF panel [on Larry Masinter - due 2011-02-15]. <ht> trackbot, status? <ht> ACTION: Henry S.
to report back on efforts to get undertakings from open-source tool authors to ship pre-provisioned catalogs configured into their tools [recorded in] <trackbot> Created ACTION-518 - S. to report back on efforts to get undertakings from open-source tool authors to ship pre-provisioned catalogs configured into their tools [on Henry S. Thompson - due 2011-02-15]. <noah> . ACTION Peter to frame architectural opportunities relating to scalability of resource access <ht> trackbot, action-518 due 2011-07-15 <trackbot> ACTION-518 S. to report back on efforts to get undertakings from open-source tool authors to ship pre-provisioned catalogs configured into their tools due date now 2011-07-15 <noah> ACTION Peter to frame architectural opportunities relating to scalability of resource access Due: 2011-03-15 <trackbot> Created ACTION-519 - Frame architectural opportunities relating to scalability of resource access Due: 2011-03-15 [on Peter Linss - due 2011-02-15]. <noah> close ACTION-390 <trackbot> ACTION-390 Review ISSUE-58 and suggest next steps closed <noah> ACTION-514 Due 2011-03-01 <trackbot> ACTION-514 Draft finding on API minimization Due: 2011-02-01 due date now 2011-03-01 <noah> (that should have been fixed this morning) Noah: I think this draft would be better if it focused on making a few key points. Ashok: there is at the end a section on recommendations... ... sections 4, 5 and 6 are the heart of it. Noah: Can it be abstracted into a one or 2 sentence best practice? [ looking through section 4 and picking out BP statements ] <noah> I'm seeing as potential recommendations: <noah> As the state of the resource and the display changes, the fragment identifier can be changed to keep track of the state. <masinter> I think the wording would be better if recast a bit <noah> ...and... <noah> if the URI is sent to someone else the fragment identifier can be used to recreate the state. 
<masinter> "the application can be designed so that the fragment identifier 'identifies' the state" <noah> NM: What about "?" vs. "#"? Ashok: I have added one paragraph - in the google maps case which I think talks about that. <noah> AM: I added a para about that. <masinter> "the application can be designed so that the fragment identifier identifies or encodes the relevant transferable parts of the state". Ashok: Yes. Larry: the application can be designed so that the fragment identifier identifies or encodes the relevant transferable parts of the state Ashok: Yes. Larry: in the case of a map application with a lot of state, then you want the app to be designed so that the URI contains the [part of the state that you want to be transferred to another client] <Ashok> Larry: You can design the app so that the frag id identifies or encodes the state you want uniformly referenced Larry: the part that you want to have uniformly referenced. Noah: let's suspend disbelief and assume that google maps used hash signs. The question is: state of what? [demonstrates using google maps] Ashok: What [gmaps Noah: there are a lot of http interactions under the covers... ... let's be careful about what is the transferable part of the resource... ... originally, [in the case of gmaps] an http request was made for the generic document. ... scrolling through this map feels like scrolling through an http document. ... the question I want to raise: for this class of apps, you emphasise that there is a virtual document that is the map... Ashok: [points to text in:] ... we can work on this wording... Tim: When you're looking at the map... It's interesting that you don't use the hash as you drive around... They do not use the hash, but they could... Ashok: the question mark tells you what to bring from the server. the hash would not tell you that. Tim: they both would... Ashok: I disagree it could be done with the hash. Tim: What comes back on the response is a piece of javascript.
The javascript then starts pulling in all the tiles. Ashok: if the only thing that comes back is javascript on the first get... [then it could be hash...] Noah: I think one of the attractions of this - is you don't have to do the distribution in the same way in all cases. If I use the hash sign and use it in an email reader, the typical email client [wouldn't handle it correctly]. ... [disables javascript and reloads the map from google maps; it works] ... You couldn't do that with the hash sign. Ashok: Your first access gets you the app plus some javascript... Noah: Where does the word representation apply? In the case of Google Maps, is it a representation of the original URI with query parms when the rendering is assembled with Javascript, client side? Tim: yes, it's a representation. ... lots and lots of web pages are filled in with javascript. Noah: Ok - it would be good to tell that story. Many web pages do this. There may be other ajax apps where you get different behavior. Ashok: I'll ask TV if he can tell us what goes on under the covers [of google maps]. <johnk> example 3 talks about client URI generation - Tim: History manipulation - to be able to change the behavior of the back button and change what's in the location bar - is in firefox 4. Ashok: [talking through section 6] ... Do these or don't these violate specs and what do / should we do? ... frag ids for html and xml... many media types don't define usage of frag ids... Larry: But we are specifically talking about http and html... Ashok: [last paragraph] - "active content" Larry: When you talk about URIs do you mean URIs in general, or just http URIs...? ... [you need to be specific.] Tim: I think we should make people feel bad about using hash in this way. We should change the specs. Larry: We should fix the specs to match. Henry: I'm happier with doing this if we can say "because it's not incompatible" with the speced story. Larry: originally content was static.
Fragment ids were pointers to static positions. Now content is active... Henry: the interpretation of stuff after the hash should be client side... [broad agreement] Larry: it would be great if URIs worked [interoperated] between google maps and yahoo maps... Henry: Historically the spec told you that all you needed to know was the media type of the response, now it's more tightly coupled. <Ashok> The page tells you what the fragId is used for Tim: what's interesting about the maps space - it would be great if the user has independent control over what happens when you get a GEO URI... what service you want to use... John: Lat and Long have meaning in the real world. You also have the position on a map, which is different from the real space. The third part is the panning and zooming. Tim: all you need is the lat - lon. ... the user [should] just see lat, long. <ht> There has been a real change in where the responsibility for determining the meaning of the post-# strings lies <ht> Per the existing specs, it's global, and lies in the media type registration <ht> Per the practice under discussion, it lies with the [transitive closure of] the representation retrieved for the pre-# URI <ht> This is parallel to where the code comes from that _implements_ the semantics: for the existing spec. story, it's in the UA from the beginning, because it's known at UA-creation time, because it comes from the media type spec. <ht> whereas for the new usage, it's in the retrieved representation itself John: I think this goes back to the coupling issue. Ashok: [back to the document] Section 7 - I didn't do anything with it - Yves says take it out... Noah: It feels like we haven't nailed the good practices and recommendation. There are some interesting bits here. I'd like to see them in support of some news [some concrete recommendations]. Then we could see what other groups we need to coordinate with. [back up to section 4] <noah> Noah: Not happy with the word "operate" in section 4.
[discussion on the wording] <noah> Noah: I think it's more like: the JavaScript uses the fragment identifier as well as other information to render and support interaction with the representation(?) of the resource. <noah> Noah: On "As the state of the resource and the display changes, the fragment identifier can be changed to keep track of the state." Yes, but we need to get clear on pros and cons of ? vs. # Dan: do you need to assume programmatic access to the history/address bar? <noah> TBL: The key point on # vs ? is that when you update the address bar, the page >will< reload. In the case of #, well, the right document is already loaded. In the case of ?, the tendency would be to reload the page. <noah> TBL: Right, and when the GET happens, you lose state. Noah: This finding has been slowly evolving. Need to hear from the TAG: we need to focus on it, get it to where people are happy and move ahead. +1 on its usefulness. Jonathan: I am not worked up about it. My focus tends to be on what the stuff means, independent of the protocols. ... I can't figure out who it would help or who would pay attention... Larry: the media type registration needs to say (for active content) when and how those parameters are passed to the active content. We are extending something originally designed for passive content to change for active content. Henry: So this should be a story about how we think about media type registration in the space [active content] that we are now living in. Larry: ...make the frag identifiers useful for the portion of the state that you are interested in [uniformly referencing]. ... We could start with the current document as a note and use that as a basis to add something to the mime-web document and maybe another document. Noah: the document either has to cut the advice out, or it needs to give advice in close to the style that we've done in findings. "Good practice: xxx, explanation"... ... or describe use cases. ...
Ashok: I think that work needs to be done before publishing it as a note. Larry: I'm OK with it. The context is a discovery... Dan: I think that sounds like the right approach - reformatting / expanding some of the recommendations and publishing it as a note. John: I think it makes sense to document things we'd like to see happen. ... highlighting that kind of usage is good. But I worry that it's getting a bit wooly. ... I told Raman when I reviewed this document that he could pull out 2 things - the same things referenced in section 4 of the current document. Ashok: I think we can make this [section 4] better. ... If people think that after that we can publish this as a note, great. Following that, if you want something smaller - one page, about spec recommendations, then we can pull that out. Noah: that could be as simple as giving someone an action... <masinter> action-508? <trackbot> ACTION-508 -- Larry Masinter to draft proposed bug report regarding interpretation of fragid in HTML-based AJAX apps Due: 2011-01-03 -- due 2011-02-12 -- OPEN <trackbot> <masinter> action-500? <trackbot> ACTION-500 -- Larry Masinter to coordinate about TAG participation in IETF/IAB panel at March 2011 IETF -- due 2011-02-15 -- OPEN <trackbot> <noah> Leave ACTION-481 as is <noah> ACTION-508? <trackbot> ACTION-508 -- Larry Masinter to draft proposed bug report regarding interpretation of fragid in HTML-based AJAX apps Due: 2011-01-03 -- due 2011-02-12 -- OPEN <trackbot> <noah> LM: Ashok's document should be a stable reference. <noah> ACTION-508 Due 2011-02-22 <trackbot> ACTION-508 Draft proposed bug report regarding interpretation of fragid in HTML-based AJAX apps Due: 2011-01-03 due date now 2011-02-22 <masinter> action-508 should say that the problem is that #XXXX are parameters to active content Larry: What is the boundary between "the web" and the "rest of the Internet"? ISSUE-500? <trackbot> ISSUE-500 does not exist <masinter> issue-500?
<trackbot> ISSUE-500 does not exist <masinter> action-500? <trackbot> ACTION-500 -- Larry Masinter to coordinate about TAG participation in IETF/IAB panel at March 2011 IETF -- due 2011-02-15 -- OPEN <trackbot> <Yves> [re: Ashok's document on fragments, I'll send further comments/help working on it] [debate on what is implied by the quote from the IAB] <Ashok> Thanks, Yves! Noah: The TAG has decided to say yes to participating on the IETF panel in Prague. Noah: Once again, welcome to Peter. ... Minutes of the 20th - approved? RESOLUTION: telcon minutes of 20 January 2011 are approved. Noah: Note that TPAC is happening November in Santa Clara. ... we would normally meet sometime in may timeframe. there is an ac meeting in bilbao, spain in may. ... so - open to suggestions. ... we could meet in Cambridge again... Tim: 11-12-13 of May in London...? Noah: Doesn't work for me. ... Who else is going to the ac meeting? ... 9-11 in the UK? Larry: Week of the 9th I am completely booked. Noah: Week after the AC? [week of the 23rd] [not good for Tim] Noah: Week of June 6? ... 7-8-9 of June? Tim: Yes could do it - would have to be in Cambridge. Noah: Formal proposal - 7-9 June in cambridge Mass for next TAG f2f meeting. +1 Noah: Should we talk about September? Henry: I would be happy to host. +1 to edinburgh in September. <noah> ACTION: Settle London vs. Edinburgh for Sept. 13-15 F2F Due 2011-05-31 [recorded in] <trackbot> Sorry, couldn't find user - Settle <noah> RESOLUTION: The TAG will meet at MIT 7-9 June RESOLUTION: The TAG will meet in Cambridge 7-9 June 2011 NOTE: Later, at Tim's request, we changed this to 6-8 June 2011 <noah> ACTION: Noah to settle London vs. Edinburgh for Sept. 13-15 F2F Due 2011-05-31 [recorded in] <trackbot> Created ACTION-520 - Settle London vs. Edinburgh for Sept. 13-15 F2F Due 2011-05-31 [on Noah Mendelsohn - due 2011-02-15]. 
<noah> RESOLUTION: The TAG will meet in the UK 13-15 Sept, either Edinburgh or London, TBD see ACTION-520 RESOLUTION: The TAG will meet in the UK 13-15 Sept, either Edinburgh or London, TBD see ACTION-520
http://www.w3.org/2001/tag/2011/02/08-minutes
How to draw a horizontal line??

I want to draw a horizontal line through code and I tried something like below

drawline.h

#ifndef DRAWLINE_H
#define DRAWLINE_H

#include <QWidget>
#include <QLineF>
#include <QPainter>
#include <QDebug>
#include <QGridLayout>

class DrawLine : public QWidget
{
public:
    DrawLine(QWidget *parent = 0);
    ~DrawLine();
    void lineEvent();
    void paintEvent(QPaintEvent *);
private:
    QPainter paint;
    QLineF line;
    QGridLayout *gLayout;
};

#endif // DRAWLINE_H

drawline.cpp

#include "drawline.h"

DrawLine::DrawLine(QWidget *parent) : QWidget(parent), paint(this)
{
    gLayout = new QGridLayout;
    this->setLayout(gLayout);
    qDebug() << "Inside Constructor";
}

DrawLine::~DrawLine()
{
    qDebug() << "Inside Destructor";
}

void DrawLine::paintEvent(QPaintEvent *)
{
    qDebug() << "Inside paintEvent()";
    paint.setPen(QPen(Qt::black, 10));
    line.setP1(QPointF(80,80));
    line.setAngle(45);
    line.setLength(50);
    paint.drawLine(line);
}

main.cpp

#include "drawline.h"
#include <QApplication>

int main(int argc, char **argv)
{
    QApplication a(argc, argv);
    DrawLine *dLine = new DrawLine;
    dLine->show();
    return a.exec();
}

This code builds fine, but when I run it the following errors occur:

QWidget::paintEngine: Should no longer be called
QPainter::begin: Paint device returned engine == 0, type: 1
Inside Constructor
setGeometry: Unable to set geometry 22x22+363+124 on QWidgetWindow/'QWidgetClassWindow'. Resulting geometry: 116x22+363+124 (frame: 8, 30, 8, 8, custom margin: 0, 0, 0, 0, minimum size: 22x22, maximum size: 16777215x16777215).
Inside paintEvent()
QPainter::setPen: Painter not active
Inside paintEvent()
QPainter::setPen: Painter not active

Can somebody help me fix it?

- Chris Kawa (Moderators), last edited by Chris Kawa

There are a couple of errors here and some weird unused code. Let's go over it:

setGeometry: Unable to set geometry (...) - this is just a warning. A widget has a 0 minimum size and minimum size hint by default.
You used a layout on it so it has some margins (11 px in this case). Because of the frame of the window a window manager can't create a window that small, so it resizes it to the smallest possible size and lets you know with a warning. To fix it either use setMinimumSize() or override minimumSizeHint for your widget.

QPainter related messages - you're creating a painter as a class member variable. Don't do that. Create the painter locally in the paint event with a proper surface to use it on:

void DrawLine::paintEvent(QPaintEvent *)
{
    QPainter p(this);
    //paint using p
    ...

There's no such thing as a lineEvent() - what is that?

What's the layout for? Are you gonna put something in it? If yes then remember that it will obscure whatever you paint in your paint event.

You're not deleting the dLine variable in main(). It's leaking. Either create it on the stack (which is the easiest and recommended), set a Qt::WA_DeleteOnClose attribute on it, or delete it manually after the a.exec() call returns (pointless manual work).

Just a hint - when overriding virtual methods, use the override specifier. It will save you a lot of typo related bugs:

void paintEvent(QPaintEvent *); //compiles fine and overrides the base implementation
void pAintEvent(QPaintEvent *); //compiles fine and never gets called because of a typo
void pAintEvent(QPaintEvent *) override; //doesn't compile and lets you know. yay!
https://forum.qt.io/topic/68693/how-to-draw-a-horizontal-line
A SQL client can have access to multiple databases side-by-side. The same table name (e.g., orders) can exist in multiple databases. When a query specifies a table name without a database name (e.g., SELECT * FROM orders), how does CockroachDB know which orders table is being considered? This page details how CockroachDB performs name resolution to answer this question.

Overview

The following name resolution algorithm is used both to determine table names in table expressions and function names in value expressions:

- If the name is qualified (i.e., the name already tells where to look), use this information. For example, SELECT * FROM db1.orders will look up "orders" only in db1.
- If the name is unqualified:
  - Try to find the name in the "default database" as set by SET DATABASE.
  - Try to find the name using the search path.
- If the name is not found, produce an error.

Search Path

In addition to the default database configurable via SET DATABASE, unqualified names are also looked up in the current session's search path. The search path is a session variable containing a list of databases, or namespaces, where names are looked up. The current search path can be set using SET SEARCH_PATH and can be inspected using SHOW SEARCH_PATH or SHOW ALL. By default, the search path for new sessions includes just pg_catalog, so that queries can use PostgreSQL compatibility functions and virtual tables in that namespace without the need to prefix them with "pg_catalog." every time.
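As a concrete illustration, a session might look like the following. This is a hypothetical sketch: the database names db1 and db2 and the orders table are assumptions made up for this example, not taken from this page.

```sql
-- Two databases, each containing its own "orders" table.
CREATE DATABASE db1;
CREATE DATABASE db2;
CREATE TABLE db1.orders (id INT PRIMARY KEY);
CREATE TABLE db2.orders (id INT PRIMARY KEY);

-- Make db1 the default database for this session.
SET DATABASE = db1;

SELECT * FROM orders;        -- unqualified: resolved to db1.orders
SELECT * FROM db2.orders;    -- qualified: looked up only in db2

-- Unqualified names are also looked up via the search path.
SHOW SEARCH_PATH;            -- pg_catalog by default
SELECT * FROM pg_tables;     -- found in pg_catalog, no prefix needed
```

If orders existed neither in the default database nor in any namespace on the search path, the unqualified query would produce an error, per the algorithm above.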
https://www.cockroachlabs.com/docs/v1.0/sql-name-resolution.html
CC-MAIN-2019-43
refinedweb
248
65.52
What is a block in ruby? Exploring anonymous functions

In Ruby, blocks, as well as Procs and lambdas, are all self contained, anonymous functions. Might sound confusing, but let's start with an example. If you were in a math class and your teacher told you to "write the function that takes whatever is input (for instance the variable x) and multiplies it by two", you'd probably come up with a solution like f(x) = x · 2. Now in ruby we could have written a function like

def mult_by_two(x)
  x * 2
end

But this is quite verbose just to explain the simple mathematical construct of f(x) = x · 2. In comes the Proc/lambda (they are technically distinct, but their distinction is very minute and subtle, so consider them the same for right now). I can open up pry and write down a Proc which has the same behavior:

[1] pry(main)> -> (x) { x * 2 }

Which you can read as f(x) = x · 2. Now the awesome thing is that I can assign this arbitrary and anonymous function to a variable and pass it around as if it were data.

[2] pry(main)> fx = -> (x) { x * 2 }
#<Proc:0x007ff6046dd9c0@(pry):1 (lambda)>

Note: I can't use just f because of name collisions in ruby/pry.

I can even execute the function given an arbitrary input. I just use the variable I've bound it to, and invoke the proc with [parameter]:

[3] pry(main)> fx[7]
14
[4] pry(main)> fx[42]
84
[5] pry(main)> fx["hello"]
"hellohello"

Note: This only works with strings because we did x * 2, not 2 * x.

Now what is a block? A block is a special form of a self contained function that exists in ruby. You can think of it as a special form of a Proc. Notice how we were able to assign the Proc to a variable? That's because a Proc is an actual object. A block, on the other hand, is just a piece of syntax sugar for when you are writing a method definition. For instance, let's make our own fake array-like class (that only takes 3 elements) and implements its own #map method[1].
class ThreeElementArray
  def initialize(first, second, third)
    @first = first
    @second = second
    @third = third
  end

  def map
    ThreeElementArray.new(
      yield(@first),
      yield(@second),
      yield(@third)
    )
  end
end

Which would then be used like

[6] pry(main)> ThreeElementArray.new(1, "hello", 42).map do |element|
[6] pry(main)*   element * 2
[6] pry(main)* end
#<ThreeElementArray:0x007ff60476dfc0 @first=2, @second="hellohello", @third=84>

What this really breaks down into is

ThreeElementArray.new(1, "hello", 42).map

which is the method invocation. And:

do |element|
  element * 2
end

which is the block it was given. yield is the special keyword in the method definition that states "take the block that this method is called with, and execute that block with what I passed in as the parameter to the block". So in pseudocode, the three yields in sequence behave like:

# yield(@first) and @first = 1
do |1|
  1 * 2
end

# yield(@second) and @second = "hello"
do |"hello"|
  "hello" * 2
end

# yield(@third) and @third = 42
do |42|
  42 * 2
end

A second look at the block we gave to the #map:

do |element|
  element * 2
end

This should look really familiar, because it's a more verbose version of the Proc we defined earlier, -> (x) { x * 2 }! We can even use a Proc in something like map if we first convert it to a block. We can do this by prefixing it with &, so for instance

[7] pry(main)> ThreeElementArray.new(1, "hello", 42).map(&fx)
#<ThreeElementArray:0x007ff603998c10 @first=2, @second="hellohello", @third=84>

Which you'll notice has the same output as before with the do end block.

So to summarize: Procs, lambdas, and blocks are ways of grouping arbitrary pieces of code. They are also known as anonymous functions, or specifically closures, which I hope to explain further in a future blog post. Procs are actual objects in ruby, whereas blocks are just syntax sugar around passed in Procs in ruby's method definitions.

Recommended readings
- Why does ruby have blocks? by Avdi Grimm
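Since the post treats Procs and lambdas as the same thing for now, it may help to see the two places they actually differ. This is a sketch of my own (the names double_proc, double_lambda and try_lambda are made up for illustration):

```ruby
# Sketch (names are my own) of the two subtle ways procs and lambdas
# actually differ: argument checking and the behavior of `return`.

double_proc   = Proc.new { |x| x * 2 }
double_lambda = ->(x) { x * 2 }

# 1. A lambda checks its argument count; a proc silently ignores extras.
puts double_proc.call(3, :ignored)   # 6 -- the extra argument is dropped
begin
  double_lambda.call(3, :ignored)
rescue ArgumentError => e
  puts "lambda raised #{e.class}"
end

# 2. `return` inside a lambda returns from the lambda only.
def try_lambda
  l = -> { return :from_lambda }
  l.call
  :after_lambda  # still reached: the lambda's return was local to it
end

puts try_lambda  # after_lambda
```

The same `return` written inside a plain proc would instead try to return from the enclosing method, which is exactly why method-like behavior is the lambda's specialty.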
https://medium.com/@materialdesignr/what-is-a-block-in-ruby-f3637030483c
CC-MAIN-2017-39
refinedweb
705
69.62
XML. Keeping up with evolution of XML family of standards, Microsoft products support various other specifications, such as XPath, XSLT, XML Schemas, DOM, SAX, SOAP and Web services. This recent ZDNet story declared Microsoft as one of the winners in the Web services market and mentioned that "Microsoft is establishing strong position in the developing Web services market". Considering these facts, if you’re a developer who works on Microsoft platform, building Web or Windows applications, it’s crucial that you understand the usage and applications of XML, and know about the level of XML support offered in various Microsoft products and SDKs. The goal of this tutorial is to provide you with a complete picture of XML and Web services support made available in varied Microsoft products. More specifically we’ll discuss the following offerings: - MSXML or Microsoft XML Core Services - XML and Internet Explorer - SQL Server 2000 XML or SQLXML - SOAP Toolkit - .NET Framework - Web Services Toolkits (Office XP and Exchange Server) - BizTalk Server - Other tools and SDKs In this first part, we’ll explore MSXML, Data Islands, SQLXML, and SOAP Toolkit. In the second part of this tutorial we’ll focus on .NET Framework and Web Services. This tutorial makes the assumption that you are familiar with XML family of standards. If you’re not, read SitePoint’s Introduction to XML. Let’s get started and talk about what is MSXML and how to use it in your Web and/or desktop applications. Microsoft XML Core Services As mentioned earlier, Internet Explorer 4.0 was the first browser to support XML. With IE 4.0, Microsoft provided a COM DLL named msxml.dll, a basic DOM (Document Object Model) implementation that allowed creating and parsing XML documents. Over years, Microsoft has greatly enhanced this COM-based free XML parsing API, and added support for various other XML standards. 
MSXML, now known as Microsoft XML Core Services, is the paramount XML processing API available on the Microsoft platform. In addition to DOM parsing, it also supports various other standards such as SAX, XPath, XSLT, XML Schemas and XML namespaces. The MSXML SDK is shipped with various products such as Internet Explorer, Office XP, etc., and also can be downloaded from the MSDN Website at. The current Internet Explorer 6.0 release ships MSXML version 3.0 SP2, and the latest MSXML version available is MSXML 4.0 SP1, which you can download from the MSDN site mentioned earlier.

MSXML can be used to:

- Create, parse, navigate and update XML documents using the DOM or SAX API,
- Transform XML documents (XSLT),
- Extract data (XPath),
- Validate XML documents using DTD, XDR, and XML Schemas (XSD), and
- Access data over HTTP (XMLHTTP and ServerXMLHTTP)

The Microsoft site also has complete details on the standards supported by MSXML. In the following sections, we'll look at examples of how to use MSXML on the server side in an ASP page, and on the client side in a Visual Basic application. Let's begin with an ASP page example.

Using MSXML on the Server Side

MSXML is a COM-based API and hence can be used from scripting languages such as VBScript, ECMAScript, and Perl. In this example, we'll write VBScript code inside an ASP page, use the MSXML 4.0 DOM to load an XML document, and create the HTML response to be sent to the client browser.

Let's say you have the following XML document available as sites.xml under the same IIS virtual directory as the ASP page:

<?xml version='1.0' encoding='utf-8'?>
>

The following ASP code uses the MSXML 4.0 DOM to load sites.xml, process it and generate HTML output. If you do not already have MSXML 4.0 installed, download and install it from the MSDN Website.
ShowSites.asp

<%@ Language=VBScript %>
<%
Option Explicit
Response.Expires = -1

'Create MSXML 4.0 DOMDocument instance
Dim objXMLDoc
Set objXMLDoc = Server.CreateObject("MSXML2.DOMDocument.4.0")

'Load sites.xml file present in the same directory as this ASP page
objXMLDoc.async = False
objXMLDoc.validateOnParse = False
objXMLDoc.resolveExternals = False

Dim bLoadResult
bLoadResult = objXMLDoc.load(Server.MapPath("sites.xml"))

'If Load successful
If bLoadResult Then
    'Generate HTML Output
    'Select Site Nodes
    Dim siteNodes
    Set siteNodes = objXMLDoc.selectNodes("/Sites/Site")

    Response.Write "<b>SitePoint: <i>Empowering Web Developers since 1997.</i></b><ul>"

    'For each Site node
    Dim aSiteNode
    For Each aSiteNode In siteNodes
        With Response
            .Write "<li><a href='http://"
            .Write aSiteNode.selectSingleNode("URL").nodeTypedValue
            .Write "'>" & aSiteNode.selectSingleNode("Title").nodeTypedValue
            .Write "</a></li>"
        End With
    Next

    Response.Write "</ul>"
Else
    'Load Unsuccessful, print error
    Response.Write "<font color='red'>" & objXMLDoc.parseError.reason & "</font>"
End If
%>

The above ASP page begins by creating an instance of the DOMDocument class using the MSXML 4.0 version-dependent ProgID MSXML2.DOMDocument.4.0. Next, we set certain properties to have the XML document loaded synchronously, and tell MSXML not to validate the XML document and to skip resolving any external references in the XML document. As the document is being loaded from an external file, we use the load method, instead of loadXML, which is used to load the XML document from a string. If the document load succeeds, we use the DOM selectNodes method and pass it the XPath expression /Sites/Site, which selects all the <Site> nodes from the source XML document. Finally, for each <Site> node, we assume that it contains <URL> and <Title> child nodes, and we select these node values to generate the HTML output. If the document load fails, the error message string is generated using the parseError interface.
Download the source code for this article, save the included XML (sites.xml) and ASP page (ShowSites.asp) under an IIS virtual directory, browse to ShowSites.asp and you should see output similar to the following screen:

Figure 1. HTML output generated by an ASP page using MSXML DOM

In this example, you learned about using the MSXML DOM in an ASP page on the server side. Let's now see an example of using MSXML in a Visual Basic application to be run on the client side.

Using MSXML on the Client Side

Let's assume that you're working on a Windows application that periodically needs to connect to a Web server, download some configuration details, and refresh the same on the client side. Let's also assume that these configuration settings are saved as an XML document on the server. So the task at hand is to download this XML document from the server and save it on the client side as a disk file. The following Visual Basic application does exactly that:

Start Visual Basic 6.0, create a new standard EXE project, and add a reference (Project | References…) to Microsoft XML, v4.0 (msxml4.dll). Double click on the form and write the following code in the Form_Load() method:
If the document load succeeds, the save method is called to persist the loaded XML document on the client side as c:books.xml. We just saw an example that used MSXML on the client side in a Visual Basic application. Let’s now focus on using XML inside the browser client. XML and Internet Explorer Beginning with Internet Explorer 5.0, Microsoft introduced the notion of XML Data Islands, which refers to the ability of including chunks of XML data inside the HTML Web page. These islands of XML data inside the Web pages then can be bound to HTML controls, such as tables, or processed using client-side JScript or VBScript. The Internet Explorer browser internally uses MSXML to offer the Data Island functionality. Let’s now look at an example that makes use of Internet Explorer’s XML Data Island feature. For this example to work, you’ll need IE 5.0 or higher. <html> <head> <title>Data Island Example</title> <style> BODY, A, LI, TD { font-family: 'Verdana', 'Arial', sans-serif; font-size: 9pt; color : black; } </style> <XML ID="SITES"> > </XML> </head> <body> <div align="center"> <h2>SitePoint.com</h2> <table width="100%" cellpadding="2" cellspacing="1" border="0" bgcolor="#EEEEEE" DATASRC="#SITES"> <thead> <th>Title</th> <th>URL</th> </thead> <tr> <td bgcolor="#FFFFFF"><div DATAFLD="Title"></div></td> <td bgcolor="#FFFFFF"><div DATAFLD="URL"></div></td> </tr> </table> </div> </body> </html> Inside an HTML Web page, we can include XML data using the <XML></XML> tag. The above HTML page contains the XML Data Island named SITES and then later binds this data to a table using the DATASRC and DATAFLD attributes. Browse to the above HTML page and you’ll see the output similar to shown in the following screen: Figure 2 XML Data Island bound to a table control inside an HTML page This concludes our discussion on MSXML. Let’s now explore the XML features in SQL Server 2000. XML and Databases The primary problem with HTML is that it combines data with presentation. 
On the other hand, XML is all about data. XML has become the de facto format for portable data. One of the primary sources of data is still the relational databases. Keeping these facts in mind, Microsoft introduced support for XML in their relational DBMS, SQL Server 2000. SQL Server 2000 allows relational data to be retrieved as XML, and XML to be directly imported into relational database. The FOR XML clause was introduced to be used with the standard SELECT Transact SQL statement. This clause allows the retrieval of relational data as XML. Try out the following example: Start SQL Server Query Analyzer tool, select the Northwind sample database and execute the following query: SELECT * FROM [Customers] FOR XML AUTO Instead of returning the standard relational rowset, you’ll notice that the data is returned as XML nodes. To complement the FOR XML clause, the OPENXML function was introduced. This allows XML data to be used as a relational rowset, which can be SELECTed, INSERTed, or used for the relational UPDATE statement. There are three steps involved in using OPENXML. The first is to load the XML document and get the document handle. Then use OPENXML to convert the XML document into a relational rowset. And finally, free the XML document handle. Two system stored procedures, sp_xml_removedocument and sp_xml_preparedocument are used to work with the handles. 
DECLARE @idoc int
DECLARE @doc varchar(1000)
SET @doc ='
<ROOT>
  <ShipperRec Company="ABC Shippers" Ph="(503) 555-9191" />
</ROOT>'

--Create an internal representation of the XML document
EXEC sp_xml_preparedocument @idoc OUTPUT, @doc

-- Execute a SELECT statement that uses the OPENXML rowset provider
SELECT *
FROM OPENXML (@idoc, '/ROOT/ShipperRec', 1)
WITH (Company varchar(80), Ph varchar(48))

-- Clear the XML document from memory
EXEC sp_xml_removedocument @idoc

Run the above commands in Query Analyzer, and you should see a record in the output window, with column names and data values from the XML document defined by the @doc variable above.

SQL Server 2000 also introduced the ability to access relational data as XML over HTTP. A tool known as Configure SQL XML Support in IIS was added that allows configuring IIS virtual directories that map to a relational database. This virtual directory can then be used to access the database over HTTP. See SQL Server 2000 Books Online (Start | Programs | Microsoft SQL Server | Books Online) for more details on native XML support in SQL Server 2000.

To keep up with the fast-evolving XML standards, and to enhance the XML support in SQL Server 2000, Microsoft followed the Web release model (as with MSXML) and frequently releases the SQLXML extension via the MSDN Website. The current SQLXML 3.0 release supports the ability to update the database over HTTP using XML (Updategrams), XML Bulk Import, exposing SQL Server stored procedures and functions as Web service methods, and more. Visit for more details on this.

XML and ADO

ActiveX Data Objects, or ADO, is the premier data-access API on the Microsoft platform. It is a COM-based, automation-enabled wrapper over OLE-DB. Starting with ADO 2.5, Microsoft added functionality to save a relational Recordset as XML. The Recordset save method can be used for this purpose. Similarly, XML data can be loaded into a relational recordset.
Let's look at a Visual Basic example that connects to the SQL Server Northwind database and saves the Orders table as an XML file. Start Visual Basic 6.0, create a new standard EXE project, add a reference to Microsoft ActiveX Data Objects, and write the following code in the Form_Load() method:

Dim objRS As New ADODB.Recordset

objRS.Open "SELECT * FROM [ORDERS]", _
    "PROVIDER=SQLOLEDB.1;SERVER=.;UID=sa;PWD=;DATABASE=Northwind;"

objRS.Save "c:\NWOrders.xml", adPersistXML
objRS.Close

MsgBox "Order data saved as c:\NWOrders.xml."

Set objRS = Nothing
Unload Me

The above ADO code connects to a relational database, opens the recordset and saves it in XML format to a file named c:\NWOrders.xml. You can modify the above example to connect to any other data source, such as Microsoft Access or Oracle, just by updating the connection string in the Recordset Open method.

XML Messaging using the Microsoft SOAP Toolkit

In a recent Web Services conference, one presenter asked five people to define Web services and got six different answers! XML Web services is the hottest topic in the industry today. The notion of XML Web services allows two applications to communicate seamlessly over the Internet, without any platform or programming language worries. This means that, for instance, a Perl script running on a Linux machine can now easily call .NET code running on Windows 2000, over the Internet. This is made possible via two very successful standards, XML and HTTP. Over HTTP, one application sends an XML request package to the other application, mostly over the Internet, and receives an XML response package as the result of the Web method.

One of the primary pillars for Web services is SOAP, a W3C specification which defines the use of XML and Web protocols (such as HTTP) for XML messaging. Using Web services can be loosely termed as sending SOAP requests and receiving back SOAP responses.
There are many toolkits available that simplify this process of sending and receiving SOAP packages, as well as working with the resultant XML. The Microsoft SOAP Toolkit is one of these. The Microsoft SOAP Toolkit can be downloaded from the MSDN Website at. The current 3.0 release offers much more than basic SOAP toolkit functionality: it allows COM objects to be used as XML Web services, supports sending and retrieving attachments, and more. Check out the MSDN Website for complete details on the toolkit offerings.

Let's look at an example of how the SOAP Toolkit can be used to call a Web service. If you don't have the SOAP Toolkit 3.0 installed, download and install it from the MSDN Website before you run the following example. In this example, we'll write the client for the Weather - Temperature Web service available at the XMethods Website. Given a U.S. Zip code, this Web service returns the current temperature information for that region.

Start Visual Basic 6.0, create a new standard EXE project, add a reference to Microsoft SOAP Type Library v3.0 and write the following code in the Form_Load() method:

Dim objWebSvcClient As New MSSOAPLib30.SoapClient30
Dim dTemp As Double

objWebSvcClient.MSSoapInit _
    ""

dTemp = objWebSvcClient.getTemp("60195")
MsgBox dTemp

Unload Me

You can see from the above Visual Basic code how easy it is to call a Web service using the Microsoft SOAP Toolkit. You don't have to worry about packaging and unpacking SOAP request/response structures. The above code creates an instance of the SoapClient30 class, initializes it by passing the Web service WSDL URL (WSDL can be thought of as synonymous to IDL for DCOM/CORBA), followed by calling the actual Web service method (getTemp), and getting the results. There are hundreds of sample Web services listed on sites such as XMethods, BindingPoint, SalCentral, and more. Go ahead and try out some more examples using the SOAP Toolkit to call Web services.
Summary More and more developers use XML in their applications, as both Web and desktop application developers are exploring the new possibilities made available by XML and Web services. Various Microsoft products, such as SQL Server 2000 and .NET natively support XML, and various toolkits, such as MSXML and SOAP Toolkit simplify working with XML. In this two-part tutorial you’ll learn about XML offerings available for Microsoft-platform developers. In this first part, you learned about MSXML, SQLXML, and SOAP toolkit. In the second part, we’ll focus on .NET framework and Web services. Stay tuned!
https://www.sitepoint.com/microsoft-developers-1/
CC-MAIN-2018-34
refinedweb
2,861
55.95
I'm scraping some JSON data from a website, and need to do this ~50,000 times (all data is for distinct zip codes over a 3-year period). I timed the program for about 1,000 calls, and the average time per call was 0.25 seconds, leaving me with about 3.5 hours of runtime for the whole range (all 50,000 calls). How can I distribute this process across all of my cores? The core of my code is pretty much this:

with open("U:/dailyweather.txt", "r+") as f:
    f.write("var1\tvar2\tvar3\tvar4\tvar5\tvar6\tvar7\tvar8\tvar9\n")
    writeData(zips, zip_weather_links, daypart)

Where writeData() looks like this:

def writeData(zipcodes, links, dayparttime):
    for z in zipcodes:
        for pair in links:
            ## do some logic ##
            f.write("%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\n" % (var1, var2, var3, var4, var5, var6, var7, var8, var9))

zips looks like this:

zips = ['55111', '56789', '68111', ...]

and zip_weather_links is just a dictionary of (URL, date) for each zip code:

zip_weather_links['55111'] = [('', datetime.datetime(2013, 1, 1, 0, 0, 0)), ...]

How can I distribute this using Pool or multiprocessing? Or would distribution even save time?

You want to "Distribute web-scraping write-to-file to parallel processes in Python". For a start, let's look at where the most time is spent in web scraping: the latency of the HTTP requests is much higher than that of hard disks (link: Latency comparison). Also, small writes to a hard disk are significantly slower than bigger writes; SSDs have a much higher random write speed, so this effect doesn't affect them as much.
Some example code with IPython parallel:

from ipyparallel import Client
import requests

rc = Client()
lview = rc.load_balanced_view()

worklist = ['', '']

@lview.parallel()
def get_webdata(w):
    import requests
    r = requests.get(w)
    if not r.status_code == 200:
        return (w, r.status_code,)
    return (w, r.json(),)

# get_webdata will be called once with every element of the worklist
proc = get_webdata.map(worklist)

results = proc.get()
# results is a list with all the return values
print(results[1])

# TODO: write the results to disk

You have to start the IPython parallel workers first:

(py35)River:~ rene$ ipcluster start -n 20
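The question also asks about Pool/multiprocessing specifically. Since the per-call cost here is network latency rather than CPU, a thread pool is usually a simpler fit than separate processes. The sketch below uses made-up names (fetch, scrape_all) and a stubbed fetch instead of real HTTP calls, so only the pattern is shown:

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch with hypothetical names and no real network access: workers
# fetch in parallel, while the single loop below does all the writing,
# so no file locking is needed.

def fetch(task):
    zipcode, url = task
    # Real code would do: requests.get(url).json()
    return (zipcode, {"url": url, "temp": 72})

def scrape_all(tasks, max_workers=20):
    rows = []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # map() preserves input order, so the output stays deterministic
        for zipcode, data in pool.map(fetch, tasks):
            rows.append("%s\t%s" % (zipcode, data["temp"]))
    return rows

tasks = [("55111", "http://example.com/55111"),
         ("56789", "http://example.com/56789")]
print(scrape_all(tasks))
```

With a real requests.get in fetch, 20 threads would overlap the 0.25 s of latency per call instead of paying it serially; multiprocessing.Pool has the same shape but adds pickling overhead that pure I/O work doesn't need.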
http://www.devsplanet.com/question/35271922
CC-MAIN-2017-22
refinedweb
378
68.97
Methods (functions)

You encounter a function in your very first Java program: the main function. The main function has a fixed signature in Java; it is usually written in a public class, and the code inside it runs automatically when a Java program is executed.

Definition of a function: functions are code blocks written inside classes that perform some specific task. You can define your own functions, but only the main function is automatically found and run by the interpreter; custom functions need to be called manually.

Meaning of functions: functions exist to make code reusable. For example, if you write the addition operation as a separate function, you only need to call this function whenever you need addition, without rewriting the algorithm each time. This is called code reusability, that is, the reuse of a set of code. When all addition operations share the same addition code, any change to the addition algorithm only requires changing the body of the addition function, and every call site automatically picks up the update. This unifies the behavior and makes maintenance and change easier.

Function design principle: since the point of a function is to maximize code reuse and make algorithms easy to maintain and improve, the concept of "high cohesion and low coupling" should be considered when designing a method. Cohesion can be understood as how focused a piece of code is on doing one thing; coupling represents the cascading impact that doing one thing has on other things. High cohesion means being extremely specific about what you are doing; low coupling means your changes will not ripple through too many modules.
For example, suppose you are building a login feature. If you concentrate all the code in one method, any maintenance or change to any step forces you to retest the whole method, or may break the whole flow. If functions such as input, verification and display are isolated instead, the login flow simply invokes these methods as it runs. Each function can be implemented and verified separately during development and testing, the failure of one function does not affect other modules, and troubleshooting and maintenance become simpler.

How finely the code should be split to achieve ideal high cohesion and low coupling depends on the granularity of the business. If the splitting is too fine, the code becomes complex and hard to maintain; if it is too coarse, it violates the principle of high cohesion and low coupling. In practice, each project and process has its own business granularity, that is, its own appropriate splitting fineness, which has to be studied with reference to the whole system and with more professional knowledge.

Declaring a function

Declaring means creating a user-defined function. You can refer to each part of the main function to design your own function structure:

public static void main(String[] args){
    System.out.println("helloworld!");
}

Referring to the structure of the main function, you can derive the syntax for creating a function:

modifiers return-value-type function-name(parameter table){
    method body
}

There are three necessary parts when designing a function: the method name, the return value type and the parameter table. The following is the simplest possible method declaration, although it cannot be executed.
You will see similar function forms later on:

void fun();

If you want to design a function that can actually run, you can modify the main function slightly:

public static void hello(){
    System.out.print("in hello!");
}

The meaning of each part is the same as explained for the main function; see the first program chapter. Methods are written inside a class. If the class also contains a main function, the whole java file looks like this:

public class Index{
    public static void main(String[] args){
        System.out.println("helloworld!");
    }

    public static void hello(){
        System.out.print("in hello!");
    }
}

When creating your first method, there are many words whose meaning is still unclear and many theories that have not been explained yet. You will understand them after learning more; for now, treat the code as an example whose main purpose is simply to run successfully.

Calling a function

Except for the main function, any custom function must be called manually in order to execute. Calling means executing the contents of the function, using an instruction that performs the call. This instruction is usually written inside the main function or other functions:

object-or-class-name.functionName(parameter table);

The function name is the name of the user-defined function. The parameter table is described below. As for the object or class name in front of the function name, you will learn the principle behind it when you study object orientation; at this initial stage you can ignore everything in front of the function name.
The call to a user-defined function is as follows:

public static void main(String[] args){
    System.out.print("test hello!");
    hello1();//call
}

public static void hello1(){//declaration
    System.out.print("in hello!");
}

In the beginning, the main function is usually used to trigger the call to a method, so the call statement is usually written in the main function. But in fact, in later development, calls between methods are more common:

public static void main(String[] args){
    System.out.println("test hello!");     //1
    hello1();//call                        //2
    System.out.println("test hello end!"); //7
}

public static void hello1(){//declaration
    System.out.println("in hello1!");      //3
    hello2();//call                        //4
    System.out.println("hello1 end!");     //6
}

public static void hello2(){//declaration
    System.out.println("in hello2!");      //5
}

Parameters and return

In addition to the method body wrapped in braces, the method name, parameter table, return value type and modifiers are called the signature of the method. Modifiers usually refer to the access scope and the kind of method, which will be covered later. When using a method, the following three points should be clear:

- What the method specifically does.
- What parameters are required to invoke the method.
- What data is produced after the method executes.

In fact, to make the program easier to read and use, these three points are usually recorded as documentation comments.

Parameter table

The parameter table represents the values required to invoke the method and the types of those values. There can be multiple parameters, of any type. When calling a method, you must pass in the parameters the method requires. These parameters can only be used inside the method body, as local variables. In the parameter table of the declared method, you need to declare the parameter type and the parameter name used in the method.
Because the parameter declaration does not carry an actual value, it is only a form; such a parameter is called a formal parameter and is assigned during the call. When calling a method, you only need to pass in the specified number and types of values at the specified positions in the parameter table. These values are assigned to the formal parameters before the method starts, so they are called actual parameters (arguments). Formal parameters and arguments can also be matched using automatic type promotion.

public static void main(String[] args){
    // Call the same method and get different results through different parameters
    add(1,1.0);
    add(2,2.0);
    add(3,3.0);
}

public static void add(int a,double b){
    System.out.println(a+b); //Directly output the result of adding the parameters
}

The String[] args of the main function is itself a formal parameter; the caller is the Java interpreter, and the arguments are usually passed in from the console. That is, String[] is the type of args, and args is just a variable name. Although the name of a formal parameter can be anything, it is still recommended to use the conventional name args.

Return value

At the location where the method is called, a value can be obtained from the call, and the return value type indicates what type of value the call produces. This value can be an operation result or a value with any other meaning, depending on what the method actually does. However, as long as a return value type is declared for the method, the method must return a value of that type. void means that the method does not return any value, so it cannot return a value from the method body. The return keyword is used to terminate the method and return the value to the call site.
The value carried back is assigned to a variable through the method-call statement:

    // Call the method with two values and get the result of adding them
    public static void main(String[] args){
        int i = 10;
        double j = 11.11;
        double h = addNumber(i, j); // the values of i and j are passed here
        System.out.print(h);
    }
    public static double addNumber(int a, double b){
        double c = a + b;
        return c; // the value of c is returned here
    }

A return statement is usually written at the end of the method, but a method can also return partway through its execution. Returning ends the method immediately, which means any statements after an unconditional return can never execute, and the compiler reports them as an error. You can, however, selectively end the method with several return statements guarded by control-flow statements. Because Java assumes that any if block might not execute, such code contains no statements that can never run:

    public static double addNumber(int a, double b){
        double c = a + b;
        /*
          Java assumes every if block may be skipped. So if the method
          declares a return type but the return sits inside an if, there
          must be a guaranteed return after the if. If the if block runs,
          the guaranteed return is skipped; otherwise it runs.
        */
        if(c >= 100){
            return c; // the value of c is returned here
        }
        return 0;
    }

    // An equivalent form: either the if branch or the else branch runs,
    // and each returns, so Java can prove the method always returns a value.
    // The first form above is recommended.
    public static double addNumber2(int a, double b){
        double c = a + b;
        if(c >= 100){
            return c;
        }else{
            return 0;
        }
    }

Reasonable use of branch statements to control a method's returns reduces the amount of code and keeps the flow clear and easy to read.
For example, when a method has several branch conditions that can each return a result, a continuous if / else chain is commonly used; but in fact, once one block is entered, no other block executes. So separate if statements, each ending with a return, work just as well: entering one block stops the method, and the other blocks are never entered.

    public static double addNumber(int a, double b){
        double c = a + b;
        // The two forms below return in exactly the same way;
        // the second is recommended.
        // Form 1: a coherent if / else chain
        if(c >= 100){
            return c;
        }else if(c >= 200){
            return 0;
        }else if(c >= 300){
            return 1;
        }else{
            return 2;
        }
    }

After the modification, the code above can be written as follows:

    public static double addNumber(int a, double b){
        double c = a + b;
        // Form 2: separate if statements with early returns (recommended)
        if(c >= 100){
            return c;
        }
        if(c >= 200){
            return 0;
        }
        if(c >= 300){
            return 1;
        }
        return 2;
    }

Coordinating return statements with if statements in this way is an optimization worth studying in the refactoring stage of a project, once the basics have been mastered.

Recursive call

A method is allowed to call itself, but at run time the calls then nest deeper and deeper, so a condition must exist that eventually ends the descent; the results are usually carried back level by level. This style of writing is called recursion. Recursion can achieve the same effect as a loop, but every call occupies stack space, and too many levels of recursion overflow the stack; a loop, on the other hand, may produce many short-lived variables or objects, which mostly occupy heap space. Using loops is therefore safer than using recursion. Recursion works by returning the result of each inner call to the layer above, until the value reaches the position of the original call, where the result is used.
Using this feature, the nth Fibonacci number can be obtained:

    // Compute a Fibonacci number recursively: the parameter n is the
    // position in the sequence, and the return value is the number at
    // that position.
    public static int fibonacci(int n) {
        if(n == 0) return 0;
        if(n == 1) return 1;
        return fibonacci(n - 2) + fibonacci(n - 1);
    }
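Since the article notes that loops are safer than recursion, the same sequence can also be computed iteratively. This is a sketch; the class and method names (FibDemo, fibonacciLoop) are my own, not from the article.

```java
public class FibDemo {
    // Iterative Fibonacci: O(n) time and constant stack depth, unlike the
    // recursive version, whose nested calls grow roughly exponentially in n.
    public static int fibonacciLoop(int n) {
        int prev = 0, curr = 1;
        for (int i = 0; i < n; i++) {
            int next = prev + curr; // each value is the sum of the previous two
            prev = curr;
            curr = next;
        }
        return prev; // after n steps, prev holds the nth Fibonacci number
    }

    public static void main(String[] args) {
        System.out.println(fibonacciLoop(10)); // 55
    }
}
```

Both versions return the same values; the loop simply carries the two most recent numbers forward instead of recomputing them.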
https://programmer.group/programming-always-needs-some-methods-let-me-tell-you-what-methods-are-today.html
Streams are represented in Java as classes. The java.io package defines a collection of stream classes that support input and output (reading and writing). To use these classes, a program needs to import the java.io package, as shown below:

    import java.io.*;

In general, streams are classified into two types, known as character streams and byte streams. The java.io package provides two sets of class hierarchies to handle character and byte streams for reading and writing:

1. The InputStream and OutputStream classes operate on bytes, for reading and writing respectively.
2. The Reader and Writer classes operate on characters, for reading and writing respectively.

Two other classes are useful for handling input and output: the File class and the RandomAccessFile class. A brief description of these classes of the java.io package follows.

Reader and Writer are abstract superclasses for streaming 16-bit character input and output, respectively. Methods of these classes throw an IOException under error conditions. All methods in the Writer class have return type void.

Character stream classes are normally divided into two groups: (i) those that only read from or write to streams, and (ii) those that also process the data that was read or written. In the Reader class hierarchy, the classes shown in italic are of type (i) and the rest are of type (ii); the same convention applies to the Writer class hierarchy. The accompanying figures show the two hierarchies, and the accompanying table summarizes the various Reader and Writer classes.

The example below illustrates how to read characters using the FileReader class:

    FileReader fr = new FileReader("filename.txt"); // create a FileReader for the file filename.txt
    int i = fr.read(); // read a character

Internally, computers represent characters using a character-encoding scheme (for example, ASCII), and every platform has a default encoding. Java uses the 16-bit Unicode character encoding to represent characters internally, and the Reader classes convert characters from external encodings to this internal form. Besides using the default encoding, Reader and Writer classes can also be told explicitly which encoding scheme to use.

Most programs use Reader and Writer streams to read and write textual information, because these streams can handle any character in the Unicode character set, whereas byte streams are limited to 8-bit ISO Latin-1 bytes.

Byte streams are used in a program to read and write 8-bit bytes. InputStream and OutputStream are the abstract superclasses of all byte streams that have a sequential nature. They provide the application-program interface (API) and a partial implementation for input streams (streams that read bytes) and output streams (streams that write bytes). These streams are typically used to read and write binary data such as images and sounds. Methods of these two classes throw an IOException, and all methods of OutputStream have return type void.

The hierarchies of the InputStream and OutputStream classes are shown in the accompanying figures; all subclasses of InputStream and OutputStream work only on bytes. Note that both InputStream and OutputStream inherit from the Object class. Since they are abstract classes, they cannot be used directly. A brief description of the classes in the two hierarchies is given in the accompanying table.

Two other classes that are available are ObjectInputStream and ObjectOutputStream, which are used for object serialization.
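The ability of Reader and Writer classes to use an explicit encoding can be sketched with InputStreamReader and OutputStreamWriter; the temporary file and the class name here are my own choices, not from the article.

```java
import java.io.*;
import java.nio.charset.StandardCharsets;

public class EncodingDemo {
    public static void main(String[] args) throws IOException {
        String text = "h\u00e9llo"; // "héllo"; the accented character is 2 bytes in UTF-8
        File f = File.createTempFile("enc-demo", ".txt");
        // Write the characters out as UTF-8 bytes...
        try (Writer w = new OutputStreamWriter(
                new FileOutputStream(f), StandardCharsets.UTF_8)) {
            w.write(text);
        }
        // ...then read them back, telling the Reader which encoding to decode.
        StringBuilder sb = new StringBuilder();
        try (Reader r = new InputStreamReader(
                new FileInputStream(f), StandardCharsets.UTF_8)) {
            int c;
            while ((c = r.read()) != -1) { // read() returns one char, -1 at EOF
                sb.append((char) c);
            }
        }
        System.out.println(text.equals(sb.toString())); // true
        System.out.println(sb.length()); // 5 characters, though 6 bytes on disk
        f.delete();
    }
}
```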
These are subclasses of InputStream and OutputStream that internally implement the ObjectInput and ObjectOutput interfaces, respectively. They are covered in a later section. The rest of this chapter covers the most useful stream classes.

Working with the I/O superclasses

The classes Reader and InputStream define similar APIs, but for different data types. Reader contains methods for reading characters and arrays of characters; InputStream defines the same methods for reading bytes and arrays of bytes (both sets are summarized in the accompanying tables). Both Reader and InputStream also provide methods for marking a location in the stream, skipping input and resetting the current position. The following code illustrates reading bytes one at a time; note that read() returns the next byte as an int, or -1 at end of stream, so the result must be compared with -1 rather than null:

    FileInputStream inp = new FileInputStream("filename.txt");
    int input;
    while ((input = inp.read()) != -1) {
        System.out.println(input);
    }

Similarly, Writer and OutputStream are parallel concepts. Like Reader and InputStream, Writer (or OutputStream) defines the following methods for writing characters (or bytes) and arrays of characters (or arrays of bytes):

    void write(int c)
    void write(char cbuf[])
    void write(char cbuf[], int offset, int length)

These methods write into the invoking stream. The first writes a single character (or byte, in the case of OutputStream). The second writes the complete array. The third writes a sub-range of length characters (or bytes), starting at position offset in the buffer. All of these stream classes (Reader, Writer, InputStream and OutputStream) are opened automatically when they are created.
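The three write overloads can be compared side by side using a StringWriter, chosen here (my choice, not the article's) so the sketch needs no file on disk:

```java
import java.io.*;

public class WriteDemo {
    public static void main(String[] args) throws IOException {
        StringWriter out = new StringWriter();
        char[] buf = {'J', 'a', 'v', 'a', '!', '!'};

        out.write('>');       // write(int c): a single character
        out.write(buf);       // write(char[]): the whole array
        out.write(buf, 0, 4); // write(char[], offset, length): just "Java"

        System.out.println(out.toString()); // >Java!!Java
    }
}
```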
A stream can be closed either implicitly or explicitly. When the stream object is no longer referenced, the garbage collector can close it implicitly. Alternatively, the close() method can be used to close the stream explicitly. In addition to the read(), write() and close() methods, InputStream and OutputStream provide several other methods, listed in the accompanying table.
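Relying on the garbage collector to close a stream is unpredictable, so explicit closing is the usual practice. The sketch below copies a file byte by byte and closes both streams in a finally block; the class name and temporary files are my own choices, not from the article.

```java
import java.io.*;

public class CloseDemo {
    // Copy src to dst one byte at a time, closing both streams explicitly
    // so they are released even if an exception occurs mid-copy.
    static void copy(File src, File dst) throws IOException {
        InputStream in = new FileInputStream(src);
        OutputStream out = new FileOutputStream(dst);
        try {
            int b;
            while ((b = in.read()) != -1) {
                out.write(b);
            }
        } finally {
            in.close();
            out.close();
        }
    }

    public static void main(String[] args) throws IOException {
        File src = File.createTempFile("close-demo", ".src");
        File dst = File.createTempFile("close-demo", ".dst");
        try (Writer w = new FileWriter(src)) {
            w.write("stream demo");
        }
        copy(src, dst);
        try (BufferedReader r = new BufferedReader(new FileReader(dst))) {
            System.out.println(r.readLine()); // stream demo
        }
        src.delete();
        dst.delete();
    }
}
```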
http://ecomputernotes.com/java/stream/java-io
# react-svg

A React component that injects SVG into the DOM.

## Background

Let's say you have an SVG available at some URL, and you'd like to inject it into the DOM for various reasons. This module does the heavy lifting for you by delegating the process to @tanem/svg-injector, which makes an AJAX request for the SVG and then swaps in the SVG markup inline. The async loaded SVG is also cached, so multiple uses of an SVG only require a single server request.

## Basic Usage

```jsx
import React from 'react'
import { render } from 'react-dom'
import { ReactSVG } from 'react-svg'

render(<ReactSVG src="svg.svg" />, document.getElementById('root'))
```

## Live Examples

- Accessibility: Source | Sandbox
- API Usage: Source | Sandbox
- Basic Usage: Source | Sandbox
- Before Injection: Source | Sandbox
- CSS Animation: Source | Sandbox
- CSS-in-JS: Source | Sandbox
- External Stylesheet: Source | Sandbox
- Fallbacks: Source | Sandbox
- No Extension: Source | Sandbox
- SSR: Source | Sandbox
- SVG Wrapper: Source | Sandbox
- Typescript: Source | Sandbox
- UMD Build (Development): Source | Sandbox
- UMD Build (Production): Source | Sandbox

## API

### Props

- `src` - The SVG URL.
- `afterInjection(err, svg)` - _Optional._ Function to call after the SVG is injected. If an injection error occurs, `err` is an `Error` object. Otherwise, `err` is `null` and `svg` is the injected SVG DOM element. Defaults to `() => {}`.
- `beforeInjection(svg)` - _Optional._ Function to call just before the SVG is injected. `svg` is the SVG DOM element which is about to be injected. Defaults to `() => {}`.
- `evalScripts` - _Optional._ Run any script blocks found in the SVG. One of `'always'`, `'once'`, or `'never'`. Defaults to `'never'`.
- `fallback` - _Optional._ Fallback to use if an injection error occurs. Can be a string, class component, or function component. Defaults to `null`.
- `loading` - _Optional._ Component to use during loading. Can be a string, class component, or function component. Defaults to `null`.
- `renumerateIRIElements` - _Optional._ Boolean indicating if SVG IRI addressable elements should be renumerated. Defaults to `true`.
- `useRequestCache` - _Optional._ Use SVG request cache. Defaults to `true`.
- `wrapper` - _Optional._ Wrapper element types. One of `'div'`, `'span'` or `'svg'`. Defaults to `'div'`.

Other non-documented properties are applied to the outermost wrapper element.

### Example

```jsx
<ReactSVG
  afterInjection={(error, svg) => {
    if (error) {
      console.error(error)
      return
    }
    console.log(svg)
  }}
  beforeInjection={(svg) => {
    svg.classList.add('svg-class-name')
    svg.setAttribute('style', 'width: 200px')
  }}
  src="svg.svg"
/>
```

## Installation

⚠️ This library depends on @tanem/svg-injector, which uses `Array.from()`. If you're targeting browsers that don't support that method, you'll need to ensure an appropriate polyfill is included manually. See this issue comment for further detail.

```shell
$ npm install react-svg
```

There are also UMD builds available via unpkg, in non-minified development and minified production versions; for either one, make sure you have already included the corresponding React UMD build.

## FAQ

### Why are there two wrapping elements?

This module delegates its core behaviour to @tanem/svg-injector, which requires the presence of a parent node when swapping in the SVG element. The swapping in occurs outside of React flow, so we don't want React updates to conflict with the DOM nodes @tanem/svg-injector is managing.

Example output, assuming a `div` wrapper:

```html
<div> <!-- The wrapper, managed by React -->
  <div> <!-- The parent node, managed by @tanem/svg-injector -->
    <svg>...</svg> <!-- The swapped-in SVG, managed by @tanem/svg-injector -->
  </div>
</div>
```

Related issues and PRs are linked from the project repository.

### How can I improve the accessibility of the rendered output?

Let's assume we want to add `role` and `aria-label` attributes to the outermost wrapper element, plus `title` and `desc` elements to the SVG.
Since non-documented properties are applied to the outermost wrapper element, and the `beforeInjection` function allows us to modify the SVG DOM, we can do something like the following (the label text here is illustrative):

```jsx
<ReactSVG
  aria-label="Example logo"
  role="img"
  beforeInjection={(svg) => {
    const title = document.createElementNS('http://www.w3.org/2000/svg', 'title')
    title.textContent = 'Example logo'
    svg.prepend(title)
  }}
  src="svg.svg"
/>
```

A live example is available via the Accessibility entry in the Live Examples list above.

## License

MIT
https://reactjsexample.com/a-react-component-that-injects-svg-into-the-dom/
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards

The modular multiplicative inverse of a number a is the number x that satisfies ax = 1 mod m. A fast algorithm for computing modular multiplicative inverses, based on the extended Euclidean algorithm, exists and is provided by Boost.

    #include <boost/integer/mod_inverse.hpp>

    namespace boost { namespace integer {

    template<class Z>
    Z mod_inverse(Z a, Z m);

    }}

    int x = mod_inverse(2, 5);
    // prints x = 3:
    std::cout << "x = " << x << "\n";

    int y = mod_inverse(2, 4);
    if (y == 0) {
        std::cout << "There is no inverse of 2 mod 4\n";
    }

Multiplicative modular inverses exist if and only if a and m are coprime. If a and m share a common factor, then mod_inverse(a, m) returns zero.

Wagstaff, Samuel S., The Joy of Factoring, Vol. 68, American Mathematical Soc., 2013.
https://www.boost.org/doc/libs/1_79_0/libs/integer/doc/html/boost_integer/mod_inverse.html
- FAQ Topic - What does the future hold for ECMAScript? (2008-11-22) - Does removing an element also remove its event listeners? - Re: Muuttujien leveydet C++:ssa? - Update location.hash without adding to history? - FAQ Topic - Internationalisation and Multinationalisation in javascript. (2008-11-21) - pass function into another function as parameter? - Two weirdnesses..are they related? (IE7) - cross domain XHR - OnComm event with JavaScript doesn't fire - onclick only works once - Can you suggest a better way of Reporting Errors? - Dragging something in JS - JAVAScript Public Key Encryption - hiding javascript function call from status bar. - 3rd party page access from JavaScript. Is this possible? - Chinese antique - Definition/Standard for DOM node property offsetParent - Copy Clipboard - Side-effect only requests - FAQ Topic - What is the document object model? (2008-11-20) - Update labels - Haskell functions for Javascript - Unknown Errors Use of getBoxObjectFor() is deprecated. Try to useelement.getBoundingClientRect() if possible. - Encrypted code with certificate - Where's the Window() - IE6 memory leak - very fiddly - native code attached to onblur/onfocus event handler - Help on onchange event for refreshing the page - FAQ Topic - What are object models? (2008-11-19) - Create Login page - check boxes again - Re: comp.lang.javascript FAQ - Quick Answers 2008-11-17 - declare variables document.write() - frames and back action - Re: How to implement a mask of visibility/invisibility of a set of<div> elements? - Hide/Show Divs - having difficulty calling my functions (directly not threw events) - Javascript onSubmit - FAQ Topic - What is JScript? 
(2008-11-18) - Link within a div that has onclick - An oddity when clicking checkbox - Write to xml - ng2000 keeps spamming newsgroups - noobslide help please - Do X if element is Y - kiddy question about newsticker snippet - FAQ Version 11 - Escape .(dot) in a Regular Expression - Simple Ajax - How to exit a form validation function so that the form isn'tsubmitted - FAQ Topic - How do I generate a random integer from 1 to N? (2008-11-16) - Trying to create an array? - on-anchor-click? - [jQuery] Why does img_width always return 0 (zero)? - print pdf using javascript - Modify code from random to sequence - FAQ Topic - Why does 1+1 equal 11? or How do I convert a string to a number? (2008-11-15) - Jlint another script validation problem - Validating Javascript function using JLint - Javascript and DIV popup help... - When to minify? - Display of the image from JavaScript - Compare string - Inheritance Chain - FAQ Topic - Why does K = parseInt('09') set K to 0? (2008-11-14) - How to assign event handler in css? - Noob Q: Different ways to run code in script tags - mouseout and checkboxes - Writing to popups - Passing event as parameter to dynamic function - How do I create a function that copies all the fields? - Closure - Augmenting functions - FAQ Topic - Why does simple decimal arithmetic give strange results? (2008-11-13) - weird var problem - unescape() and escape styles question - temporarily draw freehand on a page using javascript - invalid flag after regular expression - Href Links in Dynamic table - JavaScript / ECMAScript - You cowboys were right - Re: n00b questions for javascript! - Re: n00b questions for javascript! - local/global scope confusion - Feature detection vs browser detection - Google Toolbar autofill doesn't fire change events - when popup window content loaded - FAQ Topic - How do I format a Number as a String with exactly 2 decimal places? (2008-11-12) - What does Google Calendar's grid uses? 
- stupid question - eval, alternative - ajax to html - Unhide text using a radio button - Events - FAQ Topic - What online resources are available? (2008-11-11) - Re: comp.lang.javascript FAQ - Quick Answers 2008-11-10 - Image creation and 'on load' behavior - IE7 Javascript Errors - ie7 and prototype windows - charCodeAt - DocType impact on javascript - Am I using 'this' to often? - Embedded <divs> with events: How to prevent the parent div's eventfrom being fired when the embedded div's event is fired? - Nokia 5310 XpressMusic Mobile Phones - Dr. Stephen R.Covey LIVE! in India in Jan 2009 - Will MS adopt WebKit? - FAQ Topic - What books cover javascript? (2008-11-10) - variables and ajax - "Back" no actions in Firefox - unable to validate syntax need help ASAP - EECP treatment - No Bypass Surgery - Pass onmouseover event to the element underneath - FAQ Topic - What does the future hold for ECMAScript? (2008-11-09) - newbie: constants in JavaScript - Is it Possible to Programmatically Customizing Firefox3 Settings? - Hidden Forms - show/hide problem with explorer 7 - Local jawascript search to search pages from only my website - document.domain problem - FAQ Topic - How can old comp.lang.javascript articles be accessed? (2008-11-08) - Dynamically add frames to frameset - SaveAs Command - How can I display the content literally without any change? - about document.image1 - call function from within another - MS08-045 - Cumulative Security Update for Internet Explorer (953838)and frame location - FAQ Topic - Internationalisation and Multinationalisation in javascript. 
(2008-11-07) - Measuring recurring elapsed time - Re: Full / Part Time Jobs - focus listener on non-form elements in Safari/Chrome - changing specific <div> status - Change images onclick - Javascipt Image effects - retrieving HTML text - get text from dom element - FAQ Topic - What is the document object model? (2008-11-06) - iFrame issue - Dojo v. Crockford re privates - Updated Conventions Document - ajax run 2 scripts? - popup window - FAQ Topic - What are object models? (2008-11-05) - svg in firefox - pass javascript in xmlHttp.responseText - How to create a textarea dependant on flag in javascript - Possible to introspect a function & parameters ? - Tooltip box - javascript onclick "save as" - firefox - Seeking to defeat auto-fill - switch or select case and code inside it - Passing a lot of data - FAQ Topic - What is JScript? (2008-11-04) - 'new' operator for built-in types? - Looping to populate selections for IE & Firefox - pls help w/unusual code.. (YUI/JSON) - Activating toolbar item from javascript - how to not hide a division - Parsing XML with namespaces in IE. - Quick Question - Function( confusion - calling a function with parameters packed in array - sorting a textarea.
- Refreshing parent page from a child page opened as a modal dialog box - SpiderMonkey Multithreading String Issues - Closure code to assign image index to onload handler - Generated JS in Google's Mobile Talkgadget - input checkbox onclick not working via DOM on IE7, FF, WebKit - Passing variable from function to html body... - Skipping OnBeforeUnload event - Standalone Javascript interpreter for Linux? - FAQ Topic - Why does K = parseInt('09') set K to 0? (2008-10-31) - input checkbox onchange not working on IE7 - =?= - testing if date is in past - Opening a stream in Word with JS - 2 dimensional array - sorting mechanism - ie div onclick problem - Re: comp.lang.javascript FAQ - META 2008-10-29 - Good visual javascript aide? - FAQ Topic - Why does simple decimal arithmetic give strange results? (2008-10-30) - Please Assist With Submission to Remote Iframe - simply super - Script works only with firebug installed, or in non-mozilla - About (function(){})() - Failing rollover image - a href with static and dynamic content using JavaScript - not override onload - frame collection versus gEBI - FAQ Topic - How do I convert a Number into a String with exactly 2 decimal places? (2008-10-29) - UL LI get text - Direct file download - Re: Passing on variables to a bank shopping cart - Check/Uncheck all checkboxes with name as 'name[]' - newbie: how to set a breakpoint - change position of alert()-is it possible? - Help Jquery: unable to register a ready function - How to check all the checkboxes if checkbox name is 'name[]' - Writing an XML document via Javascript. - Reading data from user-submitted XML file. - FAQ Topic - What online resources are available? (2008-10-28) - style.cursor on IE - Jquery not registering the ready func - keydown listener for div element and "event forwarding" - Data persistence and refresh - FunctionExpression's and memory consumptions - FAQ Topic - What books cover javascript? 
(2008-10-27) - JavaScript Math vs Excel - Best practices for error handling - shopping cart - Formatting the clipboard - Webkit Javascript Application in c++ - Need help for javascript/webkit - Pages doesn't load properly until mouse movement - Unsafe Names for HTML Form Controls - general function who activate callback on every object - please help - show/hide any division - FAQ Topic - What does the future hold for ECMAScript? (2008-10-26) - newbie: constants in JavaScript - Is it Possible to Programmatically Customizing Firefox3 Settings? - Hidden Forms - show/hide problem with explorer 7 - Local jawascript search to search pages from only my website - document.domain problem - FAQ Topic - How can old comp.lang.javascript articles be accessed? (2008-10-25) - Dynamically add frames to frameset - SaveAs Command - How can I display the content literally without any change? - about document.image1 - call function from within another - MS08-045 - Cumulative Security Update for Internet Explorer (953838) and frame location - FAQ Topic - Internationalisation and Multinationalisation in javascript.
- Re: Debug Request - Re: Debug Request - FAQ Topic - What are object models? (2008-10-22) - json multidimensional array - How can server interrupt client in browser? - Advanced Core Javascript - select onchange an many values - Re: IE 7 Zoom Problem - <! ... v. <!-- ... - gridView row selected - Crockford's new video: "Web Forward" - Permission denied... parent page and <iframe> - FAQ Topic - What is JScript? (2008-10-21) - FAQ 5.40, Ajax and GET - How to use DOM to check a checkbox - IE Error "Stop Running This Script" - CSS has bigger pixels than canvas? - get Table cell value - AJAX + delay - Re: Whats the point in these groups? - Yahoo! UI Library - HTML or JavaScript - Download an XML file from the server - keycode or keypress button - Elegance is an attitude - Two windows: one for IE7 and the other for FF3 - FAQ Topic - What is ECMAScript? (2008-10-19) - Re: Whats the point in these groups? - Re: Script not accepting some values - Re: Whats the point in these groups? - cross-domain - Convert state of radiobutton => byte on send - FAQ Topic - Why was my post not answered? (2008-10-18) - Re: comp.lang.javascript FAQ - META 2008-10-15 - Re: Whats the point in these groups? - Get Array Name - Passing an Array to a form field - Re: Whats the point in these groups? - Encodings of javascript - How to completely destroy a script and make it disappear forever. - why my javascript for menu doesn't work properly? (sth wrong withonMouseOut event) - JESUS in the QURAN - !!!...Who is Jesus - Why did I Embrace Islam (( Scientific Miracles in the Quran - Synchronous dynamic script loading - Re: JavaScript/HTML Developer position in Sunnyvale CA Bay area - FAQ Topic - What should I do before posting to comp.lang.javascript? (2008-10-17) - Can this be done with Javascript? 
http://bytes.com/sitemap/f-63.html
The circuit I currently have can be seen in the diagram here:

A photo of my voltage divider can be seen here (all resistors are 10K, so 2 are used to make 20K in one half of the divider. The diode is a 3.3V zener diode. Black wire is 5V sensor output, purple is 3.3V and goes to Pi for reading, grey goes to Pi ground):

This is the Python code I'm using to read the GPIO pin:

    import RPi.GPIO as gpio
    import time

    gpio.setmode(gpio.BCM)
    gpio.setup(17, gpio.IN, pull_up_down = gpio.PUD_DOWN)

    while True:
        print(gpio.input(17))
        time.sleep(1)

It just prints out zeros, even when it is sunny (the sensor has an LED indicator on it so I know when it is sunny and a 1 should be printed).

I'd really appreciate some help as I've been trying for over a week to get this to work and have run out of ideas now. Thanks!
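As a quick sanity check on the divider values described in the post, the expected output voltage follows from the standard voltage-divider formula. This calculation is an illustrative addition, not part of the original thread, and it assumes the two series 10K resistors form the lower (output) leg of the divider:

```python
# Voltage divider: Vout = Vin * R_bottom / (R_top + R_bottom)
# Assumption: 10 kOhm on the top leg, 2 x 10 kOhm = 20 kOhm on the output leg.
v_in = 5.0
r_top = 10_000.0
r_bottom = 20_000.0

v_out = v_in * r_bottom / (r_top + r_bottom)
print(round(v_out, 2))  # 3.33
```

That lands safely under the Pi's 3.3V input limit, so if the pin still reads 0, the problem is more likely the wiring or the sensor's actual output level than the divider arithmetic.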
https://www.raspberrypi.org/forums/viewtopic.php?f=44&t=211133&p=1303165
How to make a field readonly from a function? Or can we call a function on the readonly attribute, like readonly="function_name", in odoo9?

I want to make the "Pricelist" and "Payment Term" fields readonly if a pricelist and payment term are set for the customer in the sale order form. It should be done on onchange of the customer field. How can I do this in odoo9? Is it possible to call a function on the readonly attribute, like the following?

    field_name = fields.Many2one('related_model', 'label', readonly="function_name")

    def function_name(self):
        if condition:
            return True
        else:
            return False

Hello Sebin Siby,

Set attrs on the pricelist like:

    attrs="{'readonly': [('partner_id.pricelist_id', '!=', False)]}"

Set attrs on the payment term the same way as for the pricelist field; just change the payment term field name.

I tried it but it's not working. The states and readonly attributes are set for that field while defining it in the .py file. The changes made to the readonly attribute from the xml are not getting reflected on that field. Now I need to remove the states and readonly attributes from the .py file and define them in the xml. I tried redefining that field without these attributes but the attributes don't change.

Did you update your module and refresh your web page after you made your changes? Because changes made in the xml are not automatically reflected in the database.
https://www.odoo.com/forum/help-1/question/how-to-make-a-field-readonly-from-a-function-or-can-we-call-a-function-on-readonly-attribute-like-readonly-function-name-in-odoo9-108009
Given a list of arbitrarily many strings, show how to:

- test if they are all lexically equal
- test if every string is lexically less than the one after it (i.e. whether the list is in strict ascending order)

Each of those two tests should result in a single true or false value, which could be used as the condition of an if statement or similar. If the input list has fewer than two elements, the tests should always return true. There is no need to provide a complete program & output: Assume that the strings are already stored in an array/list/sequence/tuple variable (whatever is most idiomatic) with the name strings, and just show the expressions for performing those two tests on it (plus of course any includes and custom functions etc. that it needs), with as little distraction as possible.

Assuming that the strings variable is of type T<std::string> where T is an ordered STL container such as std::vector:

    #include <algorithm>
    #include <iterator>
    #include <string>

    // All equal (vacuously true for fewer than two elements)
    strings.empty() ||
    std::all_of(std::next(strings.begin()), strings.end(),
                [&](const std::string& a){ return a == strings.front(); })

    // Strictly ascending
    std::is_sorted(strings.begin(), strings.end(),
                   [](const std::string& a, const std::string& b){ return !(b < a); })

Content is available under GNU Free Documentation License 1.2.
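Tasks like this are usually shown in several languages; for comparison, the same two tests can be sketched in Python. This is an illustrative addition, not part of the original C++ entry:

```python
def all_equal(strings):
    # Vacuously true for fewer than two elements
    return all(s == strings[0] for s in strings[1:])

def strictly_ascending(strings):
    # Each string lexically less than the one after it
    return all(a < b for a, b in zip(strings, strings[1:]))

print(all_equal(["aa", "aa", "aa"]))           # True
print(strictly_ascending(["aa", "ab", "b"]))   # True
```

Both run in a single pass and short-circuit on the first failing pair, much like the std::all_of / std::is_sorted expressions above.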
https://tfetimes.com/c-compare-a-list-of-strings/
CGI::AppToolkit::Data::Object - A data source component of CGI::AppToolkit

CGI::AppToolkit::Data::Objects provide a common interface to multiple data sources. The data sources are provided by CGI::AppToolkit::Data::Object descendants that you create, generally on a per-project basis.

Providing a data source requires creating an object in the CGI::AppToolkit::Data:: namespace that inherits from CGI::AppToolkit::Data::Object. You do not use this module or its descendants in your code directly, but instead call CGI::AppToolkit->data() to load it for you. For a Person object, you might start the module like this:

    package CGI::AppToolkit::Data::Person;
    use CGI::AppToolkit::Data::Object;
    use strict;
    @CGI::AppToolkit::Data::Person::ISA = qw/CGI::AppToolkit::Data::Object/;

After that, you simply have to override four subroutines: fetch, store, delete, and update. All of these are called with two parameters: the object, of course, and the arguments. The arguments are passed in a single parameter which is usually a hashref.

    sub store { # or fetch, update, or delete
      my $self = shift;
      my $args = shift;

      # args is usually a hashref, so you would use it like this
      my $id = $args->{'id'};

      #... do the actual storing and such here
    }

There is nothing forcing you to implement all of these subroutines. For example, if you are implementing a read-only object, then you could override only fetch. Conventionally a relational database is used, but there's nothing forcing you to that either. The data can come from any source at all. However, if your data is coming from a DBI-accessed RDBMS that uses SQL, then you should take a look at CGI::AppToolkit::Data::SQLObject and CGI::AppToolkit::Data::Automorph. CGI::AppToolkit::Data::SQLObject handles a few of the common DBI tasks for you, and its descendant CGI::AppToolkit::Data::Automorph attempts to handle the rest of them.

CGI::AppToolkit::Data::Object provides several convenience methods to inherit.

get_kit()

Returns the creating CGI::AppToolkit object. This can be used to retrieve required data.

    my $dbi = $self->get_kit()->get_dbi();

The CGI::AppToolkit object has an autoload mechanism that provides all variables that are passed to its new() method as method calls, with get_ or set_ added to the beginning to retrieve or set the value, respectively. In particular, CGI::AppToolkit->get_dbi() retrieves the DBI object stored from a call to CGI::AppToolkit->connect().

Returns a new, empty CGI::AppToolkit::Data::Object::Error object, as described in detail below.

Using the built-in AUTOLOAD mechanism, you can retrieve and set object variables with named method calls. These method names are not case sensitive.

    # setting
    $self->set_wierd_variable($value);

    # retrieving
    my $value = $self->get_wierd_variable();

CGI::AppToolkit::Data::Object::Error provides a simple interface to errors returned by CGI::AppToolkit::Data::Object->fetch(). This class separates errors into three classes: plain text errors (errors), missing items (missing), and wrong items (wrong). Plain text errors are for sending back to the interface. Missing items and wrong items are both lists of keys that were missing or wrong in the args provided to store.

The following methods are for use by scripts using CGI::AppToolkit::Data, which will only need to retrieve errors.

Return nonzero if there are errors and zero if there are not.

get()

Returns three arrayrefs: errors, missing, and wrong, in that order. The errors arrayref points to an array of hashes of the form {'text' => $error}. The other two arrayrefs point to arrays of strings.

    # as called in a script using CGI::AppToolkit
    # $of is an instance of CGI::AppToolkit
    my $ret = $CGI::AppToolkit->data('person')->store(\%person);
    if (ref $ret =~ /Error/) {
      my ($errors_a, $missing_a, $wrong_a) = $ret->get();
      # ...
    }

The following methods are for use inside CGI::AppToolkit::Data::Object descendants.

Adds an error with the text ERROR to the error object.

    $error->error('You screwed up, dude!');

Adds a missing or wrong ITEM to the error object.

    $error->missing('address1'); # missing a required field
    $error->wrong('email');     # malformed email address

This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.

Please visit for complete documentation.
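The three-bucket error interface described above (plain-text errors, missing keys, wrong keys) is a language-independent pattern. Here is a Python sketch of it for illustration; the class and method names mirror the behaviour documented above but are otherwise an assumption, not part of the CPAN module:

```python
class DataError:
    """Accumulates errors in three buckets, like CGI::AppToolkit::Data::Object::Error."""

    def __init__(self):
        self._errors, self._missing, self._wrong = [], [], []

    def error(self, text):
        # plain-text errors, stored as {'text': ...} like the Perl hashrefs
        self._errors.append({"text": text})

    def missing(self, item):
        self._missing.append(item)

    def wrong(self, item):
        self._wrong.append(item)

    def is_error(self):
        # true when anything at all has been recorded
        return bool(self._errors or self._missing or self._wrong)

    def get(self):
        # the three lists: errors, missing, wrong, in that order
        return self._errors, self._missing, self._wrong

err = DataError()
err.missing("address1")
err.wrong("email")
print(err.is_error())  # True
```

The consuming script only needs the first two methods (is_error and get); the producing data object fills the buckets with the other three.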
http://search.cpan.org/dist/CGI-AppToolkit/lib/CGI/AppToolkit/Data/Object.pm
As we all know, one goal set by WS-Security is to enforce integrity and/or confidentiality on SOAP messages. In case of integrity, the signature which is added to the SOAP message is the result of a mathematical process involving the private key of the sender, resulting in an encrypted message digest. Most frameworks, such as WSS4J, will by default only sign the body. If you're adding extra headers, such as a Timestamp header, you'll have to indicate explicitly that they should be signed. Using the Spring support for WSS4J, for example, you can set a comma-separated list containing the local element name and the corresponding namespace using the securementSignatureParts property. Below is an example of how to instruct it to sign both the Body and Timestamp elements (and their siblings). This will result in two digital signatures being appended to the message:

    <property name="securementSignatureParts"
              value="{}{}Body;{}{}Timestamp">
    </property>

Eventually the SOAP message will be sent together with the XML digital signature data and, in most cases, a BinarySecurityToken containing the certificate. Nothing new so far. However, what struck me is that it seems not widely understood what the goal of the BST is, nor how authentication is controlled using it. Let me try to shed some light on this:

The certificate of the sender which is sent along with the SOAP message plays the role of identification. You can compare it to being the username++. It should be clear that the certificate inside the message cannot be trusted, just as a username cannot be trusted without verifying the password. So far everyone agrees on that: "yeah of course, certificates need to be validated in order to be trusted and then you're set!" But that is not the entire story. Validation of the certificate is not the same as authentication. The fact that the certificate in the message is valid and is signed by a known CA is not enough to consider the sender authenticated.
For example: I, in my most malicious hour, could have intercepted the message, changed the content, created a new signature based on my private key and replaced the BST in the message with my certificate. My certificate could perfectly well be an official CA-signed certificate (even signed by the same CA as you're using), so it would pass the validation check. If the framework simply validated the certificate inside the message, we would have no security at all.

Note: If you're sending the message over secure transport instead, chances are that I was not able to intercept the message. But secure transport is mostly terminated before the actual endpoint, leaving a small piece of the transport "unsecured". Granted, this part will mostly be internal to your company, but what I want to point out is that no matter how secure your transport is, the endpoint has the end responsibility of verifying the identity of the sender. For example, in an asynchronous system the SOAP message could have been placed on a message queue to be processed later. When processing is started by the endpoint, the trace of the secure transport is long gone. You'll have to verify the identity using the information contained in the message.

In order to close this loophole we have two solutions:

The first solution builds further on what we already described: the certificate in the message is verified against the CA root certificates in the truststore. In this scenario it is advised to first narrow the set of trusted CA's. You could for example agree with your clients on a limited list of CA's to get your certificates from. Doing so, you have already lowered the risk of trusting more "gray zone" CA's which might not take the rules for handing out certificates so strictly (like, for example, properly checking the identity of their clients). Secondly, because *every* certificate handed out by your trusted CA will be considered "authenticated", we'll close the loophole by issuing some extra checks.
Using WSS4J you can configure a matching pattern based on the subject DN property of the certificate. They have a nice blog entry on this here:. We could specify that the DN of the certificate must match a given value like this:

    Wss4jHandler handler = ...
    handler.setOption(WSHandlerConstants.SIG_SUBJECT_CERT_CONSTRAINTS, "CN = ...");

Note that there is currently no setter for this in the Spring support for WSS4J in Wss4jSecurityInterceptor, so you'll have to extend it in order to enable this!

To conclude, the steps being performed:

- The certificate contained in the message is validated against the trusted CA in your truststore. When this validation succeeds it tells the application that the certificate is still valid and has actually been handed out by a CA that you consider trusted.
  - This check gives us the guarantee that the certificate really belongs to the party that the certificate claims to belong to.
  - Optionally the certificate can also be checked for revocation, so that we don't continue trusting certificates that have explicitly been revoked.
- WSS4J will check if some attributes of the certificate match the required values for the specific service (Subject DN Certificate Constraint support).
  - This is the authentication step; once the certificate has been found valid, we check if the owner of the certificate is the one we want to give access to.
- Finally, the signature in the message is verified by creating a new digest of the message, comparing it with the decrypted digest from the message, and so forth.

It should be noted that this check (at least when using WSS4J) is not done by default! If you don't specify it and simply add your CA's in the trust store you'll be leaving a security hole!

The second solution requires no extra configuration and depends on ONLY the certificate of the sender being present in the truststore. The certificate contained in the message is matched against the certificate in the truststore.
If they match, the sender is authenticated. There is no need to validate certificates against a CA, since the certificates imported in the truststore are explicitly trusted (WSS4J will still check that the certificate is not expired and possibly check it for revocation). Again, there are no CA certificates (or CA intermediate certificates) in the truststore! Only the certificates of the senders that you want to give access to. Access is hereby controlled by adding (or removing) their certificate from the truststore. This requires you to be cautious when initially importing the certificates, since you'll have to make sure they actually represent the sender. But this is something you're always obliged to do when adding certificates to your truststore, also when adding CA certificates like in the first solution.

Conclusion

Assuming you can limit the trusted CA's, the first solution is in most cases the preferred one and also the most scalable. For new clients there are no changes required to the truststore. The attributes to match can be stored externally so they are easy to change/add. Also, when a client certificate expires or gets revoked, you don't need to do anything special. The new certificate will be used by the sender at a given moment and will directly be validated against the CA in your truststore.

In the second solution you would have to add the new certificate to the truststore and leave the old one in there for a while until the switch is performed.

Overall lessons learned: watertight security is hard. The #1 rule in IT (assumption is the mother of all f***ups) is certainly true here. Be skeptical and make sure you fully understand what is going on. Never trust default settings until you are sure what they do. The default setting on your house alarm (e.g. 123456) is no good idea either. Neither is the default admin password on a Tomcat installation.
refinedweb
1,312
59.23
PROLOGThis manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. NAMEsys/uio.h — definitions for vector I/O operations SYNOPSIS #include <sys/uio.h> DESCRIPTIONThe <sys/uio.h> header shall define the iovec structure, which shall include <sys/uio.h> header shall define the ssize_t and size_t types, the maximum number of scatter/gather elements the system can process in one call were described by the symbolic value {UIO_MAXIOV}. In IEEE Std 1003.1‐2001 this value is replaced by the constant {IOV_MAX} which can be found in <limits.h>. FUTURE DIRECTIONSNone. SEE ALSO<limits.h>, <sys_types.h> The System Interfaces volume of POSIX.1‐2008, read(), readv(), write(), write .
https://jlk.fjfi.cvut.cz/arch/manpages/man/sys_uio.h.0p.en
refinedweb
139
50.84
hi folks, I am new to windows programming, i have the following code: #include <windows.h> int main(){ MOUSEMOVEPOINT lppt; ...... } but MOUSEMOVEPOINT was not defined, what headers should I include? thanks Printable View hi folks, I am new to windows programming, i have the following code: #include <windows.h> int main(){ MOUSEMOVEPOINT lppt; ...... } but MOUSEMOVEPOINT was not defined, what headers should I include? thanks >>I am new to windows programming<< Then you should post your windows questions on the windows board. MOUSEMOVEPOINT is declared in winuser.h, which is included when you #include <windows.h> so you don't have to include any other headers to use it. You will have to #define WINVER 0x0500 prior to #including windows to use it, though. For future reference for this type of problem, steps to get this information: 1. MOUSEMOVEPOINT search on google yields: 2. msdn description for MOUSEMOVEPOINT from which the header it is declared in, winuser.h, can be read. 3. 'Find' the occurence(s) of the string MOUSEMOVEPOINT in winuser.h which reveals:Doesn't always work, but at least it eliminates it as a possible source of error.Doesn't always work, but at least it eliminates it as a possible source of error.Code: #if(WINVER >= 0x0500) typedef struct tagMOUSEMOVEPOINT { int x; int y; DWORD time; DWORD dwExtraInfo; } MOUSEMOVEPOINT, *PMOUSEMOVEPOINT, FAR* LPMOUSEMOVEPOINT; /* blah blah blah*/ Hope that helps. Good luck :) thanks Ken, that was very helpful. However I ran into another problem after the program was executing, it says"the procedure entr point GetMouseMovePOints could not be located in the dynamic link library USER32.dll, I tried to include lots of other libraries, but none worked. how do I go about solving this problem. thanks, and I will post my future problems to the windows section I can only assume you are trying to run it on win98 which gives rise to the reported error. It works ok on win2k and presumably xp(?). 
msdn seems to point to the function being intended primarily for use with win ce. actually I"m running it on windows xp... , dunno what other lib i need to load. thanx >>actually I"m running it on windows xp<< My apologies. The fn I was actually referring to and is declared in winuser.h is GetMouseMovePointsEx which compiles and runs on win2k so should be good for xp too. Whether it actually 'works' or not is another matter entirely. Hope that helps. :)
http://cboard.cprogramming.com/windows-programming/30905-mousemovepoint-printable-thread.html
Module not found, after installing with pip in stash

Hi, relative Python newcomer here, but with a couple of scripts under my belt, developed on Windows. I've been playing with Pythonista the last few days, but I'm struggling with one aspect. I've installed stash, and from there installed some modules via pip, but the script I've written produces a Module Not Found error every time I run it. My script resides in a folder under Documents/myapp on This iPad, but when running it consistently fails to find the pip-installed modules in site-packages-3. The modules in question are google.api_core and google.auth. Is this some kind of permissions issue? Happy to give more information if it helps.

- mcriley821

It's likely that the module has dependencies that you haven't installed, or that the module isn't pure Python. Both these problems will prevent you from using your module, but only the second one will prevent files from being downloaded to your phone. In the first case, find out the dependencies and install them too. In the second case, you're kinda sol. You could try figuring out what the code does and rewrite it in Python. More information like the error message and what module you're trying to pip install would be helpful!

@Quicky, can't say I would have successfully used those libraries, although I have accessed Google services, at least the calendar. Can you share a full stack trace of the import attempt? Could you be accidentally running your script in Python 2?
Cheers, here's the trace:

    Traceback (most recent call last):
      File "/private/var/mobile/Containers/Shared/AppGroup/98568767-10B5-4B13-A6E1-34E319B6C605/Pythonista3/Documents/Quickify/quickstart.py", line 10, in <module>
        import googleapiclient.discovery
      File "/private/var/mobile/Containers/Shared/AppGroup/98568767-10B5-4B13-A6E1-34E319B6C605/Pythonista3/Documents/site-packages-3/googleapiclient/discovery.py", line 49, in <module>
        import google.api_core.client_options
    ModuleNotFoundError: No module named 'google.api_core'

The pip installs I ran were:

    pip install google-api-python-client
    pip install google-auth-oauthlib google-auth-httplib2

For testing before trying to get it working on my own script, I'm using Google's example for getting a YouTube channel list:

```
# -*- coding: utf-8 -*-

# Sample Python code for youtube.channels.list
# See instructions for running these code samples locally:
#

import os

import google_auth_oauthlib.flow
import googleapiclient.discovery
import googleapiclient.errors

scopes = [""]

def main():
    api_service_name = "youtube"
    api_version = "v3"
    client_secrets_file = "YOUR_CLIENT_SECRET_FILE.json"

    # Get credentials and create an API client
    flow = google_auth_oauthlib.flow.InstalledAppFlow.from_client_secrets_file(
        client_secrets_file, scopes)
    credentials = flow.run_console()
    youtube = googleapiclient.discovery.build(
        api_service_name, api_version, credentials=credentials)

    request = youtube.channels().list(
        part="snippet,contentDetails,statistics",
        forUsername="GoogleDevelopers"
    )
    response = request.execute()
    print(response)

if __name__ == "__main__":
    main()
```
I have some personal files I’d like to use with “import”. Thank you @3ryck there are a few ways to import files into pythonista... iCloud, share sheet, copy/paste. Once in pythonista, move them into the My iPad folder (aka ~/Documents), then you can import them from scripts in the same folder. Alternatively, you may copy them to site-packages, where they are accessible to all scripts
https://forum.omz-software.com/topic/6473/module-not-found-after-installing-with-pip-in-stash/7
Detection of planar objects I am trying to do feature extraction using C++ API. For this, I am using... as reference. The code I have written is : #include <opencv2/core/core.hpp> #include <opencv2/highgui/highgui.hpp> #include <opencv2/imgproc/imgproc.hpp> #include <opencv/cv.hpp> #include <opencv2/imgcodecs/imgcodecs.hpp> #include <opencv2/calib3d/calib3d.hpp> #include <opencv2/features2d/features2d.hpp> #include <opencv2/features2d.hpp> #include <opencv2/calib3d.hpp> using namespace cv; #include <iostream> #include <stdio.h> using namespace std; Mat img; Mat templ; int main(int, char** argv) { templ = cv::imread("C:/Users/.../workspace/FeatureDetectionCPP/src/temp.jpg", IMREAD_GRAYSCALE); img = cv::imread("C:/Users/.../workspace/FeatureDetectionCPP/src/image.jpg", IMREAD_GRAYSCALE); if(templ.empty()) { cout<<"Empty template"; } if(img.empty()) { cout<<"Empty image"; } Ptr<Feature2D> surf = SURF::create(); vector<KeyPoint> keypoints1; Mat descriptors1; surf->detectAndCompute(img, Mat(), keypoints1, descriptors1); Ptr<Feature2D> surf1 = SURF::create(); vector<KeyPoint> keypoints2; Mat descriptors2; surf1->detectAndCompute(templ, Mat(), keypoints2, descriptors2); // matching descriptors BruteForceMatcher<L2<float> > matcher; vector<DMatch> matches; matcher.match(descriptors1, descriptors2, matches); // drawing the results namedWindow("matches", 1); Mat img_matches; drawMatches(templ, keypoints1, img, keypoints2, matches, img_matches); imshow("matches", img_matches); waitKey(0); return 0; } During compilation, I got this: error: 'SURF' has not been declared Ptr<feature2d> surf1 = SURF::create();; On searching, I found that #include "opencv2/nonfree/features2d.hpp" should be added. However, I don't have any folder nonfree or misc in the opencv2 folder. I am using opencv 3.0.0. And, have compiled it using mingw. Can anyone help in telling as if why this error is appearing. And, how can I resolve it. I have looked at... 
but I don't understand: on downloading the zip file, where shall I place it? How shall it be compiled to add to the existing build of OpenCV? @berak, can you help me with this?

To use SURF or SIFT in 3.0, you need the opencv_contrib repo. Also, they are in another namespace, so try to add:

There's also plan B: use another keypoint detector / feature extractor pair, e.g. AKAZE instead of SURF should do pretty well.

cmake -DOPENCV_EXTRA_MODULES_PATH=/your/opencv_contrib/modules. Again, please see the readme here.

I am using cmake-gui. I just entered the path for contrib/modules. On executing mingw32-make install, the issues coming up are:

warning: ignoring #pragma warning [-Wunknown-pragmas]
#pragma warning( disable : 4267 )
In file included from C:/opencv1/opencv_contrib-master/opencv_contrib-master/modules/line_descriptor/src/precomp.hpp:72:0,
                 from C:\opencv1\build\x86\mingw\modules\line_descriptor\opencv_line_descriptor_pch_dephelp.cxx:1:
C:/opencv1/opencv_contrib-master/opencv_contrib-master/modules/line_descriptor/src/bitops.hpp:49:21: fatal error: intrin.h: No such file or directory
#include <intrin.h>

How do I get rid of them?

Oh, the line_descriptor module again. It did not compile for me, either ;( You might disable it by ticking BUILD_opencv_line_descriptor=OFF in cmake-gui. There might be more modules which do not compile (saliency?), just disable them in the same way.

Okay, so now on compilation, 3 errors are still left. What shall I do for BruteForceMatcher?

BFMatcher. Your original code seems rather outdated. See e.g. here for a recent demo using AKAZE.

@berak, and by the way... saliency compiled w/o giving errors :) However, datasets threw an error. This time, when I am using AKAZE, the issues I am facing are: 1.
It's dealing with images of 32-bit depth, and for others it's giving the error:

Assertion failed (type == B.type() && (type == CV_32FC1 || type == CV_64FC1 || type == CV_32FC2 || type == CV_64FC2)) in gemm, file C:\opencv\opencv-master\opencv-master\modules\core\src\matmul.cpp, line 893

2. Even when I crop a part of the image, it isn't finding any matches :/ I am using the image - ...

And BFMatcher gives the error:

Assertion failed ((type == CV_8U && dtype == CV_32S) || dtype == CV_32F) in batchDistance, file C:\opencv\opencv-master\opencv-master\modules\core\src\stat.cpp, line 3662

for images not having 32-bit depth. Even for some images it works, and for some it gives this error. Does it require any special kind of image? Or is there something else... I have also tried with the images graf1.png and graf3.png, but the error is the same... Can you please help me with this @berak?

Since I have faced a lot of problems in setting up OpenCV for Windows 7, I ended up writing a blog describing the process. You may find it at ... and help others too.

Thanks, Shruti
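The BFMatcher the thread settles on is, at its core, a nearest-neighbour search over descriptor vectors. As a rough, stdlib-only illustration (this is not OpenCV code, and the tiny 2-D "descriptors" below are made up for the example), brute-force L2 matching can be sketched like this:

```python
import math

def brute_force_match(query, train):
    """For each query descriptor, find the index of the nearest
    train descriptor by L2 distance -- the core idea behind
    BFMatcher with an L2 norm (without cross-checking)."""
    matches = []
    for qi, q in enumerate(query):
        best_ti, best_d = None, float("inf")
        for ti, t in enumerate(train):
            # Euclidean distance between the two descriptor vectors
            d = math.sqrt(sum((a - b) ** 2 for a, b in zip(q, t)))
            if d < best_d:
                best_ti, best_d = ti, d
        matches.append((qi, best_ti, best_d))
    return matches

# Two tiny, made-up 2-D "descriptor" sets
query = [(0.0, 0.0), (5.0, 5.0)]
train = [(4.9, 5.1), (0.1, -0.1)]
print(brute_force_match(query, train))
```

Real descriptors are 64- or 128-dimensional float vectors (SURF) or binary strings (AKAZE/ORB, matched with Hamming distance), but the matching loop has the same shape.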
ASGI, WSGI, and Python

Most popular Python web frameworks, such as Django and Flask, work with WSGI (Web Server Gateway Interface) under the hood, which has been around since PEP 333 in 2003 and its update PEP 3333 in 2010. With the introduction of asyncio in Python 3 and the await syntax in Python 3.5, the Python community finally has an easy way to do asynchronous programming. It was only a matter of time before steam would pick up for asynchronous support in web frameworks, in the form of ASGI (Asynchronous Server Gateway Interface).

Major web frameworks are currently working to go async, including Django, which is a major effort in itself given the size of the framework. Flask, which uses Werkzeug internally, does not have ASGI support yet.

If you are building a new Python-based web project today, the new ASGI web frameworks should be on your list for consideration; they "generally" perform faster than their WSGI-only counterparts for the same workload, and while benchmarks are helpful, it is still best to benchmark against your own use cases.

ASGI-based web frameworks perform much better than their WSGI-only counterparts

Try an ASGI-Supported Web Framework Today

If you are looking for a Django-like experience in a full web framework, FastAPI would offer the closest experience. There's full support for Pydantic, which makes the experience more pure Python than learning a new framework-specific API.

If you are looking for a Flask-like experience in a micro web framework, you will be spoiled for options.

- Quart offers full API compatibility with Flask, and should be the first place where you start
- Vibora has an API inspired by Flask with an obsession for speed
- Sanic is also inspired by Flask but has different design decisions from Vibora

from quart import Quart

app = Quart(__name__)

@app.route('/')
async def hello():
    return 'hello'

app.run()

If you are looking for an ASGI framework/toolkit, Starlette will be the one to go for.
It is lower level than the ones listed above, as it is ultimately a toolkit, but you might find it useful for developing your own frameworks. Starlette-Starter is a boilerplate that I have built for my own API projects, which might be helpful if you are looking for a starting point for your own frameworks.

If you try any of the frameworks above, I will be happy to hear more about your explorations with them here or on Twitter.
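Under all of these frameworks, an ASGI application is simply an async callable taking (scope, receive, send). As a framework-free sketch (the fake_receive/fake_send helpers are invented here purely to drive the app without a server), a minimal ASGI "hello" app looks like this:

```python
import asyncio

# A minimal raw ASGI application: an async callable taking
# (scope, receive, send). Frameworks like Starlette and Quart
# ultimately produce callables of this shape.
async def app(scope, receive, send):
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({"type": "http.response.body", "body": b"hello"})

# Drive it without a server by faking the receive/send channels.
sent = []

async def fake_receive():
    return {"type": "http.request", "body": b"", "more_body": False}

async def fake_send(message):
    sent.append(message)

asyncio.run(app({"type": "http", "path": "/"}, fake_receive, fake_send))
print(sent[0]["status"], sent[1]["body"])
```

In production an ASGI server such as Uvicorn or Hypercorn plays the role of the fake channels, translating real sockets into these event dictionaries.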
Signals are a neat feature of Qt that allow you to pass messages between different components in your applications. Signals are connected to slots, which are functions (or methods) that will be run every time the signal fires. Many signals also transmit data, providing information about the state change or widget that fired them. The receiving slot can use this data to perform different actions in response to the same signal.

However, there is a limitation: the signal can only emit the data it was designed to. So for example, a QAction has a .triggered signal that fires when that particular action has been activated. The triggered signal emits a single piece of data -- the checked state of the action after being triggered. For non-checkable actions, this value will always be False. The receiving function does not know which QAction triggered it, nor does it receive any other data about it.

This is usually fine. You can tie a particular action to a unique function which does precisely what that action requires. Sometimes, however, you need the slot function to know more than the QAction is giving it. This could be the object the signal was triggered on, or some other associated metadata which your slot needs to perform the intended result of the signal. Intercepting and modifying signals in this way is a powerful technique for extending the built-in signals provided by Qt.

Intercepting the signal

Instead of connecting the signal directly to the target function, you instead use an intermediate function to intercept the signal, modify the signal data, and forward it on to your actual slot function. This intermediate function must accept the value sent by the signal (here the checked state) and then call the real slot, passing any additional data with the arguments.

def fn(checked):
    self.handle_trigger(checked, <additional args>)

Rather than defining this intermediate function, you can also achieve the same thing using a lambda function. As above, this accepts a single parameter checked and then calls the real slot.
lambda checked: self.handle_trigger(checked, <additional args>)

In both examples the <additional args> can be replaced with anything you want to forward to your slot. In the example below we're forwarding the QAction object action to the receiving slot.

action = QAction()
action.triggered.connect(
    lambda checked: self.handle_trigger(checked, action)
)

Our handle_trigger slot method will receive both the original checked value and the QAction object. Our receiving slot can look something like this:

# a class method.
def handle_trigger(self, checked, action):
    # do something here.

Below are a few examples using this approach to modify the data sent with the MainWindow.windowTitleChanged signal.

from PyQt6.QtWidgets import (
    QApplication, QMainWindow
)
from PyQt6.QtCore import Qt

import sys


class MainWindow(QMainWindow):

    def __init__(self):
        super(MainWindow, self).__init__()

        # SIGNAL: The connected function will be called whenever the window
        # title is changed. The new title will be passed to the function.
        self.windowTitleChanged.connect(self.on_window_title_changed)

        # SIGNAL: The connected function will be called whenever the window
        # title is changed. The new title is discarded and the
        # function is called without parameters.
        self.windowTitleChanged.connect(lambda x: self.on_window_title_changed_no_params())

        # SIGNAL: The connected function will be called whenever the window
        # title is changed. The new title is discarded and the
        # function is called without parameters.
        # The function has default params.
        self.windowTitleChanged.connect(lambda x: self.my_custom_fn())

        # SIGNAL: The connected function will be called whenever the window
        # title is changed. The new title is passed to the function
        # and replaces the default parameter.
        self.windowTitleChanged.connect(lambda x: self.my_custom_fn(x))

        # SIGNAL: The connected function will be called whenever the window
        # title is changed. The new title is passed to the function
        # and replaces the default parameter. Extra data is passed from
        # within the lambda.
        self.windowTitleChanged.connect(lambda x: self.my_custom_fn(x, 25))

        # This sets the window title which will trigger all the above signals
        # sending the new title to the attached functions or lambdas as the
        # first parameter.
        self.setWindowTitle("My Signals App")

    # SLOT: This accepts a string, e.g. the window title, and prints it
    def on_window_title_changed(self, s):
        print(s)

    # SLOT: This is called when the window title changes.
    def on_window_title_changed_no_params(self):
        print("Window title changed.")

    # SLOT: This has default parameters and can be called without a value
    def my_custom_fn(self, a="HELLLO!", b=5):
        print(a, b)


app = QApplication(sys.argv)
w = MainWindow()
w.show()
app.exec()

The .setWindowTitle call at the end of the __init__ block changes the window title and triggers the .windowTitleChanged signal, which emits the new window title as a str. We've attached a series of intermediate slot functions (as lambda functions) which modify this signal and then call our custom slots with different parameters.

Running this produces the following output.

My Signals App
Window title changed.
HELLLO! 5
My Signals App 5
My Signals App 25

The intermediate functions can be as simple or as complicated as you like -- as well as discarding/adding parameters, you can also perform lookups to modify signals to different values. In the following example a checkbox signal Qt.CheckState.Checked or Qt.CheckState.Unchecked is modified by an intermediate slot into a bool value.
from PyQt6.QtWidgets import (
    QApplication, QMainWindow, QCheckBox
)
from PyQt6.QtCore import Qt

import sys


class MainWindow(QMainWindow):

    def __init__(self):
        super(MainWindow, self).__init__()

        checkbox = QCheckBox("Check?")

        # Option 1: conversion function
        def checkstate_to_bool(state):
            if state == Qt.CheckState.Checked:
                return self.result(True)

            return self.result(False)

        checkbox.stateChanged.connect(checkstate_to_bool)

        # Option 2: dictionary lookup
        _convert = {
            Qt.CheckState.Checked: True,
            Qt.CheckState.Unchecked: False
        }

        checkbox.stateChanged.connect(
            lambda v: self.result(_convert[v])
        )

        self.setCentralWidget(checkbox)

    # SLOT: Accepts the check value.
    def result(self, v):
        print(v)


app = QApplication(sys.argv)
w = MainWindow()
w.show()
app.exec()

In this example we've connected the .stateChanged signal to result in two ways -- a) with an intermediate function which calls the .result method with True or False depending on the signal parameter, and b) with a dictionary lookup within an intermediate lambda. Running this code will output True or False to the command line each time the state is changed (once for each time we connect to the signal).

QCheckbox triggering 2 slots, with modified signal data

Trouble with loops

One of the most common reasons for wanting to connect signals in this way is when you're building a series of objects and connecting signals programmatically in a loop. Unfortunately, then things aren't always so simple. If you try to construct intercepted signals while looping over a variable, and want to pass the loop variable to the receiving slot, you'll hit a problem.
For example, in the following code we create a series of buttons, and use an intermediate function to pass the button's value (0-9) with the pressed signal.

import sys

from PyQt6.QtWidgets import (
    QApplication, QWidget, QPushButton, QLabel, QHBoxLayout, QVBoxLayout
)


class Window(QWidget):

    def __init__(self):
        super().__init__()

        v = QVBoxLayout()
        h = QHBoxLayout()

        for a in range(10):
            button = QPushButton(str(a))
            button.pressed.connect(
                lambda: self.button_pressed(a)
            )
            h.addWidget(button)

        v.addLayout(h)

        self.label = QLabel("")
        v.addWidget(self.label)
        self.setLayout(v)

    def button_pressed(self, n):
        self.label.setText(str(n))


app = QApplication(sys.argv)
w = Window()
w.show()
app.exec()

If you run this you'll see the problem -- no matter which button you click on, you get the same number (9) shown on the label. Why 9? It's the last value of the loop.

The problem is the line lambda: self.button_pressed(a), where we pass a to the final button_pressed slot. In this context, a is bound to the loop.

for a in range(10):
    # .. snip ...
    button.pressed.connect(
        lambda: self.button_pressed(a)
    )
    # .. snip ...

We are not passing the value of a when the button is created, but whatever value a has when the signal fires. Since the signal fires after the loop is completed -- we interact with the UI after it is created -- the value of a for every signal is the final value that a had in the loop: 9. So clicking any of them will send 9 to button_pressed.

The solution is to pass the value in as a (re-)named parameter. This binds the parameter to the value of a at that point in the loop, creating a new, un-connected variable. The loop continues, but the bound variable is not altered. This ensures the correct value whenever it is called.

lambda val=a: self.button_pressed(val)

You don't have to rename the variable; you could also choose to use the same name for the bound value.

lambda a=a: self.button_pressed(a)

The important thing is to use named parameters. Putting this into a loop, it would look like this:

for a in range(10):
    button = QPushButton(str(a))
    button.pressed.connect(
        lambda val=a: self.button_pressed(val)
    )

Running this now, you will see the expected behavior -- with the label updating to a number matching the button which is pressed.
The working code is as follows:

import sys

from PyQt6.QtWidgets import (
    QApplication, QWidget, QPushButton, QLabel, QHBoxLayout, QVBoxLayout
)


class Window(QWidget):

    def __init__(self):
        super().__init__()

        v = QVBoxLayout()
        h = QHBoxLayout()

        for a in range(10):
            button = QPushButton(str(a))
            button.pressed.connect(
                lambda val=a: self.button_pressed(val)
            )
            h.addWidget(button)

        v.addLayout(h)

        self.label = QLabel("")
        v.addWidget(self.label)
        self.setLayout(v)

    def button_pressed(self, n):
        self.label.setText(str(n))


app = QApplication(sys.argv)
w = Window()
w.show()
app.exec()
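The late-binding behaviour described above is plain Python, not Qt. A minimal sketch without any Qt at all (the names late and bound are arbitrary) reproduces the same effect and the default-argument fix:

```python
# The late-binding pitfall, reproduced without Qt: every closure
# created in the loop sees the loop variable's *final* value,
# unless we bind the current value via a default argument.
late, bound = [], []

for a in range(3):
    late.append(lambda: a)        # looks up `a` when called, after the loop
    bound.append(lambda a=a: a)   # captures `a`'s value at this iteration

print([f() for f in late])   # [2, 2, 2]
print([f() for f in bound])  # [0, 1, 2]
```

This is exactly why every button in the broken example reports 9: the lambdas are all called after the loop has finished.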
Boruvka's algorithm for Minimum Spanning Tree in Python

Hello coders, in this tutorial we are going to study Boruvka's algorithm in Python. It is used to find the minimum spanning tree. First of all, let's understand what a spanning tree is: it is a subgraph that connects all the vertices of the graph. It is known as a minimum spanning tree (MST) if these vertices are connected with the least weighted edges. A spanning tree of a connected graph with V vertices always has exactly V-1 edges. This algorithm works similarly to Prim's and Kruskal's algorithms.

Borůvka's algorithm in Python

Otakar Borůvka developed this algorithm in 1926 to find MSTs.

Algorithm

- Take a connected, weighted, and undirected graph as an input.
- Initialize the vertices as individual components.
- Initialize an empty graph, i.e. the MST.
- While the number of components is greater than one, do the following for each component:
  a) Find the least weighted edge which connects this component to any other component.
  b) Add that least weighted edge to the MST if it is not there already.
- Return the minimum spanning tree.

Source Code

class Graph:

    # These are the small helper functions used in the main Boruvka function

    # It does union of the two sets of x and y with the help of rank
    def union(self, parent, rank, x, y):
        xroot = self.find(parent, x)
        yroot = self.find(parent, y)

        if rank[xroot] < rank[yroot]:
            parent[xroot] = yroot
        elif rank[xroot] > rank[yroot]:
            parent[yroot] = xroot
        else:
            # Make one the root and increment its rank.
            parent[yroot] = xroot
            rank[xroot] += 1

    def __init__(self, vertices):
        self.V = vertices
        self.graph = []  # list of edges [u, v, w]

    # add an edge to the graph
    def addEdge(self, u, v, w):
        self.graph.append([u, v, w])

    # find the set of an element i
    def find(self, parent, i):
        if parent[i] == i:
            return i
        return self.find(parent, parent[i])

    # constructing the MST
    def boruvkaMST(self):
        parent = []
        rank = []

        numTrees = self.V
        MSTweight = 0

        for node in range(self.V):
            parent.append(node)
            rank.append(0)
        cheapest = [-1] * self.V

        # Keep combining components (or sets) until all
        # components are combined into a single MST
        while numTrees > 1:
            # Find the cheapest outgoing edge of every component
            for i in range(len(self.graph)):
                u, v, w = self.graph[i]
                set1 = self.find(parent, u)
                set2 = self.find(parent, v)
                if set1 != set2:
                    if cheapest[set1] == -1 or cheapest[set1][2] > w:
                        cheapest[set1] = [u, v, w]
                    if cheapest[set2] == -1 or cheapest[set2][2] > w:
                        cheapest[set2] = [u, v, w]

            # Add the cheapest edges to the MST
            for node in range(self.V):
                if cheapest[node] != -1:
                    u, v, w = cheapest[node]
                    set1 = self.find(parent, u)
                    set2 = self.find(parent, v)
                    if set1 != set2:
                        MSTweight += w
                        self.union(parent, rank, set1, set2)
                        print("Edge %d-%d has weight %d is included in MST" % (u, v, w))
                        numTrees = numTrees - 1
            cheapest = [-1] * self.V

        print("Weight of MST is %d" % MSTweight)

g = Graph(4)
g.addEdge(0, 1, 11)
g.addEdge(0, 2, 5)
g.addEdge(0, 3, 6)
g.addEdge(1, 3, 10)

g.boruvkaMST()

Output:

Edge 0-2 has weight 5 is included in MST
Edge 1-3 has weight 10 is included in MST
Edge 0-3 has weight 6 is included in MST
Weight of MST is 21

Now, let us understand using an example. For each vertex, find the least weighted edge that connects it to some other vertex, for instance:

Vertex   Cheapest edge that connects it to some other vertex
{0}      0-1
{1}      0-1
{2}      2-8
{3}      2-3
{4}      3-4
{5}      5-6
{6}      6-7
{7}      6-7
{8}      2-8

The edges with green markings are the least weighted.

Component    Cheapest edge that connects it to some other component
{0,1}        1-2 (or 0-7)
{2,3,4,8}    2-5
{5,6,7}      2-5

Now, repeat the above steps; as a result, we keep merging components along their least weighted edges. After completing all the iterations we will get the final graph, i.e., the minimum spanning tree (MST).
In conclusion, we have seen how to create an MST of a connected, weighted graph, and that Borůvka's algorithm makes it quite straightforward to build a minimum spanning tree.
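As a sanity check on the 4-vertex example above, we can brute-force the same graph with the standard library: enumerate every 3-edge subset, keep only the ones that span all vertices without a cycle, and confirm that the minimum total weight is indeed 21. (The spans helper here is a throwaway union-find written only for this check, not part of the tutorial's class.)

```python
from itertools import combinations

# The example graph from the tutorial: (u, v, weight)
edges = [(0, 1, 11), (0, 2, 5), (0, 3, 6), (1, 3, 10)]

def spans(subset, n=4):
    """True if the edge subset connects all n vertices acyclically."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for u, v, _ in subset:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # adding this edge would create a cycle
        parent[ru] = rv
    return len({find(i) for i in range(n)}) == 1

# A spanning tree of 4 vertices has exactly 3 edges.
best = min(sum(w for _, _, w in s)
           for s in combinations(edges, 3) if spans(s))
print(best)  # 21, matching the algorithm's output
```

Exhaustive search like this only works for tiny graphs, of course; the point of Borůvka's algorithm is to get the same answer in near-linear time.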
RapidSMS Developers Guide/Internationalization

This section explains how to localize your application. It describes how to switch from one language (the code language, usually English) to another language. If you want to serve multiple languages at the same time, you will have to handle that yourself.

Changing your code to support localization

Create a new directory inside your app and name it locale. In our case, we create apps/survey/locale.

Import the needed packages in each file which contains text strings or messages to be translated. In app.py and models.py we have to import the gettext translation lib from Django, and from it we need the ugettext function:

from django.utils.translation import ugettext as _

We give ugettext the alias _ to make the code clearer and easier to write.

Note: in models and GUI code it is recommended to import the ugettext_lazy function instead of ugettext.

For each message you want to translate, make a call to the _() function, passing it the text. For instance:

message.respond(u"Hello World!")
message.respond(_(u"Hello World!"))

To translate a message with parameters, make sure you only translate the format, not the evaluated string:

message.respond(u"Thank you %s. for your opinion" % lname)
message.respond(_(u"Thank you %s. for your opinion") % lname)

If Django finds this sentence in the django.mo file (see below), it will retrieve the corresponding translation; otherwise it will use the default text, which is the original one.

Creating the translation file

Once you have marked all your strings as translatable in your app, create a translation file using the following commands:

cd apps/myapp
mkdir locale
django-admin.py makemessages -l ar

In the previous example, myapp is the name of the app you are translating and ar is the language code of your target language.
In the locale folder, Django will create a new folder named after the language code (ar here) with an LC_MESSAGES subfolder, which itself contains a file named django.po. This is the file to be translated; it is the one you can share with a translator. Each time you create or modify translatable strings in your app, re-issue the makemessages command to update the .po file.

Translation

Open the django.po file and translate the text for each message, like:

msgid "Thank you %s. for your opinion"
msgstr "شكرا لك %s. لمشاركتنا رأيك"

- msgid: the original message (the default language)
- msgstr: the corresponding translation

Note: it is mandatory to keep all formatting parameters (%s, %d, %(key)s, etc.) from the original in the translation. For an Arabic translation, %s must appear like this, not s%.

Compiling the translation

Django doesn't use the django.po file directly. It uses a compiled version of it: django.mo. To compile your .po file, use the following command:

django-admin.py compilemessages

This command will create the .mo file, which is the one used by Django and RapidSMS to display localized strings. Relaunch the command every time you update the translation .po file to see the changes.

Enabling the localized version

To be able to use your localized version, you need to configure RapidSMS to use that language. This is done using the bonjour app. Link it if you haven't, then add it to your local.ini apps= list. Lastly, you need to configure bonjour in local.ini to specify the language code:

[bonjour]
lang=ar
There is no direct way to insert the start and end elevation of a line feature as attribute values. The Interpolate Shape tool can only add z-values to the feature geometry, and the Add Surface Information tool can only add the mean elevation of a feature by defining the elevation surface. However, it is possible to add the start and end z-values to a line feature as attribute values by combining the Interpolate Shape tool and a Python script. The instructions provided describe how to add start and end z-values to a line feature.

import arcpy

# Specify the desired workspace.
input_fc = r'<workspace>\<line_feature>'

# Modify this part according to the specified field names created in step 2.
myfield1 = "Z_Start"
myfield2 = "Z_End"
myshape = "SHAPE@"

# Iterate over the rows in the attribute table of the line feature
# and input the z-values.
with arcpy.da.UpdateCursor(input_fc, (myshape, myfield1, myfield2)) as cursor:
    for row in cursor:
        geom = row[0]
        startpt = row[0].firstPoint
        endpt = row[0].lastPoint
        row[1] = round(startpt.Z, 2)
        row[2] = round(endpt.Z, 2)
        cursor.updateRow(row)
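Setting arcpy aside, the first/last-vertex logic in the script can be sketched with plain tuples: treat each "polyline" as a list of (x, y, z) vertices and record the rounded z of its first and last point, just as the cursor does with firstPoint and lastPoint. (The sample coordinates below are made up for illustration.)

```python
# Each hypothetical "polyline" is a list of (x, y, z) vertices.
lines = [
    [(0.0, 0.0, 12.5), (5.0, 0.0, 14.0), (9.0, 2.0, 17.25)],
    [(1.0, 1.0, 3.0), (1.0, 4.0, 2.5)],
]

rows = []
for line in lines:
    z_start = round(line[0][2], 2)   # z of the first vertex
    z_end = round(line[-1][2], 2)    # z of the last vertex
    rows.append((z_start, z_end))

print(rows)
```

In the real script the same two values are written back into the Z_Start and Z_End fields via cursor.updateRow, after Interpolate Shape has put z-values into the geometry in the first place.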
"T. > So the scripting component is an object, and you are creating instances... > > Example vbs client script(where objFTP is an instance of the Python > WSC): > ---- > strRemotePath = rs.Fields("RemotePath").Value > > 'strRemotePath has a value here > > obj So here you appear to be calling a method of the instance you created... > > 'strRemotePath is now a nullstring, the cwd command has succeeded. > ---- > > Python code in WSC (objFTP is an instance of FTP imported from > ftplib): > ---- > def cwd(RemotePath, FTP=objFTP): > > ---- But this appears to be a function definition, not a method. Surely cwd() shouod be defined inside the definition of the class instantiated as objFTP. Its first argument should be self -- that's how ytou'd expect to [pick up the object on which the method is being called. > > I had to use the FTP=objFTP as an argument because I couldn't > reference objFTP in the function from the "global" namespace where it > resides (another quirk?). > Erm, objFTP shouldn't (and probably doesn't) reside in your Python module's namespace. It should be created dynamically and referred to as self (method 1st argument) when required. > Anyway, does anybody have any idea how I can get this thing to quit > "eating" my variable values? There's probably something I'm missing here. Have you got a copy of Hammond and Robinson's "Programming Python on Win32"? Great reference to all this stuff. regards Steve --
I want to create a left-aligned group in the menu line. I found GroupBeginInMenuLine() in the official documentation, but it can only align the group to the right. I searched for this problem and it seems that this function cannot be used to create a left-aligned group. Is there any other way to create a left-aligned group? What should I do?

The above is a plug-in that uses GroupBeginInMenuLine(), and the following is a plug-in by others that implements left alignment.

Thanks, aimidi

Hello @AiMiDi,

thank you for reaching out to us. I have a bit of trouble understanding the intentions of your posting. For one, it would be important to know where you are defining your dialog - in a resource file or in code? Secondly, you talk about menus, but show the image of a TAB gadget if I am not mistaken. And finally, you are asking for a left-aligned gadget, but that is the default. So, I assume you want to align a menu to the right?

Menus themselves cannot be aligned to the right, but there is GeDialog::GroupBeginInMenuLine(), with which you can open a second group within the dialog menu, so to speak, which is meant to place elements like a search bar or similar things. This will operate right-side aligned.

In case I have misunderstood your question, I would have to ask you to restate it.

I want to add a few TAB gadgets to the menu bar of the window, to make them aligned to the left (left-side aligned, starting from the left). Or: I want to make my TAB look like the layout I created with code. Below is the code.

Bool DocTabDialog::CreateLayout()
{
    this->SetTitle("DocTab"_s);
    BaseDocument* doc = GetFirstDocument();

    GroupBeginInMenuLine();
    GroupBegin(1000, BFH_LEFT, 0, 1, ""_s, 0, 0, 0);
    GroupSpace(0, 0);

    while (doc != nullptr) // Traverse all the documents and add them to the tab bar
    {
        Int32 Index = doc_tab_dialog_arr.GetCount();
        DocTabUserArea* doc_tab_user_area = new DocTabUserArea(doc, DOC_TAB);
        C4DGadget* const userAreaGadget = this->AddUserArea(10000 + Index, BFH_LEFT, 160, 14);
        if (userAreaGadget != nullptr)
            this->AttachUserArea((*doc_tab_user_area), userAreaGadget);
        iferr(doc_tab_dialog_arr.Append(doc_tab_user_area))
            return false;
        doc = doc->GetNext();
    }

    addDocTab = new DocTabUserArea(nullptr, ADD_TAB);
    C4DGadget* userAreaGadget = this->AddUserArea(1001, BFH_LEFT, 30, 14); // Add "Add Tab" button
    if (userAreaGadget != nullptr)
        this->AttachUserArea((*addDocTab), userAreaGadget);

    recDocTab = new DocTabUserArea(nullptr, REC_DOC);
    userAreaGadget = this->AddUserArea(1002, BFH_LEFT, 30, 14); // Add "History document" button
    if (userAreaGadget != nullptr)
        this->AttachUserArea((*recDocTab), userAreaGadget);

    GroupEnd();
    GroupEnd();
    return true;
}

See more here. Thanks, Aimidi

Hello @ferdinand, can you understand what I said? Do I need to add anything? Thank you for your reply.

No, you do not need to add anything. I think I do understand your goals now. I will answer here today or tomorrow. Without any further questions, we will consider this topic as solved by Monday and flag it accordingly.

Thank you for your understanding,
Ferdinand

Hi @AiMiDi,

so, if I do understand your example correctly, you want to place your custom tab element gadget via GroupBeginInMenuLine at the height of the menu, but at the same time have its content aligned to the left, i.e., in the space which is normally occupied by the "normal" menu of a dialog.

When you look at the documentation of GroupBeginInMenuLine, you will see that it is built into the method to align its content to the right of the menu space. The only thing you could try to do is use BFH_SCALEFIT flags to try to consume all the space from the right to the left -- which I tried for you, and it does not work. I also tried using a dummy element for pushing the content to the left, which works on a technical level, but effectively introduces so many other problems that it is not usable either (with a lot of dedication, one MIGHT be able to push that hack, but this is obviously not the intended use of the SDK and therefore out of scope of support).

So, in the end this is simply not possible, as it goes against the purpose of GroupBeginInMenuLine of showing a smaller element on the right side of the menu, e.g., a search field. You will have to move your tab interface into the regular layout of the dialog.

Cheers,
Ferdinand

The code I used to look at the method:

"""
"""

import c4d


class TabMenuDialog(c4d.gui.GeDialog):
    """
    """

    def CreateLayout(self):
        """
        """
        self.SetTitle("Tabulated Menu Dialog")

        documentNode = c4d.documents.GetFirstDocument()
        commonFlag = c4d.BFH_LEFT | c4d.BFH_SCALEFIT

        self.MenuSubBegin("File")
        self.MenuSubEnd()

        self.GroupBeginInMenuLine()
        self.GroupBegin(1000, commonFlag, 0, 1, "", 0, 0, 0)
        self.TabGroupBegin(1001, commonFlag, c4d.TAB_TABS)

        ID_DYNMAIC, i = 2000, 0
        while documentNode:
            documentName = documentNode.GetDocumentName()
            firstObject = documentNode.GetFirstObject()
            firstObjectName = firstObject.GetName() if firstObject else "None"

            self.GroupBegin(ID_DYNMAIC + i, c4d.BFH_LEFT, 0, 1, documentName)
            self.AddStaticText(ID_DYNMAIC + i + 1, c4d.BFH_LEFT, 0, 10,
                               firstObjectName, c4d.BORDER_NONE)
            self.GroupEnd()

            documentNode = documentNode.GetNext()
            ID_DYNMAIC += 2

        self.GroupEnd()
        # Will push stuff to the left
        self.AddStaticText(ID_DYNMAIC + 1, commonFlag, 1000, 10, "Filler",
                           c4d.BORDER_NONE)
        self.GroupEnd()
        self.GroupEnd()
        return True


def main():
    """
    """
    global myDialog
    myDialog = TabMenuDialog()
    myDialog.Open(c4d.DLG_TYPE_ASYNC)


if __name__ == '__main__':
    main()
A quick background

Spark Streaming provides windowed computations as one of its main features. This allows us to process data using a sliding window very efficiently. Let's consider the following figure:

As we can see here, we keep sliding the time window to process the data. The data that falls within the current window is operated upon to produce the right outputs. In our example, the operation is applied over the last 5 seconds of data and it slides by 2 seconds. This means that it aggregates the statistics of the last 5 seconds and displays these results once every 2 seconds.

A real example

Things get better when we see an actual example with some code! Let's consider an example where your website is getting visitors from different countries. Now, you want to see the counts of those countries only for the last 15 seconds, because you have to do some real-time analysis based on that. Also, you want to update those counts once every 6 seconds. For convenience, let's assume that the input comes in three-letter form, where "usa" is USA, "ind" is India, and "aus" is Australia. If we encounter anything else, we will call it "unknown".
Let’s see how to do this using Spark Python API:

import sys

from pyspark import SparkContext
from pyspark.streaming import StreamingContext

def get_countryname(line):
    country_name = line.strip()

    if country_name == 'usa':
        output = 'USA'
    elif country_name == 'ind':
        output = 'India'
    elif country_name == 'aus':
        output = 'Australia'
    else:
        output = 'Unknown'

    return (output, 1)

if __name__ == "__main__":
    if len(sys.argv) != 3:
        raise IOError("Invalid usage; the correct format is:\nwindow_count.py <hostname> <port>")

    batch_interval = 1  # base time unit (in seconds)
    window_length = 15 * batch_interval
    frequency = 6 * batch_interval

    spc = SparkContext(appName="WindowCount")
    stc = StreamingContext(spc, batch_interval)
    stc.checkpoint("checkpoint")

    lines = stc.socketTextStream(sys.argv[1], int(sys.argv[2]))
    addFunc = lambda x, y: x + y
    invAddFunc = lambda x, y: x - y
    window_counts = lines.map(get_countryname).reduceByKeyAndWindow(addFunc, invAddFunc, window_length, frequency)
    window_counts.pprint()

    stc.start()
    stc.awaitTermination()

How to run the code?

Save the above code in a file called “window_count.py”. We need a way to provide real time input to our system, so let’s set up a small data server using Netcat (a tool available on Unix-like systems). Open up the terminal and type the following:

$ nc -lk 9999

Now, let’s open up a new terminal and run our Spark Python code:

$ cd /path/to/spark-1.5.1
$ ./bin/spark-submit /path/to/window_count.py localhost 9999

Go back to the Netcat terminal and start entering the three-letter codes like “ind”, “usa”, “aus”, “xyz”, and so on. If you look in the Spark terminal, you should be able to see the counts being updated. If you observe carefully, you will see that those counts will go down to 0 if you wait for more than 15 seconds without entering any new data into the Netcat terminal. You are all set! You are now processing data in real time using windowed computations in Spark.

What happened in the code?
If you need a quick refresher on Spark Streaming, you should check out my previous blog post. As we can see in our code, we initialize the StreamingContext object “stc”. We read the input from the data server into the “lines” DStream. We then map it to the corresponding country names using the “map(get_countryname)” transformation.

After this, we need to compute the counts for the last 15 seconds. For this operation, we use “reduceByKeyAndWindow”. This takes the function and uses it to reduce the list to a single value. The beauty is that it takes the window length and sliding interval as input parameters, so it automatically considers only the relevant window to do the computation.

We use “invAddFunc” to speed up the computation. We want the counts to be updated once every 6 seconds and the window length is 15 seconds, so there is obviously some time overlap here. So instead of recomputing that overlapping part, it uses “invAddFunc” to do smart computation and avoid recalculating the same thing again. We then go ahead and print the counts using “window_counts.pprint()”.
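The add/subtract trick that reduceByKeyAndWindow performs with invAddFunc can be illustrated without Spark at all. The sketch below is plain Python with no Spark involved, and all names in it are my own. It updates a window's per-country counts incrementally: batches that slide into the window are added, and batches that slide out are subtracted, which is exactly why recomputing the overlapping part is unnecessary.

```python
# Toy model of incremental window maintenance, as reduceByKeyAndWindow
# does it: addFunc (x + y) for batches entering the window, invAddFunc
# (x - y) for batches leaving it. Not Spark code; names are illustrative.
from collections import Counter

def slide(window_counts, entering_batches, leaving_batches):
    """Update per-country counts incrementally instead of re-summing."""
    updated = Counter(window_counts)
    for batch in entering_batches:
        for country, n in batch.items():
            updated[country] += n          # addFunc: x + y
    for batch in leaving_batches:
        for country, n in batch.items():
            updated[country] -= n          # invAddFunc: x - y
    # Drop keys whose count fell to zero, like an expired country.
    return {k: v for k, v in updated.items() if v > 0}

# The window holds two old batches; one batch arrives, the oldest leaves.
window = {"India": 3, "USA": 2}
new_batch = {"USA": 1, "Australia": 2}
old_batch = {"India": 3}
print(slide(window, [new_batch], [old_batch]))   # {'USA': 3, 'Australia': 2}
```

The same final counts would come from re-summing every batch still inside the window; the incremental version just does less work per slide.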
https://prateekvjoshi.com/2015/12/29/performing-windowed-computations-on-streaming-data-using-spark-in-python/
Amongst the many features introduced in GameplayKit, one of the most immediately useful is its ability to provide artificial intelligence that can evaluate a situation and make smart choices. We're going to be using it in our Four in a Row game to provide a meaningful opponent, but first it's essential that you understand how GameplayKit tackles the AI problem because it directly affects the code we'll write. GameplayKit has three protocols we need to implement in various parts of our model:

- The GKGameModel protocol is used to represent the state of play, which means it needs to know where all the game pieces are, who the players are, what happens after each move is made, and what the score for a player is given any state.
- The GKGameModelPlayer protocol is used to represent one player in the game. This protocol is so simple we already implemented it: all you need to do is make sure your player class has a playerId integer. It's used to identify a player uniquely inside the AI.
- The GKGameModelUpdate protocol is used to represent one possible move in the game. For us, that means storing a column number to represent a piece being played there. This protocol requires that you also store a value integer, which is used to rank all possible results by quality to help GameplayKit make a good choice.

We have a sensible match for the first two in our Board and Player classes, but we have nothing suitable for GKGameModelUpdate so let's create that now. Like I said, this needs to track only how "good" a move is, where each move is represented by a column number to play. This is easy to do, so please go ahead and create a new Cocoa Touch class in your project. Name it “Move”, and make it subclass from “NSObject”.
Now replace its source code with this:

import GameplayKit
import UIKit

class Move: NSObject, GKGameModelUpdate {
    var value: Int = 0
    var column: Int

    init(column: Int) {
        self.column = column
    }
}

That's it: the default for value is 0, and we create a Move object by passing in the column it represents. We're done with that class, and I already said we were finished with the Player class, which means we can focus our mental energies on what remains: Board.

GameplayKit's artificial intelligence works through brute force: it tries every possible move, then tries every possible follow-on move, then every possible follow-on follow-on move, etc. This runs up combinations extremely quickly, particularly when you consider that there are 4,531,985,219,092 unique positions for all the pieces on the board! So, you will inevitably limit the depth of the search to provide just enough intelligence to be interesting.

Now, this bit is really important, so read carefully. When you ask GameplayKit to find a move, it will examine all possible moves. To begin with, that is every column, because they all have space for moves in them. It then takes a copy of the game, and makes a virtual move on that copy. It then takes a copy of the game, and makes a different virtual move, and so on until all initial first moves have been made.

Next, it starts to re-use its copies to save on memory: it will take one of those copies and apply a game state to it, which means it will reset the board so that it matches the position after one of its virtual moves. It will then rinse and repeat: it will examine all possible moves, and make one. It does this for all moves, and does so recursively until it has created a tree of all possible moves and outcomes, or at least as many as you ask it to scan. Each time the AI has made a move, it will ask us what the player score is.
For some games this will be as simple as returning a score variable, but for our 4IR game it's a bit trickier because there is no score, only a win or a loss. The original Apple source code provides a simple heuristic for this, and I've kept it here because it's quite fun – the AI can sometimes make dumb mistakes, or sometimes play like a genius, which makes the game interesting! If you were wondering, a heuristic is the computer science term for a guesstimate – it's a function that tries to solve a problem quickly by taking shortcuts. For us, that means we'll tell the AI the player's score is 1000 if a move wins the game, -1000 if a move loses the game, or 0 otherwise.

All this information is important because I hope now you can see why we separate the game model from the game view – why we have a slots array inside the game board and a placedChips array inside the view controller. If you're still not sure, try to imagine how many moves the AI needs to simulate in order to decide what to do – our board has seven columns, so: …and so on. Eventually one column will become full so the multiplications will decrease, but you're still talking many thousands of copies of the board.

Now imagine if the Board class kept track of all the UIViews used to draw the chips – suddenly we'd be copying far more than intended, and doing it 5000 times! So: if a couple of chapters ago you were thinking I was wasting your time by forcing you to separate your model from your view, I hope you can now see why. AI is slow enough without doing a huge stack of extra work for no reason!

That's enough theory, it's time for some code. If you remember nothing else, remember this: to simulate a move, GameplayKit takes copies of our board state, finds all possible moves that can happen, and applies them all on different copies.
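The brute-force search described above, with its +1000/-1000/0 heuristic, is essentially depth-limited minimax. The sketch below is plain Python over a deliberately tiny toy game (take 1 or 2 sticks; whoever takes the last stick loses), not GameplayKit and not the Four in a Row model, but it shows the same mechanics: try every legal move, recurse to a fixed depth, score terminal positions with the heuristic, and prefer the best outcome.

```python
# Depth-limited minimax with the article's heuristic values.
# The "pick-up sticks" rules here are invented purely for illustration.

def score(sticks, my_turn):
    # sticks == 0 means the previous player took the last stick and lost.
    if sticks == 0:
        return 1000 if my_turn else -1000
    return 0

def best_value(sticks, my_turn, depth):
    s = score(sticks, my_turn)
    if s != 0 or depth == 0:
        return s
    # Simulate every legal move (take 1 or 2 sticks) on a "copy" of the
    # state -- with an integer state the copy is implicit.
    values = [best_value(sticks - take, not my_turn, depth - 1)
              for take in (1, 2) if take <= sticks]
    # Maximize on our turns, minimize on the opponent's.
    return max(values) if my_turn else min(values)

# With 3 sticks left on our turn, taking 2 leaves the opponent the last
# stick, so a deep enough search finds a guaranteed win.
print(best_value(3, True, depth=4))   # 1000
```

Limiting `depth` is the same trade-off the article describes: a shallower search is faster but can miss wins or blunder into losses, which is part of what makes the opponent feel fallible.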
https://www.hackingwithswift.com/read/34/6/how-gameplaykit-ai-works-gkgamemodel-gkgamemodelplayer-and-gkgamemodelupdate
In previous articles I’ve broadly covered some local storage options for HTML5 mobile applications and more recently backend options for HTML5 mobile applications. Now I’m going to get a little more specific and cover a real world example of how to use PouchDB with Ionic 2, which encompasses both local storage and remote data storage.

A Quick Background on PouchDB and NoSQL

PouchDB is an in-browser NoSQL database that was inspired by the CouchDB project. Its biggest feature is that it allows for data storage offline, and it can automatically sync to a remote database when the application comes back online. If you’re not familiar with NoSQL, I’d recommend checking out this post. To give you a really quick background, NoSQL databases usually store data in a JSON-like key value style format, instead of the relational style tables that traditional SQL uses. The two approaches are completely different: the way you use them is different and the way you need to think about them is different.

If your application is just storing simple data then you will easily see how to go about using the NoSQL database PouchDB provides – as you can pretty much just use it as a simple key-value storage system (which doesn’t really require you to understand NoSQL at all). But if you’re building a more complex database then you should invest some time into learning how data should be structured with NoSQL, so that the way you are storing data makes sense and is easy to retrieve later.

In this tutorial we’re going to cover how to set up PouchDB in an Ionic 2 application, how to store and retrieve data locally, and how to sync the local database with a remote database hosted on Cloudant, which is part of IBM Bluemix.

Before We Get Started

Before you go through this tutorial, you should have at least a basic understanding of Ionic 2 concepts. You must also already have Ionic 2 installed.
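To make the "JSON-like key value" storage style concrete: PouchDB itself is a JavaScript library, but the document model it uses can be sketched in a few lines of any language. The toy Python class below is entirely my own invention, not PouchDB's API; it just shows the core idea that free-form documents are keyed by an _id field, so two documents need not share a schema the way two rows of an SQL table must.

```python
# Hypothetical in-memory sketch of a NoSQL document store. Real PouchDB
# is JavaScript (put()/allDocs() etc.); these names are illustrative.

class TinyDocStore:
    def __init__(self):
        self._docs = {}

    def put(self, doc):
        # Every document is addressed by its "_id", like a key-value pair.
        if "_id" not in doc:
            raise ValueError("every document needs an _id")
        self._docs[doc["_id"]] = dict(doc)

    def get(self, doc_id):
        return self._docs[doc_id]

    def all_docs(self):
        # Return documents sorted by _id, roughly like an allDocs scan.
        return [self._docs[k] for k in sorted(self._docs)]

db = TinyDocStore()
db.put({"_id": "2016-01-01", "message": "first visit", "country": "India"})
db.put({"_id": "2016-01-02", "message": "second visit"})  # different keys: fine
print(len(db.all_docs()))   # 2
```

The flexibility cuts both ways: nothing enforces a schema, so structuring documents sensibly is up to you, which is exactly the "invest some time into learning how data should be structured" point above.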
Set up Cloudant on IBM Bluemix

Before we start building the application, we’re going to set up our backend on Cloudant through IBM Bluemix. Bluemix gives you access to IBM’s Open Cloud Architecture and can be used to create, deploy and manage cloud-based applications. It provides a bunch of services you can use, and one of those is Cloudant. Cloudant is a NoSQL DBaaS (Database as a Service), so it’s an easy way for us to get our remote backend set up for PouchDB, but there are many other ways you can implement a backend for PouchDB. If you’d like, you can set up CouchDB (which is the protocol Cloudant implements) on your own server; this tutorial explains how you can go about doing this.

Ok, let’s get started with setting up Cloudant. First you will need to create an IBM Bluemix account. Once you have created your account and logged in, you should see a screen like this:

Choose the Create App option, choose Mobile, name your application and then click Finish. After that you should see a screen like this:

Choose Cloudant NoSQL DB from the Services menu on the left and then click View your Data on the Cloudant Dashboard. You should now be inside of the Cloudant dashboard. Click the Create Database option in the top right to create a new database and call it mytestdb or whatever you prefer. Now select the newly created database to view more details; you should see a screen like this:

Click the API link in the top right to get a reference to your Cloudant Database URL; this is what we will supply to PouchDB later, so make a note of it. You should also go to the Permissions section and generate an API key (making sure to give it Write and Replicate permissions) – this can be used with PouchDB to access your database, and it’s a better idea to use an API key which is revokable rather than the actual username and password for your account. Make sure to take note of the password because once you leave the screen you won’t be able to find it again.
There’s just one more thing we need to do in here. We need to enable CORS (Cross Origin Resource Sharing) so that we are able to make requests to the database from our application. To do that go to Account in the left menu, select CORS and then choose the All Domains (*) option. You should now have everything you need set up on Cloudant, ready to be used with PouchDB – so let’s get into building our Ionic application.

Generate a New Ionic 2 Application

We’re going to create a new Ionic application using the blank template. Run the following command to generate a new project:

ionic start ionic2-pouchdb-cloudant blank --v2

Change your directory to the new project:

cd ionic2-pouchdb-cloudant-blank

Run the following commands to add the platforms you are building for:

ionic platform add ios
ionic platform add android

PouchDB can also be used in conjunction with the SQLite plugin; this allows data to be stored in a native database rather than in the browser, which is more volatile. As well as being more stable, SQLite also gives you access to more memory to store data. So let’s get that set up as well.

UPDATE: As Nolan points out below, only iOS will use the SQLite plugin (unless specified otherwise). By default, Android will use IndexedDB.

Run the following command to install the SQLite plugin:

ionic plugin add

Of course, we also need to install PouchDB itself. Run the following command to install PouchDB:

npm install pouchdb --save

and we will also need to install the types for PouchDB by running the following command:

npm install @types/pouchdb --save --save-exact

Create a PouchDB Data Service

Now we are going to set up a service to handle interacting with PouchDB for us. Let’s use the Ionic CLI to automatically generate a provider template for us. Run the following command to generate a Data service:

ionic g provider Data

This will create a Data provider inside of the providers folder.
If you’re unfamiliar with providers/services/injectables, essentially they provide some functionality that you can import into any of the other components in your application. So after we create this service, we will be able to import it anywhere we like to interact with our PouchDB database, using some helper functions from our new service. In order to be able to use this throughout the application, we will need to add it to the app.module.ts file.

Modify src/app/app.module.ts to reflect the following:

import { NgModule, ErrorHandler } from '@angular/core';
import { IonicApp, IonicModule, IonicErrorHandler } from 'ionic-angular';
import { MyApp } from './app.component';
import { HomePage } from '../pages/home/home';
import { Data } from '../providers/data';

@NgModule({
  declarations: [
    MyApp,
    HomePage
  ],
  imports: [
    IonicModule.forRoot(MyApp)
  ],
  bootstrap: [IonicApp],
  entryComponents: [
    MyApp,
    HomePage
  ],
  providers: [{provide: ErrorHandler, useClass: IonicErrorHandler}, Data]
})
export class AppModule {}

Let’s modify the generated service now to connect to PouchDB.

First, we make PouchDB available by importing it; since we’ve already installed the package with npm this will allow us to grab a reference to it. Then we create a new PouchDB database called mytestdb and assign it to this.db. This is all we need to do to create the database, and we could start interacting with it right now to store and retrieve data locally.

We also want to get the data syncing to and from our Cloudant database, though, so we add a few extra things here. We provide a username and password, which should be updated with the API key you generated on Cloudant earlier. You should also update this.remote with the URL for your Cloudant database (leaving the all_docs portion or anything else off of the end; it should just end in /mytestdb). You will need to make sure to update what I have in the code above because although it’s linking to mytestdb your Bluemix URL will be different.
Next, we create an options object to supply to the sync function call. This will pass along our authentication information with the request, and we can also specify as many other options as we like here (e.g. that we want the data to be live). Take a look at the PouchDB documentation for a full list of the sync options. Then finally we call the sync function, which will sync the local PouchDB database to the remote Cloudant database. Now anytime the local data changes it will be synced to the remote database, and any time the remote data changes the local database will be updated (assuming there is an Internet connection of course).

Now we’re going to add a few more functions to this service that will allow us to add and retrieve documents, and also listen for changes:

addDocument(doc){
  this.db.put(doc);
}

The first function we’ve added is addDocument(), which is simple enough. To add a document (a data object in NoSQL terms is called a “document”) to our PouchDB database we simply call put on our database. So we will be able to pass this function any object and it will add it to the database.

The next function is a bit more complicated. We use getDocuments() to return all of the documents we have stored in our database and then we set up a change listener. This listener will trigger every time a change is detected, and it will send the change object to our handleChange function. If you’re new to NoSQL and PouchDB then a lot of this is going to look pretty confusing, and I’m not going to dive too deeply into the syntax in this tutorial as the post would end up far too long. So I’d recommend having a poke around the PouchDB website to see how everything works. We’ve set up some pretty basic functionality here but there’s quite a bit more to learn.

Create a Button to Add Data

Our service is set up and ready to use now, so let’s see it in action.
We’re just going to set up a simple list with a button that will add some random data, using our PouchDB service. Modify src/pages/home/home.html to reflect the following:

<ion-header>
  <ion-navbar>
    <ion-title>
      Home
    </ion-title>
    <ion-buttons end>
      <button ion-button icon-only (click)="addData()"><ion-icon name="add"></ion-icon></button>
    </ion-buttons>
  </ion-navbar>
</ion-header>

<ion-content>
  <ion-list>
    <ion-item *ngFor="let item of items">
      {{item.message}}
    </ion-item>
  </ion-list>
</ion-content>

Here we’ve just created a simple template that will create a list of all of the items we have defined in our home.ts file (we will take care of this shortly). We’ve also added a button that will call an addData() function so that we can create new items with some junk data.

Modify home.ts to reflect the following:

import { Component } from '@angular/core';
import { Data } from '../../providers/data';

@Component({
  selector: 'page-home',
  templateUrl: 'home.html'
})
export class HomePage {

  items: any;

  constructor(public dataService: Data) {

  }

  ionViewDidLoad(){
    this.items = [];
    this.dataService.getDocuments().then((result) => {
      this.items = result;
    });
  }

  addData(){
    let date = new Date();
    let newDoc = {
      '_id': date,
      'message': date.getTime()
    };
    this.dataService.addDocument(newDoc);
  }

}

In this file we are doing two main things. First we load in all of the data that we have stored in our database into the items array (which is displayed in our list) by using our PouchDB service. Then we define the addData() function, which will create a new object, using the current time just as some dummy data, and then add that to our PouchDB database, again by using the PouchDB service.

Now if you hit the button in the top right a few times you’ll see data being added to the list, and you’ll also see the data being added to Cloudant. If you were to go to Cloudant and modify the data there, you would also see it update immediately in your application (notice how one of the items in the list isn’t like the others?). Pretty cool!
Summary

PouchDB is a great tool for most jobs when it comes to data storage on mobile. You have a powerful database that can be stored directly in the browser, which can make use of native storage (SQLite) if it is available, and can also replicate to a remote server. It’s super easy to get started using PouchDB, but if you come from an SQL background (like me) then it can be super hard to get your head around doing more complex things with a NoSQL approach. It really is a case of old habits dying hard. So my advice would be to forget any concept of how you think data should be stored or retrieved in a database and start playing around.
https://www.joshmorony.com/syncing-data-with-pouchdb-and-cloudant-in-ionic-2/
It’s just data

I've always wondered about your essays. Who's the target audience? Half of the things I've read from you, specifically both "Gentle Introduction" articles as well as the WSDL 1.1 article, seemed to lack any real meat for me. All I got out of them was a good collection of links and a fuzzy feeling about the ideas therein. Now if your audience is supposed to be devs who already know about the technologies or manager-types who just want an overview, then I guess the feel of your articles is alright. If not, then as a fairly technical person I don't really get much out of these essays beyond a fuzzy idea of what technologies or techniques you are in favor of, without much technical detail. I did like "What Object Does SOAP Access" and "Expect More".

PS: Feel free to return the favor about the articles at if you want. I like critique about my overall writings and typically don't get enough of it.

Well, Dare, clearly you are not it. ;-) I try rather hard to identify my target audience at the top of my essays. And they get a steady stream of visitors from places like schools and other technical folks whose area of expertise is different than ours. I actually got a laugh when I read that the latter link referred to that particular article as "a little more technical". What concerns me is that most people's first exposure to a subject like SOAP is seeing an RPC style request with things like xsi:type (largely traceable back to Apache) and unnecessary namespace declarations (tried ASP.NET lately?) and recoil in horror. Most of these essays have as a motivation a real debate that I was having at the time with a real person. My real purpose to these essays is to plop out a complete thought so that I can refer to it later as a convenient shorthand. It saves time.
http://www.intertwingly.net/blog/814.html
Programming Language Concepts Using C and C++/Exception Handling in C++

Similar to Java, exceptions in C++ are most often—not always!—objects of a class type. That is, instead of returning a value of a certain type, an exception object may be returned from the function. However, one can also throw an exception object of a primitive type. The following is an example of this unlikely case.

Other peculiarities of exceptions in C++ are related to the way they are specified and handled. In addition to listing the exact list of exceptions thrown from a function by means of an exception specification, one can optionally remove the specification and get the liberty of throwing any exception. If we explicitly list the exceptions thrown from a function and it turns out that an unexpected exception—that is, an exception that is not listed in the specification—is thrown and not handled in the function call chain, a call to unexpected(), defined in the C++ standard library, is made. In other words, detection of specification violations is carried out at run-time. If the control flow never reaches the point where the unexpected exception is thrown, the program will run without a problem.

In line with its design philosophy, C++ does not mandate that statements with a potential of throwing an exception be issued inside a try block. Similar to Java exceptions deriving from RuntimeException, C++ exceptions need not be guarded. In case we are able to figure out that the exception never arises, we can remove the try-catch keywords and get cleaner code.

Exception Class

#ifndef QUEUE_EXCEPTIONS_HXX
#define QUEUE_EXCEPTIONS_HXX

#include <iostream>
using namespace std;

namespace CSE224 {
namespace DS {
namespace Exceptions {

Note our exception class does not have any member fields. In other words, we have no means to identify details of the situation. All we know is we have a problem, nothing more!
Although in our case we do not need any details about the nature of the problem, this is not always the case. Take the factorial example, for instance. We may want to pass the value of the argument that gave rise to the exceptional condition. This is equivalent to saying that we want to tell the difference between exception objects of the same class. As a matter of fact, we may formulate the problem as that of differentiating between objects of the same class, be that an exception object class or any other. We can do this simply by adding fields to the class definition.

class Queue_Empty {
public:

Note the only function of our class has been declared to be static, which means we can invoke it, without ever creating an instance of the class, through the class scope operator. Similarly, one can define static data fields, which are shared by all instances of the class, and there is no obligation to access these fields via objects of the class.

  static void error(void) { cerr << "Queue Empty!!!" << endl; }
}; // end of class Queue_Empty

} // end of namespace Exceptions
} // end of namespace DS
} // end of namespace CSE224

#endif

Module

Interface

#ifndef QUEUE_HXX
#define QUEUE_HXX

#include <iostream>
using namespace std;

#include "ds/exceptions/Queue_Exceptions"
using namespace CSE224::DS::Exceptions;

namespace CSE224 {
namespace DS {

What follows is a forward class declaration. Its purpose is similar to that of forward declaration ala C: we declare our intention of using a class named Queue_Node and defer its definition to some other place. Note we cannot declare an object of this type. This is due to the fact that C++ does not let you declare variables to be of types whose definitions are not completed, because the compiler cannot figure out the amount of memory required for the object. However, we can declare variables to be pointers or references—read it as "constant pointers"—to such a class.
class Queue_Node;

class Queue {

In some cases, it is convenient to allow a certain function or class to access the non-public members of a class without allowing access to the other functions/classes in the program. The friend mechanism in C++ allows a class to grant functions/classes free access to its non-public members. A friend declaration begins with the keyword friend. It may appear only within a class definition. In other words, it is the class that declares a function/class to be its friend, not the other way around. That is, you cannot simply declare a class as your friend and access its fields. Since friends are not members of the class granting friendship, they are not affected by the public, protected, or private section in which they are declared within the class body. That is, friend declarations may appear anywhere in the class definition.

According to the following declaration, the overloaded shift operator (<<) can freely access the internals of the Queue object whose reference is passed as the second argument, as if they were public. Had we chosen to make the shift operator into an instance function we would not have attained our goal. Take the following example:

cout << q1 << q2;

This statement will first print q1 and then q2 to the standard output file. We can reach the same effect by the following statements.

cout << q1;
cout << q2;

As a matter of fact, this is what takes place behind the scenes. We can see what’s happening by applying the following transformations.

cout << q1 << q2;
⇒ cout.operator<<(q1).operator<<(q2);
⇒ x.operator<<(q2);

The shift message is sent twice: once to the object named cout and once to the object returned by the first invocation of the appropriate function (x). This means we need to have a function signature where the value returned and the first argument are of the same type: ostream or ostream&.
Knowing an instance function takes a pointer to an instance of the class being defined as its implicit first argument, we reach the conclusion that the shift operator cannot be an instance function of the Queue class. The way out of this is providing a friend declaration such as the following.

  friend ostream& operator<<(ostream&, const Queue&);

public:
  Queue(void) : _front(NULL), _rear(NULL), _size(0) { }
  Queue(const Queue&);
  ~Queue(void);

  Queue& operator=(const Queue&);
  bool operator==(const Queue&);

Note the types used in the exception specifications. The first function can abnormally return throwing a Queue_Empty object, while the second one will return with a pointer to such an object. This should not come as a surprise. Unlike Java, which creates objects in the heap only, C++ lets you create your objects in all three regions—that is, the heap, the run-time stack, and the static data region. Since an exception object is basically a C++ object, you can create it in any data region you like. Provided that you declare your exception handlers accordingly, there is not much of a difference between the following exception specifications.[1] The handler of the first one will expect an object, while the second one will expect a pointer that points to some area in the heap.[2]

  double peek(void) throw(Queue_Empty);
  double remove(void) throw(Queue_Empty*);

  void insert(double);
  bool empty(void);

private:
  Queue_Node *_front, *_rear;
  unsigned int _size;
}; // end of class Queue

Note all fields of the following class definition are private. There are no functions to manipulate the objects, either. So, it looks like we need some magic for creating and manipulating an object of the class. The answer lies in Queue_Node’s relation to Queue: Queue_Node is tightly coupled to Queue. A Queue_Node object can exist only within the context of a Queue object. This fact is reflected in the friend declaration.
Thanks to this declaration, we can [indirectly] manipulate a Queue_Node object through operations on some Queue object.

class Queue_Node {
  friend class Queue;

The next statement declares the shift operator to be a friend to the Queue_Node class. A similar declaration had been made in the Queue class, which means that one single function will have the privilege of peeking into the innards of two different classes.

  friend ostream& operator<<(ostream&, const Queue&);

private:
  double _item;
  Queue_Node *_next;

  Queue_Node(double val = 0) : _item(val), _next(NULL) { }
}; // end of class Queue_Node

} // end of namespace DS
} // end of namespace CSE224

#endif

Implementation

#include <iomanip>
#include <iostream>
using namespace std;

#include "ds/Queue"
#include "ds/exceptions/Queue_Exceptions"
using namespace CSE224::DS::Exceptions;

namespace CSE224 {
namespace DS {

Queue::
Queue(const Queue& rhs) : _front(NULL), _rear(NULL), _size(0) {
  Queue_Node *ptr = rhs._front;
  for(unsigned int i = 0; i < rhs._size; i++) {
    this->insert(ptr->_item);
    ptr = ptr->_next;
  } // end of for(unsigned int i = 0; i < rhs._size; i++)
} // end of copy constructor

Our destructor, implicitly invoked by the programmer (through delete in deallocating heap objects) or by the compiler-synthesized code (in the process of deallocating static and run-time stack objects), deletes all nodes in the queue and then proceeds with cleaning the room reserved for the fields. Had we forgotten to remove the items we would have ended up with the picture given below, which is actually the same picture we would have got without the destructor. Note the shaded region denotes the memory returned to the allocator by the delete operator itself, not the destructor.[3] All queue nodes reachable only through the fields in the shaded region have now become garbage. So, we must remove all queue items when the queue is deleted, which is what we do in the destructor body. Note also we do not write the code within a try-catch block.
Unlike Java, that's OK with C++; you can choose to omit the try-catch block if you think the exceptions will never be thrown. In this case, the number of removals is guaranteed to be as many as the number of items in the queue and this cannot give rise to any exceptional condition.

Queue::
~Queue(void) {
  unsigned int size = _size;
  for(unsigned int i = 0; i < size; i++)
    remove();
} // end of destructor

Queue& Queue::
operator=(const Queue& rhs) {
  if (this == &rhs) return (*this);

  for(unsigned int i = _size; i > 0; i--)
    remove();

  Queue_Node *ptr = rhs._front;
  for(unsigned int i = 0; i < rhs._size; i++) {
    this->insert(ptr->_item);
    ptr = ptr->_next;
  } // end of for(unsigned int i = 0; i < rhs._size; i++)

  if (rhs._size == 0) {
    _front = _rear = NULL;
    _size = 0;
    return(*this);
  } // end of if(rhs._size == 0)

  return (*this);
} // end of assignment operator

bool Queue::
operator==(const Queue& rhs) {
  if (_size != rhs._size) return false;
  if (_size == 0 || this == &rhs) return true;

  Queue_Node *ptr = _front;
  Queue_Node *ptr_rhs = rhs._front;
  for (unsigned int i = 0; i < _size; i++) {
    if (ptr->_item != ptr_rhs->_item) return false;
    ptr = ptr->_next;
    ptr_rhs = ptr_rhs->_next;
  } // end of for(unsigned int i = 0; i < _size; i++)

  return true;
} // end of equality-test operator

double Queue::
peek(void) throw(Queue_Empty) {
  if (empty()) throw Queue_Empty();

  return(_front->_item);
} // end of double Queue::peek(void)

double Queue::
remove(void) throw(Queue_Empty*) {
  if (empty()) throw new Queue_Empty();

  double ret_val = _front->_item;
  Queue_Node *temp_node = _front;
  if (_front == _rear) _front = _rear = NULL;
  else _front = _front->_next;
  delete temp_node;
  _size--;

  return ret_val;
} // end of double Queue::remove(void)

void Queue::
insert(double value) {
  Queue_Node *new_node = new Queue_Node(value);
  if (empty()) {
    _front = _rear = new_node;
    _size = 1;
    return;
  } // end of if (empty())

  _rear->_next = new_node;
  _rear = _rear->_next;
  _size++;
} // end of void Queue::insert(double)

bool Queue::
empty(void) {
  return (_size == 0);
}

The following output operator definition makes use of both the Queue and the Queue_Node classes. It first prints the length of the queue by using a private field of the Queue class and then outputs the contents of the corresponding queue by traversing each and every node, which are of Queue_Node type. For this reason we had to make this function a friend to both classes.

ostream& operator<<(ostream& os, const Queue& rhs) {
  os << "( " << rhs._size << " )";
  if (rhs._size == 0) {
    os << endl;
    return(os);
  } // end of if (rhs._size == 0)

  os << "(front: ";
  Queue_Node *iter = rhs._front;
  while(iter != NULL) {
    os << iter->_item << " ";
    iter = iter->_next;
  } // end of while(iter != NULL)
  os << " :rear )\n";

  return(os);
} // end of ostream& operator<<(ostream&, const Queue&)
} // end of namespace DS
} // end of namespace CSE224

Test Program

#include <fstream>
#include <iostream>
#include <string>
using namespace std;

#include "ds/Queue"
using namespace CSE224::DS;
#include "ds/exceptions/Queue_Exceptions"
using namespace CSE224::DS::Exceptions;

int main(void) {
  Queue q1;
  string fname("Queue_Test.input");
  ifstream infile(fname.c_str());
  if (!infile) {
    cout << "Unable to open file: " << fname << endl;
    return 1;
  } // end of if(!infile)

Now that the argument to the handler (q) points to some heap memory, we must destroy the region as soon as we are done with handling the exception. That's what we do with the delete operator inside the handler. If we had preferred to pass an object instead of a pointer to an object, as we do in peek, there wouldn't have been any need for such a clean-up activity; thanks to the code synthesized by the compiler, it would have been carried out automatically upon exit from the handler. Observe we could have written the first statement of the handler as

Queue_Empty::error();

This is OK because the sole function in our exception class is static, which means we can call it through the class name.
  try {
    q1.remove();
  } catch(Queue_Empty* q) {
    q->error();
    delete q;
  }

  for (int i = 0; i < 10; i++) {
    double val;
    infile >> val;
    q1.insert(val);
  } // end of for(int i = 0; i < 10; i++)
  infile.close();

  cout << q1;

  Queue q2 = q1;
  cout << "Queue 1: " << q1;
  cout << "Queue 2: " << q2;
  if (q1 == q2) cout << "OK" << endl;
  else cout << "Something wrong with equality testing!" << endl;

  q2.remove();
  q2.remove();
  cout << "Queue 2: " << q2;
  if (q1 == q2) cout << "Something wrong with equality testing!" << endl;
  else cout << "OK" << endl;

  return(0);
} // end of int main(void)

Input-Output in C++

Input-output facilities in C++, a component of the standard library, are provided by means of the iostream library, which is implemented as a class hierarchy that makes use of both multiple and virtual inheritance. This hierarchy includes classes dealing with input from and/or output to the user's terminal, disk files, and memory buffers. The attributes of a particular stream type are somehow mangled in its name. For example, ifstream stands for a file stream that we use as a source of input. Similarly, ostringstream is an in-memory buffer—a string object—stream that is used as a sink of output.

The Base Stream Class: ios

Whatever the name of the class being used might be, it eventually derives from ios, the base class of the iostream library. This class contains the functionality common to all streams, such as accessor-mutator functions for manipulating state and format. In the former group are included the following functions:

iostate rdstate() const: Returns the state of the current stream object, which can be any combination of the following: good, eof, fail, and bad.

void setstate(iostate new_state): In addition to the already set flags, sets the state of the stream to new_state. Note this function cannot be used to unset the flag values.

void clear(iostate new_state = ios::goodbit): Sets the state to the value passed in new_state.
int good(void): Returns true if the last operation on the stream was successful.

int eof(void): Returns true if the last operation on the stream found the end of file.

int fail(void): Returns true if the last operation on the stream was not successful and no data was lost due to the operation.

int bad(void): Returns true if the last operation on the stream was not successful and data was lost as a result of the operation.

For manipulating the format, we have

char fill(void) const: Returns the padding character currently in use. The default character is space.

char fill(char new_pad_char): Sets the padding character to new_pad_char and returns the previous value.

int precision(void) const: Returns the number of significant digits to be used for output of floating point numbers. The default value is 6.

int precision(int new_pre): Sets precision to new_pre and returns the previous value.

int width(void) const: Returns the output field width. Default value is 0, which means as many characters as necessary are used.

int width(int new_width): Sets width to new_width and returns the previous value.

fmtflags setf(fmtflags flag): Sets one of the flags, which are used to control the way output is produced. flag can have one of the following: (for the base value used in output of integral values) ios::dec, ios::oct, ios::hex; (for displaying floating point values) ios::scientific, ios::fixed; (for justifying text) ios::left, ios::right, ios::internal; (for displaying extra information) ios::showbase, ios::showpoint, ios::showpos, ios::uppercase. As in the next four functions, this function returns the state that was in effect prior to the call.

fmtflags setf(fmtflags flag, fmtflags mask): Clears the combination of flags passed in mask and then sets the flags passed in flag.

fmtflags unsetf(fmtflags flag): Reverse of setf, this function makes sure the combination of flags passed in flag is not set.

fmtflags flags(void) const: Returns the current format state.
fmtflags flags(fmtflags new_flags): Sets the format state to new_flags.

Input Streams

On top of the functionality listed in the previous section, all input streams in C++ provide support for the following functions.

istream& operator>>(type data): Overloaded versions of the shift-in (or extraction) operator are used to read in values of various types and can further be overloaded by the programmer. It can be used in a cascaded manner and in case the input operation is unsuccessful it returns false, which means it can also be used in the context of a boolean expression.

int get(void): Returns the character under the read head and advances it by one.

int peek(void): Like the previous function, peek returns the character under the read head but doesn't move it. That is, peek does not alter the stream contents.

istream& get(char& c): Cascaded version of get(void), this function is equivalent to operator>>(char&). That is

in_str.get(c1).get(c2).get(c3); ≡ in_str >> c1 >> c2 >> c3;

istream& get(char* str, streamsize len, char delim = '\n'): Reads a null-terminated string into str. The length of this string depends on the second and third arguments, which hold the size of the buffer and the sentinel character, respectively. If len - 1 characters are scanned without reading the sentinel character, '\0' is appended to the buffer and returned in the first argument. If the sentinel character is reached before filling in the buffer, the read head is left on the sentinel character and all that has been read up to that point, with the terminating '\0', is returned in the buffer.

istream& getline(ctype* str, streamsize len, char delim = '\n'): Similar to the previous function, getline is used to read a null-terminated string into its first argument. However, in case the sentinel character is reached before the buffer is filled, the sentinel character is not left in the stream but read and discarded. Note the type of the first argument is a pointer to one of char, unsigned char, or signed char.
istream& read(void* buf, streamsize len): Reads len bytes into buf, unless the input ends first. If input ends before len bytes are read, this function sets the ios::fail flag and returns the incomplete result.

istream& putback(char c): Corresponding to ungetc(char) of C, this function attempts to back up one character and replace the character that has been backed up with c. Note this operation is guaranteed to work only once. Consecutive uses of it may or may not work.

istream& unget(void): Attempts to back up one character.

istream& ignore(streamsize len, char delim = traits::eof): This function reads and discards as many as len characters or all characters up to and including the delim.

Output Streams

Complementing the operations listed in the previous section are the operations applied on an output stream. Before we give a listing of these operations, we should mention one crucial point: in order for the output operations to take effect, one of the following conditions must be met:

- An endl manipulator or '\n' is inserted into the stream.
- A flush manipulator is inserted into or a flush() message is sent to the stream.
- The buffer attached to the stream is full.
- An istream object tied to the stream performs an input operation.

Tying two streams means their operations will be synchronized. A popular example is the cin-cout pair: before a message is sent to cin, cout is flushed. That is, the following are equivalent:

cout << "Your name:"; cin >> name;
≡ cout << "Your name:"; cout.flush(); cin >> name;
≡ cout << "Your name:" << flush; cin >> name;

ostream& operator<<(type data): Overloaded versions of the shift-out (or insertion) operator are used to write data of various types and can further be overloaded by the programmer. Like the extraction operator, it can be cascaded.

ostream& put(char c): Inserts c into the current stream.

ostream& write(string str, streamsize len): Inserts len characters of str into the current stream.
Since a string object can be constructed from a [const] char*, the first argument can also be a C-style character string.

Before moving on to file-oriented streams, we should mention that the functionalities of istream and ostream are combined in the iostream class, which derives from these two classes. That is, one can use the same stream for input and output at the same time.

File Input and Output

Using ifstream and ofstream one can read from and write to files. Since these classes inherit from the relevant stream classes—istream and ostream, respectively—their instances can receive the messages given in the previous sections. In addition to these, one can also use the following list.

ifstream(const char* fn, int mde = ios::in, int prt = 644), ofstream(const char* fn, int mde = ios::out, int prt = 644): Connects the stream being constructed to the disk file named fn. The second and third arguments, which are optional, are used to specify the way the stream can be used. The third argument is specific to Unix-based operating systems and indicates the file protection bits. The second argument specifies how to open the disk file and can be a [reasonable] combination of the following:

ios::in: Opens the file for input and locates the read head at the beginning.

ios::out: Opens the file for output. While doing so, the file is truncated.

ios::app: Opens the file for output. File contents are not destroyed and each output operation inserts data to the end of the file.

ios::bin: Treats the file content as raw data. In environments where '\n' is mapped to a single byte this is not needed.

ifstream(void) & ofstream(void): Creates a stream object without connecting it to a disk file.

void open(const char* fn, int mde = def_mde, int prt = 644): Connects a previously constructed [disconnected] stream object to a disk file.

ios::pos_type tellg/tellp(void): Return the position of the file marker.
The last letters, g for get and p for put, of these functions serve as a reminder of whether the file marker is a read head or a write head.

void seekg/seekp(pos_type n_p): These functions move the file marker—that is, the read or write head—to the absolute byte number specified by n_p. seekg—read it as "seek to a new location for the next get"—affects the read head while seekp—read it as "seek to a new location for the next put"—affects the write head.

void seekg/seekp(off_type offset, ios::seekdir dir): Move by as many as offset bytes relative to the location specified by dir, which can take one of the following values: [the beginning of the file] ios::beg, [the current file marker position] ios::cur, and [the end of the file] ios::end.

As a closing remark of this handout, we should mention the possibility of simultaneously reading from and writing to the same file. In such a case, we can construct an fstream object and use it to achieve our goal.

Notes

1. Actually, there is a difference. Be that a plain object or an exception object, an object created in the heap is managed by the programmer and must be freed by her.
2. As a matter of fact, you can pass a pointer to some area in other parts of the address space such as the static data or run-time stack regions. But then how are you going to decide whether to free the region or not? If it points to some place in the heap, it is the programmer's responsibility and she must free the object; if the pointed object is not in the heap, its lifetime will be managed by the compiler. We'd better be more deterministic and create all such objects in the heap or have the handler accept an extra argument. Or yet better, choose to pass objects, not pointers to objects.
3. Observe this is basically the same region that would have been returned with free in the case of a malloced heap object: the region pointed to by the pointer.
This semblance leads us to an informal definition: the delete operator is implicit invocation of the destructor plus free.
https://en.m.wikibooks.org/wiki/Programming_Language_Concepts_Using_C_and_C%2B%2B/Exception_Handling_in_C%2B%2B
At present, the judy1/L modules have quite a bit of code which is first shared then specialised (threaded). The Makefile copies the common code from JudyCommon to Judy1 or JudyL, giving the new file a new filename corresponding to the target, then compiles those files with macros to select whether it's Judy1 or JudyL. Why not just provide files like:

// JudyL/JudyLNext.c
#define JUDYL
#include "JudyNext.c"

This (a) avoids copying, (b) sets the macros, avoiding needing to put them on the compiler command line (which may not port to e.g. MSVC++). Instead you'd need -I{DEVBASE}/src/JudyCommon on the command line .. but that's needed anyhow for the shared private headers.

-- John Skaller <skaller at users dot sf dot net>
Felix, successor to C++
https://sourceforge.net/p/judy/mailman/message/1027091/
Do You Know What Time It Is?

At what time does the Scandinavian Airlines plane from Oslo to Athens arrive on Monday? Why are questions that seem so easy in day-to-day life so difficult in programming? Time should be simple, just seconds passing, something a computer is very good at measuring:

System.currentTimeMillis() = 1570964561568

Although correct, "1570964561568" is not what we want when we ask what time it is. We prefer "1:15pm, October 13th 2019". It turns out that time is two separate things. On the one hand, we have seconds passing. On the other, we have an unhappy marriage between astronomy and politics. Answering the question "What time is it?" depends on the location of the sun in the sky relative to your position, along with the political decisions made in that region up to that point in time. Many of the problems we have with date and time in code come from mixing these two concepts.

Using the latest java.time library (or Noda Time in .NET) will help you. Here, there are three main concepts to help you reason correctly about time: LocalDateTime, ZonedDateTime, and Instant.

LocalDateTime refers to the concept "1:15pm, October 13th 2019". There can be any number of these on the timeline. Instant refers to a specific point on the timeline. It is the same in Boston as in Beijing. To get from a LocalDateTime to an Instant, we need a TimeZone, which comes with UTC offsets and daylight savings time (DST) rules at the time. ZonedDateTime is a LocalDateTime with a TimeZone.

Which ones do you use? There are so many pitfalls. Let me show you a few. Let's say you're writing software to organize an international conference. Will this work?

public class PresentationEvent {
  final Instant start;
  final Instant end;
  final String title;
}

Nope. While we need to represent a particular point in time, for future events, even when we know the time and the time zone, we cannot know the instant ahead of time, because DST rules or UTC offsets might change between now and then. We need a ZonedDateTime.

How about regularly occurring events? Like a flight. Will this work?

public class Flight {
  final String flightReference;
  final ZonedDateTime departure;
  final ZonedDateTime arrival;
}

Nope. This can fail twice a year. Imagine a flight leaving Saturday at 10pm, arriving Sunday at 6am. What happens when we move the clock back an hour because of daylight savings? Unless the aircraft circles uselessly during the extra hour, it's going to land at 5am, not 6am. When we move ahead one hour, it'll arrive at 7am. For recurring events with duration, we cannot fix both the start and end. We need:

public class Flight {
  final String flightReference;
  final ZonedDateTime departure;
  final Duration duration;
}

What about events that start at 2:30am? Which one? There may be two, or it might not exist at all. In Java, the following methods handle the autumnal DST transition:

ZonedDateTime.withEarlierOffsetAtOverlap()
ZonedDateTime.withLaterOffsetAtOverlap()

In Noda Time you would specify both DST transitions explicitly with Resolvers. I have only scratched the surface of potential issues, but as they say, "Good tools are half the work". Use java.time (or Noda Time) and you've saved yourself a lot of errors.
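The overlap methods mentioned above can be seen in a small sketch. The zone and date are chosen purely for illustration: in America/New_York, 1:30am occurred twice on 3 November 2019 because clocks fell back at 2:00am.

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class OverlapDemo {
    public static void main(String[] args) {
        ZoneId zone = ZoneId.of("America/New_York");
        // This wall-clock time happened twice in this zone on this night.
        LocalDateTime ambiguous = LocalDateTime.of(2019, 11, 3, 1, 30);

        ZonedDateTime first = ZonedDateTime.of(ambiguous, zone).withEarlierOffsetAtOverlap();
        ZonedDateTime second = ZonedDateTime.of(ambiguous, zone).withLaterOffsetAtOverlap();

        System.out.println(first.getOffset());   // -04:00 (still daylight time)
        System.out.println(second.getOffset());  // -05:00 (standard time)
        // Same LocalDateTime, two distinct instants, one hour apart:
        System.out.println(Duration.between(first.toInstant(), second.toInstant())); // PT1H
    }
}
```

Note that ZonedDateTime.of already resolves an ambiguous LocalDateTime (it picks the earlier offset); the two withXxxOffsetAtOverlap methods simply let you state the choice explicitly.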
https://medium.com/97-things/do-you-know-what-time-it-is-56e9c14831a1?source=rss----a442d54f5def---4
Hot answers tagged command-history 98 You can achieve removal from the history file using the commandline in two steps: Typing history -d <line_number> deletes a specified line from the history in memory. Typing history -w writes the current in-memory history to the ~/.bash_history file. The two steps together remove the line permanently from the in-memory history and from the ... 61 Just edit the file ~/.bash_history. 49 I have found the solution to my problem in the ZSH documentation. Oh-my-zsh seems to map the ↑ and ↓ Keys to something like bindkey '\e[A' history-search-backward bindkey '\e[B' history-search-forward Which yields the exact behavior I described above. The ZSH Documentation describes the behavior of history-search-backward as Search backward in the ... 26 You've probably got INC_APPEND_HISTORY set. The INC_APPEND_HISTORY option, from man zshoptions: This options works like APPEND_HISTORY except that new history lines are added to the $HISTFILE incrementally (as soon as they are entered), rather than waiting until the shell exits. The option that you want is APPEND_HISTORY: APPEND_HISTORY If ... 26 To prevent a command from being added to the history in the first place, make sure that the environment variable HISTCONTROL contains among its colon-separated values the value ignorespace, for example (add e.g. to .bashrc): $ export HISTCONTROL=ignorespace This will prevent any command with a leading space from being added to the history. You can then ... 21 ... 16 history -s command 13 Copy & Paste this to your ... 11 Bash History Any new commands that have been issued in the active terminal can be appended to the .bash_history file with the following command: history -a The only tricky concept to understand is that each terminal has its own bash history list (loaded from the .bash_history file when you open the terminal) If you want to pull any new history that's ... 10 You can search the history using Ctrl+R and then type the search string (e.g. 
iw to find iwconfig). Then you can then still navigate through the history at that point with the up and down arrow keys, or press Ctrl+R again to find the previous occurence. 10 I wanted the same behaviour for zsh with oh-my-zsh installed and found plugin history-substring-search. I achieved the same behaviour described above by adding the plugin to my ~/.zshrc: plugins=(git brew npm history-substring-search) I guess this plugin did not exist back when this question was raised. Just an alternate way to achieve the same thing. ... 10 Hit F7 to bring up a list of the last few commands, then you can hit the first letter to jump to the first matching entry. Hit the same letter repeatedly to move up commands with the same first letter (working from newest from oldest). 9 Several techniques: Prevent sensitive information from being stored in the history file If you've entered some password on a command line, then realize that all commands are logged, you could either: Force exit the current session without saving history: kill -9 $$ This will drop all current history. Type ↑ (up arrow) in the open bash session ... 8 You need to mark the nonprinting sections of the prompt with \[ ... \] so bash can tell they won't take up space on screen. Try: export PS1="\w \[\e[0;32m\]\$(vcprompt -f '[%n:%b]')\[\e[m\]\$ " 8 If you supply a negative argument to Alt-., it reverses direction. The easiest way to do that (with standard keybindings) is Alt-- (equivalent to an argument of -1). So, after one or more Alt-. keypresses, pressing Alt-- will cause the next Alt-. to go in the reverse direction. (Just ignore the argument dialog which appears when you press Alt--.) 7 Locate the line you want to delete by pressing ↑ (up arrow) until it appears, then press Ctrl+U. That should remove the line. If you use the history command, you can see the line has been substituted with an asterisk. 
7 Single commands can be omitted from history (and up/down recall) by prepending with a space: $ echo "foo" # saved $ echo "bar" # <-- not saved Otherwise, you can turn off history by adding to ~/.bashrc: set +o history (to reenable, use set -o history) If you want to disable it for the current session only: $ unset HISTFILE 6 Your history is logged by your shell. Bash, for example, uses the file ~/.bash_history by default. It is also not limited by your current session, but the history is usually persisted beyond that, up to what the environment variables HISTSIZE and HISTFILESIZE allow. More information on how the history works in bash is available in it's man page, in the ... 6 Assuming your shell is bash, this question has been asked and answered on SO. ... 5 history -s command You can even bind a keystroke to do this for you. You can enter this at a Bash prompt: bind '"\C-q": "\C-a history -s \C-j"' or add this to your ~/.inputrc: "\C-q": "\C-a history -s \C-j" then you can type something and press Ctrl-q and it will be added to the history without being executed. The space before "history" causes the ... 5 There are two reasons why your script will not work as intended: The bash environment for a running script is "non-interactive" and does not have the history features enabled. The bash environment for a running script is independent from the environment you are interactively working in. Depending on your use case the easiest solution might be to source ... 5 After a bit of practice, I found how to use the workaround solution. I matched the correct syntax to print a filtered list, I did it with history | grep iwconfig (it wasn't so difficult after all); with the output I can use !n with the now easy-to-read filtered list. 5 Right-click on the taskbar. Click on Properties. Click on the Start Menu Button. Check the Store and display a list of recently opened programs option under privacy. I hope this helps! 
5 The Ctrl+R functionality to search backwards through shell history is provided by the Readline library used by Bash. The corresponding function to search forwards through the history is, by default, bound to Ctrl+S. However, the problem is that the terminal driver already uses this key combination for flow control: pressing Ctrl+S stops or pauses text ... 4 That's easy... if you know the corresponding option: unsetopt HIST_VERIFY Put this in your ~/.zshrc and do source ~/.zshrc if you want that behavior to be permanent. Explanation from man zshoptions: HIST_VERIFY Whenever the user enters a line with history expansion, don't execute the line directly; instead, perform history expansion and ... ... 4 You need to set HISTFILE for your users to the location you need, set the following in .bash_profile for the user, and for new users set it in the user skeleton directory, most likely /etc/skel/.bash_profile export HISTFILE=/home/$USER/.bash_history 4 Use the script command DESCRIPTION Script makes a typescript of everything printed on your terminal. It is useful for students who need a hardcopy record of an interactive session as proof of an assignment, as the typescript file can be printed out later with lpr(1). 4 If you need to remove several lines at the same time I normally use this: history | grep <string> | cut -d ' ' -f 3 | awk '{print "history -d " $1}' If you need to remove the last command you can use: history -d $((HISTCMD-2)) 4 Here's one way to set up history-search-backward and history-search-forward: Step1: Put the following in your /etc/inputrc file: $if mode=emacs "\ep": history-search-backward "\en": history-search-forward $endif (Or simply put the following between in the existing if statement) "\ep": history-search-backward "\en": history-search-forward Step2: ...
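The history-pruning pipeline quoted in one of the answers above can be exercised safely against a fake history listing instead of the live shell history (the file name here is illustrative):

```shell
# Build a fake `history` listing: entry number, then command.
printf '  101  ls\n  102  iwconfig\n  103  pwd\n' > fake_history.txt

# Same shape as the quoted one-liner: turn each matching entry into a
# `history -d <number>` command (printed here rather than executed).
grep iwconfig fake_history.txt | awk '{print "history -d " $1}'
# prints: history -d 102
```

Since awk splits on runs of whitespace by default, $1 is the history entry number regardless of the leading padding that `history` emits.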
http://superuser.com/tags/command-history/hot
I' m following this Manipulating matrix elements in tensorflow. using tf.scatter_update. But my problem is: What happens if my tf.Variable is 2D? Let's say: a = tf.Variable(initial_value=[[0, 0, 0, 0],[0, 0, 0, 0]]) for line in range(2): sess.run(tf.scatter_update(a[line],[0],[1])) TypeError: Input 'ref' of 'ScatterUpdate' Op requires l-value input In tensorflow you cannot update a Tensor but you can update a Variable. The scatter_update operator can update only the first dimension of the variable. You have to pass always a reference tensor to the scatter update ( a instead of a[line]). This is how you can update the first element of the variable: import tensorflow as tf g = tf.Graph() with g.as_default(): a = tf.Variable(initial_value=[[0, 0, 0, 0],[0, 0, 0, 0]]) b = tf.scatter_update(a, [0, 1], [[1, 0, 0, 0], [1, 0, 0, 0]]) with tf.Session(graph=g) as sess: sess.run(tf.initialize_all_variables()) print sess.run(a) print sess.run(b) Output: [[0 0 0 0] [0 0 0 0]] [[1 0 0 0] [1 0 0 0]] But having to change again the whole tensor it might be faster to just assign a completely new one.
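As an aside, the row-wise semantics of scatter_update can be pinned down with a plain-Python sketch (no TensorFlow involved; this is only an analogy for what the op does along the first dimension):

```python
def scatter_update(ref, indices, updates):
    """Pure-Python analogue of tf.scatter_update: each index in `indices`
    selects a row of `ref`, and that row is replaced wholesale by the
    matching row of `updates`."""
    for i, row in zip(indices, updates):
        ref[i] = list(row)
    return ref

a = [[0, 0, 0, 0], [0, 0, 0, 0]]
scatter_update(a, [0, 1], [[1, 0, 0, 0], [1, 0, 0, 0]])
print(a)  # [[1, 0, 0, 0], [1, 0, 0, 0]]
```

The update is addressed by whole rows of the outer list, which mirrors why the real op wants the variable itself rather than an indexed slice of it.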
https://codedump.io/share/ddzEYLjpadi4/1/use-tfscatterupdate-in-a-two-dimensional-tfvariable
As I've been working on the front-end of this learning management system (LMS) that my team and I have been building at work, I've had to build out a handful of functionality to deal with the ways that students interact with the courses--whether course content, course meta data, or course materials. In one particular use case, students can download all the materials for a particular course, for a particular week, for a particular day, or for an individual activity. Some of these materials are PowerPoint documents, some are Word documents, and some are even videos. The nature of the downloaded materials, and the fact that students need to download multiple items, means that we want to zip those files up. Since the LMS is a web application, we want a download link that streams the contents back to the student on the fly.

I went with the archiver package:

npm install archiver --save

...and I abstracted the zipping code into a util.ts file, importing the package at the top:

import * as archiver from "archiver";

I'm also putting that import statement in the main index.ts file that I use as the entry point for the Express application and the routing. Let's go over the route first, and then come back to the utility function.
Since we want to stream this archive back to the browser, we pipe the archive to the Express response. Then we process the materials to make an archive using zipMaterials() before using finalize() on the archived materials to finish the zip. What does the zipMaterials() method look like? export async function zipMaterials(archive: archiver.Archiver, materials: any[]): Promise<archiver.Archiver> { archive.on("error", (e) => { throw new Error(e.message); }); for(let i = 0; i < materials.length; i++) { const folder = materials[i].folder; const url = materials[i].url; const fileName = url.substr(url.lastIndexOf("/") + 1, url.length).split("?")[0]; ... try { const opts = { uri: `${url}`, method: "GET", encoding: null, }; const res = await request(opts); archive.append(res, { name: `${folder}${fileName}` }); } catch(e) { throw new Error(e.message); } } return archive; } A couple of quick notes: I've trimmed some code out of this. Also, as I'm sure you've noticed by now, I'm using TypeScript, but other than the type declarations, all this code will work in JavaScript as well. The materials object is an array of objects that each have folder and url properties. In this example, we're actually pulling the files from externally hosted systems, so we're downloading them first, and then adding them to an archive. If this were a production application, you'd probably be pulling from a blob storage or a local filesystem, but this is a good example because it shows how we're using request to download a file, and then passing those results immediately to the archive. The encoding being set to null tells request() that this is binary data. In the actual application I pulled this example from, the folder is really a course session name, and there are several materials for each session, so we group them by folders named after the session. 
As far as the zip archive is concerned, we set the error handler (just bubbling up the error), loop over the materials array, download the files, and then use the append() method to append the materials to the zip. Notice how I add the folder as part of the file name in the name object property passed to append(). This will actually add the file as the fileName into a folder in the archive named for the folder value, getting us the grouping that we want. Once our loop is done, we return the archive, which takes us back to the Express route. As you may recall, the route then "finalizes" the returned object, which is being piped back to the end user. On the end user's side, the normal download process starts.
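None of this is archiver-specific: any zip library that lets you control entry names can do the same folder grouping. As a quick language-neutral sketch (with made-up folder names and file contents), here is the same idea using Python's standard-library zipfile module:

```python
import io
import zipfile

# Hypothetical downloaded materials: (folder, file name, raw bytes).
materials = [
    ("session-1/", "slides.pptx", b"pptx bytes"),
    ("session-1/", "notes.docx", b"docx bytes"),
    ("session-2/", "slides.pptx", b"pptx bytes"),
]

buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as archive:
    for folder, file_name, data in materials:
        # Prefixing the folder onto the entry name groups the files into
        # folders inside the archive, just like archiver's `name` option.
        archive.writestr(folder + file_name, data)

print(sorted(zipfile.ZipFile(buffer).namelist()))
# → ['session-1/notes.docx', 'session-1/slides.pptx', 'session-2/slides.pptx']
```

The in-memory buffer here stands in for the piped HTTP response; the grouping trick is identical.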
https://codepunk.io/how-to-stream-a-zip-file-to-the-browser-in-express-and-node-js/
In this article we will cover these topics:

- What a parameter is
- The benefits of using parameters
- How to create a parameter
- How to assign parameters to commands

Introduction to SqlCommand and Parameters

When working with data, we often run into situations where we need to filter the results based on some condition. Generally, this is done by accepting input data from a user and using that input to form a SQL query. For instance, a salesperson may be looking for all the orders between specific dates. Another query might filter customers by city.

As we know, the SQL query assigned to a SqlCommand object is simply a string, so if you need to filter a query, you could build the string dynamically. However, you should never do this; the following is a bad example of filtering a query.

Listing 1: Bad example of filtering a query

```csharp
// one should never build SQL this way
SqlCommand cmd = new SqlCommand(
    "select * from Customers where city = '" + inputCity + "'");
```

The input variable, inputCity, is normally fetched from a TextBox control, on either a Windows Form or a web page. Anything entered into that TextBox control goes into inputCity and is appended to your SQL string. In such a situation, you are inviting an attacker to replace that string with something malicious; in the worst case, you could be handing over full access to the machine.

Instead of dynamically building a string as in the example above, you should use parameters. Any text placed in a parameter is treated as field data, not as part of the SQL statement, which makes your application much more secure.

Using parameterized queries is a three-step process:

- Construct the SqlCommand command string with parameters.
- Declare a SqlParameter object, assigning values as appropriate.
- Assign the SqlParameter object to the SqlCommand object's Parameters property.

We will walk through all of this step by step.

Preparing a SqlCommand Object for Parameters

The first step in using parameters in SQL queries is to build a command string containing parameter placeholders. When the SqlCommand runs, these placeholders are filled in with actual parameter values. Parameters use an '@' symbol prefix on the parameter name, as shown below:

Listing 2: Proper syntax of a parameter

```csharp
// 1. declare command object with parameter
SqlCommand cmd = new SqlCommand(
    "select * from Customers where city = @City", conn);
```

The first argument above contains a parameter declaration, @City, in the SQL command. This example uses a single parameter, but you can use as many parameters as needed to customize the query. Each parameter must match a SqlParameter object assigned to the SqlCommand object.

Declaring a SqlParameter Object

Each parameter in a SQL statement must be defined appropriately: the code must declare a SqlParameter instance for each parameter in the SqlCommand object's SQL command. The following code defines a parameter for @City:

Listing 3: Parameter for @City

```csharp
// 2. define parameters used in command object
SqlParameter param = new SqlParameter();
param.ParameterName = "@City";
param.Value = inputCity;
```

Keep in mind that the ParameterName property of the SqlParameter instance must be spelled exactly like the parameter used in the SqlCommand's SQL command string. You must also assign a value; when the SqlCommand object executes, this value replaces the parameter placeholder.
Putting It All Together

We already know how to use the SqlCommand and SqlDataReader objects. The listing below is a working program that puts the pieces together using SqlParameter objects.

Listing 4: Adding Parameters to Queries

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class ParamDemo
{
    static void Main()
    {
        // conn and reader declared outside try
        // block for visibility in finally block
        SqlConnection conn = null;
        SqlDataReader reader = null;

        string inputCity = "New York";

        // NOTE: the body of this listing was garbled in the source; it is
        // reconstructed here from Listings 1-3. The connection string and
        // the printed column are illustrative.
        try
        {
            conn = new SqlConnection(
                "Data Source=(local);Initial Catalog=Northwind;Integrated Security=SSPI");

            // 1. declare command object with parameter
            SqlCommand cmd = new SqlCommand(
                "select * from Customers where city = @City", conn);

            // 2. define parameters used in command object
            SqlParameter param = new SqlParameter();
            param.ParameterName = "@City";
            param.Value = inputCity;

            // 3. add new parameter to command object
            cmd.Parameters.Add(param);

            conn.Open();
            reader = cmd.ExecuteReader();
            while (reader.Read())
            {
                Console.WriteLine("{0}", reader["CompanyName"]);
            }
        }
        finally
        {
            if (reader != null) reader.Close();
            if (conn != null) conn.Close();
        }
    }
}
```

The code in Listing 4 fetches the records for every customer that resides in New York, made safer through the use of parameters.

Conclusion

In this article we learned to use parameters to filter queries in a much more secure way. Using a parameter is a three-step process:

- Define the parameter in the SqlCommand command string,
- Declare the SqlParameter object with the applicable properties,
- Assign the SqlParameter object to the SqlCommand object.

That is all for today's article; hope you liked it, and see you next time.
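The three-step idea (a placeholder in the command text, a parameter object, a bound value) is not specific to ADO.NET. As an illustration, here is the same pattern with Python's standard-library sqlite3 module, which uses ? placeholders instead of @Name (the table and data below are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table Customers (CompanyName text, City text)")
conn.executemany(
    "insert into Customers values (?, ?)",
    [("Acme", "New York"), ("Globex", "Boston")],
)

input_city = "New York"
# The placeholder keeps the user input out of the SQL text itself,
# so it is always treated as data, never as part of the statement.
rows = conn.execute(
    "select CompanyName from Customers where City = ?", (input_city,)
).fetchall()
print(rows)  # → [('Acme',)]
```

Feeding a hostile string such as `' or '1'='1` into input_city simply matches no city, instead of rewriting the query.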
http://mrbool.com/how-to-incorporate-parameters-to-commands-with-c/25982
Python is a powerful multipurpose programming language created by Guido van Rossum. It has a simple and easy-to-use syntax, making it a popular first-choice programming language for beginners.

This is a comprehensive guide that explores the reasons you should consider learning Python and the ways you can get started with Python. If you directly want to get started with Python, visit our Python Tutorial page.

What is the Python Programming Language?

Python is an interpreted, object-oriented, high-level programming language. As it is general-purpose, it has a wide range of applications, from web development and desktop GUIs to scientific and mathematical computing. Python is popular for its simple and relatively straightforward syntax. Its readability increases productivity, as it allows us to focus more on the problem rather than on structuring the code.

Features of Python Programming

Simple and easy to learn: Python has a very simple and elegant syntax. It is much easier to read and write programs in Python compared to other languages like C, C++, or Java. Due to this reason, many beginners are introduced to programming with Python as their first programming language.

Free and open-source: You can freely use and distribute Python programs, even for commercial use. As it is open-source, you can even change Python's source code to fit your use case.

Portability: A single Python program can run on different platforms without any change in source code. It runs on almost all platforms, including Windows, Mac OS X, and Linux.

Extensible and embeddable: You can combine Python code with other programming languages like C or Java to increase efficiency. This allows high performance and scripting capabilities that other languages do not provide out of the box.

High-level interpreted language: Python itself handles tasks like memory management and garbage collection. So unlike C or C++, you don't have to worry about system architecture or any other lower-level operations.
Rich library and large community: Python has numerous reliable built-in libraries, and Python programmers have developed tons of free and open-source libraries, so you don't have to code everything by yourself. The Python community is very large and ever-growing. If you encounter errors while programming in Python, it's likely that the problem has already been asked about and solved by someone in this community.

Reasons to Choose Python as a First Language

1. Simple, elegant syntax

Programming in Python is fun. It's easier to understand and write Python code, and the syntax feels natural. Let's take the following example, where we add two numbers:

```python
a = 2
b = 3
sum = a + b
print(sum)
```

Even if you have never programmed before, you can easily guess that this program adds two numbers and displays the result.

2. Not overly strict

You don't need to define the type of a variable in Python, and it's not necessary to add a semicolon at the end of a statement. Python enforces good practices (like proper indentation). These small things can make learning much easier for beginners.

3. The expressiveness of the language

Python allows you to write programs having greater functionality with fewer lines of code. Let's look at code to swap the values of two variables. It can be done in Python with the following lines of code:

```python
a = 15
b = 27
print(f'Before swapping: a, b = {a},{b}')
a, b = b, a
print(f'After swapping: a, b = {a},{b}')
```

Here, we can see that the code is much shorter and more readable. If instead we were to use Java, the same program would have to be written in the following way:

```java
public class Swap {
    public static void main(String[] args) {
        int a, b, temp;
        a = 15;
        b = 27;
        System.out.println("Before swapping : a, b = " + a + ", " + b);
        temp = a;
        a = b;
        b = temp;
        System.out.println("After swapping : a, b = " + a + ", " + b);
    }
}
```

This is just one example. There are many more such cases where Python increases efficiency by reducing the amount of code required to program something.

4.
Great Community and Support

Python has a large, supportive community. There are numerous active online forums which can come in handy if you are stuck anywhere in the learning process.

How can you learn to code in Python?

Learn Python from Programiz: Programiz offers dozens of tutorials and examples to help you learn Python programming from scratch. Each tutorial is written in depth with examples and detailed explanations.

Learn Python from the Mobile App: Programiz provides a beginner-friendly mobile app. It contains bite-sized lessons and an integrated Python interpreter. To learn more, visit the Learn Python app.

Learn Python from Books: It is always a good idea to learn to program from books. You will get the big picture of programming concepts in a book, which you may not find elsewhere. Here are 3 books we personally recommend.

- Think Python: How to Think Like a Computer Scientist - a hands-on guide to start learning Python with lots of exercise materials
- Starting Out With Python - an introductory programming book for students with limited programming experience
- Effective Python: 59 Specific Ways to Write Better Python - an excellent book for learning to write robust, efficient and maintainable code in Python

Final Words

We at Programiz think Python is a terrific language to learn. If you are getting started in programming, Python is an awesome choice. You will be amazed by how much you can do in Python once you know the basics. It is easy to overlook the fact that Python is a powerful language. Not only is Python good for learning programming, but it is also a good language to have in your arsenal. Python can help you get started on everything, whether it is changing your idea into a prototype, creating a game, or getting into Machine Learning and Artificial Intelligence.
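As a tiny illustration of the "rich library" and expressiveness points above, counting word frequencies takes just a couple of lines with the standard library's collections module (the sentence below is made up):

```python
from collections import Counter

text = "python is simple and python is powerful"
word_counts = Counter(text.split())
print(word_counts.most_common(2))  # → [('python', 2), ('is', 2)]
```

In many other languages this would require an explicit loop, a map lookup, and a sort.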
https://www.programiz.com/python-programming/guide
Report Bursting

How to burst a report? What are the steps for bursting?

Birjesh - Nov 27th, 2019
Bursting is used to distribute a report to different clients.

sreenivasulu pokuru - Aug 5th, 2013
Bursting is the process of running a report once and dividing the results for various recipients through a burst group. There are two ways of bursting reports: 1) through the mail option; 2) throug...

What is object security in Framework Manager?

naveen - May 23rd, 2019
Object security allows or denies access to objects like namespaces, query subjects, etc.

binduandme - Mar 10th, 2009
Users granted all permissions are known as power users; without restricting or creating any security at this level, you need to grant all permissions.

Cognos Logs

Production Support: Where and how to check Cognos logs?

Rafiq - Dec 17th, 2018
On the server where Cognos is installed, the logs are present in /installation_path/c10/cognos/logs/cogserver.log. This is the log file, but it is not helpful most of the time.

Date Validation

How to validate dates in Cognos Report Studio (e.g., the start date should be less than the end date)?

srijib mandal - Aug 12th, 2018
You can use a date range prompt if you need it on a prompt page; in a filter, you can convert to timestamp and then compare.

Display Products by Country in Cognos Report Studio

I have two prompts, country and product line. For the selected item in the country prompt, only a few items of the product line should be shown instead of all of them. How can I achieve this?

Mayank Sanghvi - Mar 21st, 2018
A cascading prompt is the answer to your question. We use cascading when we need to load data in one prompt based on another. So in the above scenario, Country is the first prompt and Product Line is dependent on Country.

Martina Vaz - Jun 26th, 2017
Use the cascading prompt feature. For the product prompt, set the cascading prompt property to Yes and select the country parameter.
The product query is then filtered by the selected country.

Can we use more than one measure in a crosstab?

ravindra - Nov 19th, 2017
Yes, we can.

jjrkl_sandeep - Oct 22nd, 2013
Yes, it is 100% possible...

Can we convert a crosstab report to a list report?

Manogna - Aug 30th, 2017
We can't convert a crosstab to a list, but the reverse is possible by using the pivot table function.

Anil Kumar - Jul 28th, 2016
We don't have any feature to change a crosstab report to a list; we can only change a list report to a crosstab with pivot.

How to improve performance from the database side

What can be done on the database side to improve performance? What is performance tuning?

Khalid Mehmood Awan - Jul 25th, 2017
Creating table partitions, indexes and materialized views, removing normalization where it is unnecessary, increasing hardware power, and many other factors.

Lakshmikar - Apr 5th, 2016
1. Partitioning. 2. Indexes. 3. Avoid an excessive number of joins.

Report Studio: Restrict User from Adding New Data Items

How to restrict the user from adding a new data item to a container (list) in Report Studio?

Khalid Mehmood Awan - Jul 25th, 2017
In that case, the report will be a snapshot report.

Reddy25 - Oct 25th, 2016
In Report properties, go to the Permissions page, select the user/group/role the user belongs to, and deny the write permission.

What is a catalog, what types of catalogs exist in Cognos, and which is better?

Cornelius Goodwin - Jul 7th, 2017
Catalogs have not been used since Impromptu.

maryjane - Mar 7th, 2017
A catalog is an inventory or bookkeeping of the library's contents.

Usage Property

What should the usage property of an interest rate be in Framework Manager?
Martina - Jun 26th, 2017
The usage property should be measure.

shashi - Apr 5th, 2017
Fact: measures, default aggregation total. Attribute: dimensions. Identifier: key columns, default aggregation count.

Mind-Twisting Questions for Report Studio: Try It

There is a prompt page with options for ascending and descending; based on the prompt value selected, the report page generates the list column values sorted in either ascending or descending order:

Martina Vaz - Jun 26th, 2017
Create 2 lists, one ascending and one descending; depending on the prompt selection, display the appropriate list.

Mayank Sanghvi - Aug 2nd, 2016
In addition to the above answer, you can use two separate lists for ascending and descending order.

Framework Manager Dimensions

How many types of dimensions are there in Framework Manager? Which dimensions are used in a project, and who decides that?

cognos_anu - Jun 21st, 2017
Role-playing dimension, junk dimension, conformed dimension, degenerate dimension, slowly changing dimension. Regular dimension and measure dimension in a DMR model.

shailendra32 - May 26th, 2012
Regular dimension and measure dimension are the two dimension types. A regular dimension is used in dimensional modelling of relational data; it generally gives the drill-up/drill-down facility of OLAP to relational data. Generally, business analysts or data modelling experts decide this.

Difference between Scope and Relationship?

cognos_anu - Jun 21st, 2017
A scope relationship exists between a regular dimension and a measure dimension. It determines the level at which facts/measures are available for reporting. We can view, edit and create scope relationships; they are automatically created based on the joins between tables of the modelling layer.

shyam - Mar 17th, 2012
Scope is used to join the regular dimensions and measure dimensions to perform multidimensional analysis such as drill down and drill up.
A relationship joins two tables (specifying cardinality 1-1, 1-n).

Alternating Row Colors

How to add 2 different colors for alternating rows in a report?

cognos_anu - Jun 16th, 2017
You can create an advanced conditional style with the following expressions: mod(rownumber, 2) = 0, to which you assign the first colour, and mod(rownumber, 2) = 1, to which you assign the second colour. This way you get alternating colours for the object.

shashi - Apr 5th, 2017
Use mod(rownumber, 3) = 0 for 3 colours, mod(rownumber, 2) = 0 for 2 colours...
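The mod(rownumber, 2) trick above is not Cognos-specific; it is plain modular arithmetic over the row index. A small Python sketch of the same idea (the function and colour names are made up for illustration):

```python
def stripe(rows, colors=("white", "silver")):
    # colors[i % len(colors)] cycles through the palette, which is
    # exactly what mod(rownumber, 2) = 0 / 1 does in a conditional style.
    return [(row, colors[i % len(colors)]) for i, row in enumerate(rows)]

print(stripe(["row1", "row2", "row3"]))
# → [('row1', 'white'), ('row2', 'silver'), ('row3', 'white')]
```

Passing a three-colour palette gives the mod(rownumber, 3) variant from the second answer.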
http://www.geekinterview.com/Interview-Questions/Data-Warehouse/Cognos
What do you need:

Preparing the field

First we create a simple activity that has a method with the native qualifier. That tells the JVM that the method is not implemented in Java but in the native layer, in C or C++. With Eclipse and ADT 20 that is pretty easy.

```java
package com.example.testjni;

import android.os.Bundle;
import android.app.Activity;
import android.view.Menu;
import android.widget.TextView;

public class MainActivity extends Activity {

    static {
        System.loadLibrary("teste");
    }

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        sum(3, 5);
    }

    public void setTextView(String text) {
        TextView tv = (TextView) findViewById(R.id.text);
        tv.setText(text);
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        getMenuInflater().inflate(R.menu.activity_main, menu);
        return true;
    }

    public native int sum(int a, int b);
}
```

The code above creates a simple activity with a TextView. The static block instructs the Dalvik VM to load the shared library teste; you will see later that this is the name of the library that will be generated to hold the native implementation of the method sum(int, int). This is done inside a static block so it is executed when the class MainActivity is loaded, and therefore before any instance of it is created; that way Dalvik can execute the native method when requested. My intention is that the activity, after creating the views, will call the native method to calculate the sum of 3 and 5, and then set the TextView with the result.

Generating the header file

I created a jni folder in the project's directory to hold all the native code. Now we need to implement the native method. When the native method is called, the JVM will look for a C function with a particular signature. The easiest way to know which signature you should use for your implementation is to use the javah tool from the JDK.
```
#javah -classpath ./bin/classes:~/Downloads/android-sdk-macosx/platforms/android-16/android.jar -d ./jni com.example.testjni.MainActivity
```

I ran the command above in the project's folder. javah needs the definition of all the classes in your program, so I set the classpath to the folder where the class files for my project are put (bin/classes), and I also added the path to android.jar. In my case I am using API level 16, so I picked the right android.jar for it. The -d parameter tells javah to output the header file to the jni folder. You need to pass the fully qualified name (i.e., including the package name) of the class with the native method. That command generated the following header:

```c
/* DO NOT EDIT THIS FILE - it is machine generated */
#include <jni.h>
/* Header for class com_example_testjni_MainActivity */

#ifndef _Included_com_example_testjni_MainActivity
#define _Included_com_example_testjni_MainActivity
#ifdef __cplusplus
extern "C" {
#endif
/*
 * Class:     com_example_testjni_MainActivity
 * Method:    sum
 * Signature: (II)I
 */
JNIEXPORT jint JNICALL Java_com_example_testjni_MainActivity_sum
  (JNIEnv *, jobject, jint, jint);

#ifdef __cplusplus
}
#endif
#endif
```

Note that even if the code gets compiled as C++, C linkage is forced for the function, so the JVM can find the method without having to deal with C++ name mangling. (C++ supports function overloading and mangles function signatures, embedding parameter types, class and namespace scopes in the name, in order to support it.)

Implementing the native method

I will show the implementation in both C and C++. The reason is that, although the two languages are very similar (you can compile C code with a C++ compiler), the JNI for C++ offers some inline methods to make it more object-oriented.

The C code

```c
#include <stdio.h>
#include <stdlib.h>
#include <jni.h>
#include <android/log.h>
#include "com_example_testjni_MainActivity.h"

#define LOG_TAG "TesteNativeC"
#define LOGI(...) ((void)__android_log_print(ANDROID_LOG_INFO, LOG_TAG, __VA_ARGS__))

static char* buildString(jint a, jint b, jint sum) {
    const char* fmt_string = "The sum of %d and %d is %d";
    const int buffer_size = snprintf(NULL, 0, fmt_string, a, b, sum);
    LOGI("Buffer size is %d", buffer_size);
    char* buffer = (char*) malloc((buffer_size + 1) * sizeof(char));
    if (buffer != NULL) {
        snprintf(buffer, (buffer_size + 1), fmt_string, a, b, sum);
    }
    return buffer;
}

JNIEXPORT jint JNICALL Java_com_example_testjni_MainActivity_sum
  (JNIEnv* env, jobject obj, jint a, jint b) {
    jint result = a + b;
    jclass cls = (*env)->GetObjectClass(env, obj);
    jmethodID mid = (*env)->GetMethodID(env, cls, "setTextView", "(Ljava/lang/String;)V");
    if (mid == 0) return 0;
    char* text = buildString(a, b, result);
    if (text != NULL) {
        LOGI("At this point string is <<%s>>", text);
        jstring textToDisplay = (*env)->NewStringUTF(env, text);
        (*env)->CallVoidMethod(env, obj, mid, textToDisplay);
        free(text);
    }
    return result;
}
```

Function buildString is declared static because it is not intended to be exported or used outside of the library. Its objective is to build the string "The sum of 3 and 5 is 8". For that, we call the function snprintf from the standard C library. Note that I am using snprintf, the "safe" version of sprintf. The reason for that is to prevent buffer overruns, a common technique used by attackers to inject malicious code. In our case we don't strictly need to do that, because the input does not come from an external source; however, best practices should always be enforced. snprintf is used twice: first with a NULL parameter to calculate the size of the buffer necessary to hold the string, and a second time to actually build the string.

The implementation of the native method follows conventional JNI idioms and is very similar to using reflection in Java. So I just calculated the sum, retrieved the reference to the setTextView(String) method of the activity, built the string and called the method.
And, this is very important, I released the allocated char array. NewStringUTF makes a copy of the passed char array, so you need to free the buffer to avoid a memory leak.

The C++ code

```cpp
#include <jni.h>
#include <sstream>
#include <string>
#include <android/log.h>
#include "com_example_testjni_MainActivity.h"

#define LOGI(...) ((void)__android_log_print(ANDROID_LOG_INFO, LOG_TAG, __VA_ARGS__))

using std::stringstream;
using std::string;

const char* LOG_TAG = "TesteNativeCpp";

static string buildString(jint a, jint b, jint sum) {
    stringstream buffer;
    buffer << "The sum of " << a << " and " << b << " is " << sum;
    return buffer.str();
}

JNIEXPORT jint JNICALL Java_com_example_testjni_MainActivity_sum
  (JNIEnv* env, jobject obj, jint a, jint b) {
    jint result = a + b;
    jclass cls = env->GetObjectClass(obj);
    jmethodID mid = env->GetMethodID(cls, "setTextView", "(Ljava/lang/String;)V");
    if (mid == 0) return 0;
    string text = buildString(a, b, result);
    LOGI("At this point string is <<%s>>", text.c_str());
    jstring textToDisplay = env->NewStringUTF(text.c_str());
    env->CallVoidMethod(obj, mid, textToDisplay);
    return result;
}
```

The C++ code is not much different. I could have used the same code as the C version for buildString, but I decided to write code that is closer to the latest C++ standards (not C++11 yet :-) ). Besides, this uses STL containers, and using the STL requires extra build configuration to compile, so I get the chance to show you how to set up the build for that case. Note that for C++, env works more like an object: where in C we have (*env)->function(env, parameters), in C++ we have env->method(parameters).

Building

Now we need a makefile for the job. I created the file Android.mk under the jni folder:

```makefile
LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)

LOCAL_MODULE    := teste
LOCAL_SRC_FILES := teste.cpp
LOCAL_LDLIBS    := -llog

include $(BUILD_SHARED_LIBRARY)
```

This makefile is based on the one used for the "hello world" sample of the NDK.
LOCAL_MODULE tells the build system to generate the shared library for module teste (the output file is named libteste.so; System.loadLibrary("teste") resolves that name). LOCAL_SRC_FILES lists the files to be compiled; the cpp extension is required for C++ files. LOCAL_LDLIBS tells the linker to link against the library that allows our native code to write log output to logcat.

Remember I said we need extra configuration to use STL containers. This means creating an Application.mk file beside the Android.mk in the jni folder:

```makefile
APP_STL := stlport_static
```

This single line sets the STL library to be linked statically. Now we can build: you just need to invoke <ndk_folder>/ndk-build in your project folder. This generates the shared library that will be packaged in your APK. After you have built the library, you just need to build your APK as you would normally do (Eclipse or ant). And that is the final result.
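The overall pattern here (declare the native function's signature, load the shared library, call through it) is not unique to JNI. As a loose illustration, Python's standard-library ctypes module does the same job against the C math library (the library lookup below assumes a typical glibc/Linux system):

```python
import ctypes
import ctypes.util

# Load the C math library; the fallback file name is a glibc/Linux assumption.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declare the native signature, much like the javah-generated header does
# for Java_com_example_testjni_MainActivity_sum.
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(16.0))  # → 4.0
```

The argtypes/restype declarations play the same role as the JNI signature string "(II)I": they tell the runtime how to marshal arguments across the language boundary.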
http://aoriani.blogspot.com/2012/08/using-native-code-jni-on-android-apps.html
Cluster Utilities

This section provides information on the Hazelcast command line tool and other cluster utilities.

Command Line Tool

This is a tool with which you can install and run Hazelcast IMDG and Management Center on your Unix-like local environments. You need JRE 8 or newer as a prerequisite. This tool comes with your Hazelcast IMDG download package; when you extract the package, you see the "hazelcast-command-line" directory. To install and start using the tool, follow these steps:

Run the following commands in the extracted IMDG directory:

```
cd hazelcast-command-line/distro
make
```

When the make command finishes, run the following commands:

```
cd build/dist/bin
./hz
```

You are now ready to use the tool.

Starting a standalone Hazelcast member with the default configuration:

```
./hz start
```

Starting Hazelcast Management Center:

```
./hz mc start
```

Please see the tool's documentation for all the other usages.

Using the cluster.sh Script

See the Force Start section. The script cluster.sh takes the following parameters to operate according to your needs; if these parameters are not provided, the default values are used. The script cluster.sh is self-documented; you can see the parameter descriptions using the command ./cluster.sh -h or ./cluster.sh --help.

Example Usages for cluster.sh

Let's say you have a cluster running on remote machines, and one Hazelcast member is running on the IP 172.16.254.1 and on the port 5702. The cluster name and password of the cluster are test and test.
Getting the cluster state:

To get the state of the cluster, use the following command:

```
./cluster.sh -o get-state -a 172.16.254.1 -p 5702 -g test -P test
```

The following also gets the cluster state, using the alternative parameter names, e.g., --port instead of -p:

```
./cluster.sh --operation get-state --address 172.16.254.1 --port 5702 --clustername test --password test
```

Changing the cluster state:

To change the state of the cluster to frozen, use the following command:

```
./cluster.sh -o change-state -s frozen -a 172.16.254.1 -p 5702 -g test -P test
```

Similarly, you can use the following command for the same purpose:

```
./cluster.sh --operation change-state --state frozen --address 172.16.254.1 --port 5702 --clustername test --password test
```

Shutting down the cluster:

To shut down the cluster, use the shutdown operation in the same way (the exact command was missing from the source; the line below follows the same pattern as the other operations):

```
./cluster.sh -o shutdown -a 172.16.254.1 -p 5702 -g test -P test
```

Partial starting the cluster:

To partial-start the cluster when Hot Restart is enabled, use the following command:

```
./cluster.sh -o partial-start -a 172.16.254.1 -p 5702 -g test -P test
```

Similarly, you can use the following command for the same purpose:

```
./cluster.sh --operation partial-start --address 172.16.254.1 --port 5702 --clustername test --password test
```

Force starting the cluster:

To force-start the cluster when Hot Restart is enabled, use the following command:

```
./cluster.sh -o force-start -a 172.16.254.1 -p 5702 -g test -P test
```

Similarly, you can use the following command for the same purpose:

```
./cluster.sh --operation force-start --address 172.16.254.1 --port 5702 --clustername test --password test
```

Getting the current cluster version:

To get the cluster version, use the following command:

```
./cluster.sh -o get-cluster-version -a 172.16.254.1 -p 5702 -g test -P test
```

The following also gets the cluster version, using the alternative parameter names, e.g., --port instead of -p:

```
./cluster.sh --operation get-cluster-version --address 172.16.254.1 --port 5702 --clustername test --password test
```

Changing the cluster version:

See the Rolling Member Upgrades chapter to learn more about the cases when you should change the
cluster version. To change the cluster version to X.Y, use the following command:

```
./cluster.sh -o change-cluster-version -v X.Y -a 172.16.254.1 -p 5702 -g test -P test
```

The cluster version is always in the major.minor format, e.g., 3.12. Using other formats results in a failure.

Calls against TLS-protected members (using the HTTPS protocol):

When the member has TLS configured, use the --https argument to instruct cluster.sh to use the proper URL scheme:

```
./cluster.sh --https \
    --operation get-state --address member1.example.com --port 5701
```

If the default set of trusted certificate authorities is not sufficient, e.g., you use a self-signed certificate, you can provide a custom file with the root certificates:

```
./cluster.sh --https \
    --cacert /path/to/ca-certs.pem \
    --operation get-state --address member1.example.com --port 5701
```

When TLS mutual authentication is enabled, you have to provide the client certificate and the related private key:

```
./cluster.sh --https \
    --key privkey.pem \
    --cert cert.pem \
    --operation get-state --address member1.example.com --port 5701
```

Using REST API for Cluster Management

Besides the Management Center's Hot Restart tab and the script cluster.sh, you can also use the REST API to manage your cluster's state. The operations you can perform mirror those of cluster.sh described above: getting and changing the cluster state, shutting down the cluster, partial/force starting, and getting/changing the cluster version.

Enabling Lite Members

Lite members are cluster members that do not own any data partitions; they can still access cluster data through operations such as map.put() and map.get().

Configuring Lite Members

You can enable a cluster member to be a lite member using declarative or programmatic configuration.

Declarative Configuration:

```xml
<hazelcast>
    ...
    <lite-member enabled="true"/>
    ...
</hazelcast>
```

```yaml
hazelcast:
  lite-member:
    enabled: true
```

Programmatic Configuration:

```java
Config config = new Config();
config.setLiteMember(true);
```

Promoting Lite Members to Data Members

Lite members can be promoted to data members using the Cluster interface. When they are promoted, cluster partitions are rebalanced and ownership of some portion of the partitions is assigned to the newly promoted data members.
Config config = new Config(); config.setLiteMember(true); HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance(config); Cluster cluster = hazelcastInstance.getCluster(); cluster.promoteLocalLiteMember(); Getting Member Events and Member Sets Hazelcast allows you to register for membership events so that you are notified when members are added or removed. You can also get the set of cluster members. The following example code does the above: registers for member events, notifies when members are added or removed and gets the set of cluster members. public class ExampleGetMemberEventsAndSets { public static void main(String[] args) {() ); } } } Managing Cluster and Member States This section explains the states of Hazelcast clusters and members which you can use to allow or restrict the designated cluster/member operations. Cluster States. NO_MIGRATION: In this state, there is no data movement between Hazelcast members. It means that when there is a member crash or a new member in the cluster, there won’t be any partition rebalancing, partition backup replica creation or migration. Please note that promoting a backup replica to the primary replica is a local operation and does not involve any data movement between cluster members. Hence, backup promotions occur on member crashes when the cluster is in this mode. Upon a member crash, all other members that keep backup replicas of the crashed member promote backup replicas to the primary replica role and restore availability. However, there is a limitation here. Since the maximum number of backups is 6, if you lose 7 members in your large cluster, you can lose availability of the partitions whose primary and backup replicas are mapped to those crashed members. The cluster accepts new members. All other operations are allowed. You cannot change the state of a cluster to NO_MIGRATIONwhen migration/replication tasks are being performed. 
When you want to add multiple new members to the cluster, you can first change the cluster state to NO_MIGRATION, then start the new members. Once all of them have joined the cluster, you can change the cluster state back to ACTIVE. Then, the cluster rebalances the partition replica distribution all at once.

FROZEN: In this state, the partition table is frozen and partition assignments are not performed. The cluster does not accept new members. If a member leaves, it can join back. Its partition assignments (both primary and backup replicas) remain the same until either it joins back or the cluster state is changed to ACTIVE. When it joins back to the cluster, it owns all its previous partition assignments as before. On the other hand, when the cluster state changes to ACTIVE, re-partitioning starts and unassigned partition replicas are assigned to the active members. All other operations in the cluster, except migration, continue without restrictions.

You cannot change the state of a cluster to FROZEN when migration/replication tasks are being performed.

You can make use of the FROZEN state along with the Hot Restart Persistence feature. You can change the cluster state to FROZEN, then restart some of your members using the Hot Restart feature. The data on the restarting members will not be accessible, but you will still be able to access the data that is stored in the other members. Basically, the FROZEN cluster state allows you to perform maintenance on your members while only partially degrading availability.

PASSIVE: In this state, the partition table is frozen and partition assignments are not performed. The cluster does not accept new members. If a member leaves while the cluster is in this state, the member will be removed from the partition table when the cluster state moves back to ACTIVE. This state rejects ALL operations immediately EXCEPT the read-only operations like map.get() and cache.get(), replication and cluster heartbeat tasks.
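The restrictions above, for example that the cluster cannot be moved to NO_MIGRATION or FROZEN while migration/replication tasks are running, can be pictured as a small state guard. The sketch below is purely illustrative: the ClusterStateGuard class and all its names are invented for this example, and it is not Hazelcast code.

```python
# Illustrative model of the cluster-state transition rules described above.
# NOT Hazelcast code; all names here are invented for the sketch.

# States that may not be entered while migrations/replications are running.
RESTRICTED_WHILE_MIGRATING = {"NO_MIGRATION", "FROZEN", "PASSIVE"}

class ClusterStateGuard:
    def __init__(self):
        self.state = "ACTIVE"
        self.migrations_running = False

    def change_state(self, new_state):
        # IN_TRANSITION is a temporary, intermediate state and can
        # never be requested explicitly.
        if new_state == "IN_TRANSITION":
            raise ValueError("IN_TRANSITION cannot be set explicitly")
        if self.migrations_running and new_state in RESTRICTED_WHILE_MIGRATING:
            raise ValueError(
                "cannot change state to %s while migration/replication "
                "tasks are being performed" % new_state)
        self.state = new_state

guard = ClusterStateGuard()
guard.migrations_running = True
try:
    guard.change_state("FROZEN")
except ValueError as e:
    print("rejected:", e)

guard.migrations_running = False
guard.change_state("FROZEN")
print("state:", guard.state)
```

Running the sketch shows the first transition being rejected and the second one succeeding once no migrations are in flight.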
You cannot change the state of a cluster to PASSIVE when migration/replication tasks are being performed.

You can make use of the PASSIVE state along with the Hot Restart Persistence feature. See the Cluster Shutdown API for more info.

IN_TRANSITION: This state shows that the state of the cluster is in transition. You cannot set your cluster's state to IN_TRANSITION explicitly. It is a temporary and intermediate state. During this state, your cluster does not accept new members and migration/replication tasks are paused.

The following snippet is from the Cluster interface showing the methods used to manage your cluster's states.

public interface Cluster {
    ClusterState getClusterState();
    void changeClusterState(ClusterState newState);
    void changeClusterState(ClusterState newState, TransactionOptions transactionOptions);
    void shutdown();
    void shutdown(TransactionOptions transactionOptions);
}

See the Cluster interface Javadoc for information on these methods.

Cluster Member States

PASSIVE: A member goes into this state in either of the following cases:
- until the member's shutdown process is completed after the method Node.shutdown(boolean) is called (note that when the shutdown process is completed, the member's state changes to SHUT_DOWN);
- the cluster's state is changed to PASSIVE using the method changeClusterState().

SHUT_DOWN: A member goes into this state when the member's shutdown process is completed. The member in this state rejects all operations and invocations. A member in this state cannot be restarted.

Safety Checking Cluster Members

To prevent data loss when shutting down a cluster member, Hazelcast provides a graceful shutdown feature. You perform this shutdown by calling the method HazelcastInstance.shutdown(). The oldest cluster member migrates all of the replicas owned by the shutdown-requesting member to the other running (i.e., those that have not initiated shutdown) cluster members. After these migrations are completed, the shutting-down member will not be the owner or a backup of any partition anymore.

It means that you can shut down any number of Hazelcast members in a cluster concurrently with no data loss. Please note that the process of shutting down members waits for a predefined amount of time for the oldest member to migrate their partition replicas. You can specify this graceful shutdown timeout duration using the property hazelcast.graceful.shutdown.max.wait. Its default value is 10 minutes. If migrations are not completed within this duration, shutdown may continue non-gracefully and lead to data loss. Therefore, you should choose your own timeout duration considering the size of the data in your cluster.

Ensuring Safe State with PartitionService

With the improvements in the graceful shutdown procedure in Hazelcast 3.7, the following methods are no longer needed to perform a graceful shutdown. Nevertheless, you can use them to check the current safety status of the partitions in your cluster.

The method isClusterSafe checks whether the cluster is in a safe state: it checks if all backups are in sync for each partition. The method isMemberSafe checks whether a specific member is in a safe state: it checks if all backups of the partitions of the given member are in sync.
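A typical use of these safety checks is to poll until the check turns true before proceeding with maintenance. The helper below is a generic, hypothetical sketch of that loop; the is_safe callable stands in for a wrapper around a check such as PartitionService.isClusterSafe(), and all names are invented for the example.

```python
import time

def wait_until_safe(is_safe, timeout_seconds=600.0, poll_interval=0.0):
    # Poll the supplied safety check (e.g., a wrapper around
    # PartitionService.isClusterSafe()) until it returns True or the
    # timeout elapses.  Returns True if the check succeeded in time.
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        if is_safe():
            return True
        if poll_interval:
            time.sleep(poll_interval)
    return False

# Fake check that becomes safe on the third poll, for demonstration.
calls = {"n": 0}
def fake_is_safe():
    calls["n"] += 1
    return calls["n"] >= 3

print(wait_until_safe(fake_is_safe, timeout_seconds=5.0))  # True
```

In a real script you would pick the timeout to match the cluster's data size, mirroring the advice about hazelcast.graceful.shutdown.max.wait above.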
IRC log of ws-ra on 2011-03-01

Timestamps are in UTC.

20:30:18 [RRSAgent] RRSAgent has joined #ws-ra
20:30:18 [RRSAgent] logging to
20:30:20 [trackbot] RRSAgent, make logs public
20:30:20 [Zakim] Zakim has joined #ws-ra
20:30:22 [trackbot] Zakim, this will be WSRA
20:30:22 [Zakim] ok, trackbot, I see WS_WSRA()3:30PM already started
20:30:23 [trackbot] Meeting: Web Services Resource Access Working Group Teleconference
20:30:23 [trackbot] Date: 01 March 2011
20:30:40 [Zakim] +Wu_Chou
20:31:05 [Katy] Katy has joined #ws-ra
20:31:16 [Zakim] +[Microsoft]
20:31:55 [Zakim] + +44.196.281.aaaa
20:32:17 [Zakim] +asoldano
20:32:32 [Ram] Ram has joined #ws-ra
20:32:33 [Zakim] +Yves
20:34:26 [Zakim] +Tom_Rutt
20:34:50 [BobF] agenda:
20:35:26 [trutt] trutt has joined #ws-ra
20:35:27 [Katy] Topic: Appoval of Agenda
20:35:36 [dug] scribe: Katy
20:36:35 [Katy] Bob: Goal for CR vote 15th March
20:36:58 [Katy] Topic: Approval of minutes of F2F
20:37:08 [Katy] Resolution: Minutes approved
20:37:24 [dug] bots are sleep today
20:37:35 [dug] s/sleep/asleep/
20:37:44 [Katy] Topic:
20:38:35 [asoldano] yes
20:38:42 [Katy] Bob: Any objection to accepting proposal in comment no 1 of proposal.
20:39:57 [Katy] Doug: Describes proposal
20:41:38 [Katy] Gil: Feel uneasy about this because assigning semantic meaning to the empty string
20:42:43 [Katy] ... How about only making the Get the special case
20:43:01 [Katy] Doug: What if someone wants to use the empty string as id value
20:43:13 [Katy] Gil: That's associating special value to ""
20:43:34 [li] li has joined #ws-ra
20:43:40 [Katy] ... we could have special string that means "unidentified" and special case that
20:44:00 [Katy] ... within the W3C namespace
20:44:49 [trutt] q+
20:44:57 [Katy] Doug: I don't think this is special semantics as it's indicated no identifier
20:45:00 [BobF] act tom
20:45:05 [BobF] ack tr
20:45:07 [Katy] Bob: Empty string might mean no value
20:46:17 [Katy] Tom: What if there's an overloaded identifier that happens to be ""?
20:47:15 [Katy] Gil: The point is a symbol to identify no useful Id - whether it's a "" or special URI
20:48:44 [Katy] Doug: Initial problem was that the client doesn't know whether it needs an identifier or not.
20:49:14 [Katy] Gil: So when types with identifier defined those must be used, I am thinking of types with no identifier
20:49:20 [Katy] ... specified
20:52:46 [Katy] Gil: Problem is we don't know all the dialects there may be some types where we don't have an identifier. We should have a way to put these things without an identifier if people choose not to - but it's their problem
20:52:55 [Katy] Doug: But that kills interop
20:52:59 [trutt] q+
20:53:04 [dug] q+
20:53:12 [Katy] Gil: disagree
20:53:18 [BobF] ack tr
20:53:35 [Katy] Tom: In what scenario would someone not have an id for their metadata section?
20:53:57 [BobF] ack d
20:55:13 [Katy] Doug: If you know enough about the metadata to 'put' it, you must put the appropriate identifier. If it's optional then the clients always need to ask for everything
20:55:44 [Katy] ... either mandate the use of identifier or there's no point in it.
20:56:17 [trutt] q+
20:56:27 [BobF] ack tr
20:56:57 [Katy] q+
20:57:18 [BobF] ack k
20:58:05 [trutt] If you require an id, but allow it to be "", it will all work
20:58:51 [Katy] Katy: Id should not be optional or clients would have to assume it's not there
20:58:55 [BobF] that means that the set of values for the id attribute is empty
20:59:58 [Katy] Bob: Empty identifier means set of values is empty/not present (in terms of XSLT test)
21:00:01 [trutt] "" is a value, it will test for presence
21:00:25 [Dave] Dave has joined #ws-ra
21:00:45 [Yves] optimization or not, absent and null value must be described
21:01:13 [dug] q+
21:01:15 [Dave] Dave Snelling is lurking on tthe IRC only.
21:01:24 [trutt] "" is not a null value, it is a valid string "empty string" , not null
21:01:38 [BobF] zakim, I would like to report a lurker
21:01:38 [Zakim] I don't understand 'I would like to report a lurker', BobF
21:02:27 [trutt] if you test for presence of the value with "", it will be true in xpath. To test for "" you have to actually do a sting compare operation with "" as the compared value
21:02:37 [Katy].
21:02:57 [BobF] ack d
21:03:56 [gpilz] gpilz has joined #ws-ra
21:04:21 [trutt] q+
21:04:35 [BobF] Java will return an empty string if there is no value defined
21:04:40 [BobF] ack tr
21:04:49 [Katy] Doug: To ease confusion factor, I would like to require the identifier to be set (to "" or syntax string) else folk will think that absence = wildcard.
21:05:51 [gpilz] q+
21:06:14 [BobF] ack gp
21:06:30 [Katy] Tom: Schema point, technically speaking a default would work but it would cause more problems to have a default than to use "" - the latter makes it easier for xpath
21:07:18 [Katy] Gil: I think we have agreed the following 1) Put needs the type; 2) in some cases value of the type is default which=""
21:07:49 [Katy] ... we need to say whether it is legal to put empty string for a dialect that mex provides an identifier to
21:07:54 [trutt] q+
21:08:06 [Katy] Doug: I agree, I think we have come full circle back to the proposal
21:08:44 [BobF] ack tr
21:08:52 [dug] <a foo='1'/> @foo != @foo2
21:09:02 [dug] oops, <a foo''/>
21:09:20 [dug] according to xml spy anyway
21:09:22
21:09:24 [gpilz] oops
21:09:28 [trutt] q+
21:10:19 [BobF] ack tr
21:10:46 [gpilz] - make @Identifier a required attribute of mex:MetadataSection
21:10:46 [gpilz] - you MAY use "" as the value of @Identifier except for those Dialects defined by WS-MEX
21:10:46 [gpilz] - keep @Identifier optional on the mex:GetMetadata operation
21:10:46 [gpilz] - mex:GetMetadata w/o @Identifier (not "") means match ALL @Identifiers
21:11:09 [BobF] q?
21:11:11 [dug] q+
21:11:37 [BobF] ack dug
21:12:15 [wuchou] wuchou has joined #ws-ra
21:12:24 [Katy] Doug: I will work on this text before the next meeting when we can review
21:12:56 [Katy] Bob: Do we agree directionally so Bob can work on final text
21:13:26 [Katy] Action: Doug to write up text based on comment one with some changes to 2nd bullet
21:13:26 [trackbot] Created ACTION-177 - Write up text based on comment one with some changes to 2nd bullet [on Doug Davis - due 2011-03-08].
21:14:05 [Ram] q+
21:14:24 [Katy] Topic:
21:14:38 [BobF] ack ram
21:14:44 [Katy] Ram: Currently collecting feedback, will have information in the next few days
21:15:09 [Katy] ... Wait until next call prior to confirming final answer
21:15:17 [Katy] Bob: Defer to the next call
21:15:53 [dug] Puffin
21:15:59 [Katy] Topic:
21:16:21 [Ram] q+
21:16:44 [BobF] ack ram
21:17:24 [Katy] Ram: No further testing required?
21:18:19 [Ram] q+
21:18:40 [BobF] ack ram
21:18:43 [Katy] Bob: We will be producing new specs so we should crank through all the tests again
21:19:17 [Katy] Ram: Previous mex issue need aditional testing?
21:19:51 [Katy] Doug: Difficult to answer because the issue is clarifying the semantics
21:20:03 [Katy] ... so to some may be no change
21:20:28 [Katy] Ram: Recommend that we don't do unecessary testing as it has big resource issues
21:21:18 [gpilz] q+
21:21:37 [Yves] we definitely have to test it, but that's what CR is all about
21:21:41 [Katy] Bob: I would prefer to be conservative in our testing, even if just syntax change
21:21:54 [dug] q+
21:22:05 [Katy] Yves: Any changes to element needs to be re-tested if after CR
21:22:12 [BobF] ack gp
21:24:40 [BobF] ack dug
21:25:57 [Katy] Bob: If we change the spec, we should retest. Now we have gone through the process once, it should be easier
21:26:10 [Katy] ... consider this when accepting the proposals
21:26:53 [Katy] ... we could defer 11776 so we can decide whether the test impact too big
21:27:45 [dug]
21:27:48 [Katy] Topic: Misc issues
21:28:14 [Katy] Bob: need to apply for the MIME type.
21:28:22 [Katy] ... done in link above
21:28:23 [trutt] given example xml doc
21:28:25 [trutt] <?xml version="1.0" encoding="UTF-8"?>
21:28:27 [trutt] <doc>
21:28:29 [trutt] <element atr1=""> "" </element>
21:28:30 [trutt] </doc>
21:28:31 [trutt] The following xpath returns the element:
21:28:33 [trutt] //element[@atr1=""]
21:28:34 [trutt] the following xpath does not return the element (no match) //element[@atr1=" "]
21:28:36 [trutt] Thus the "" is not comparable with " "
21:28:52 [Katy] Topic: Items at risk
21:30:03 [Katy] Bob: Items at risk are no longer at risk as we have adequate implementations for WS-Eventing and WS-Enum
21:30:28 [trutt] q+
21:31:04 [BobF] ack tr
21:31:35 [Zakim] -Tom_Rutt
21:31:50 [trutt] I just lost my connection , is the meeting over?
21:32:00 [Ram] not yet
21:32:06 [Ram] We are talking about next meeting.
21:32:07 [BobF] talking about next meeting
21:32:24 [Katy] Topic: Next week's meeting
21:33:00 [dug] q+
21:33:07 [BobF] ack dug
21:33:08 [Katy] Bob: Clash with cloud management meeting on 8th so we will have next meeting on the 15th and meeting on 22nd
21:33:14 [Zakim] +Tom_Rutt
21:33:34 [Ram] WS-Enumeration test coverage analysis to be completed by Microsoft.
21:33:40 [gpilz] q+\
21:33:45 [dug] q+ gil
21:33:55 [dug] q- \
21:34:42 [li] is Darth Vadar speaking as well?
21:34:49 [dug] zakim, who is making noise?
21:35:00 [Zakim] dug, listening for 10 seconds I heard sound from the following: Bob_Freund (25%), Gilbert_Pilz (41%)
21:35:32 [dug] q+
21:35:46 [Ram] Test coverage analysis actions:
21:35:48 [Ram] WS-Enumeration test coverage analysis to be completed by Microsoft.
21:35:56 [Ram] WS-Eventing test coverage analysis to be analysis by Avaya.
21:36:02 [Ram] WS-Transfer/WS-Fragment test coverage analysis to be analysis by IBM.
21:36:04 [BobF] ack du
21:36:08 [Ram] WS-MEX test coverage analysis to be analysis by Oracle.
21:36:13 [BobF] ack g
21:36:46 [Katy] Gil: Why aren't faults defined in the WSDL in the W3C specs?
21:39:05 [Katy] Tom: If SOAP faults they can happen anywhere so don't need to be defined in the WSDL
21:40:52 [dug] answer: we're lazy
21:40:59 [dug] answer: 'cause
21:41:11 [dug] answer: go away, use REST
21:41:30 [BobF] just log them
21:43:43 [dug] would we need a union to express multiple faults could be returned?
21:43:52 [Katy] Gil: will look into this and decide whether issue or not next meeting
21:43:53 [Zakim] -Tom_Rutt
21:43:55 [Zakim] -asoldano
21:43:55 [Katy] Katy has left #ws-ra
21:44:02 [Zakim] -Wu_Chou
21:44:04 [Zakim] -[Microsoft]
21:44:05 [Zakim] -Yves
21:44:06 [Zakim] - +44.196.281.aaaa
21:44:07 [Zakim] -Bob_Freund
21:44:12 [Zakim] -Gilbert_Pilz
21:44:13 [Zakim] WS_WSRA()3:30PM has ended
21:44:14 [Zakim] Attendees were Bob_Freund, Doug_Davis, Gilbert_Pilz, Wu_Chou, [Microsoft], +44.196.281.aaaa, asoldano, Yves, Tom_Rutt
21:44:16 [BobF] rrsagent, generate minutes
21:44:16 [RRSAgent] I have made the request to generate BobF
21:54:05 [gpilz] gpilz has left #ws-ra
22:39:39 [trutt_] trutt_ has joined #ws-ra
Opened 10 years ago
Closed 8 years ago

#5272 closed (fixed)

Password reset form resets passwords for all users sharing an email address

Description

In /contrib/auth/forms.py (line 89) it loops through the users found. So if I have 2 or more accounts with the same e-mail address (because the email field in the Users model is not unique) it would change every account's password in this case, which is not very nice. The code is like this: (SVN commit: 6001)

class PasswordResetForm(oldforms.Manipulator):
    "A form that lets a user request a password reset"
    def __init__(self):
        self.fields = (
            oldforms.EmailField(field_name="email", length=40, is_required=True,
                validator_list=[self.isValidUserEmail]),
        )

    def isValidUserEmail(self, new_data, all_data):
        "Validates that a user exists with the given e-mail address"
        self.users_cache = list(User.objects.filter(email__iexact=new_data))
        if len(self.users_cache) == 0:
            raise validators.ValidationError, _("That e-mail address doesn't have an associated user account. Are you sure you've registered?")

    def save(self, domain_override=None, email_template_name='registration/password_reset_email.html'):
        "Calculates a new password randomly and sends it to the user"
        from django.core.mail import send_mail
        for user in self.users_cache:
            new_pass = User.objects.make_random_password()
            user.set_password(new_pass)
            user.save()
            if not domain_override:
                current_site = Site.objects.get_current()
                site_name = current_site.name
                domain = current_site.domain
            else:
                site_name = domain = domain_override
            t = loader.get_template(email_template_name)
            c = {
                'new_password': new_pass,
                'email': user.email,
                'domain': domain,
                'site_name': site_name,
                'user': user,
            }
            send_mail('Password reset on %s' % site_name, t.render(Context(c)), None, [user.email])

Change History (10)

comment:1 Changed 10 years ago by

comment:2 Changed 10 years ago by

comment:3 Changed 10 years ago by
Never mind, wrong issue.

comment:4 Changed 10 years ago by
I agree, the current solution is rather poor. More than just that, I would much prefer a system which emailed the user a list of their usernames, with links (containing a secret key) to reset the related passwords, rather than just blatantly resetting passwords without any confirmation. I'll accept, but it's up to someone to come up with a good solution (against newforms-admin).

comment:5 Changed 10 years ago by
Why not just changing user model email to unique? Benefits:
- just revert 'Password reset loop set new password' changeset to keep it simple, although other people could give more friendly solution
- user could signin with email.

comment:6 Changed 10 years ago by
What if a user wanted separate accounts (one for admin work and one for normal work, for example)? Would they have to register with separate e-mail addresses?

comment:7 Changed 9 years ago by

comment:8 Changed 8 years ago by
We're facing this problem right now. Any thoughts?

comment:9 Changed 8 years ago by

comment:10 Changed 8 years ago by
I don't understand why this needs to be fixed, now that #7723 has been fixed. No passwords are reset until the link in the e-mail is clicked. If there is a complaint about the usability of the current system (e.g. the number of e-mails sent out etc, I can't remember exactly how it works), please file another bug. Thanks!

Duplicate of #4235.
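Comment 4 sketches the direction Django eventually took: e-mail each matching account a confirmation link containing a secret token, so no password changes until a link is actually followed. The snippet below is a hypothetical illustration of that idea using an HMAC over the username and the current password hash; it is not Django's actual token implementation, and all names here are invented for the example.

```python
import hmac
import hashlib

SECRET_KEY = "not-a-real-secret"  # stands in for settings.SECRET_KEY

def make_reset_token(username, password_hash):
    # Bind the token to the user AND the current password hash, so a
    # link becomes invalid as soon as the password actually changes.
    msg = ("%s:%s" % (username, password_hash)).encode("utf-8")
    return hmac.new(SECRET_KEY.encode("utf-8"), msg, hashlib.sha256).hexdigest()

def make_reset_links(accounts, domain="example.com"):
    # One link per account sharing the e-mail address; no password is
    # touched until the user follows a link and confirms the reset.
    return [
        "https://%s/reset/%s/%s/" % (domain, username,
                                     make_reset_token(username, pw_hash))
        for username, pw_hash in accounts
    ]

links = make_reset_links([("alice", "hash1"), ("alice_admin", "hash2")])
for link in links:
    print(link)
```

With this scheme, the loop over users_cache would only send mail, never call set_password, which is exactly the behavior comment 10 describes after #7723 was fixed.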
iMeshObject Struct Reference
[Mesh plugins]

This is a general mesh object that the engine can interact with. More...

#include <imesh/object.h>

Detailed Description

This is a general mesh object that the engine can interact with. The mesh object only manages its shape, texture, etc., but *not* its position, sector, or similar information. For this reason, a mesh object can only be used in the engine if a hook object is created for it in the engine that does the required management. The hook object is called a mesh wrapper.

Main creators of instances implementing this interface:
- All mesh objects implement this.
- iMeshObjectFactory::NewInstance()

Main ways to get pointers to this interface:

Main users of this interface:
- The 3D engine plugin (crystalspace.engine.3d).

Definition at line 125 of file object.h.

Member Function Documentation

This mesh is being asked to build a decal for its own geometry. The mesh is given a position and radius of the decal and must create geometry through the provided iDecalBuilder.
Implemented in csMeshObject.

Creates a copy of this object and returns the clone.
Implemented in csMeshObject.

Get the base color of the mesh. Will return false if not supported.
Implemented in csMeshObject.

Get the reference to the factory that created this mesh object.
Implemented in csMeshObject.

Get flags for this object. The following flags are at least supported:
- CS_MESH_STATICPOS: mesh will never move.
- CS_MESH_STATICSHAPE: mesh will never animate.
Mesh objects may implement additional flags. These mesh-object-specific flags must be equal to at least 0x00010000.
Implemented in csMeshObject.

Get the material of the mesh. If not supported this will return 0.
Implemented in csMeshObject.

Get the logical parent for this mesh object. See SetMeshWrapper() for more information.
Implemented in csMeshObject.

Get mix mode.
Implemented in csMeshObject.

Get the generic interface describing the geometry of this mesh. If the factory supports this, you should preferably use the object model from the factory instead.
Implemented in csMeshObject.

Returns the set of render meshes. The frustum_mask is given by the culler and contains a mask with all relevant planes for the given object. These planes correspond with the clip planes kept by iRenderView.
Implemented in csMeshObject.

Get the current visible callback.
Implemented in csMeshObject.

Check if this mesh is hit by this object space vector. Return the collision point in object space coordinates. This is the most detailed version (and also the slowest). The returned hit will be guaranteed to be the point closest to the 'start' of the beam. If the object supports this then an index of the hit polygon will be returned (or -1 if not supported or no hit).
Implemented in csMeshObject.

Check if this mesh is hit by this object space vector. This will do a test based on the outline of the object. This means that it is more accurate than HitBeamBBox(). Note that this routine will typically be faster than HitBeamObject(). The hit may be on the front or the back of the object, but will indicate that it interrupts the beam.
Implemented in csMeshObject.

Control animation of this object.
Implemented in csMeshObject.

The engine asks this mesh object to place one of its hierarchical children. It must be placed where it should be at the given time. This object might or might not have been drawn, so you can't use its current state.

Set the base color of the mesh. This color will be added to whatever color is set for lighting. Not all meshes need to support this. This function will return true if it worked.
Implemented in csMeshObject.

Set the material of the mesh. This only works for single-material meshes. If not supported this function will return false.
Implemented in csMeshObject.

Set a reference to the mesh wrapper holding the mesh objects. Note that this function should NOT increase the ref-count of the given logical parent because this would cause a circular reference (since the logical parent already holds a reference to this mesh object).
Implemented in csMeshObject.

Set mix mode. Note that not all meshes may support this.
Implemented in csMeshObject.

Register a callback to the mesh object which will be called from within Draw() if the mesh object thinks that the object is really visible. Depending on the type of mesh object this can be very accurate or not accurate at all. But in all cases it will certainly be called if the object is visible.
Implemented in csMeshObject.

Return true if HardTransform is supported for this mesh object type.
Implemented in csMeshObject.

The documentation for this struct was generated from the following file:

Generated for Crystal Space 2.1 by doxygen 1.6.1
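The trade-off described above between the cheap bounding-box test and the more exact outline/geometry tests is the classic quick-reject pattern. As a language-neutral illustration (this is not Crystal Space code; the function and all names are invented for the sketch), a bounding-box "hit beam" check is typically a segment-vs-AABB slab test:

```python
def hit_beam_bbox(start, end, box_min, box_max):
    # Segment-vs-AABB slab test: returns the earliest hit point on the
    # box in object space, or None on a miss.  It plays the role of a
    # HitBeamBBox-style quick reject before a more expensive exact test.
    tmin, tmax = 0.0, 1.0
    direction = [e - s for s, e in zip(start, end)]
    for axis in range(3):
        d = direction[axis]
        if abs(d) < 1e-12:
            # Beam is parallel to this slab; reject if outside it.
            if start[axis] < box_min[axis] or start[axis] > box_max[axis]:
                return None
            continue
        t1 = (box_min[axis] - start[axis]) / d
        t2 = (box_max[axis] - start[axis]) / d
        if t1 > t2:
            t1, t2 = t2, t1
        tmin, tmax = max(tmin, t1), min(tmax, t2)
        if tmin > tmax:
            return None
    # tmin is guaranteed to be the hit closest to 'start' of the beam.
    return tuple(s + tmin * d for s, d in zip(start, direction))

print(hit_beam_bbox((-2, 0.5, 0.5), (2, 0.5, 0.5), (0, 0, 0), (1, 1, 1)))
# → (0.0, 0.5, 0.5)
```

An exact HitBeamObject-style test would then intersect the beam with the actual triangles, which is why it is documented as the slowest variant.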
With. In this tutorial, I'll show you how to use TensorFlow Mobile in Android Studio projects. Prerequisites To be able to follow this tutorial, you'll need: - Android Studio 3.0 or higher - TensorFlow 1.5.0 or higher - an Android device running API level 21 or higher - and a basic understanding of the TensorFlow framework 1. Creating a Model Before we start using TensorFlow Mobile, we'll need a trained TensorFlow model. Let's create one now. Our model is going to be very basic. It will behave like an XOR gate, taking two inputs, both of which can be either zero or one, and producing one output, which will be zero if both the inputs are identical and one otherwise. Additionally, because it's going to be a deep model, it will have two hidden layers, one with four neurons, and another with three neurons. You are free to change the number of hidden layers and the numbers of neurons they contain. In order to keep this tutorial short, instead of using the low-level TensorFlow APIs directly, we'll be using TFLearn, a popular wrapper framework for TensorFlow offering more intuitive and concise APIs. If you don't have it already, use the following command to install it inside your TensorFlow virtual environment: pip install tflearn To start creating the model, create a Python script named create_model.py, preferably in an empty directory, and open it with your favorite text editor. Inside the file, the first thing we need to do is import the TFLearn APIs. import tflearn Next, we must create the training data. For our simple model, there will be only four possible inputs and outputs, which will resemble the contents of the XOR gate's truth table. 
X = [ [0, 0], [0, 1], [1, 0], [1, 1] ] Y = [ [0], # Desired output for inputs 0, 0 [1], # Desired output for inputs 0, 1 [1], # Desired output for inputs 1, 0 [0] # Desired output for inputs 1, 1 ] It is usually a good idea to use random values picked from a uniform distribution while assigning initial weights to all the neurons in the hidden layers. To generate the values, use the uniform() method. weights = tflearn.initializations.uniform(minval = -1, maxval = 1) At this point, we can start creating the layers of our neural network. To create the input layer, we must use the input_data() method, which allows us to specify the number of inputs the network can accept. Once the input layer is ready, we can call the fully_connected() method multiple times to add more layers to the network. # Input layer net = tflearn.input_data( shape = [None, 2], name = 'my_input' ) # Hidden layers net = tflearn.fully_connected(net, 4, activation = 'sigmoid', weights_init = weights ) net = tflearn.fully_connected(net, 3, activation = 'sigmoid', weights_init = weights ) # Output layer net = tflearn.fully_connected(net, 1, activation = 'sigmoid', weights_init = weights, name = 'my_output' ) Note that in the above code, we have given meaningful names to the input and output layers. Doing so is important because we'll need them while using the network from our Android app. Also note that the hidden and output layers are using the sigmoid activation function. You are free to experiment with other activation functions, such as softmax, tanh, and relu. As the last layer of our network, we must create a regression layer using the regression() function, which expects a few hyper-parameters as its arguments, such as the network's learning rate and the optimizer and loss functions it should use. 
The following code shows you how to use stochastic gradient descent, SGD for short, as the optimizer function and mean square as the loss function: net = tflearn.regression(net, learning_rate = 2, optimizer = 'sgd', loss = 'mean_square' ) Next, in order to let the TFLearn framework know that our network model is actually a deep neural network model, we must call the DNN() function. model = tflearn.DNN(net) The model is now ready. All we need to do now is train it using the training data we created earlier. So call the fit() method of the model and, along with the training data, specify the number of training epochs to run. Because the training data is very small, our model will need thousands of epochs to attain reasonable accuracy. model.fit(X, Y, 5000) Once the training is complete, we can call the predict() method of the model to check if it is generating the desired outputs. The following code shows you how to check the outputs for all valid inputs: print("1 XOR 0 = %f" % model.predict([[1,0]]).item(0)) print("1 XOR 1 = %f" % model.predict([[1,1]]).item(0)) print("0 XOR 1 = %f" % model.predict([[0,1]]).item(0)) print("0 XOR 0 = %f" % model.predict([[0,0]]).item(0)) If you run the Python script now, you should see output that looks like this: Note that the outputs are never exactly 0 or 1. Instead, they are floating-point numbers that are either close to zero or close to one. Therefore, while using the outputs, you might want to use Python's round() function. Unless we explicitly save the model after training it, we will lose it as soon as the script ends. Fortunately, with TFLearn, a simple call to the save() method saves the model. However, to be able to use the saved model with TensorFlow Mobile, before saving it, we must make sure we remove all the training-related operations, which are present in the tf.GraphKeys.TRAIN_OPS collection, associated with it. 
The following code shows you how to do so: # Remove train ops with net.graph.as_default(): del tf.get_collection_ref(tf.GraphKeys.TRAIN_OPS)[:] # Save the model model.save('xor.tflearn') If you run the script again, you'll see that it generates a checkpoint file, a metadata file, an index file, and a data file, all of which when used together can quickly recreate our trained model. 2. Freezing the Model In addition to saving the model, we must freeze it before we can use it with TensorFlow Mobile. The process of freezing a model, as you might have guessed, involves converting all its variables into constants. Additionally, a frozen model must be a single binary file that conforms to the Google Protocol Buffers serialization format. Create a new Python script named freeze_model.py and open it using a text editor. We'll be writing all the code to freeze our model inside this file. Because TFLearn doesn't have any functions for freezing models, we'll have to use the TensorFlow APIs directly now. Import them by adding the following line to the file: import tensorflow as tf Throughout the script, we'll be using a single TensorFlow session. To create the session, use the constructor of the Session class. with tf.Session() as session: # Rest of the code goes here At this point, we must create a Saver object by calling the import_meta_graph() function and passing the name of the model's metadata file to it. In addition to returning a Saver object, the import_meta_graph() function also automatically adds the graph definition of the model to the graph definition of the session. Once the saver is created, we can initialize all the variables that are present in the graph definition by calling the restore() method, which expects the path of the directory containing the model's latest checkpoint file. 
my_saver = tf.train.import_meta_graph('xor.tflearn.meta') my_saver.restore(session, tf.train.latest_checkpoint('.')) At this point, we can call the convert_variables_to_constants() function to create a frozen graph definition where all the variables of the model are replaced with constants. As its inputs, the function expects the current session, the current session's graph definition, and a list containing the names of the model's output layers. frozen_graph = tf.graph_util.convert_variables_to_constants( session, session.graph_def, ['my_output/Sigmoid'] ) Calling the SerializeToString() method of the frozen graph definition gives us a binary protobuf representation of the model. By using Python's basic file I/O facilities, I suggest you save it as a file named frozen_model.pb. with open('frozen_model.pb', 'wb') as f: f.write(frozen_graph.SerializeToString()) You can run the script now to generate the frozen model. We now have everything we need to start using TensorFlow Mobile. 3. Android Studio Project Setup The TensorFlow Mobile library is available on JCenter, so we can directly add it as an implementation dependency in the app module's build.gradle file. implementation 'org.tensorflow:tensorflow-android:1.7.0' To add the frozen model to the project, place the frozen_model.pb file in the project's assets folder. 4. Initializing the TensorFlow Interface TensorFlow Mobile offers a simple interface we can use to interact with our frozen model. To create the interface, use the constructor of the TensorFlowInferenceInterface class, which expects an AssetManager instance and the filename of the frozen model. thread { val tfInterface = TensorFlowInferenceInterface(assets, "frozen_model.pb") // More code here } In the above code, you can see that we're spawning a new thread. Doing so, although not always necessary, is recommended in order to make sure that the app's UI stays responsive. 
To be sure that TensorFlow Mobile has managed to read our model's file correctly, let's now try printing the names of all the operations that are present in the model's graph. To get a reference to the graph, we can use the graph() method of the interface, and to get all the operations, the operations() method of the graph. The following code shows you how: val graph = tfInterface.graph() graph.operations().forEach { println(it.name()) } If you run the app now, you should be able to see over a dozen operation names printed in Android Studio's Logcat window. Among all those names, if there were no errors while freezing the model, you'll be able to find the names of the input and output layers: my_input/X and my_output/Sigmoid. 5. Using the Model To make predictions with the model, we must put data into its input layer and retrieve data from its output layer. To put data into the input layer, use the feed() method of the interface, which expects the name of the layer, an array containing the inputs, and the dimensions of the array. The following code shows you how to send the numbers 0 and 1 to the input layer: tfInterface.feed("my_input/X", floatArrayOf(0f, 1f), 1, 2) After loading data into the input layer, we must run an inference operation using the run() method, which expects the name of the output layer. Once the operation is complete, the output layer will contain the prediction of the model. To load the prediction into a Kotlin array, we can use the fetch() method. The following code shows you how to do so: tfInterface.run(arrayOf("my_output/Sigmoid")) val output = floatArrayOf(-1f) tfInterface.fetch("my_output/Sigmoid", output) How you use the prediction is of course up to you. For now, I suggest you simply print it. println("Output is ${output[0]}") You can run the app now to see that the model's prediction is correct. Feel free to change the numbers you feed to the input layer to confirm that the model's predictions are always correct. 
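Earlier, the tutorial pointed out that the outputs are floats close to 0 or 1 rather than exact booleans, and suggested Python's round() for cleaning them up. A minimal sketch of that idea (the raw values below are made-up examples, not actual model output):

```python
# Hypothetical raw sigmoid outputs: close to, but never exactly, 0 or 1.
raw_outputs = {
    "1 XOR 0": 0.991,
    "1 XOR 1": 0.014,
    "0 XOR 1": 0.987,
    "0 XOR 0": 0.009,
}

# round() snaps each prediction to the nearest integer, giving clean 0/1 answers.
cleaned = {label: int(round(value)) for label, value in raw_outputs.items()}

for label, value in cleaned.items():
    print("%s = %d" % (label, value))
```

The same rounding applies on the Android side: round the float fetched from the output layer before treating it as a boolean.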
Conclusion

You now know how to create a simple TensorFlow model and use it with TensorFlow Mobile in Android apps. You don't always have to limit yourself to your own models, though. With the skills you learned today, you should have no problems using larger models, such as MobileNet and Inception, available in the TensorFlow model zoo. Note, however, that such models will lead to larger APKs, which may create issues for users with low-end devices. To learn more about TensorFlow Mobile, do refer to the official documentation.
https://code.tutsplus.com/tutorials/how-to-use-tensorflow-mobile-in-android-apps--cms-30957
Signals

counter.h:

#ifndef COUNTER_H
#define COUNTER_H

#include <QWidget>
#include <QDebug>

class Counter : public QWidget
{
    /*
     * All classes that contain signals or slots must mention Q_OBJECT
     * at the top of their declaration.
     * They must also derive (directly or indirectly) from QObject.
     */
    Q_OBJECT

public:
    Counter (QWidget *parent = 0): QWidget(parent)
    {
        m_value = 0;
        /*
         * The most important line: connect the signal to the slot.
         */
        connect(this, &Counter::valueChanged, this, &Counter::printValue);
    }

    void setValue(int value)
    {
        if (value != m_value) {
            m_value = value;
            /*
             * The emit line emits the signal valueChanged() from
             * the object, with the new value as argument.
             */
            emit valueChanged(m_value);
        }
    }

public slots:
    void printValue(int value)
    {
        qDebug() << "new value: " << value;
    }

signals:
    void valueChanged(int newValue);

private:
    int m_value;
};

#endif

The main() function sets a new value, so we can check that the slot is called by printing the value:

#include <QtGui>
#include "counter.h"

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    Counter counter;
    counter.setValue(10);
    counter.show();

    return app.exec();
}

Finally, our project file:

SOURCES = \
    main.cpp

HEADERS = \
    counter.h
https://riptutorial.com/qt/example/7016/a-small-example
Before we start, you'll need the following flickr information:

Set Up

I organized my project like so, but you can set up yours any way you want, as long as you modify my code samples to match.

project/
    __init__.py
    apps/
        __init__.py
        photos/
            __init__.py
            models.py
            ...
    lib/
        __init__.py
    templates/
    manage.py
    settings.py
    urls.py

You'll need Michele Campeotto's excellent FlickrClient libraries, available for download here. Unzip it and place "FlickrClient.py" and "xmltramp.py" in your "lib" folder (or wherever you want to store them; storing them inside your app is fine, too):

project/
    ...
    lib/
        __init__.py
        FlickrClient.py
        xmltramp.py
    ...

Photos App

Knowing only a few bits of flickr's data for your photos allows you a wide array of options when creating URLs. You can specify the photo size, create links to the photo's flickr page, etc. You can read more about the URLs on flickr's URL documentation page. This makes our model a snap. We only have to store four small pieces of flickr data in our model, or five if you want to include the title, which I do here. You can set up your model in most any way, as long as you have the four flickr-specific fields. Here's how my "photos/models.py" code looks:

from django.db import models
from project.lib.FlickrClient import FlickrClient

class Photo(models.Model):
    title = models.CharField(blank=True, maxlength=100)
    flickr_id = models.IntegerField()
    flickr_owner = models.CharField(maxlength=20)
    flickr_server = models.IntegerField()
    flickr_secret = models.CharField(maxlength=50)

    class Admin:
        list_display = ('title',)

    def __str__(self):
        return self.title

    def get_absolute_url(self):
        return "/photos/%s/" % (self.id)

Synchronization Function

Easy enough so far, right? Now, we'll need the actual synchronization code.
Add the following function to your "photos/models.py" file, adding your flickr key and user ID where needed. The function uses ObjectDoesNotExist, so also add "from django.core.exceptions import ObjectDoesNotExist" to the imports at the top of the file:

def sync_flickr_photos(*args, **kwargs):
    API_KEY = 'your_api_key_here'
    USER_ID = 'your_id_here'
    cur_page = 1      # Start on the first page of the stream
    paginate_by = 20  # Get 20 photos at a time
    dupe = False      # Set our dupe flag for the following loop
    client = FlickrClient(API_KEY)  # Get our flickr client running
    while (not dupe):
        photos = client.flickr_people_getPublicPhotos(user_id=USER_ID,
                                                      page=cur_page,
                                                      per_page=paginate_by)
        for photo in photos:
            try:
                # Raises an exception if the photo doesn't exist in our DB yet
                row = Photo.objects.get(flickr_id=photo("id"),
                                        flickr_secret=photo("secret"))
                # If the row exists already, set the dupe flag
                dupe = True
            except ObjectDoesNotExist:
                p = Photo(
                    title = photo("title"),
                    flickr_id = photo("id"),
                    flickr_owner = photo("owner"),
                    flickr_server = photo("server"),
                    flickr_secret = photo("secret")
                )
                p.save()
        if (dupe or photos("page") == photos("pages")):
            # If we hit a dupe or if we did the last page...
            break
        else:
            cur_page += 1

(Note: I know there's probably a million better ways to do this (esp. in regards to raising exceptions on "success"). You'll also want to include proper error catching. I'm no Python expert yet, so I took the easy route. Feel free to improve the code, but keep it simple.)

The Rest

I'll leave the rest of this up to you:

- You'll need to run the sync_flickr_photos function at a reasonable interval. Don't flood flickr with API calls every time someone views your page. For instance, I use signals and the dispatcher to raise a signal whenever someone visits my photos page. If it's been more than 15 minutes since the last time I synchronized, I run the function.
- To actually do anything useful with your flickr information, you'll need to create methods that build the URLs.
Here's how I get a URL to a small picture (240px on longest side): def get_med_pic_url(self): return "" % (self.flickr_server, self.flickr_id, self.flickr_secret) - And of course, you'll have to do all the views and templates yourself. But that's the easy part!
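The "run it at a reasonable interval" advice above can be sketched as a small throttle (a hypothetical helper; the names and the 15-minute figure below are mine, not part of the article's code):

```python
import time

SYNC_INTERVAL = 15 * 60  # 15 minutes, in seconds

_last_sync = 0.0

def maybe_sync(sync_func, now=None):
    """Call sync_func only if SYNC_INTERVAL has elapsed since the last run.

    Returns True if sync_func was actually called. The now parameter
    exists so the behavior is easy to verify with an injected clock.
    """
    global _last_sync
    if now is None:
        now = time.time()
    if now - _last_sync >= SYNC_INTERVAL:
        _last_sync = now
        sync_func()
        return True
    return False

# Simulated clock: the first and third calls run, the second is throttled.
calls = []
maybe_sync(lambda: calls.append(1), now=1000.0)        # runs
maybe_sync(lambda: calls.append(1), now=1000.0 + 60)   # 1 minute later: skipped
maybe_sync(lambda: calls.append(1), now=1000.0 + 901)  # >15 minutes later: runs
print(len(calls))  # → 2
```

You would call maybe_sync(sync_flickr_photos) from your signal handler, so page views trigger at most one flickr round-trip per interval.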
https://code.djangoproject.com/wiki/FlickrIntegration?version=2
What We are Looking At

Newer Raspberry Pi boards come with an inbuilt Broadcom WiFi adapter. If you are using previous versions, you will have to connect an external USB WiFi adapter. Connect the Pi to your network. It is better to provide a static IP address for the Pi. Next, you will have to enable and set up SSH on your Pi. Test your connection by connecting to your Raspberry Pi from another device.

GPIO

Make sure that the GPIO library is installed, and run a simple test, such as blinking an LED.

Sensor UDP

Sensor UDP is an app that is available in the Play Store. It allows you to stream real-time sensor readings to a remote host; in our case, the remote host is the Raspberry Pi. Go ahead and download it. After downloading, open the app and set the destination port and host. Take note of this port, as we will use it when setting up the Python listener.

The Python Listener

Using Python, we will set up a listener script on the Pi that listens for the incoming packets and reads the sensor values. Set the correct port number and we are set to go.
Have a look at my code:

import socket

import RPi.GPIO as GPIO  # Raspberry Pi GPIO library

GPIO.setwarnings(False)
GPIO.setmode(GPIO.BOARD)

# Motor control pins
GPIO.setup(33, GPIO.OUT)
GPIO.setup(11, GPIO.OUT)
GPIO.setup(13, GPIO.OUT)
GPIO.setup(15, GPIO.OUT)

# Enable pins of the motor driver, held high
GPIO.setup(29, GPIO.OUT)
GPIO.setup(31, GPIO.OUT)
GPIO.output(29, True)
GPIO.output(31, True)

UDP_IP = "0.0.0.0"
UDP_PORT = 5050

sock = socket.socket(socket.AF_INET,    # Internet
                     socket.SOCK_DGRAM) # UDP
sock.bind((UDP_IP, UDP_PORT))

while True:
    data, addr = sock.recvfrom(1024)
    raw = data
    # print raw
    limited = raw.split(",")
    x = float(limited[3])
    y = float(limited[4])
    # print int(x), int(y)
    if int(x) >= 3 and -3 < int(y) < 3:
        print "FORWARD"
        GPIO.output(33, True)
        GPIO.output(11, False)
        GPIO.output(13, True)
        GPIO.output(15, False)
    elif int(x) <= -3 and -3 < int(y) < 3:
        print "BACKWARD"
        GPIO.output(33, False)
        GPIO.output(11, True)
        GPIO.output(13, False)
        GPIO.output(15, True)
    elif int(y) >= 3 and -3 < int(x) < 3:
        print "LEFT"
        GPIO.output(33, False)
        GPIO.output(11, True)
        GPIO.output(13, True)
        GPIO.output(15, False)
    elif int(y) <= -3 and -3 < int(x) < 3:
        print "RIGHT"
        GPIO.output(33, True)
        GPIO.output(11, False)
        GPIO.output(13, False)
        GPIO.output(15, True)
    else:
        print "STOP"
        GPIO.output(33, False)
        GPIO.output(11, False)
        GPIO.output(13, False)
        GPIO.output(15, False)

Roll Out

The next thing to do is drive the motors. Each L293N IC can drive two motors independently. Configure 4 pins (or 6 pins, if the enable pins are counted) as output pins. Give appropriate output signals to the motor driver board, and make sure that the enable pins of the driver board are active high. Otherwise it won't work.
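The if/elif chain above is easy to pull out into a pure function, which makes the steering logic testable without a Pi or motors on hand. A sketch (the function name, threshold parameter, and sample packet are my own; the logic mirrors the listener script):

```python
def direction(x, y, threshold=3):
    """Map accelerometer x/y readings to a drive command,
    mirroring the if/elif chain in the listener script."""
    x, y = int(x), int(y)
    if x >= threshold and -threshold < y < threshold:
        return "FORWARD"
    if x <= -threshold and -threshold < y < threshold:
        return "BACKWARD"
    if y >= threshold and -threshold < x < threshold:
        return "LEFT"
    if y <= -threshold and -threshold < x < threshold:
        return "RIGHT"
    return "STOP"

# A packet is a comma-separated string; the script reads fields 3 and 4
# as the x and y values. The packet below is a made-up example.
packet = "10257, 3, 0.042, 4.2, -0.5, 9.8"
fields = packet.split(",")
print(direction(float(fields[3]), float(fields[4])))  # → FORWARD
```

With the logic isolated like this, the network loop only has to parse the packet and pass the two floats along.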
http://hackersgrid.com/2017/02/mobile-phone-accelerometer-controlled.html
> > Except E* don't have the same value on all platforms.
> > It pretty much falls down to my first idea which was
> >
> > /* os_support.h or whatever */
> > #ifdef __BEOS__
>
> that should be #if ENOMEM < 0

Right, much simpler :)

> > # define FFERR(e) (e)
> > #else
> > # define FFERR(e) (-(e))
> > #endif
>
> AVERR() as this should be an exported macro (the user after all
> should be able to test for specific errors)

Ok

> > It suppresses the need for AVERROR_* leaving all the semantics
> > attached to the posix errors without folding them to 10 cases.
> > (but we still need to handle OSes which don't have some defined).
>
> are there any? do we really use such obscure E* ?

Don't think so, usual ones should cover our current needs, I don't
have any #ifdef EFOO in the BeOS code, didn't see any for other
platforms. Except an unused ENODATA in libavutil/internal.h. Who did
this anyway ? It's nowhere used, and so should be removed.

> > For the time being we could keep AVERROR_* as #defined to FFERR(E*)
> > possible
> >
> > until we increment the versions.
>
> no, the major versions need to be bumped if the values change

Righto, I was still asleep :)

Ok I'll try that and see what it gives. It seems even simpler in the end.

François.
http://ffmpeg.org/pipermail/ffmpeg-devel/2007-January/028131.html
DTrace stuck when listing probes with "-P" and "-p"GregoryG. Sep 8, 2013 4:44 PM Hello, To provide some other feedback with UEK3/Dtrace 0.4 Beta1 1/- I've built a small USDT sample: demo_probes.d provider demo { probe progress__counter(int); }; demo.c #include <sys/sdt.h> /* ** USDT probes with DTrace and UEK3 beta1 **/ int main(int argc, char *argv[]) { int i=0; while (1) { sleep(1); i++; DTRACE_PROBE1(demo, progress__counter, i); } } 2/- When I run it and use -p and -l together, it works well: dtrace -p 7276 -l |grep demo 803 demo7276 demo main progress-counter 3/- When I use -p -P and -l together, the command get stuck: dtrace -p 7276 -P 'demo7276' -l In this case, running strace on dtrace shows: strace -p 9018 Process 9018 attached - interrupt to quit futex(0x101d3dc, FUTEX_WAIT_PRIVATE, 1, NULL Best Regards Gregory 1. Re: DTrace stuck when listing probes with "-P" and "-p"GregoryG. Sep 8, 2013 5:47 PM (in response to GregoryG.) Hello, Sorry, -p doesn't seem useful when used in conjunction -l. I'm not sure if that's expected of not. However, the -P blocks the -l command which looks like a wrong behavior: dtrace -P demo7276 -l Simple doesn't return Gregory 2. Re: DTrace stuck when listing probes with "-P" and "-p"NickAlcock Sep 11, 2013 1:25 PM (in response to GregoryG.) Note that dtrace -P 'demo*' -l works fine, as does dtrace -m demo -l: it's only non-wildcarded lookups by provider that block forever. Even -P 'demo7276*' works, i.e. a wildcard that expands only to this one thing. (I note that all the tests for this case in the DTrace testsuite we imported from Solaris are intended to be manually executed: I should automate them to ensure this never happens again.) Looking into the cause now. 3. 
Re: DTrace stuck when listing probes with "-P" and "-p"NickAlcock Sep 11, 2013 2:09 PM (in response to NickAlcock) Diagnosed: we attempt to explicitly lock the dpr_lock() when creating pid and usdt probes, which is prohibited since it confuses the machinery by which we make a recursive lock out of a non-recursive one that we can use in condition variables. Thus we fail to realize the lock is already taken and soon end up double-locking a non-recursive mutex, and a hang ensues. Fix trivial (already done, but coming up with some proper testcases for this too before releasing). 4. Re: DTrace stuck when listing probes with "-P" and "-p"GregoryG. Sep 11, 2013 5:38 PM (in response to NickAlcock) Nick, This wasn't that obvious for me ; despite the source code ;-). Thank you for taking the feedback into account... Gregory 5. Re: DTrace stuck when listing probes with "-P" and "-p"NickAlcock Sep 11, 2013 7:50 PM (in response to NickAlcock) The testcases revealed another bug (not the same bug, though its symptoms are outwardly identical): dtrace -p $some_pid -P a-provider-without-wildcards hung. That needs a rather more invasive fix, but it cleans up the code a good bit and fixes a bunch of other possible hangs at the same time. (And it's done, and waiting for the next release.)
https://community.oracle.com/message/11184642
Parallel programming is the future, but how do you get to high-performance parallel programming that makes effective use of multicore CPUs? Using thread libraries like POSIX threads is certainly an option, but the POSIX framework for threads was originally introduced with the C language in mind. It's also way too low-level an approach—for example, you don't have access to any concurrent containers, nor are there any concurrent algorithms to use. On that note, Intel has introduced Intel® Threading Building Blocks (Intel TBB), a C++-based framework for parallel programming that comes with a host of interesting features and a higher level of abstraction than threads. Downloading and installing Intel TBB requires nothing special: The extracted directory hierarchy is reminiscent of UNIX® systems with include, bin, lib, and doc folders. For the purposes of this article, I chose the tbb30_20110427oss stable release. Getting started with Intel TBB Intel TBB has a lot going for it. Here are a few points of interest to get you started: - Instead of threads, you have a higher level of abstraction in tasks. Intel claims that on Linux® systems, starting and terminating a task is 18 times faster than starting and stopping a thread. - Intel TBB comes with a task scheduler that can handle load balancing efficiently across multiple logical and physical cores. The default task scheduling policy in Intel TBB is different from the round-robin policies most thread schedulers have. - Intel TBB offers off-the-shelf availability of thread-safe containers like concurrent_vectorand concurrent_queue. - Generic parallel algorithms, like parallel_forand parallel_reduce, are available. - Lock-free (also known as mutex-free) concurrent programming support is available with the template class atomic. This support makes Intel TBB suited for high-performance applications, because Intel TBB handles locking and unlocking the mutex. - It's all C++! 
With no fancy extensions or macros, Intel TBB stays within the language, making heavy use of templates. Intel TBB does have a fair number of prerequisites. Before getting started, you should have:

- Knowledge of C++ templates and some understanding of the Standard Template Library (STL).
- Knowledge of threads—either POSIX threads or Windows® threads.

Although not necessary, lambda functions in C++0x find a fair bit of usage with Intel TBB.

This discussion of Intel TBB begins with creating and playing around with tasks and synchronization primitives (mutex), followed by using the concurrent containers and parallel algorithms. It ends with lock-free programming using the atomic template.

Hello, World with Intel TBB tasks

Intel TBB is based on the concept of tasks. You define your own tasks, which are derived from tbb::task, declared in tbb/task.h. Users are required to override the pure virtual method task* task::execute ( ) in their code. Here are some of the properties of every Intel TBB task:

- When the Intel TBB task scheduler chooses to run some task, the task's execute method is called. That is the entry point.
- The execute method may return a task*, which tells the scheduler the next task to run. If it returns NULL, then the scheduler is free to choose the next task.
- task::~task( ) is virtual, and whatever resources the user task has allocated must be released within this destructor.
- Tasks are allocated by a call to task::allocate_root( ).
- The main task runs the task to completion with a call to task::spawn_root_and_wait(task).

Listing 1 below shows the first task and how it gets called:

Listing 1. Creating the first Intel TBB task

#include "tbb/tbb.h"
#include <iostream>
using namespace tbb;
using namespace std;

class first_task : public task {
public:
    task* execute( ) {
        cout << "Hello World!\n";
        return NULL;
    }
};

int main( ) {
    task_scheduler_init init(task_scheduler_init::automatic);
    first_task& f1 = *new(tbb::task::allocate_root()) first_task( );
    tbb::task::spawn_root_and_wait(f1);
}

To run Intel TBB programs, you must have the task scheduler appropriately initialized. The argument to the scheduler in Listing 1 is automatic, which lets the scheduler decide for itself on the number of threads. Of course, you can override this behavior if you want to control the maximum number of threads spawned.
But in production code, unless you really know what you're doing, it's best to leave the job of determining the optimum number of threads to the scheduler.

Now that you have created your first task, let's have the first_task from Listing 1 spawn some child tasks. Listing 2 below introduces a few new concepts:

- Intel TBB provides a container called task_list that is meant to serve as a collection of tasks.
- Each parent task creates a child task using the allocate_child function call.
- Before a task spawns any child tasks, it must make a call to set_ref_count. Failure to do so results in undefined behavior. If the intent is to spawn the child tasks and then wait for them to finish, count must be equal to the number of child tasks + 1; otherwise, count should equal the number of child tasks. More on this shortly.
- The call to spawn_and_wait_for_all does what its name suggests: It spawns child tasks and waits until all is done.

Here's the code:

Listing 2. Creating multiple child tasks

#include "tbb/tbb.h"
#include <iostream>
using namespace tbb;
using namespace std;

class first_task : public task {
public:
    task* execute( ) {
        cout << "Hello World!\n";
        task_list list1;
        list1.push_back( *new( allocate_child() ) first_task( ) );
        list1.push_back( *new( allocate_child() ) first_task( ) );
        set_ref_count(3); // 2 (1 per child task) + 1 (for the wait)
        spawn_and_wait_for_all(list1);
        return NULL;
    }
};

int main( ) {
    first_task& f1 = *new(tbb::task::allocate_root()) first_task( );
    tbb::task::spawn_root_and_wait(f1);
}

So, why does Intel TBB require explicit setting of set_ref_count? The documentation says it's primarily for performance reasons. You must always set the ref count for a task before spawning children. See Resources for links to more detail.

You can also create task groups. The following code creates a task group that spawns two tasks and waits for them to finish. The run method of a task_group has the following signature:

template<typename Func> void run( const Func& f )
The run method of a task_group has the following signature: template<typename Func> void run( const Func& f ) The run method spawns a task that computes f( ) but does not block the calling task, so control returns immediately. To wait for the child tasks to finish, the calling task calls wait (see Listing 3 below). Listing 3. Creating a task_group #include "tbb/tbb.h" #include <iostream> using namespace tbb; using namespace std; class say_hello( ) { const char* message; public: say_hello(const char* str) : message(str) { } void operator( ) ( ) const { cout << message << endl; } }; int main( ) { task_group tg; tg.run(say_hello("child 1")); // spawn task and return tg.run(say_hello("child 2")); // spawn another task and return tg.wait( ); // wait for tasks to complete } Note the syntactic simplicity of task_group—no calls required for memory allocation and so on when dealing directly with tasks, and you don't need to do anything with the ref count. That's about it for tasks. Hundreds of things are possible with Intel TBB tasks. Be sure to dive into the Intel TBB documentation for more details. Let's move on to concurrent containers. Concurrent containers: vector Now, let's focus on one of Intel TBB's concurrent containers: the concurrent_vector. This container is declared in the header tbb/concurrent_vector.h, and the basic interface resembles the STL vector: template<typename T, class A = cache_aligned_allocator<T> > class concurrent_vector; Multiple threads of control can safely be added to the vector without the need for any explicit locking. To paraphrase from the Intel TBB manual, concurrent_vector has the following properties: - It provides random access to its elements; indexing begins from position 0. - Safe concurrent increases in size are possible, and multiple threads can be added simultaneously. - Adding new elements doesn't invalidate existing indices or iterators. Concurrency comes at a price, though. 
Unlike STL, where adding new elements involves moving of data, with concurrent_vector data isn't moved. Instead, the container maintains a series of contiguous memory segments. Obviously, this increases container overhead. For concurrent additions to the vector, three methods are available:

- push_back—Append an element at the end of the vector.
- grow_by(N)—Append N consecutive elements of type T to the concurrent_vector and return the iterator to the first appended element. Each element is initialized with T ( ).
- grow_to_at_least(N)—Grow the vector to size N if its current size is less than N.

You append a string to a concurrent_vector as follows:

void append( concurrent_vector<char>& cv, const char* str1 )
{
    size_t count = strlen(str1)+1;
    std::copy( str1, str1+count, cv.grow_by(count) );
}

Using parallel algorithms off the shelf with Intel TBB

One of the best things about Intel TBB is that it lets you parallelize portions of your source code automatically without having to get into the nuts and bolts of how to create and maintain threads. The most common parallel algorithm is parallel_for. Consider the following example:

void serial_only (int* array, int size)
{
    for (int count = 0; count < size; ++count)
        apply_transformation (array [count]);
}

Now, if the apply_transformation routine in the previous snippet isn't doing anything strange (that is, it transforms each array element independently), then nothing stops you from distributing the load to multiple CPU cores. You need two classes from the Intel TBB library to get started: blocked_range (from tbb/blocked_range.h) and parallel_for (from tbb/parallel_for.h). The blocked_range class is meant to create an object that provides parallel_for with the iteration range, so you need to create something like blocked_range<int>(0, size) and pass it as an input to parallel_for.
The second and final argument that parallel_for needs is a class with the requirements in Listing 4 (pasted from the parallel_for.h header).

Listing 4. Requirements for the second argument to parallel_for

/** \page parallel_for_body_req Requirements on parallel_for body
    Class \c Body implementing the concept of parallel_for body must define:
    - \code Body::Body( const Body& ); \endcode        Copy constructor
    - \code Body::~Body(); \endcode                    Destructor
    - \code void Body::operator()( Range& r ) const; \endcode
      Function call operator applying the body to range \c r.
**/

This code tells you that you need to create your own class with operator ( ), with the blocked_range as the argument, and code the serial for loop you created earlier inside the method definition for operator ( ). The copy constructor and destructor should be public, and you leave the compiler to provide the defaults for you. Listing 5 below shows the code.

Listing 5. Creating the second argument to parallel_for

#include "tbb/blocked_range.h"
using namespace tbb;

class apply_transform {
    int* array;
public:
    apply_transform (int* a): array(a) {}
    void operator()( const blocked_range<int>& r ) const {
        for (int i=r.begin(); i!=r.end(); i++ ) {
            apply_transformation(array[i]);
        }
    }
};

Now that you have successfully created the second object, you just invoke parallel_for, as shown below in Listing 6.

Listing 6. Parallelizing the loop using parallel_for

#include "tbb/blocked_range.h"
#include "tbb/parallel_for.h"
using namespace tbb;

void do_parallel_the_tbb_way(int *array, int size)
{
    parallel_for (blocked_range<int>(0, size), apply_transform(array));
}

Other parallel algorithms in Intel TBB

Intel TBB offers quite a few parallel algorithms, for example parallel_reduce (declared in tbb/parallel_reduce.h). Instead of applying a transformation on each individual array element, let's say you want to sum up all the elements.
Here's the serial code:

int serial_only (int* array, int size)
{
    int sum = 0;
    for (int count = 0; count < size; ++count)
        sum += array [count];
    return sum;
}

Conceptually, running this code in a parallel context would mean that each thread of control should sum up certain portions of the array, and there must be a join method somewhere that adds up the partial summations. Listing 7 below shows the Intel TBB code.

Listing 7. Summing the array elements with parallel_reduce

#include "tbb/blocked_range.h"
#include "tbb/parallel_reduce.h"
using namespace tbb;

int sum_with_parallel_reduce(int* array, int size)
{
    summation_helper helper (array);
    parallel_reduce (blocked_range<int> (0, size, 5), helper);
    return helper.sum;
}

When splitting the array into sub-arrays for each individual thread, you want to maintain some granularity (for example, each thread is responsible for summing N elements, where N is neither too big nor too small). That is the third argument of blocked_range. Intel TBB requires that the summation_helper class fulfill two conditions: It must have a method named join to add partial sums and a constructor with special arguments (called the splitting constructor). Listing 8 provides the code:

Listing 8. Creating the summation_helper class with the join method and splitting the constructor

class summation_helper {
    int* partial_array;
public:
    int sum;
    void operator( )( const blocked_range<int>& r ) {
        for( int count=r.begin(); count!=r.end( ); ++count)
            sum += partial_array [count];
    }
    summation_helper (summation_helper& x, split):
        partial_array (x.partial_array), sum (0) { }
    summation_helper (int* array): partial_array (array), sum (0) { }
    void join( const summation_helper& temp ) {
        sum += temp.sum; // required method
    }
};
Intel TBB calls the splitting constructor (the second argument called split is a dummy argument that Intel TBB requires) and has the partial array filled up by some number of elements (the number is a function of granularity defined in blocked_range). When the summation is complete on the sub-array, the join method adds the partial result. Slightly complicated? At first glance maybe; just remember that you need three methods: operator( ) to add the array range, join to add the partial result, and the split constructor to initiate a new worker thread. Intel TBB has several other useful algorithms, parallel_sort being one of the most useful. Refer to the Intel TBB reference manual (see Resources) for details. Lock-free programming using Intel TBB One issue that frequently crops up during multithreaded programming is the number of CPU cycles wasted on the locking and unlocking of mutexes. If you're coming from the POSIX threads background, Intel TBB will surprise you with its atomic template. It's a far faster alternative to mutexes, and you could safely do away with the need for locking and unlocking code. Is atomic the panacea of all coding woes? No. It's severely restricted in its usage; nonetheless, it's quite effective if you want to create high-performance code. Here's how you declare an integer to be of atomic type: #include "tbb/atomic.h" using namespace tbb; atomic<int> count; atomic<float* > pointer_to_float; Now, assume that the variable count from earlier is being accessed by multiple threads of control. Typically, you would want to guard count with a mutex during writes; but, that's no longer required with atomic<int>. Take a look at Listing 9. Listing 9. 
atomic fetch_and_add requires no locking // writing with mutex, count is declared as int count; { // … code pthread_mutex_lock (&lock); count += 1000; pthread_mutex_unlock (&lock); // … code continues } // writing without mutex, count declared as atomic<int> count; { // … code count.fetch_and_add (1000); // no explicit locking/unlocking // … code continues } Instead of +=, you use the fetch_and_add method of the atomic<T> class. And no, it does not use any mutexes internally as part of the fetch_and_add method. When fetch_and_add is executed, it has the effect of adding 1000 to count instantaneously—either all threads see the updated value of count at once, or all threads continue to see the old value. That's why count is declared as an atomic variable: Operations on count are atomic and cannot be interrupted by the vagaries of process or thread scheduling. No matter how threads are scheduled, there's no way count would have different values in different threads. For an in-depth discussion of lock-free programming, see Resources. The atomic<T> class comes with the following five fundamental operations: y = x; // atomic read x = b; // atomic write x.fetch_and_store(y); // y = x and return the old value of x x.fetch_and_add(y); // x += y and return the old value of x x.compare_and_swap(y, z); // if (x == z) x = y; in either case, return old value of x In addition, the operators +=, -=, ++, and -- are supported for convenience, but they are all implemented on top of fetch_and_add. As shown in tbb/atomic.h, here's how the operators are defined (Listing 10). Listing 10. Operators ++, --, +=, and -= defined using fetch_and_add value_type operator+=( D addend ) { return fetch_and_add(addend)+addend; } value_type operator-=( D addend ) { // Additive inverse of addend computed using binary minus, // instead of unary minus, for sake of avoiding compiler warnings. 
return operator+=(D(0)-addend); } value_type operator++() { return fetch_and_add(1)+1; } value_type operator--() { return fetch_and_add(__TBB_MINUS_ONE(D))-1; } value_type operator++(int) { return fetch_and_add(1); } value_type operator--(int) { return fetch_and_add(__TBB_MINUS_ONE(D)); } Note that the type T in atomic<T> can only be an integral type, enumeration type, or pointer type. Conclusion It is impossible to do justice to a library the scale of Intel TBB in a single article. Indeed, Intel's web site has dozens of articles highlighting several aspects of Intel TBB. Instead, this article attempted to provide insight into some of the compelling features that Intel TBB comes with—tasks, concurrent containers, algorithms, and a way to create lock-free code. Hopefully, this introduction ignites your interest and Intel TBB will gain yet another ardent user—much like the author himself. Resources Learn - Find Intel TBB references and other manuals at the Intel TBB site. - Learn more about Intel TBB design patterns. - Learn more about Intel TBB atomic operations. - Find details on lock-free programming. - Intel TBB blogs provide good information on concurrent vector internals. - Learn more about the Intel TBB task scheduler. - AIX and UNIX developerWorks zone: The AIX and UNIX zone provides a wealth of information relating to all aspects of AIX systems administration and expanding your UNIX skills. - Find Intel TBB code samples to help get you started with this powerful library. - New to AIX and UNIX? Visit the New to AIX and UNIX page to learn more. - Technology bookstore: Browse the technology bookstore for books on this and other technical topics. Get products and technologies - Download Intel TBB. -.
You want the standard Exporter module to define the external interface to your module. In the module file YourModule.pm, place the following code. Fill in the ellipses as explained in the Discussion section.

    package YourModule;
    use strict;

    our (@ISA, @EXPORT, @EXPORT_OK, %EXPORT_TAGS, $VERSION);

    use Exporter;
    $VERSION = 1.00;            # Or higher
    @ISA = qw(Exporter);

    @EXPORT      = qw(...);     # Symbols to autoexport (:DEFAULT tag)
    @EXPORT_OK   = qw(...);     # Symbols to export on request
    %EXPORT_TAGS = (            # Define names for sets of symbols
        TAG1 => [...],
        TAG2 => [...],
        ...
    );

    ########################
    # your code goes here
    ########################

    1;                          # this should be your last line

In other files where you want to use YourModule, choose one of these lines:

    use YourModule;             # Import default symbols into my package
    use YourModule qw(...);     # Import listed symbols into my package
    use YourModule ();          # Do not import any symbols
    use YourModule qw(:TAG1);   # Import whole tag set

The standard Exporter module handles the module's external interface. Although you could define your own import method for your package, almost no one does this. When someone says use YourModule, this does a require "YourModule.pm" statement followed by a YourModule->import() method call, both during compile time. The import method inherited from the Exporter package looks for global variables in your package to govern its behavior. Because they must be package globals, we've declared them with our to satisfy use strict. These variables are:

$VERSION

When a module is loaded, a minimal required version number can be supplied. If the version isn't at least this high, the use will raise an exception.

    use YourModule 1.86;        # If $VERSION < 1.86, fail

@EXPORT

This array contains a list of functions and variables that will be exported into the caller's own namespace so they can be accessed without being fully qualified. Typically, a qw() list is used.

    @EXPORT = qw(&F1 &F2 @List);
    @EXPORT = qw( F1  F2 @List);    # same thing

With a simple use YourModule, the function &F1 can be called as F1() rather than YourModule::F1(), and the array can be accessed as @List instead of @YourModule::List. The ampersand is optional in front of an exported function specification. To load the module at compile time but request that no symbols be exported, use the special form use YourModule (), with empty parentheses.

@EXPORT_OK

This array contains symbols that can be imported if they're specifically asked for. If the array were loaded this way:

    @EXPORT_OK = qw(Op_Func %Table);

then the user could load the module like so:

    use YourModule qw(Op_Func %Table F1);

and import only the Op_Func function, the %Table hash, and the F1 function. The F1 function was listed in the @EXPORT array. Notice that this does not automatically import F2 or @List, even though they're in @EXPORT. To get everything in @EXPORT plus extras from @EXPORT_OK, use the special :DEFAULT tag, such as:

    use YourModule qw(:DEFAULT %Table);

%EXPORT_TAGS

This hash is used by large modules like CGI or POSIX to create higher-level groupings of related import symbols. Its values are references to arrays of symbol names, all of which must be in either @EXPORT or @EXPORT_OK. Here's a sample initialization:

    %EXPORT_TAGS = (
        Functions => [ qw(F1 F2 Op_Func) ],
        Variables => [ qw(@List %Table)  ],
    );

An import symbol with a leading colon means to import a whole group of symbols. Here's an example:

    use YourModule qw(:Functions %Table);

That pulls in all symbols from:

    @{ $YourModule::EXPORT_TAGS{Functions} }

that is, it pulls in the F1, F2, and Op_Func functions and then the %Table hash. You don't have to have all those variables defined in your module. You just need the ones that you expect people to be able to use.
See Also

The "Creating Modules" section of Chapter 11 of Programming Perl; the documentation for the standard Exporter module, also found in Chapter 32 of Programming Perl; Recipe 12.8; Recipe 12.22
In a recent project, I decided that using a multiple-document interface (MDI) would be the best approach. I was pleasantly surprised by how easy creating an MDI application in Visual Studio and on the .NET platform is. Simply setting the IsMdiContainer property of the System.Windows.Forms.Form allows other forms to be hosted in the application workspace. If you're like me, however, you begin to wonder what that workspace would look like with a different color, custom painting, or maybe a different border style. I quickly found that the Form control exposed no such properties to control this behavior. A search of the web revealed that many others have desired to do the same and had various approaches on how to accomplish this. After using their suggestions successfully in my application and creating a few of my own, I decided to collect all such information into one place and perhaps develop a component that would allow easy setting of these properties.

As it turns out, the MDI area of a Windows® Form is just another control. When the IsMdiContainer property is set to true, a control of type System.Windows.Forms.MdiClient is added to the Controls collection of the Form. Iterating through the Form's controls after loading will reveal the MdiClient control and is also probably the best way to get a reference to it. The MdiClient control does have a public constructor and could be added to the Form's Controls collection programmatically, but a better practice is to set the Form's IsMdiContainer property and have it do the work. To set a reference to the MdiClient control, iterate through the controls until the MdiClient control is found:

    MdiClient mdiClient = null;

    // Get the MdiClient from the parent form.
    for(int i = 0; i < parentForm.Controls.Count; i++)
    {
        // If the form is an MDI container, it will contain an MdiClient control
        // just as it would any other control.
        mdiClient = parentForm.Controls[i] as MdiClient;
        if(mdiClient != null)
        {
            // The MdiClient control was found.
            // ...
            break;
        }
    }

Using the as keyword here is better than a direct cast in a try/catch block or using the is keyword, because if the type is a match, a reference to the control results; otherwise, null is returned. It's like getting two calls for the price of one.

With a reference to the MdiClient control in hand, many of the common control properties can be set as you would expect. The most often requested, of course, is changing the background color. The default background color of the application workspace is global to all Windows® applications and can be changed in the Control Panel. The .NET framework exposes this color in the System.Drawing.SystemColors.AppWorkspace static property. Changing the background color is done as you would expect, through the BackColor property:

    // Set the color of the application workspace.
    mdiClient.BackColor = value;

That, as well as many properties common to other controls, will work as expected with the MdiClient control. What's absent from the MdiClient control, however, is a BorderStyle property. Gone are the typical System.Windows.Forms.BorderStyle enumeration options of Fixed3D, FixedSingle, and None. By default, the application workspace of an MDI form is inset with a 3D border equivalent to what would be Fixed3D. Just because this behavior is not exposed by the control does not mean it is not accessible. From this point forward, you will see that the Handle of the MdiClient becomes much more valuable than just a reference to it.
- Control) in Windows® has information that can be retrieved by using the GetWindowLong and set by using the SetWindowLong function. Both functions require a flag that specifies what information we would like to get and set. In this case, we are interested in the GWL_STYLE and the GWL_EXSTYLE, which get and set the window style and the extended window style flags, respectively. Because these changes are made to the non-client area of the control, calling the control's Invalidate method will not cause the borders to be repainted. Instead, we call the SetWindowPos function to cause an update of the non-client area. These functions and constants are defined like this: Control GetWindowLong SetWindowLong GWL_STYLE GWL_EXSTYLE Invalidate SetWindowPos // Win32 Constants private const int GWL_STYLE = -16; private const int GWL_EXSTYLE = -20; private const int WS_BORDER = 0x00800000; private const int WS_EX_CLIENTEDGE = 0x00000200; private const uint SWP_NOSIZE = 0x0001; private const uint SWP_NOMOVE = 0x0002; private const uint SWP_NOZORDER = 0x0004; private const uint SWP_NOREDRAW = 0x0008; private const uint SWP_NOACTIVATE = 0x0010; private const uint SWP_FRAMECHANGED = 0x0020; private const uint SWP_SHOWWINDOW = 0x0040; private const uint SWP_HIDEWINDOW = 0x0080; private const uint SWP_NOCOPYBITS = 0x0100; private const uint SWP_NOOWNERZORDER = 0x0200; private const uint SWP_NOSENDCHANGING = 0x0400; // Win32 Functions [DllImport("user32.dll", CharSet = CharSet.Auto)] private static extern int GetWindowLong(IntPtr hWnd, int Index); [DllImport("user32.dll", CharSet = CharSet.Auto)] private static extern int SetWindowLong(IntPtr hWnd, int Index, int Value); [DllImport("user32.dll", ExactSpelling = true)] private static extern int SetWindowPos(IntPtr hWnd, IntPtr hWndInsertAfter, int X, int Y, int cx, int cy, uint uFlags); Note: The values of these constants are defined in the Winuser.h header file, which is usually installed by the Platform SDK or Visual Studio .NET. 
Note: The values of these constants are defined in the Winuser.h header file, which is usually installed by the Platform SDK or Visual Studio .NET. We can adjust the border according to the BorderStyle enumeration by: including a WS_EX_CLIENTEDGE flag in the extended window styles (Fixed3D), a WS_BORDER flag in the standard window styles (FixedSingle), or removing both of these flags for no border (None). Then call the SetWindowPos function to cause an update. The SetWindowPos function has many options, but we want nothing more than to repaint the non-client area and will pass in the flags necessary to do this: WS_EX_CLIENTEDGE WS_BORDER // Get styles using Win32 calls int style = GetWindowLong(mdiClient.Handle, GWL_STYLE); int exStyle = GetWindowLong(mdiClient.Handle, GWL_EXSTYLE); // Add or remove style flags as necessary. switch(value) { case BorderStyle.Fixed3D: exStyle |= WS_EX_CLIENTEDGE; style &= ~WS_BORDER; break; case BorderStyle.FixedSingle: exStyle &= ~WS_EX_CLIENTEDGE; style |= WS_BORDER; break; case BorderStyle.None: style &= ~WS_BORDER; exStyle &= ~WS_EX_CLIENTEDGE; break; } // Set the styles using Win32 calls SetWindowLong(mdiClient.Handle, GWL_STYLE, style); SetWindowLong(mdiClient.Handle, GWL_EXSTYLE, exStyle); // Update the non-client area. SetWindowPos(mdiClient.Handle, IntPtr.Zero, 0, 0, 0, 0, SWP_NOACTIVATE | SWP_NOMOVE | SWP_NOSIZE | SWP_NOZORDER | SWP_NOOWNERZORDER | SWP_FRAMECHANGED); To move into the realm of customization beyond changing simple properties or making Win32 calls, we need to intercept and process window messages. Unfortunately, the MdiClient class is sealed, and therefore, can't be subclassed nor can its WndProc method be overridden. Thankfully, the System.Windows.Forms.NativeWindow class comes to the rescue. The intent of the NativeWindow class is to provide "a low-level encapsulation of a window handle and a window procedure". In other words, it allows us to tap into the window messages a control receives. 
To make use of NativeWindow, inherit from the class and override its WndProc method. Once a control's handle is assigned to the NativeWindow via the AssignHandle method, the WndProc method behaves just as if it was the control's WndProc method. With the ability to listen to the MdiClient control's window messages, a whole new range of customization is possible. sealed WndProc System.Windows.Forms.NativeWindow NativeWindow AssignHandle While making controls outside the application workspace accessible by scrollbars is a great feature, I personally can't remember an MDI application I've used that does the same. Turning off or hiding the scrollbars in the MdiClient is a feature that may be more often requested than changing its color. The scrollbars of the MdiClient control are part of its non-client area (area outside the ClientRectangle) and are not themselves controls parented to the MdiClient. That rules out the possibility of changing the visibility of the scrollbars and leaves us with window messages and Win32 functions that affect the size of the non-client area. When the non-client area of a control needs to be calculated, the control is sent a WM_NCCALCSIZE message. In order to hide the scrollbars, we could tell Windows® that the non-client area is a little bit smaller than it actually is and cover up the scrollbars. My first approach to this was a failed attempt at trying to determine what the size of the non-client area should be. A much better approach would be to hide the scrollbars when the non-client area is calculated using the ShowScrollBar Win32 function. 
The ShowScrollBar function requires the window handle, the scrollbars to be hidden, and a bool indicating its visibility: ClientRectangle WM_NCCALCSIZE ShowScrollBar bool // Win32 Constants private const int SB_HORZ = 0; private const int SB_VERT = 1; private const int SB_CTL = 2; private const int SB_BOTH = 3; // Win32 Functions [DllImport("user32.dll")] private static extern int ShowScrollBar(IntPtr hWnd, int wBar, int bShow); protected override void WndProc(ref Message m) { switch(m.Msg) { // // ... // case WM_NCCALCSIZE: ShowScrollBar(m.HWnd, SB_BOTH, 0 /*false*/); break; } base.WndProc(ref m); } After hiding the scrollbar, the WM_NCCALCSIZE message is processed as usual and calculates the non-client area less the recently hidden scrollbars. In case you're wondering, hiding the scrollbar via the ShowScrollBar function does not keep the scrollbar hidden and is immediately reset to visible. That is why it must be hidden every time the non-client area is calculated. In .NET forums around the web, another common request I see is, "How do I put an image in the application workspace of an MDI form?" The easiest way is to listen to the Paint event once you have a reference to the MdiClient. For some situations this may work fine, but I noticed a very bad flicker every time the MdiClient is resized. This is a result of the painting not being double-buffered and painting calls being made in both the WM_PAINT and WM_ERASEBKGND messages. If we had been able to inherit from the MdiClient control, this could be easily remedied by using the control's protected method SetStyle with the flags System.Windows.Forms.ControlStyles.AllPaintingInWmPaint, ControlStyles.DoubleBuffer, and ControlStyles.UserPaint. But as noted earlier, the MdiClient class is sealed and that is not an option. What is an option is listening to the WM_PAINT and WM_ERASEBKGND window messages and implementing our own custom painting. 
(More information on this is available in Steve McMahon's article: Painting in the MDI Client Area.) Paint WM_PAINT WM_ERASEBKGND SetStyle System.Windows.Forms.ControlStyles.AllPaintingInWmPaint ControlStyles.DoubleBuffer ControlStyles.UserPaint The Win32 items we'll need are the functions BeginPaint and EndPaint, structs called the PAINTSTRUCT and RECT, and a few more constants: BeginPaint EndPaint struct PAINTSTRUCT RECT // Win32 Constants private const int WM_PAINT = 0x000F; private const int WM_ERASEBKGND = 0x0014; private const int WM_PRINTCLIENT = 0x0318; // Win32 Structures [StructLayout(LayoutKind.Sequential, Pack = 4)] private struct PAINTSTRUCT { public IntPtr hdc; public int fErase; public RECT rcPaint; public int fRestore; public int fIncUpdate; [MarshalAs(UnmanagedType.ByValArray, SizeConst=32)] public byte[] rgbReserved; } [StructLayout(LayoutKind.Sequential)] private struct RECT { public int left; public int top; public int right; public int bottom; } // Win32 Functions [DllImport("user32.dll")] private static extern IntPtr BeginPaint(IntPtr hWnd, ref PAINTSTRUCT paintStruct); [DllImport("user32.dll")] private static extern bool EndPaint(IntPtr hWnd, ref PAINTSTRUCT paintStruct); The typical method of double-buffering is to do all painting to an Image, or rather, get the Graphics object from an Image instead of painting directly to the screen. When painting to the Image is complete then the Image itself is drawn to the screen. That way, all the control's painting is displayed at once instead of intermittent painting that may be in progress. With the MdiClient control's graphics being so simple, we could easily do all the painting ourselves, but a better practice is to not eliminate the base graphics from being drawn, but to incorporate them into our custom painting. That way, if the MdiClient were somehow changed in a way we did not expect, the painting should still be correctly displayed. 
This is achieved by creating our own window message (WM_PRINTCLIENT) and sending it to the base control using the DefWndProc (i.e. - Default WndProc) method. What we get back in the graphics buffer is a painting of the original control painted by the base control. (More on this can be learned from J Young's article: Generating missing Paint event for TreeView and ListView controls.) From there, any custom painting over top of it can be processed: Image Graphics WM_PRINTCLIENT DefWndProc protected override void WndProc(ref Message m) { switch(m.Msg) { //Do all painting in WM_PAINT to reduce flicker. case WM_ERASEBKGND: return; case WM_PAINT: // Use Win32 to get a Graphics object. PAINTSTRUCT paintStruct = new PAINTSTRUCT(); IntPtr screenHdc = BeginPaint(m.HWnd, ref paintStruct); using(Graphics screenGraphics = Graphics.FromHdc(screenHdc)) { // Double-buffer by painting everything to an image and // then drawing the image. int width = (mdiClient.ClientRectangle.Width > 0 ? mdiClient.ClientRectangle.Width : 0); int height = (mdiClient.ClientRectangle.Height > 0 ? mdiClient.ClientRectangle.Height : 0); using(Image i = new Bitmap(width, height)) { using(Graphics g = Graphics.FromImage(i)) { // Draw base graphics and raise the base Paint event. IntPtr hdc = g.GetHdc(); Message printClientMessage = Message.Create(m.HWnd, WM_PRINTCLIENT, hdc, IntPtr.Zero); DefWndProc(ref printClientMessage); g.ReleaseHdc(hdc); // // Custom painting here... // } // Now draw all the graphics at once. screenGraphics.DrawImage(i, mdiClient.ClientRectangle); } } EndPaint(m.HWnd, ref paintStruct); return; } base.WndProc(ref m); } Note: More information about the BeginPaint, EndPaint, PAINTSTRUCT, RECT, and WM_PRINTCLIENT can be found in the Platform SDK or MSDN library. Note: More information about the BeginPaint, EndPaint, PAINTSTRUCT, RECT, and WM_PRINTCLIENT can be found in the Platform SDK or MSDN library. 
Notice that in this case, we do not let the WM_PAINT message fall through to be processed by the base WndProc because that would cause it to do its default painting right over what we had just done ourselves. The WM_ERASEBKGND message is ignored because we want to do all the painting at one time in the WM_PAINT message. Now, the MdiClient control's Paint event will no longer flicker and custom painting code can be placed in the processing of the WM_PAINT message above. Rather than having to put this code into every project that is using a multiple-document interface, we could wrap this all up into a System.ComponentModel.Component that could be copied from project-to-project and dropped onto the design surface. Included in the source files is a component I call the MdiClientController and is found in the Slusser.Components namespace. The component inherits from NativeWindow and implements the System.ComponentModel.IComponent interface to give it its Component behavior. It incorporates all the functionality discussed previously with the addition of some properties that make it easy to place an Image in the application workspace. System.ComponentModel.Component MdiClientController Slusser.Components System.ComponentModel.IComponent Component To use the component with a MDI form, only the parent Form must be passed in to the constructor or set through the ParentForm property. To set the MdiClientController component's ParentForm property in the designer, we have to customize the Site property to determine if the component is dropped onto a Form. It helps here to have a knowledge of Designers. If indeed the component is dropped onto Form, we set the ParentForm property and it is properly serialized in the designer code: ParentForm Site public ISite Site { get { return site; } set { site = value; if(site == null) return; // If the component is dropped onto a form during design-time, // set the ParentForm property. 
IDesignerHost host = (value.GetService(typeof(IDesignerHost)) as IDesignerHost); if(host != null) { Form parent = host.RootComponent as Form; if(parent != null) ParentForm = parent; } } } One of the challenges in creating this component is knowing when the component will be initialized. Components dropped onto the designer are initialized in the InitializeComponent method of the Form's constructor. If you inspect the InitializeComponent method created by the designer, you'll note that the Form's properties are the last thing to be set. If the MdiClientController were to scan for the MdiClient control in the Form's Controls collection before the Form's IsMdiContainer property is set, no MdiClient control would be found. The solution is to know when the parent Form's Handle is created. This will surely indicate that all child controls and variables have been initialized and when we can start to look for the MdiClient. If the parent form does not have a Handle when the ParentForm property is set, the component will listen to the Form's HandleCreated event and get the MdiClient then: Components InitializeComponent HandleCreated public Form ParentForm { get { return parentForm; } set { // If the ParentForm has previously been set, // unwire events connected to the old parent. if(parentForm != null) parentForm.HandleCreated -= new EventHandler(ParentFormHandleCreated); parentForm = value; if(parentForm == null) return; // If the parent form has not been created yet, // wait to initialize the MDI client until it is. if(parentForm.IsHandleCreated) { InitializeMdiClient(); RefreshProperties(); } else parentForm.HandleCreated += new EventHandler(ParentFormHandleCreated); } } private void ParentFormHandleCreated(object sender, EventArgs e) { // The form has been created, unwire the event, // and initialize the MdiClient. 
parentForm.HandleCreated -= new EventHandler(ParentFormHandleCreated); InitializeMdiClient(); RefreshProperties(); } Once the MdiClientController has been added to the toolbox, simply drag it onto the Form in the designer, or double-click it and it will be displayed in the component tray of the designer. The MdiClientController will not change the Form's IsMdiContainer property, so you must set it. All of the component's properties follow the .NET naming conventions. The border style functionality is wrapped up in the BorderStyle property. The hiding of the scrollbars, I thought, was best put in an AutoScroll property. The BackColor and Paint events are now accessible from the designer for your convenience. In addition, there are three properties that control the displaying of an Image in the client area. The Image property sets the Image to display, the ImageAlign property will place it in different locations of the client area, and the StretchImage property will stretch it to fill the entire client area. In addition, I've added a HandleAssigned event to indicate when the MdiClient has been found and its Handle assigned to the NativeWindow. Of course, all this can be done programmatically. AutoScroll ImageAlign StretchImage HandleAssigned As with many projects that become articles, I had what I originally needed in about 30 minutes, but spent several days preparing something that I could share with my fellow programmers. The resulting component should suffice for the majority of requests regarding the appearance of MDI forms. It works nicely, it plays nicely, and it makes applications look nice[ly]. There is still a great deal more that could be added to the component if the need arises, which I'm sure for some programmers, it will. There is one feature, or hurdle rather, that I humbly admit I was not able to overcome: design-time preview. Using Reflector, I discovered a great number of roadblocks that prevent the design-time preview of the MDI area. 
I would welcome any suggestions on how to overcome
Welcome to the sixth chapter of the Apache Storm tutorial (part of the Apache Storm course). This lesson will introduce you to the concept of Trident interface to Storm. Now, let us begin by exploring the objectives of this lesson. By the end of this lesson, you will be able to Explain what is Trident Explain the Trident data model Identify various operations in Trident Explain stateful processing using Trident Explain pipelining in Trident Explain Trident advantages Let us start with understanding what Trident is. Trident is a high-level abstraction on top of Storm. It provides Abstraction for joins, groups and aggregation functions. Batching of tuples so that multiple tuples can be grouped into batches. Transactional processing in Storm so that you can achieve better fault tolerance. State manipulation in Storm so that you can have stateful processing. Trident libraries are included as a part of standard Storm installation. Next, let us look at Trident data model. Wish to have in-depth knowledge of Apache Storm? Check out our Course Preview! The core data model in Trident is the stream which is an unbounded sequence of tuples. The tuples in the stream are grouped as a series of batches. Each batch has a finite number of tuples. A stream is partitioned across the nodes in the cluster. Trident operations on the stream are applied in parallel across each partition. This diagram shows input sequence of tuples – T1, T2, etc. coming into Trident Storm cluster. The tuples are partitioned into two nodes. For example, the tuples T1, T2 and T3 go to partition one on node one whereas the tuples T4, T5, and T6 go to partition 2 of node 2. Here, the tuples are grouped into batches. Each batch has three tuples which are the batch size. Batch1, batch3 and batch5 go to partition 1 of node 1 whereas batch2, batch4 and batch6 go to partition2 of node2. 
Please note that in Storm, the input is unbounded so the cluster will keep getting the tuples and Trident will keep grouping them into batches. Now, we will look at the stateful processing in Trident. Since Storm is an unbounded stream of tuples, there is no state maintained during processing. Stateless processing Once a tuple is processed, Storm doesn’t store any information about the tuple. So when we receive a tuple, we don’t know if it is already two or not. This is the default processing in Storm. Stateful processing Once a tuple is processed, Storm stores the information about the tuple. So we can find out if a tuple was previously processed. Trident provides the topology that automatically maintains state information. State information is stored in memory or a database. Next, we will look at Operations in Trident. Trident consists of five types of operations: Partition local operations Repartitioning operations Aggregation operations Operations on grouped streams Merges and Joins Here, we will look at the different types of partitions in detail one by one. Let us start with partition local operations. Partition local operations are performed on the batches of tuples on a single partition. There is no data transfer between nodes for this operation. There are five types of partition local one. Functions: A function is applied to each tuple Filters: A filter is applied to each tuple to decide whether to include the tuple in the output or not. (Example: LogType == “ERROR”) Partition aggregate: Outputs the aggregate of each partition Partition persist: Provides stateful processing in Storm and aggregates across partitions Projection: Outputs a subset of fields from the input tuple Partition Aggregate Partition Aggregate applies an aggregate function on each partition of a batch of tuples. 
There are three types of partition aggregates. CombinerAggregator: it runs an init function on each input tuple, and the outputs are processed by a combine function till only one value is left; the output tuple contains a single field and value. ReducerAggregator: it runs an init function once and then a reduce function for each input tuple till only one value is output. Aggregator: this is a general aggregator that has three methods: an init method called at the beginning of the batch, an aggregate method called for each input tuple, and a complete method called at the end of the batch. Repartitioning operations run a function to change the partitioning of tuples. Repartitioning in a cluster may lead to the transfer of tuples over the network. The following functions are available for repartitioning. Shuffle: randomly redistributes tuples across all partitions; a round-robin algorithm is used to distribute the tuples evenly. Broadcast: every tuple is replicated to all the partitions. PartitionBy: tuples are partitioned by a set of specified keys. Global: all tuples are sent to the same partition; useful for calculating global sums across all batches. BatchGlobal: all tuples in a batch are sent to the same partition. Partition: tuples are partitioned based on a custom partitioner. Trident provides two types of aggregations. Aggregate: this method does the aggregate processing for all the tuples within a batch. Example: LogStream.aggregate(new Count(), new Fields("count")). If batch 1 has five tuples and batch 2 has four tuples, this will give two tuples with counts of 5 and 4, respectively. PersistentAggregate: this method does the aggregation over all the tuples of all the batches in the stream. Example: LogStream.persistentAggregate(statespec, new Count(), new Fields("count")). If batch 1 has five tuples and batch 2 has four tuples, this will give one tuple with a count of 9. The statespec argument provides a handle for storing the state.
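The difference between aggregate (one result per batch) and persistentAggregate (one result across all batches, carried in state) can be checked with a plain-Java sketch using the counts from the example above; this is not the Storm API, and the in-memory variable stands in for the statespec:

```java
import java.util.*;

public class AggregateSketch {
    public static void main(String[] args) {
        List<List<String>> batches = Arrays.asList(
            Arrays.asList("t1", "t2", "t3", "t4", "t5"), // batch 1: 5 tuples
            Arrays.asList("t6", "t7", "t8", "t9"));      // batch 2: 4 tuples

        // aggregate: emits one count per batch -> [5, 4]
        List<Integer> perBatchCounts = new ArrayList<>();
        for (List<String> batch : batches) perBatchCounts.add(batch.size());

        // persistentAggregate: state carried across batches -> a single running count of 9
        int state = 0; // stand-in for the statespec-backed state
        for (List<String> batch : batches) state += batch.size();

        System.out.println(perBatchCounts); // [5, 4]
        System.out.println(state);          // 9
    }
}
```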
You can use the groupBy operation on streams. This operation first partitions the stream based on the group-by fields and then groups the tuples in each partition by those fields. Trident provides APIs for combining multiple streams using merge and join operations. Merge: this method combines the specified streams of the topology into one stream; the output fields will be the fields of the first stream. Example: topology.merge(Logstream1, Logstream2, Logstream3). If Logstream1 has five tuples, Logstream2 has four tuples, and Logstream3 has ten tuples, then the merge will give a stream with 19 tuples. Join: this method combines two streams using a join operation based on the specified fields; the operation is done on batches of tuples. Example: if the customer stream has ("custId", "location") and the siteVisits stream has ("user", "URL"), then the following join: topology.join(customerStream, new Fields("custId"), visitStream, new Fields("user"), new Fields("user", "location", "URL")) gives all the tuples where custId matches the user. Trident provides the following mechanisms for managing state and transactions. Trident state can be internal to the topology or stored externally in databases. Trident manages state in a fault-tolerant way: in case of retries or failures, a message gets processed only once in a Trident topology. Trident provides the following semantics for achieving only-once processing and takes the appropriate steps to update the state consistently. Tuples are processed in small batches. Each batch of tuples is given a unique ID (known as the transaction ID); if a batch is replayed, it is given the same transaction ID. State updates are ordered, which means that the state updates are not applied for batch 2 if batch 1 is not completed. Next, we will look at how a Trident topology works. Trident provides the TridentTopology class, which is used to build Trident topologies. This class provides many methods for Trident operations.
Some of the methods and their signatures are given below. The newStream method creates a new stream of tuples that will have the specified transaction ID; the tuples will be produced by the specified spout. Instead of IBatchSpout, different spout types can be specified. Stream newStream(String txId, IBatchSpout spout). The merge method merges multiple input streams into one stream. Stream merge(Fields outputFields, Stream... streams). The join method performs a join operation on two streams based on the join fields specified; the output stream will contain the fields from joinFields1 as well as the outFields specified. Stream join(Stream s1, Fields joinFields1, Stream s2, Fields joinFields2, Fields outFields). Let us move on to understanding the concept of Trident spouts. Many different types of spouts can be used with a Trident topology. A few useful spouts are given below. IBatchSpout: this is a non-transactional spout that emits a batch of tuples at a time. It provides limited fault tolerance, with an only-once processing guarantee. TridentSpout: this spout supports transaction semantics; it can provide at-least-once and exactly-once processing using various functions. PartitionedTridentSpout: a transactional spout that reads from a partitioned data source; this ensures at-least-once processing. OpaquePartitionedTridentSpout: an opaque transactional spout that reads from a partitioned data source. Unlike PartitionedTridentSpout, this spout does not mandate that tuples belong to the same batch when resubmitted. This ensures fault-tolerant processing even when the source nodes fail, guaranteeing exactly-once processing. Next, let us look at the fault-tolerance levels. There are three fault-tolerance levels in Trident, provided by the various spouts. Only-once processing: this is the lowest fault-tolerance level, where transaction IDs are tracked to ensure that a tuple is processed only once.
At-least-once processing: this is partly fault-tolerant, as transaction ID information is stored in memory or in an external database to ensure that each tuple is processed at least once. Exactly-once processing: this is highly fault-tolerant, as transaction ID information is stored in an external database and replay of tuples in different batches is also allowed. This ensures exactly-once processing even when a tuple changes batches due to a source node failure. Next, let us look at the concept of pipelining. Trident can process multiple batches at a time using pipelining. Batches will be processed in parallel, and state updates will be ordered by batch ID. The maximum number of batches that will be processed in parallel can be specified with the "topology.max.spout.pending" property for the topology. Pipelining results in higher throughput and lower latency. For example, if topology.max.spout.pending is set to 50, Trident will process multiple batches in parallel until 50 batches are pending a state update. If 40 batches are pending a state update, the next batch will be taken up for processing; if 50 batches are pending a state update, then the next batch will not be taken up. Next, let us look at exactly-once processing. Trident automatically handles state so that the tuples are processed exactly once. Tuples are grouped into small batches and are processed in batches. Each batch is given a unique batch ID called the transaction ID; if a batch is retried, it gets the same transaction ID. State updates are ordered among the batches: if there are three batches 1, 2, and 3, then the state update for batch 2 will happen only after the state update for batch 1 is successful, and the state update for batch 3 will happen only after the state update for batch 2 is successful. This ensures that the batch updates happen exactly once. The state information can be stored in an external database so that the transactions persist even in case of node failures.
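The exactly-once mechanics described above (each batch carries a transaction ID, a replayed batch reuses its ID, and state updates are applied strictly in order) can be sketched in plain Java; this is illustrative only, not the Storm implementation:

```java
public class ExactlyOnceSketch {
    // State: a running tuple count plus the transaction ID of the last applied batch
    static long count = 0;
    static long lastTxId = 0;

    // Apply a batch's state update only if it is the next transaction in order;
    // a replay of an already-applied txId is ignored, giving exactly-once updates.
    static boolean applyBatch(long txId, int batchSize) {
        if (txId != lastTxId + 1) return false; // replayed or out of order: skip
        count += batchSize;
        lastTxId = txId;
        return true;
    }

    public static void main(String[] args) {
        applyBatch(1, 5); // batch 1 applied
        applyBatch(1, 5); // replay of batch 1: same txId, ignored
        applyBatch(3, 7); // batch 3 before batch 2: not applied yet
        applyBatch(2, 4); // batch 2 applied in order
        System.out.println(count);    // 9
        System.out.println(lastTxId); // 2
    }
}
```

In real Trident the rejected batch 3 would simply be retried later with the same transaction ID, once batch 2's state update has committed.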
Next, let us look at a spout definition example. A spout can be defined as follows for data ingestion using Trident. This spout reads from a file and batches the input lines. Let us take an example of log processing where we output the count of each log type occurring in the log file. The program fragment shows the spout definition using a Trident spout. GetLineSpout is defined as a class that implements the IBatchSpout interface. It has a constructor that takes the batch size as an input parameter and sets the batch size for the spout. It has an open method that is called only once for the spout instance; this method opens the input log file and sets the file handle. It has a getOutputFields method that sets the output as a single field containing the entire line.

public static class GetLineSpout implements IBatchSpout {
    FileReader input = null;
    BufferedReader binput = null;
    int batchSize;

    public GetLineSpout(int batchSize) {
        this.batchSize = batchSize;
    }

    @Override
    public void open(Map conf, TopologyContext context) {
        // Exception handling omitted for brevity
        input = new FileReader("/tmp/logfile");
        binput = new BufferedReader(input);
    }

    @Override
    public void emitBatch(long batchId, TridentCollector collector) {
        // code presented in next screen
    }

    @Override
    public Fields getOutputFields() {
        return new Fields("line");
    }

    // Other IBatchSpout methods (ack, close, getComponentConfiguration) omitted for brevity
}

The emitBatch method of this class will be explained later in the lesson. The emitBatch method is used to output the tuples to the topology stream. The code fragment for this method is shown here. Please note that any file-handling code has to handle exceptions in Java; the exception-handling blocks are not shown here due to space constraints. The emitBatch method gets the batch ID for this batch of tuples and the output collector as parameters. It has a loop over the batch size so that it can emit batch-size number of tuples. In each iteration of the loop, it calls the getNextLine method to get the next line from the file and then calls the emit function of the output collector to output the line as a tuple.
The getNextLine method is a custom method and is not a part of the interface.

public void emitBatch(long batchId, TridentCollector collector) {
    for (int i = 0; i < batchSize; i++) {
        collector.emit(getNextLine());   // emit batchSize number of lines
    }
}

private Values getNextLine() {
    // Exception handling omitted for brevity
    String str = null;
    while ((str = binput.readLine()) == null) {   // read each line
        Utils.sleep(1000);   // if the file has ended, wait for more input to the file
    }
    return (new Values(str));
}

This method reads the next line from the log file and returns it as a tuple using the Values constructor. Moving on, we will look at a Trident operation example. We will have a function operation to filter the log-type lines and output only the log type. The code fragment shows the class GetLogType, which extends the BaseFunction class. This class has a single method, execute, that takes the input tuple as a TridentTuple and the output collector as parameters. It splits the line using the space delimiter to convert it into words and then compares each word to ERROR, INFO, or WARNING. If any of them matches, then the matched word is output as a tuple using the emit method on the output collector.

public static class GetLogType extends BaseFunction {
    @Override
    public void execute(TridentTuple tuple, TridentCollector collector) {
        String line = tuple.getString(0);
        for (String word : line.split(" ")) {
            if (word.equals("ERROR") || word.equals("INFO") || word.equals("WARNING")) {
                collector.emit(new Values(word));
            }
        }
    }
}

Now, we will look at an example to understand how to store the output. We will have another function operation to store the log-type counts to an output file. The code fragment shows the class printTuple, which extends the BaseFunction class, for storing the log-type counts. Please note that exception handling is not shown here. This class has a single method, execute, that takes a TridentTuple and an output collector as arguments. The input contains two fields: the first field is a log-type word and the second one is a count.
It opens the output file for writing and writes the log type and the count to the file.

public static class printTuple extends BaseFunction {
    PrintWriter fOut;

    @Override
    public void execute(TridentTuple tuple, TridentCollector collector) {
        // Exception handling omitted for brevity
        String logType = tuple.getString(0);
        Long cnt = tuple.getLong(1);
        if (fOut == null) {
            fOut = new PrintWriter(new FileWriter("/tmp/Tridentoutput.txt"));
        }
        fOut.println(logType + ":" + cnt);
        fOut.flush();
    }
}

Next, let us look at a topology connecting the spout and the operations. The method that connects the spout and the operations to create the Trident topology is shown here. After using GetLogType to filter out only the log types, we use groupBy and aggregate to do the aggregation by log type. Finally, the printTuple class is used to store the output to a file. The code fragment shows the buildTopology method that connects the spout and the operations to create the Trident topology. The method first creates a GetLineSpout object with a batch size of 10, which means that the spout will put ten tuples in each batch. Next, it creates an object of the TridentTopology class. In this topology, a new stream is created with the newStream method; this stream is created with GetLineSpout as the spout and a parallelism of 1. The GetLogType function is connected to the output of this spout with the each method of the stream. Then, the groupBy Trident operation is called to group the results by each log type. The aggregate operation is applied to the groupBy output with the Count function. This is a grouped aggregate, so the count of tuples is done for each log type to produce a stream with the log type and count as fields. Finally, the printTuple function is called on each aggregated tuple to store the counts of the log types to a file. The topology is then built and returned.
public static StormTopology buildTopology() {
    GetLineSpout spout = new GetLineSpout(10);   // batch size of 10 specified
    TridentTopology topology = new TridentTopology();
    Stream logTypeCounts = topology.newStream("spout1", spout).parallelismHint(1)
        .each(new Fields("line"), new GetLogType(), new Fields("word"))
        .groupBy(new Fields("word"))
        .aggregate(new Fields("word"), new Count(), new Fields("count"));
    logTypeCounts.each(new Fields("word", "count"), new printTuple(), new Fields("word2", "count2"));
    return topology.build();
}

Next, let us look at the topology's main function. The code fragment for the main function that submits the topology is shown below. The main function gets the topology name as its first parameter. It creates a configuration object first. The debug flag is set in the configuration so that we can see detailed messages in the log. The maximum number of batches pending a state update is set to 1, as we want only one batch in flight at a time. Next, the number of workers is also set to 1. Finally, the StormSubmitter is called with the topology name, the configuration object, and the topology built by the buildTopology method.

public static void main(String[] args) throws Exception {
    if (args.length > 0) {
        Config conf = new Config();
        conf.setDebug(true);
        conf.setMaxSpoutPending(1);
        conf.setNumWorkers(1);
        StormSubmitter.submitTopologyWithProgressBar(args[0], conf, buildTopology());
    }
}

Now, we will understand the concept of the wrapper class. The wrapper class includes all the classes and functions we explained in the previous screens. The code fragment for the wrapper class is shown here. The code fragment shows the wrapper class LogProcessTopology. The comments shown for the code blocks represent the definitions of each of those classes and functions as shown in the previous screens. This wrapper class is what gets compiled and submitted to Storm for execution.
// Import required libraries
public class LogProcessTopology {
    // GetLineSpout class definition
    // GetLogType class definition
    // printTuple class definition
    // buildTopology method definition
    // main function definition
}

Next, let us look at Trident's advantages. Trident adds a lot of high-level functionality to Storm, and here are some advantages of using Trident. Trident is smart about how to run the topology to maximize performance. It processes tuples in batches, improving throughput. It provides operators such as groupBy and aggregate, which reduce the amount of programming you need to do in Storm. It can process computations within a partition first before sending tuples over the network, improving throughput and reducing network traffic. It provides exactly-once processing in Storm, adding a higher level of fault tolerance. We have come to the end of this lesson. Let us summarize the topics covered in this lesson. Trident is an abstraction on top of Storm. Trident groups tuples into batches and performs operations on batches. It provides aggregation and other operations on top of Storm. Trident provides exactly-once processing of tuples and stateful processing in Storm. Trident spouts and operations can be connected using the Trident topology. This concludes the lesson on Trident. This also concludes the developer course on Storm.
The GIMP ToolKit (GTK+) is a collection of GUI widgets. GTK+ essentially provides the building blocks from which GUIs can be built. It is highly themable, and its functionality is highly extensible. GTK+-3 is a very stable release, similar only in design to GTK+-2. GTK+-3 can coexist happily alongside GTK+-2, but applications are written for one version or the other.

To install the port: cd /usr/ports/x11-toolkits/gtk30/ && make install clean
To add the package: pkg install gtk3

PKGNAME: gtk3
distinfo:
TIMESTAMP = 1524773157
SHA256 (gnome/gtk+-3.22.30.tar.xz) = a1a4a5c12703d4e1ccda28333b87ff462741dc365131fbc94c218ae81d9a6567
SIZE (gnome/gtk+-3.22.30.tar.xz) = 18946084

NOTE: FreshPorts displays only information on required and default dependencies. Optional dependencies are not covered.

This port is required by:

===> The following configuration options are available for gtk3-3.22.30_3:
BROADWAY=on: Enable GDK Broadway backend for showing GTK+ in the webbrowser using HTML5 and web sockets.
CLOUDPRINT=off: Cloud printing support
COLORD=on: Color profile support
CUPS=on: CUPS printing system support
DEBUG=off: Build with debugging support
WAYLAND=on: GDK Wayland backend
===> Use 'make config' to modify these settings

compiler:c11 gettext gmake gnome libtool localbase pathfix perl5 pkgconfig tar:xz gl

Number of commits found: 63

graphics/mesa-libs: enable WAYLAND by default here and in consumers
PR: 227509
Requested by: Johannes Lundberg, Greg V
Reviewed by: tobik (earlier version)
Approved by: madpilot, x11 (zeising), maintainer timeout (2 weeks)
Exp-run by: antoine

Remove compatibility code for FreeBSD < 11.2 from all ports. Simplify some ports where DragonFlyBSD no longer needs to be special-cased.
Submitted by: rene
Reviewed by: bapt, jbeich
Differential Revision:

x11-toolkits/gtk30: unbreak DEBUG=on after r480951
Option helpers have no effect *after* bsd.port.options.mk

graphics/wayland: update to 1.16.0
- New libwayland-egl home for consumers as Mesa 18.2 dropped it
Changes:
PR: 227423
Submitted by: Greg V <greg@unrelenting.technology>
Approved by: maintainer timeout (5 months)

Patch configure so it respects CC and CPP variables.

Remove patches I forgot to remove before
Pointy Hat: bapt
Reported by: jrm

Update to 3.22.29

x11-toolkits/gtk30: Add new non-default WAYLAND option
It enables building of Gdk's Wayland backend.
PR: 219040
Submitted by: Johannes Lundberg <johalun0@gmail.com> (based on)
Approved by: gnome (maintainer timeout, ~9 months)

x11-toolkits/gtk30: add dependency on librsvg2
PR: 222495
Submitted by: Anton Yuzhaninov <citrin+pr@citrin.ru>
Approved by: gnome@ (kwm)

gtk30 to 3.22.15.

Update gtk30 to 3.22.14.

Sprinkle some fixes to make these ports build on ARCH's that don't have clang as the default compiler.
Submitted by: jhibbits@
Differential Revision:

Add back DEBUG option that got lost in the 3.22.12 update. Also add comment, so I don't accidently break it later.
Reported by: jbeich@
Backport a patch to fix pasting a non web (epiphany) copied url into the web adressbar.
Submitted by: Graham Perrin <grahamperrin@gmail.com>
Obtained from: gtk+ upstream

Enable lpr printing again. While configure happily reported lpr is enabled and the module was installed, the print code wasn't configured to use lpr. The reason for this is that the configure of this code moved from the gtk/Makefile to configure. Remove the now obsolete sed line.
Reported by: dinoex@
Tested by: dinoex@

Update gtk30 to 3.16.7.

x11-toolkits/gtk30: convert to option helpers
Approved by: portmgr blanket

Gtk+ 3.16 removed type-ahead support in the filechooser. While Gtk+ 3.17 has another way of achieving this, add type-ahead back as an optional (default off) option.
Requested by: novel@
Tested by: novel@

Grab patch from upstream to fix the build. One of the patches cherry picked to the 3.14 branch depends on a glib API added in the 2.44 series.

Add a patch from upstream to allow build with glib 2.42 in ports.
PR: 201951 [1]
Reported by: Walter Schwarzenfeld, Gary <freebsd-bugzilla@in-addr.com> [1]
Obtained from: upstream gtk+ 3.14 branch

Update gtk30 to 3.14.15.
* Explicitly disable wayland backend.
* Remove obsolete packagekit configure argument
* Add patch to fix shared clipboard between X screens. [1]
Submitted by: ashish@ [1]
Tested by: ashish@ [1]
Obtained from: gtk+ upstream

gtk30 to 3.14.7.

gtk30 port installs share/icons icons so it needs INSTALLS_ICONS set.
Submitted by: anto

Use @rmtry
Let pkg deal with empty directories

Convert to USES=libtool and INSTALL_TARGET=install-strip
Obtained from: gnome-dev

Unbreak gtk30-reference
- USES=tar:xz

Move immodules.cache related lines up, so the @unexec line for this file is before the lines where the dirs are removed.
Submitted by: skreuzer@

Update to 3.8.8. Stagify, sort USES, use new lib_depend syntax. Use USE_GNOME introspection now that it doesn't break the build. Switch to libtool less ltverhack.
Use new gtk-query-immodules --update-cache functionality.
Obtained from: GNOME devel repo (based on)

Add back @dirrm for share/gtk-3.0, which was mistakenly removed in the gtk+ update to 3.8.
Submitted by: sunpoet@, oliver@

Fix the build when japanese/sed is installed. Get it to use the system sed instead of trying to pick up japanese/sed.
PR: ports/160224
Reported by: Tsurutani Naoki <turutani@scphys.kyoto-u.ac.jp>
Feature safe: yes

Fix a typo in the plist.
PR: ports/165282
Submitted by: truck.12.

Last updated: 2018-11-12 20:18:51
Hi All! I downloaded some open source library for my C++/CLI project. Let me call it 'library'. It is written completely in C#. The library compiles and builds fine into some library.dll. The samples provided along with it work well also. The problem starts when I try to use this library for my project. I add a reference path in the project properties in Visual Studio 2008, pointing to this dll. It seems everything is fine now, as the IntelliSense that pops up after :: or -> symbols presents all possible sub-namespaces, classes, data types, etc. Unfortunately, when I add

using namespace library;

to my code, a compilation error appears that says "namespace with this name does not exist". What's wrong? I integrated lots of other open source libraries in my projects this way, and there had been no error so far. Please help. This library seems very useful for my project and I don't want to miss this chance. Thanks to all.

Can you show some C# code using this library?
WMI Diagnosis Utility A New Utility for Diagnosing and Repairing Problems with the WMI Service Download the All-New WMI Diagnosis Utility Version 2.0 of the WMI Diagnosis Utility is now available in the Microsoft Downloads Center The newly-revised WMI Diagnosis Utility is a VBScript script written by Alain Lissoir of the WMI team at Microsoft. This utility is designed to help you ascertain the current state of the WMI service on a computer. (For a complete list of all the tests carried out by the WMI Diagnosis Utility, see Appendix A.) The utility can do everything from verify the validity of all your WMI namespaces to check for possible corruption of the WMI repository. (Although we should note that the ability to check for repository corruption is limited to computers running Windows® XP Service Pack 2 or Windows Server™ 2003 Service Pack 1.) The WMI Diagnosis Utility performs a detailed examination of the WMI service and all its related components. If a problem is found, the utility not only reports on the problem and its possible causes, but also offers suggestions for repairing the problem. (Note that this is a diagnostic tool only: although it will offer step-by-step instructions for fixing a problem it cannot fix the problem itself.) The WMI Diagnosis Utility runs on Windows 2000, Windows XP, Windows Server 2003, and Windows Vista™. The utility needs no additional software other than Windows Script Host (WSH), which is installed and enabled by default on all those versions of Windows. The WMI Diagnosis Utility must be run locally (you cannot run the script against a remote computer), and you must have local administrator rights on any computer on which you run the script. So why can’t you run the tool against a remote computer; isn’t WMI supposed to work against remote computers just as well as it does against the local computer? Yes, but remember, you might very well be running the utility because you suspect that there is a problem with the WMI service. 
If the WMI service is not working properly, then it's likely that any WMI script you try to run will fail. Therefore, the WMI Diagnosis Utility does not rely on WMI itself; for example, it uses Windows Script Host methods when retrieving information from the registry. This enables the tool to function even when the WMI service is not working; however, this also prevents the tool from running against remote machines. That's because many of these WSH methods can run against only the local computer. However, the WMI Diagnosis Utility can be initiated using applications such as Systems Management Server (SMS) or Microsoft® Operations Manager (MOM). Likewise, you can run the utility with a Group Policy logon script.

Is This Something I Should Be Using?

Maybe, but then again, maybe not. The WMI Diagnosis Utility is designed for use by experienced Windows and/or SMS administrators. This is not just because the script requires local administrator privileges. On top of that, you will likely require administrative experience in order to make sense of the tool's reports and recommendations. And you will definitely need this kind of experience in order to undertake any suggested remedies. For example, the WMI Diagnosis Utility might recommend that you change a number of values in the registry, a task that is not recommended for anyone new to computing.

When Should I Run the Utility?

The WMI Diagnosis Utility should be run any time you suspect that you are having problems with the WMI service. This suspicion might have manifested itself either because of obvious problems with WMI itself or because of problems with other software programs known to use WMI. However, you might also want to periodically run the utility on all your servers; this helps you identify potential problems before they begin to affect the functioning of a server.

How Do I Run the Utility?

In most cases you can run WMIDiag.vbs without any command-line parameters.
The WMI Diagnosis Utility is designed to run under the CScript script host; however, you do not have to specify the script host when starting the tool. If the utility detects that it is running under WScript it will run, albeit in "silent" mode: that is, no data will be output to the screen. (That's to prevent you from having to dismiss literally hundreds of dialog boxes.) The bottom line? You can run the WMI Diagnosis Utility, using the default settings, simply by double-clicking the WMIDiag.vbs icon in Windows Explorer, or by typing the following from the command prompt:

cscript WMIDiag.vbs

Note. This document details many of the more useful command-line parameters that can be used when starting the WMI Diagnosis Utility; however, not all of the optional parameters are discussed here. To get a complete list of command-line parameters, along with a brief explanation of each parameter, simply specify the ? parameter when starting the utility:

cscript WMIDiag.vbs ?

Although execution times will vary based on everything from processor speed to the size of the WMI repository, you can assume that it will take the utility several minutes (anywhere from 5 to 15 in preliminary tests) to complete its work. As the tool runs it displays detailed progress information on the screen. This information is displayed primarily to let you know that the tool is still working. If you have no need to view progress then you can suppress the on-screen display of information by including the NoEcho parameter when starting the tool:

cscript WMIDiag.vbs NoEcho

As it runs, the WMI Diagnosis Utility also records information to a pair of text files; these two files will have names similar to the following:

- WMIDIAG-V1.10_XP__.CLI.SP1.32_TOMSERVO_2006.02.05_22.30.19.LOG

This file contains information that – if you choose to share it – can be used by Microsoft to help track trends and analyze the overall state of WMI.
The file names are a combination of WMIDiag and its version number (WMIDIAG-V1.10), the operating system (XP__.CLI.SP1.32), the computer on which the tool is running (TOMSERVO) and the date and time at which the script was started (2006.02.05_22.30.19). The WMI Diagnosis Utility always writes information to these text files, which are stored by default in the user's temp folder. This folder (for example, C:\Documents and Settings\KenMyer\Local Settings\Temp) is determined by the value of the %TEMP% environment variable. You can redirect these files to an alternate location simply by including the LogFilePath parameter in your startup command. For example, this command causes the tool to write information to the C:\Scripts folder:

cscript WMIDiag.vbs LogFilePath=C:\Scripts

Tip. Because the utility uses the current timestamp when naming files it will not overwrite any existing files. This has some obvious advantages, but at least one disadvantage: log files will begin to accumulate – and use up valuable disk space – over time. Is there an easy way to deal with that problem? Yes: simply include the OldestLogHistory parameter when starting the tool. When you use OldestLogHistory and a specified value, the WMI Diagnosis Utility will automatically delete log files older than that value. For example, this command will cause the tool to delete any log files older than 20 days:

cscript WMIDiag.vbs OldestLogHistory=20

Writing Errors to the Event Log

Another optional parameter – LogNTEvents – will cause the utility's messages (including error messages) to be written to the Application Event log. In turn, that makes it easy for you to use Microsoft Operations Manager (MOM) or a custom script to retrieve these messages from all of your computers. Of course, the one drawback to this is the fact that, in a situation where the WMI service has become corrupted, you could easily find yourself writing hundreds of events to the event log. Because of that, you might want to use the LogWMIState parameter instead.
This parameter causes only three events to be written to the event log:

- The time the tool started.
- The overall state of the WMI service: Success, Warning, or Error.
- The time the tool ended.

A WMI Diagnosis Utility event will always have WSH as the source, and will have an event ID of 0 if successful, 1 if an error occurred, and 2 if a warning was issued.

The WMI Diagnosis Utility for SMS Administrators

The WMI Diagnosis Utility includes a command-line parameter – SMS – designed primarily for use by SMS administrators. When running under SMS mode, the tool does not display any message boxes. By default (and any time it is run in interactive mode), the utility will occasionally display information in a message box – for example, when errors are detected. These message boxes do not appear when the tool is run under SMS mode.

Under SMS mode the tool also enables the NoEcho and Silent parameters. This allows the utility to run unattended, and to run without displaying information to the screen. In this mode the WMI Diagnosis Utility also turns off the ERRORLEVEL return code. By default the tool sets the ERRORLEVEL environment variable to one of the following values upon termination:

- 0 = SUCCESS
- 1 = ERROR
- 2 = WARNING
- 3 = Command Line Parameter errors
- 4 = User Declined

When the WMI Diagnosis Utility runs under the SMS parameter, however, ERRORLEVEL will not be changed. This is because SMS always uses the exit code to determine whether or not the script completed successfully. If the exit code were set to 2 (Warning), SMS would assume the script failed to run; it wouldn't realize that the script completed successfully but simply discovered some potential problems. If you are using SMS to run the tool you should always set the SMS parameter to On. Of course, you do not need to be an SMS administrator in order to use the SMS parameter.
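A wrapper script that launches the tool can branch on these exit codes. A minimal Python sketch of that mapping (the interpret helper is hypothetical; the code values come from the list above):

```python
# Exit codes documented for the WMI Diagnosis Utility (ERRORLEVEL values).
EXIT_CODES = {
    0: "SUCCESS",
    1: "ERROR",
    2: "WARNING",
    3: "Command Line Parameter errors",
    4: "User Declined",
}

def interpret(errorlevel: int) -> str:
    # Note: when the tool is started with the SMS parameter it leaves
    # ERRORLEVEL untouched, so a caller should only consult it otherwise.
    return EXIT_CODES.get(errorlevel, "unknown exit code")
```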
Whether you are using SMS or not, you can run the utility in silent, unattended fashion by starting the program like this:

    wmidiag.vbs SMS

You can also use an optional parameter – RunOnce – to ensure that the utility runs no more than once a day. Each time it runs, the tool updates a value in the registry: HKEY_LOCAL_MACHINE\Software\Microsoft\WMIDiag\LastRun. When the RunOnce parameter is used, the WMI Diagnosis Utility checks the existing value in the registry. If that value indicates that the tool has already been run on this particular day, the utility will exit without running a second time.

Note that the utility bases this decision on the calendar, not on, say, a 24-hour period. If you run the tool at 11:50 PM on a Tuesday and then run it again at 1:00 AM on Wednesday, the program will run, even though barely an hour has elapsed. That's because the dates on which the program ran (Tuesday and Wednesday) are different.

The RunOnce parameter is particularly useful when the WMI Diagnosis Utility is used in a logon script; after all, it's not unusual for administrators to log on multiple times in a single day. By using RunOnce you prevent the tool from running each time these administrators log on.

Using BaseNamespace to Speed Up the WMI Diagnosis Utility

One of the tests the WMI Diagnosis Utility performs by default is to verify all of the classes found in all the WMI namespaces. Because of the size of the WMI repository this testing can take quite a bit of time: on a test computer running Windows XP, for example, the tool took approximately 15 minutes to complete. If you want to use the utility to check the overall health of the WMI service, you can reduce this time by as much as two-thirds by using the BaseNamespace parameter. When you specify a BaseNamespace, the tool still performs its global diagnostic tests; however, it verifies only the classes found in the specified namespace.
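The calendar-based RunOnce decision described above can be illustrated in a few lines of Python (should_run is a hypothetical stand-in; the real tool reads LastRun from the registry key named earlier):

```python
from datetime import datetime

def should_run(last_run: datetime, now: datetime) -> bool:
    # The tool compares calendar dates, not elapsed time.
    return last_run.date() != now.date()

# 11:50 PM Tuesday vs. 1:00 AM Wednesday: barely an hour apart, but the
# calendar dates differ, so the tool would run again.
tuesday_night = datetime(2006, 2, 7, 23, 50)
wednesday_morning = datetime(2006, 2, 8, 1, 0)
```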
For example, starting the WMI Diagnosis Utility using this command causes the utility to verify only classes found in the root\default namespace:

    wmidiag.vbs BaseNamespace=root\default

When run on the same test computer, the tool required just 5 minutes to complete all its tests when the BaseNamespace was set to root\default. The BaseNamespace parameter is also useful if you have reason to believe that a particular WMI namespace might be the source of your problems. By setting the base namespace, the tool will verify the classes in that namespace but will skip verification of classes found in other namespaces.

What Do I Do When the Utility Finishes?

Once the WMI Diagnosis Utility finishes, you should examine the log file. To be honest, much of the log file will be of little use to you: it's simply a blow-by-blow account of each test that the tool ran. Instead, you should open the log file and search for the WMI REPORT: BEGIN section of the file. The report section provides a summary of the tests run by the tool.

If the WMI Diagnosis Utility has detected problems of any kind, they will be noted either as warnings or as errors. For example, a warning might indicate that the WMI logging level has been set to 1 instead of the recommended level of 2. Errors will usually be accompanied by suggested ways to try to fix the problem. At the end of the report the tool will summarize the overall health of the WMI service.

Some tests check items that are absolutely critical in order for the WMI service to function; for example, certain files must be present for WMI to even run. If one of these tests fails, the overall WMI state will be marked as Error. Other tests check items that are less critical; for example, a missing DCOM registration affects only the class supported by the unregistered provider but does not affect the WMI service as a whole. If one of these tests fails, the WMI service is marked as Warning.
The report distinguishes between items that result in a health level of Warning and items that result in a health level of Error.

After analyzing the report, you should follow the suggested steps for correcting any problems that were uncovered. Upon completing the recommended procedures, run the tool again to verify that the problems have been repaired. If the problem truly has been fixed, it should no longer be flagged as a warning or error in the report.

What If That Didn't Fix the Problem?

Suppose you take all the recommended steps and, upon re-running the WMI Diagnosis Utility, discover that the problems have not been corrected. In that case you can do one of two things: rerun the tool using the "CheckConsistency" parameter, or rerun the utility using the "WriteInRepository=Root" parameter.

Rerun the Utility Using the "CheckConsistency" Parameter

This parameter – which is applicable only to Windows XP Service Pack 2 and Windows Server 2003 SP1 – checks the WMI repository for consistency. If the test succeeds, that again means that the WMI service is probably not responsible for your problems. If the test fails, then the repository will automatically be rebuilt for you (a behavior built into Service Pack 2) and your current repository is saved as %SystemRoot%\System32\Repository.001.

Incidentally, CheckConsistency is automatically invoked any time the tool is run on a computer running Windows Server 2003 Service Pack 1. However, with this operating system a failed consistency test will not cause the repository to be rebuilt automatically; the repository will be rebuilt only if you manually request it.

Rerun the Utility Using the "WriteInRepository=Root" Parameter

If you are not running Windows XP SP2 or Windows Server 2003 SP1, you can test the repository to a certain extent by writing information to it. The WriteInRepository parameter causes the tool to create 250 temporary class instances in each of the existing WMI namespaces.
(By default this will take place in every namespace, although you can use the BaseNamespace parameter to specify a particular namespace.) If the test succeeds, that means no apparent problems were detected with the WMI repository; in turn, that suggests (but by no means guarantees) that WMI is not the cause of the problems you are experiencing. Operating systems other than Windows XP SP2 and Windows Server 2003 SP1 are unable to validate the consistency of the repository themselves; fortunately, performing a write operation across all the namespaces of the repository is a good indicator of the repository's state. If the test fails, you will likely have to rebuild the WMI repository. For more information on doing that, please see the article WMI Isn't Working.

Checking for AutoRecover MOF Files

The WMI repository is built using MOF (Managed Object Format) files – text files that contain definitions of namespaces, classes, properties, methods, and other WMI objects. If you delete the WMI repository, the repository will automatically be rebuilt for you; however, it will be rebuilt using only the so-called "AutoRecover" MOFs, the MOF files specifically listed in the registry. If a MOF file is not marked as AutoRecover, you will have to manually compile the file in order to restore it to the repository. If a MOF file is missing, it will not be included in the rebuild of the repository either. For more information on how to mark a MOF file as AutoRecover, see the WMI SDK.

By including the ShowMOFErrors parameter when starting the WMI Diagnosis Utility, your report will include, among other items, a list of MOF files (or at least those MOF files found in the %SystemRoot%\WBEM folder) that are not listed in the AutoRecover portion of the registry. That report will look something like this:

1) !! ERROR: MOF file(s) present in the WBEM folder not referenced in the Auto-Recovery list: 6 ERROR(S)!
0) ** - C:\WINDOWS\SYSTEM32\WBEM\CLIEGALIASES.MFL(*)
0) ** - C:\WINDOWS\SYSTEM32\WBEM\CLIEGALIASES.MOF(*)
0) ** - C:\WINDOWS\SYSTEM32\WBEM\FCONPROV.MFL(*)
0) ** - C:\WINDOWS\SYSTEM32\WBEM\FCONPROV.MOF(*)
0) ** - C:\WINDOWS\SYSTEM32\WBEM\NCPROV.MFL(*)
0) ** - C:\WINDOWS\SYSTEM32\WBEM\NCPROV.MOF(*)
0) ** => After fixing all other issues previously mentioned, if the WMI repository is rebuilt,
0) **    the listed MOF file(s) will not be recompiled, and therefore the definition they contain
0) **    will not be available in the WMI repository.
0) ** => You must manually recompile the MOF file(s) with the 'MOFCOMP.EXE <FileName.MOF>' command.
0) ** => If you want the MOF file(s) to be part of the Auto-Recovery list, make sure the
0) **    statement '#PRAGMA AUTORECOVER' is included.
0) ** Note: MOF file(s) marked with a (*) are not included BY DEFAULT in the auto-recovery process.

This output also includes:

- MOF files listed in the registry that cannot be found on the file system.
- Providers that do not have MOF files listed in the registry key or stored in the WBEM folder.

Associating Classes, Providers, and MOF Files

One very useful function included within the tool is the ability to associate each WMI class with its provider and with the MOF file for that provider. To generate this information, simply include the CorrelateClassAndProvider parameter when starting the tool:

    wmidiag.vbs CorrelateClassAndProvider

Upon completion the utility will create a spreadsheet (with a name similar to WMIDIAG-V1.10_XP__.CLI.SP1.32_TOMSERVO_2006.02.12_22.45.42-PROVIDERS.CSV) that contains an enormous amount of data about each class, each provider, and each MOF file. This is a very useful reference, and it's highly recommended that you use the WMI Diagnosis Utility to generate such a spreadsheet even if you are not currently experiencing any problems with the WMI service.
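One way to put the resulting spreadsheet to work is to query it programmatically. A hedged Python sketch (the column names here are assumptions for illustration, not the tool's documented schema):

```python
import csv
import io

# Hypothetical miniature of a -PROVIDERS.CSV file; real output has far
# more columns and rows.
SAMPLE = """Class,Provider,MOF File
Win32_Process,CIMWin32,C:\\WINDOWS\\SYSTEM32\\WBEM\\CIMWIN32.MOF
Win32_Service,CIMWin32,C:\\WINDOWS\\SYSTEM32\\WBEM\\CIMWIN32.MOF
"""

def classes_for_provider(csv_text: str, provider: str) -> list:
    # Return every class the spreadsheet attributes to the given provider.
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["Class"] for row in reader if row["Provider"] == provider]

found = classes_for_provider(SAMPLE, "CIMWin32")
```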
The spreadsheet helps document the WMI setup of a critical machine; this is especially important if you have applications that add WMI objects you do not know about. The download package for the tool includes spreadsheets for several standard operating-system installations.

Sending WMI Diagnosis Utility Results to Microsoft

The WMI Diagnosis Utility includes the ability to email your report files directly to the WMI team at Microsoft. To do so, simply specify the SMTPServer parameter, followed by the name of your SMTP server:

    wmidiag.vbs SMTPServer=<your SMTP server>

Why would you want to send your results to Microsoft? Two reasons. For one, the WMI team is gathering log files and using the aggregate information to look for trends and commonalities. This information is useful in improving both the WMI service and the utility itself. Second, it is possible, though not guaranteed, that someone from the WMI team will contact you about your problem.

Important. By including the SMTPServer parameter you agree to send your WMI Diagnosis Utility results to Microsoft and give Microsoft the right to use that information in its analyses.

You can also use the SMTPServer parameter to email results to someone other than Microsoft. To do that, simply include the SMTPTo parameter followed by the appropriate email address:

    wmidiag.vbs SMTPServer=<your SMTP server> SMTPTo=<email address>

If you add the SmtpWMIInvalidState parameter, this information will be sent only if the WMI service is found to be in a Warning or Error state:

    wmidiag.vbs SMTPServer=<your SMTP server> SmtpWMIInvalidState

Use the command wmidiag.vbs ? to see a list of all the available SMTP parameters. These are especially useful if you need to supply a user name and password when connecting to your SMTP server.

What If I Still Have Questions?

Although we cannot guarantee you will receive an answer, send questions, comments, and other feedback to WMIDiag@microsoft.com.

Appendix A: Tests Carried Out By the WMI Diagnosis Utility

- Verifies the date and time when the tool was previously run. A warning is displayed if the tool is run more than once a day.
- Collects WMI system information based on the OS version, environment, a list of WMI system files, and registry keys.
- Checks for the presence of WMI system files in the %Windir%\System32\WBEM folder and returns an error if a file is missing.
- Checks for the presence in %Windir%, %Windir%\System32 and %Windir%\System32\Wbem folders of known Trojans and viruses that use WMI system file names (and thus appear like real WMI system files).
- Checks for the presence of WMI repository files and returns an error if a file is missing.
- Checks for the consistency of the CIM repository (on Windows XP SP2 and Windows Server 2003 SP1 only). Under Windows XP SP2 and Windows Server 2003 SP1, Windows implements a RUNDLL32 command validating the repository consistency. The generated log file (SETUP.LOG for Windows XP SP2 and REPLOG.LOG for Windows Server 2003 SP1) is parsed by the tool and any consistency error is returned by the WMI Diagnosis Utility. The utility always performs this validation under Windows Server 2003 SP1. This feature is optional under Windows XP SP2 because it rebuilds a new repository automatically when inconsistencies are discovered.
- Verifies the presence of the MOF files specified in the auto-recover registry key and returns an error if a listed file is missing.
- Verifies the presence of MOF files in the WBEM folder and matches them with the auto-recover registry key. An error is returned if a MOF file is missing in the auto-recovery registry key.
- Checks the WMI DCOM application registry keys and returns an error if a registry key is missing. You can see DCOM configuration information by running DCOMCNFG at the command prompt.
- Checks for the presence of the CLSID, TypeLib or Interfaces registry keys for all WMI COM servers and returns an error if a registry key is missing or incorrectly set.
- Checks the state of the Windows Firewall service, including Windows Firewall status, RemoteAdmin status and UNSECAPP.EXE exception presence and status.
- Checks the WMI service registry setup and returns an error if specific registry keys are missing or are set up differently than expected.
- Checks for the presence of services that are known to be dependent on the WMI service and returns information if any are found.
- Checks the RPCSS (Remote Procedure Call) service status, for example, running or stopped. If the RPCSS service is stopped the utility starts it.
- Checks the WMI service status (for example, running or stopped) and starts it if stopped.
- Retrieves the WMI system settings such as __CIMOMIdentification, __ArbitratorConfiguration, __CacheControl, __ProviderHostQuotaConfiguration and writes their values into the log file. Any static instances coming from __ArbitratorConfiguration, __ProviderHostQuotaConfiguration, and __CacheControlStatistics are validated against the known defaults. Any modification against these static values will be reported in the log.
- Uses the WMI provider registration data (CLSID) to locate the corresponding MOF file from the known list (auto-recovery and WBEM folder MOF files). An error is returned if no corresponding MOF file is found. Some well-known providers, such as the SMS providers or the performance counter provider NT5_GENERICPERFPROVIDER_V1, do not store MOF files in the standard location (or might not even use MOF file registration). In these cases, the tool writes a warning in the log file but no error is returned in the report.
- Uses the WMI CLSID to check WMI DCOM registrations for coupled providers (in-process and out-of-process) and returns an error if no DCOM registration information is found. This means that there is a missing InProcServer32 or LocalServer32 registry key.
- Retrieves the list of WMI providers that are using a non-authorized LocalSystem hosting model or not specifying any hosting model.
  Exceptions are allowed for missing DCOM registrations for certain well-known providers that do not have a registration by design; for example, the MICROSOFT WMI TEMPLATE ASSOCIATION PROVIDER or the RSOP PLANNING MODE PROVIDER. In these cases, a warning is returned in the log but no error is returned in the report.
- Locates the corresponding DLL in the file system based on the DCOM registration data (InProcServer32 or LocalServer32 path). Returns an error if the file is missing.
- Determines whether an in-process or an out-of-process provider that has no DCOM registration is still used by some classes in the namespace. Returns an error if this is the case. If the provider with no DCOM registration is not in use then a dead registration warning is generated but no error is reported. However, the event is logged in the log file.
- Retrieves static information for each class in the namespace. The information is represented in MOF syntax. An error is returned if this operation fails.
- Retrieves instances of each static and dynamic class in the namespace if the RequestAllInstances parameter is specified. An error is returned if this operation fails. Note that it is possible to retrieve only the static instances with RequestAllInstances=Static. The same applies for the dynamic instances: RequestAllInstances=Dynamic.
- Checks for the presence of some well-known namespaces. If the examined system lacks a well-known namespace, an error will be reported.
- Checks the state of the ADAP process (responsible for creating performance counters) to determine any issues that might prevent the creation of the performance classes.
- Checks WMI features by running queries to retrieve typical classes and instances. Any failures will be returned in the log file.
- Retrieves all hardware resource information (processor, memory, DMA and IRQ settings, disk controller, network setup, operating system configuration, updates) to gather more information about the hardware and software used.
  Captures the controller type for the bootable partition (for example, IDE or SCSI) and returns it in the WMI report. The information inventory capture requires a fully functional WMI installation; any error during this inventory is reported in the log.
- Allows users to send the .LOG and .CSV files created by the tool via email.
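Returning to the SMTPServer/SMTPTo options described earlier: the mailing step amounts to attaching the .LOG and .CSV files to a message. A hedged Python sketch (addresses are placeholders; this builds a message but sends nothing):

```python
from email.message import EmailMessage

def build_report_mail(to_addr: str, attachments: dict) -> EmailMessage:
    # attachments maps file name -> raw bytes (e.g. the .LOG and .CSV files).
    msg = EmailMessage()
    msg["Subject"] = "WMI Diagnosis Utility results"
    msg["To"] = to_addr
    msg.set_content("WMIDiag report files attached.")
    for name, data in attachments.items():
        msg.add_attachment(data, maintype="application",
                           subtype="octet-stream", filename=name)
    return msg

mail = build_report_mail("someone@example.com",
                         {"WMIDIAG.LOG": b"WMI REPORT: BEGIN",
                          "WMIDIAG.CSV": b"stats"})
```

Actually delivering such a message would require an SMTP session against the server named in the SMTPServer parameter, which is exactly what the tool handles for you.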
https://technet.microsoft.com/en-us/library/ff404265.aspx
Vol. XI. No. 8 (Issue 473)

February 19, 2009

Value 50¢

Serving Citywide Political, Labor, Legal and School Communities of Philadelphia

"The good things we do must be made a part of the public record"

ALL SMILES are Bob Toporek, founder of TeamChildren which donates computers to schools and families in need, and State Sen. Shirley Kitchen, who arranged computers be delivered to St. Martin De Porres and St. Rose of Lima Catholic Schools.

TeamChildren Donates Computers For Kitchen

Eighteen desktop refurbished computers have been donated to two Catholic Parishes in North Philadelphia. Making that possible was a linkup between State Sen. Shirley Kitchen and TeamChildren, a nonprofit that collects computers, refurbishes them and then donates them. Eight went to St. Martin De Porres Catholic School, 2300 W. Lehigh Avenue, and 10 to St. Rose of Lima School, 1522 N. Wanamaker Street. TeamChildren received a grant of $1,600 from the Friends of St. Martin De Porres and another for $2,500 from "its friends" to fund those destined for St. Rose of Lima.

According to Bob Toporek, executive director and founder of TeamChildren, "Volunteers have distributed over 18,000 low-cost refurbished computers to families, schools, and organizations throughout the region, reaching more than 30,000 children." He added, "There currently exists a gap between people who enjoy the benefits of technology and those whose lives could be significantly improved by it. In most cases, lack of access to existing and new technologies prevents these individuals from truly taking advantage of com- (Cont. Page 2)

Pension Change Saves $$$

If the State Legislature cooperates by passing enabling legislation, the City of Philadelphia could realize $172 million in savings over the next (Cont. Page 2)

MAYOR MICHAEL NUTTER and City Controller Alan Butkovitz press General Assembly to approve change to City pension fund that can save $170 million in five years.

Is The Z.B.A. On Its Way Out? Zoning Battle Key To Future

10-STORY highrise hotel proposed for 40th & Pine Streets has stimulated debate on development planning in city.

by Tony West

Is the Zoning Board of Adjustment's longstanding role as the prime shaper of development on the way out? Will it be replaced by a new model with a new power center? A ZBA hearing today may give clues to the answer. A controversial hotel proposal in University City has helped spur Mayor Michael Nutter's administration to rethink the City's Byzantine method of regulating major building projects. (Cont. Page 2)

Can Arlen Do It Again?

by Jon Delano
H. John Heinz College
Carnegie Mellon University

Pennsylvania's senior US Senator is a unique piece of work. I have known him for nearly 30 years, and I have tremendous respect for the political skills of Arlen Specter, the longest-serving senator in Pennsylvania history. Just when people count him out, he emerges victorious, a phoenix among the political carrion. The 79-year-old Senator, first elected in 1980, has no intention of retiring next year, and barring some medical calamity Specter's name will be on the ballot for a sixth six-year term. First, though, the Republican must navigate the shoals of his own unhappy party.

Specter listens to his own drummer, which tends to march back and forth across the field rather than in any straight line. He is consistent in his unpredictability. So when he joined two Republican Senators from Maine to enact President Obama's economic recovery plan, it was not particularly surprising. But it has emboldened a number (Cont. Page 27)

ALAN GREENBERGER … seeks central role for City Planning Commission in approving development.
Judge Removes Slain Officer's Pictures In 35th

Municipal Judge Craig Washington placed his ability to judge properly in jeopardy when he ordered the removal of slain Police Officer John Pawlowski's photographs off the walls of his home district, the 35th Police Dist. at Broad & Champlost. When the District Captain and the Divisional Inspector refused his re- (Cont. Page 2)

See Salutatory Advertisements In This Issue

page 2 The Public Record • February 19, 2009

Hotel Fracas May Spur New Zoning Channels

(Cont. From Page 1) The project in question, the Campus Inn, would erect a 10-story extended-stay hotel operated by the Hilton chain on a site at 40th & Pine Streets owned by the University of Pennsylvania. The site is occupied by a dilapidated, abandoned nursing home that is generally agreed to be an eyesore. There the agreement stops.

The dispute has bedeviled several community groups, the Historical Commission and the City Planning Commission as well as the ZBA since June 2007. During this dizzying roundabout, the HC flip-flopped more than a fish on a pier; it was for the project until it was against the project, until it was for it and against it and for it.

History played a trick on Penn. The university had bought the property in Spruce Hill, chiefly in self-defense, after its original owner was shut down by the State in the wake of a scandal in 2002. Little did it know, years before the property had been listed as "historic". Half-buried behind lumpy additions lay an Italianate mansion that hearkened back to 150 years ago.
For this reason, HC approval for demolition was needed. The developer, Campus Inn Associates, asked the HC to delist the property, arguing the building's historic value had already been destroyed. CIA's first plan was to demolish the structure and erect a wide lowrise hotel in its place. Its business target was the increasing flow of visitors drawn to Penn's huge hospital complex nearby, many of whom need affordable temporary housing.

Delisting was denied. The HC has never delisted a building unless it has already been torn down. That meant the developer had to spend an unexpected $3 million to restore the old mansion to external period specifications – and keep an open streetscape from which you can see it all. Since this stopped the developer from building out, the HC "agreed in concept" CIA could build up instead. So a highrise was born … and the trouble began.

Fortieth Street is where Penn campus ends. Along its eastern side, highrise residences have long dominated the landscape. But to the west lies a large community zoned for 4-story housing, and within its boundary the Campus Inn would be located. Fierce opposition sprang up, splitting civic associations and neighbors, even Penn faculty.

Magali Larson of the Woodland Terrace Association, a former Penn and Temple professor, sums up their concerns: "The massive scale of this building is on a lot that is too small. Our residential neighborhood would be ruined by noise, light and parking congestion. The design is out of character. And Penn's consultative process has been manipulative and undemocratic." In a neighborhood where Penn's expansion has often roused protests in the past, Larson's concerns ring true for many, 600 of whom have signed petitions against this project.

CIA is not without strong community backing, though.
Tom Lussenhop, one of its partners, a Spruce Hill resident who was once managing director at Penn's division of facilities services & real estate, has the backing of area hospitals and businesses, as well as the owners of most nearby properties. With 300 signatories of his own and support in local polls, Lussenhop believes many neighbors see a case for growth on this business corridor. "This location right next to the Green Line trolley portal is ideal for denser, transit-based development," he argues.

Lussenhop states the project has received a $2 million Pennsylvania Builds loan and a $9.5 million Philadelphia Industrial Development Corp. new-markets tax credit. The Campus Inn has promised to employ unionized permanent employees, winning an endorsement from UNITE HERE. "That project would also employ about 270 construction workers," estimates Pat Gillespie of the Building Trades Council. "We've got a lot of guys out of work right now."

The Campus Inn project needed a pass from three separate City regulatory agencies – not an uncommon experience for developers here. In the past there was little coordination between the three as they barged off in different policy directions. Under the last administration, critics depicted the ZBA as rubbery, the HC as severe and the CPC as impotent. It may be fairer to say the scope and process of the first two bodies were often opaque, their precedents unreliable and their decisions unpredictable.

Nutter has made clear he wants Philadelphia to become friendlier to development. "If we continue to reform and break down barriers, if we take advantage of new investments, and if we leverage our incredible assets … Philadelphia will be one of the great global cities of the 21st century," he told the Chamber of Commerce last week.

In August 2008, Nutter named as chair of the CPC Alan Greenberger. An owner of MGA Partners, Greenberger is also an architecture professor at Penn and Drexel.
His mandate from the Mayor was reform – including a comprehensive overhaul of the City's zoning code. Greenberger also became, ex officio, chair of the Zoning Code Commission, which voters had called into being in the same May 2007 primary election that Nutter won in a five-way race. In January 2008 Nutter had replaced the entire ZBA, which is now headed by Susan Jaffe. The stage was set for a new broom.

Philadelphia's zoning has not been revised for 45 years. As a result, two-thirds of all large developments go through the ZBA variance process. Greenberger says that must change. "Zoning is a coarse tool that can never account for everything," he asserts. "Piecemeal zoning overlays and variances have tended to come through the political process. That's not how it's handled in other cities."

The Nutter administration aims to wind down the ZBA's role as a bazaar for design negotiations between developers, community groups and politicians. It would become a less active body, focused chiefly on economic-hardship adjustments. In its place, Greenberger has proposed a Design Review Committee under the CPC. Nutter's reforms will try to shift zoning away from land-use governance toward a "form-based" model, which sets broad standards of appearance while allowing the DRC to work out the details.

Greenberger's goal is to "craft zoning regulations that give developers more consistency and predictability." Of Philadelphia's cumbersome traditional regulatory route, he says, "These kinds of things are disincentives to money that has a choice of going elsewhere." Zoning prescriptions should not be seen as commandments graven for eternity, Greenberger argues. "Cities evolve over time," he says, "and zoning should be flexible enough to accommodate that."

Greenberger moved swiftly to consolidate the power of the CPC. Campus Inn cleared its last hurdle before that body in September 2008, before he even took office.
In November 2008, the HC presented him with an early challenge when, under pressure from project foes, it revoked its approval in concept of the Campus Inn – a rare move in city planning – based on its clash in scale with neighboring buildings. At its December meeting, Greenberger appeared, bluntly to tell the HC it lacked that authority. Its role, he asserted, was "to evaluate the proposal relative to the historic resources of the site. In this case, the only historic resource that exists at this moment is the mansion that is proposed to be repaired and brought back to a useful life." The HC swiftly caved in.

After months of observation, Councilwoman Jannie Blackwell announced Tuesday she "cannot support this zoning application at this time." She is troubled by the way historic designation has complicated this development. Appealing "for compromise on all sides," she requested "all sides recommit to a solution we all can embrace." After this hearing, the ZBA may confirm, deny or continue the debate. No matter how this case is decided, the outcome may push the City to consolidate developmental regulations under Mayoral leadership, and stake out core principles for the zoning reform that is to come. Other politicians will surely seek to have their say.

(Cont. From Page 1) five years.

Mayor Michael Nutter, City Controller Alan Butkovitz and the Pension Board have revealed a plan to reduce the City's pension obligations, which would produce the savings. The pension-plan proposal includes lowering the assumed return on pension investments from 8.75% to 8.25%, increasing the period over which the unfunded liability is paid off from 20 to 40 years, and spreading out the fund's earnings and losses from five to 10 years. The pension proposal would decrease the financial burden on the City and increase its ability to fund existing liabilities in the long term. But two components of the plan will require legislative action from the Pennsylvania Legislature.
"If we are able to work with our partners in Harrisburg to pass the necessary legislation, the positive financial impact on our five-year plan would be significant," said Nutter.

Alan Butkovitz presented this idea to the Mayor in an effort to help bridge the $1 billion budget gap confronting the City. He said, "The Mayor is to be commended for his courage in confronting this problem now and working with us to move quickly and put this plan on the fast track. This pension plan will not only save the City $170 million over the next five years, it will also avoid a looming pension-fund crisis that was likely to occur in three years' time if action was not taken now."

Both contend lowering the earnings assumption from 8.75% to 8.25% is fiscally responsible, making the assumed return more reasonable and aligning the assumptions with other jurisdictions. The Pennsylvania Intergovernmental Cooperation Authority has recommended the Pension Board take this action.

Butkovitz Pension Plan Dollar Winner

Judge Has A Problem

(Cont. From Page 1) quest, the judge brazenly overturned two of Pawlowski's pictures. FOP President John J. McNesby is asking Municipal Court President Marsha Neifield to remove Judge Craig Washington from holding hearings in the 35th. McNesby added, "It's bad enough Police officers are being murdered by violent repeat offenders released by the judiciary, we now have to face a direct threat from the bench."

SISTER Nancy Fitzgerald, principal at St. Martin De Porres Catholic School, and 6th-grade teacher Sandra Streeter demonstrate donated computer to State Sen. Shirley Kitchen.

FOP chief John McNesby … judge's sense in doubt

Kitchen Gets Computers

(Cont. From Page 1) puters and the Internet. Regardless of the reasons for this gap, there is a moral imperative to ensure everyone has equal access to information technology." TeamChildren, he said, takes older, slower computers and refurbishes them, bringing them up to date.
Computers come from many sources, including Sanofi-Aventis, Genex Services, Synthes, IKEA and IBM, as well as individuals and local schools and colleges.

Injured At Work! "Representing injured workers in Pa. for over 30 years."

The Public Record • February 19, 2009

Brady Gives Voters Key: recovery.gov

Congressman Robert A. Brady, while lauding the passage of the stimulus package, promised this week to continue to ensure Americans everywhere will be able to track the investments in the bill. "Congress has delivered on President Obama's historic plan to start to get the American economy back on track," said Brady. Brady added, "The legislation also has unprecedented accountability and transparency measures to help ensure taxpayer dollars are spent wisely and effectively — including no earmarked projects and a new recovery.gov website allowing Americans to track the investments."

State Rep. Dennis O'Brien, 169th District, 9811 Academy Rd., Phila. PA 19114, 215-632-5150

FRANCIS McGORRY, president and CEO of Phila. Coca Cola Bottling Co.; Mayor Michael Nutter and wife Lisa; along with Tracee Hunt, VP of Phila. Coca Cola, joined in opening celebration of American I Am exhibit at National Constitution Center. Coca Cola was a local sponsor.

EXHIBITION OPENED with special ribbon-cutting ceremony featuring exhibition presenter Tavis Smiley; National Constitution Center President and CEO Joseph M. Torsella; Tyrone Ried of Wal-Mart Stores, Inc.; 8th-grade class of Universal Charter School, who were first visitors to see exhibition; and Francis McGorry, president of Phila. Coca Cola Bottling Co. Exhibition opened prior to Birthday of Martin Luther King, Jr. and served as prelude to Inauguration Day. It will run through May 3.

"America is facing an economic crisis greater than any since the Great Depression, with a staggering 3.6 million
American jobs lost in the last 13 months and an unemployment rate here in Pennsylvania that has climbed to 6.7%," said Brady. "As the President works to address the mortgage foreclosure crisis and get credit flowing in our financial system, this plan will begin to turn our economy around and bring jobs, relief and hope for Pennsylvanians."

State Rep. Frank Oliver, 195th District, 2839 W. Girard Ave., Phila. PA 19130, 215-684-3738

State Representative William Keller, 184th District, 1531 S. 2nd Street, 215-271-9190

State Rep. Ronald G. Waters, 191st Leg. District, 6027 Ludlow Street, Unit A

The Public Record

State Sen. Shirley M. Kitchen, 3rd Sen. District, 1701 W. Lehigh Ave. Ste 104 • Philadelphia, PA 19132, 215-227-6161 • 215-748-6712

Robert C. Donatucci, 185th District, 1809 Oregon Ave., Phila., PA 19145, 215-468-1515

Individuals can take steps to prevent and recover from identity theft through an updated state website called Identity Theft Action Plan, w.identitytheftactionplan.com. The site describes how identity theft happens and offers prevention tips and steps to take in the event of identity theft. It also offers a free downloadable "Action Plan" brochure. The website is sponsored by the Pennsylvania Commission on Crime and Delinquency. In addition to the website, feel free to visit my office for a free copy of the brochure, pick up a copy at local PennDOT Driver License Centers and state police stations, or call the Pennsylvania Commission on Crime and Delinquency at 717-705-0888.

Rep. Angel Cruz, District Office, 2749 N. 5th St. • 215-291-5643. Staffed by Joe Evangelista and Debbie Toro. Ready to serve you.

Parkwood Shopping Center, 12361 Academy Road, Phila., PA 19154, 215-281-2539

Councilman Wm. Greenlee, Room 580 City Hall, P. 215-686-3446/7, F. 215-686-1927

State Senator Leanna M. Washington, District Office, 1555-D Wadsworth Ave., Philadelphia, PA 19150, (215) 242-0472, Fax: (215) 753-4538, WEB SITE

Senator Tina Tartaglione, 2nd Dist.
8016 Bustleton Avenue, Philadelphia PA 19152, 215-695-1020, Open Mon.-Fri. 9:00 AM-5 PM • 127 W. Susquehanna Ave. • 1059-61-63 Bridge St. • 215-291-4653 • 215-533-0440

Creative Director & Editorial Cartoonist: R. William Taylor. Photographers: Donald Terry, Donna DiPaolo.

AAUF Seeks New Board Members

AMONG audience were Ducky Birts, Glenn Ellis and Traffic Court President Judge Thomasine Tynes.

Many Things To See In Black History Month

Here are just a few of the month's highlights:

Renewing Traditions - The latest fabric and needlework creations made by the Friendly Quilters of Bucks County, an African-American quilting group, are on view alongside related African and African American cultural artifacts. The Mercer Museum show features 30 recently completed quilts. Through Mar. 1. 84 S. Pine Street, Doylestown, Pa. (215) 345-0210, mercermuseum.org

IYARE: Splendor and Tension In Benin's Palace Theatre - With nearly 100 objects from the University of Pennsylvania Museum of Archaeology & Anthropology's collection of cast bronzes, carved ivories and wooden artifacts from the 16th to 21st centuries, IYARE (which translated means "May you come and go safely") illuminates the many activities (cultural, religious, political and intensely social) that make up the experience of palace life for the Edo people of Benin. Through Mar. 1.

Senator Larry Farnese Salutes Black History Month. District Office: 215-560-1313

Soul Soldiers: African Americans and the Vietnam Era - This multimedia exhibition at The African American Museum in Philadelphia examines the impact of the Vietnam War on African American life and culture. It explores Black Power, the draft, tours of duty, women in Vietnam, family life and more. Through Mar. 8.
Black Hands, Blue Seas: The Untold Maritime Stories of African Americans - Expanding beyond the experience of captive Africans being shipped across the waters into slavery, Black Hands, Blue Seas at the Independence Seaport Museum highlights the seafaring heritage of African Americans, from inventors to naval heroes and explorers. Visitors can discover centuries-old West African fishing, diving and shipbuilding practices; learn about Philadelphia sailmaker and activist James Forten; and view artifacts heralding the role of African Americans in wartime. Through Mar. 22. 211 S. Columbus Boulevard, (215) 413-8655, phillyseaport.org

For over 26 years the African American United Fund has been the steward of resources it has consistently shared with hundreds of organizations that increase educational opportunities, increase access to social and human services, promote cultural development, raise awareness about health and wellness issues, provide youth leadership training, stimulate voter education and increase awareness of criminal-justice issues. Last year the Fund provided services to over 167,000 people.

The value of AAUF is in its ability to quickly respond to emerging issues in this community. Its current motto, "A Call to Action", represents the enduring spirit of the forbears to overcome barriers to freedom and equality under extraordinarily harsh conditions. You may join its endeavors by volunteering, participating in its programs and donating your time now to the organization.

AAUF is seeking new board members to become stewards of the organization and to lend their expertise in the areas of human resources, finance and fundraising. The Board is responsible for the management, supervision, evaluation and planning of the organization and its assets. The Board also determines policy and supervises all business affairs. Contact AAUF at (215) 236-2100 or alrbuffer@aol.com. PNC, PECO, Comcast and other sponsors were represented by their leadership.
At The Trib's Most 'Influential African Americans'

PUBLISHER Robert Bogle addresses over 500 attending announcement of Phila. Tribune's selection of Influential African Americans for 2009 at Convention Hall. They are Dr. Arlene Ackerman, A. Bruce Crawley, Sandra Dungee-Glenn, State Rep. Dwight Evans, Congressman Chaka Fattah, Carl R. Greene, Sharmaine Matlock-Turner, J. Whyatt Mondesire, Mayor Michael A. Nutter and Ahmeenah Young.

Gov On The 'ATAC'

GOV. ED RENDELL announces $3.5 million funding for Avenging the Ancestors Coalition. Rendell requested grant from Delaware River Port Authority to complete President's House on Independence Mall, where nation's first leader George Washington kept slaves. Money will renovate House, located at 6th & Market Streets. Photo by Donald Terry

ATAC members look on as DRPA Chairman John Estey and former Mayor John F. Street listen as Darla Sidles, Deputy Superintendent of the Historical Park, tells the history of the President's House on Independence Mall. Photo by Donald Terry

Christopher Columbus Charter School, 916 Christian Street • 1242 So. 13th St. • Phila., PA 19147, 215-975-7400 • 215-389-6000. Christopher Columbus Charter School salutes our Philadelphia African American Leaders.

Straighten Out Zoning

Philadelphia's developmental policies have grown over time into a mare's nest of unpredictable decision-makers. Too often, the system we now have leaves neither community members nor developers happy. Nevertheless, this traditional state of affairs is affordable when times are good. When times are tough, as they are now, it is time for change. One of Philadelphia's greatest potential strengths in this downturn is its central location and its wealth of cheap real estate. As a city, we should figure out how to out-compete other cities by attracting developers.
If our zoning code is noncompetitive, and experts say it is, then the sooner we fix it, the better. Exactly how it's done is necessarily a political process that must elicit cooperative input from all sides.

Feb. 19- Friends of Marian Tasco honor Council Majority Leader at Penna. Academy of Fine Arts, 128 N. Broad St., 5:30-7:30 p.m. Tickets range $250-$2,500. For info call (215) 843-8482.

Feb. 19- 1st Ward Democratic Committee meets at Downey's, Front & South Sts., 6-9 p.m. Tickets $125 ($35 for committee persons). Call Joe Hoffman (215) 833-1943.

Feb. 19- Fundraiser for Dawn Tancredi at George Bochetto's office, 1524 Locust St., 6-8 p.m. For info (215) 735-3900.

Feb. 20- Phila. Chinatown Development Corp. marks Chinese New Year celebration, Year of Ox, at Ocean City Restaurant, 234 N. 9th St., 6 p.m. For info John Chin (215) 922-2156.

Feb. 21- Benefit for Dan McCaffery at the Bogside Pub, 7540 Castor Avenue, Philadelphia, 8-12 p.m. Tickets $30. Call (215) 690-3950.

Feb. 22- Committee to Elect Judge Pat Dugan Benefit at Liberties Restaurant & Bar, 705 N. 2nd St., 2-6 p.m. Tickets $30. Call Brian (215) 779-1330.

Feb. 22- Newly formed Democrats Of Oak Lane Team hosts candidates at 6521 N. Broad St., across from Oak Lane Diner, starting 3 p.m. For info call Marion Wimbush (215) 224-9410.

Feb. 23- Malik Aziz' Nat'l Exhoodus Council unveils Proposal For Peace & Jobs at Black & Nobel Conference Ctr., 1411 W. Erie Ave., 1:30 p.m.

Feb. 24- Fat Tuesday Mardi Gras Fundraiser for Judicial candidate Ted Vigilante, U.R.C. Club, 3156 Frankford Ave., 6:30-9:30 p.m. Traditional N'awlins cuisine, open bar (till 9:30) and entertainment. Park in lot across from club. Tickets $30. Call (215) 743-2000.

Feb. 25- Friends Of Jim Roebuck throw State Rep a Birthday Party at Warmdaddy's, 1400 S. Columbus Blvd., 5:30-7:30 p.m. Donation levels $125-250-500. Please respond by Feb. 19 to Friends of Jim Roebuck, 435 S. 46th St., Phila., PA 19143.

Feb. 26- Logan Community Development Corp.
holds 1st NAC meeting at Davis Memorial Ch., 4500 N. 10th St., 6:30-8:30 p.m.

Our Opinion ... We Salute Today's Black History Makers

We could easily fill up this entire editorial page with pictures and the names of this city's outstanding African Americans. Since we are a paper prone to carrying the activities of African Americans in politics and labor, we have placed the pictures of those who are in the forefront of making history daily from a Black perspective. On Page five of this issue, we carry the names of those selected by the Tribune as Outstanding African Americans for the Year 2009. If we had the space, more names would be added. There is such a wealth of great concern, outstanding sensitivity and deep commitment emitting from the entire African American population, for which we owe a great deal of gratitude. We look forward to more milestones, more achievements, as their years progress.

by Michael A. Cibik, Esq., American Bankruptcy Board Certified

Question: Can past IRS taxes ever be discharged (wiped out)?

Answer: This is a complicated area of the bankruptcy law and an attorney should be consulted. You can discharge (wipe out) debts for Federal income taxes in Chapter 7 bankruptcy only if all of these five conditions are met:

1) The IRS has not recorded a tax lien against your property. (If all other conditions are met, the taxes may be discharged, but even after your bankruptcy, the lien remains against all property you own, effectively giving the IRS a way to collect.)

2) You didn't file a fraudulent return or try to evade paying taxes.

3) The liability is for a tax return (not a Substitute for Return) actually filed at least two years before you file for bankruptcy.

4) The tax return was due at least three years ago.
5) The taxes were assessed (you received a notice of assessment of Federal taxes from the IRS) at least 240 days (eight months) before you file for bankruptcy. (11 U.S.C. §§ 523(a)(1) and (7).)

Next week's question: Are there other debts that are also non-dischargeable under the New Bankruptcy Law?

MEDICAL RECORD

Cardiac arrest has dreaded complications. That's because it's critical no time be wasted in assisting an individual who has collapsed suddenly with loss of blood pressure and heart rate. The chances of an individual surviving a cardiac arrest depend strongly on receiving CPR (cardiopulmonary resuscitation) within five minutes. If an individual is resuscitated within those five minutes, the prognosis is excellent for a recovery. That is why casinos, airports, and other public facilities have defibrillators on hand.

In the past, lay individuals were apprehensive about administering CPR using the old mouth-to-mouth technique. That method is no longer recommended. What is best is the ability to administer chest compressions. Any individual can do this, but they need to perform the appropriate number and type. Witnessing a patient who collapses, one who is able to administer chest compressions effectively can be life-saving until trained medical personnel arrive. You could be that lifesaver for a family member. Reach out to your local hospital or nearby Red Cross to find out where you can take basic CPR lessons.

Working To End Nurse Shortage

Citing reports showing encouraging news regarding the state's nursing shortage, State Sen. Christine M. Tartaglione said she has reintroduced legislation intended to continue that success. "The two top concerns that I'm hearing from people right now are jobs and health care," Tartaglione said.
"This bill will ensure the continuation of a successful new program that should help launch great careers and ease the shortage of health-care workers."

SB 174 would ensure by law the future of the Pennsylvania Center for Health Careers, a program launched by the State's Dept. of Labor & Industry in 2004. The Department reported projections of nurse shortages have eased by as much as 10% since 2005. But the future remains uncertain, and more progress must be made, Tartaglione said. "Even in this most difficult budget year, we must continue investments in job creation and health care," Tartaglione said. "If we do not, we will pay more later and put lives at risk."

According to the Department, Pennsylvania will need 146,000 registered nurses in 2010, but would likely fall 5-10% short of that. The Center for Health Careers is advised by a diverse group of professionals representing the health-care industry, labor and government. The center keeps tabs on the health job market, develops strategies for nurse retention, and uses State investments to help nursing schools expand. The center has also determined aging populations will cause the need for licensed practical nurses to nearly double by 2010, leading to a shortage of as many as 8,000 LPNs.

CPR: It's No Longer Mouth To Mouth

by Dr. Nicholas L. DePace, FACC, FCCP, Associate Chief of Cardiology, Drexel University/Hahnemann Hospital, Prof. of Clinical Medicine

Joe Hoffman Sr. Hosts His Famous Gala, Attended By All Candidates

Downey's Restaurant & Bar, Front & South Streets, Philadelphia. Thursday, February 19th, 2009, from 6:00 till 9:00 p.m. A Great Chance to Meet and Greet Future Judges And Committee People. Super Buffet - Bar - Entertainment. Tickets: $125 per person, Committee People: $35. Call Joe Hoffman, Sr. 215-833-1943.

Well fellow Trunksters, it's official: The liberal "spendulus" bill was signed this week and Wall Street has responded with a resounding thud.
A 300-point drop on the Dow to add to the 2,000-point drop since PRESIDENT BARACK OBAMA was elected. Do you think the media would be screaming about such a horrible performance if SEN. JOHN McCAIN had been elected? You all know the answer to that question, but nary a word from the mainstream press. Why? Because they sold you on this guy and now they are fully invested; journalism – shmernalism! One other fact: more Democrats in Congress voted against this "porkulous" scam than Republicans voted for it. And who voted for it, you ask? None other than our own SEN. ARLEN SPECTER.

Locally, it looks like we have a DA candidate, despite some sage advice from this cellulite-ridden rock star. Former Democratic Sheriff candidate MIKE UNTERMEYER has thrown his hat in the ring and from all accounts is a good guy who is willing to put forth a strong effort. This puts our Controller candidate AL SCHMIDT in a tough spot. Traditionally, the Controller spot plays second fiddle to the DA candidate, who would steal precious time, money and resources from our man Al. Toss on top of this the fact Philadelphia Forward Executive Director BRETT MANDEL has decided to run against incumbent "yes man" ALAN BUTKOVITZ on the Donkey side of the house, and Schmidt finds himself in a much different position from just a few weeks ago. Not for nuttin', but this two-ton terminator is throwing all his peanuts behind "Honest Al." Is City Committee going to "rubber-stamp" Mr. U just to fill a slot like always? Nothing personal, but Mr. Untermeyer has to prove his conservative bona fides ASAP and show he's not (Cont. Page 23)

Last Friday night, the City of Philadelphia lost another Police Officer, the fifth one killed since 2006. John Pawlowski was killed while trying to stop a gypsy-cab robbery in Logan. The dude that shot him, Rasheed Scruggs, is in Albert Einstein Medical Center with gunshot wounds. You see, he may have killed a cop, but he ended up with some lead in his own behind.
Pawlowski was a five-year veteran, had asked to be transferred to the 35th Police Dist. (the late Officer Chuck Cassidy was also based there) and was about to become a father for the first time when he died. He had on a bulletproof vest, but the bullet Scruggs hit him with managed to nip a space just outside of the vest.

When Police Commissioner Charles Ramsey talked to reporters on Saturday, he said of Scruggs, "These guys should not be among us, period. Lock 'em up, throw away the key, build another prison, don't let 'em out. There are some people who are just not salvageable. Period. And he's one of them." But he didn't stop there. When asked how many times Scruggs had been hit by bullets during his altercation with the police, Ramsey said, "He wasn't hit enough. That's the only thing that matters."

Now I understand he was speaking out of anger when he said that. If I were Charles Ramsey, I'd be more than a little angry too. Twenty-five years old is entirely too young to be dead. It's especially too young for someone who is serving his city to be gunned down. However, when you go around saying that a suspect "wasn't hit enough" when it comes to his being shot, you con- (Cont. Page 23)

Yo! Here we go again with things you learn about when you live in Georgia.

1. A possum is a flat animal that sleeps in the middle of the road.
2. There are 5,000 types of snakes and 4,997 of them live in Georgia.
3. There are 10,000 types of spiders; all 10,000 of them live in Georgia, plus a couple no one's seen before.
4. If it grows, it'll stick ya. If it crawls, it'll bite cha.
5. "Onced" and "Twiced" are words.
6. It is not a shopping cart, it, barbecue sauce, Tabasco and ketchup.
23. The local papers cover national and international news on one page, but require six pages for local high-school sports, motor sports, and gossip.
24. You think the first day of deer season is a national holiday.
25. You find 100 degrees Fahrenheit "a bit warm".
26.
You know all four seasons: almost summer, summer, still summer, and Christmas.
27. Going to Wal-Mart is a favorite pastime known as "goin' Wal-Martin'" or "off to Wally World."
28. You describe the first cool snap (below 70 degrees) as good chicken-stew weather.
29. Fried catfish is the other white meat.
30. We don't need no dang driver's education. If our mama says we can drive, we can drive, dagnabbit.

Now you may not believe these sayings are genuine Georgia sayings. They are valid – I've been there.

Snooper Scooper: Hats off to those professionals of The 1st Judicial District's WARRANT UNIT. Let me warn all you 'deadbeats' who owe enormous sums of monies on ALL your traffic tickets: They're coming to get you. YES, I can tell ALL OF YOU these 'pros' don't play, because they do mean business. I can tell all of you TRAFFIC DEADBEATS you will PAY, one way or another! What is really scary is these Warrant Officers can come any time, any day, and they may be at your front door right now. I suggest you let them in, because you'll soon find out "THEY DON'T SELL WOLF TICKETS". Call TRAFFIC COURT!

Snooper's Sports Extra: Here we go again with baseball and its STEROID PROBLEMS. Yes, now we have ALEX THE FRAUD, aka Alex Rodriguez of The New York Yankees. I can't believe these players are so stupid as to ruin their once-great careers. Never mind a possible HALL OF FAME induction. Forget about it Alex, it will never happen. SHAME.

Snooper Sighting: The gentleman we all saw recently on FOX NEWS, regarding a criminal who was shot and killed, was MR. PETER DACKO, a former Court Crier for the Municipal Court. He is now officially RETIRED, and he was quite upset, especially since it happened in his neighborhood. He stated, "This just doesn't happen here where I live". IT DID! Hey Pete, it's happening everywhere. We live in bad times.

Snooper's UPDATE: Well, apparently The Mayor's budget problems are for real.
This City is in deep trouble, and we must all 'pitch in' and do whatever it takes to 'bail' us out. The Mayor has already stated, "There will be NO FEDERAL BAILOUT for Philadelphia." He also stated, "There (Cont. Page 23)

NICHOLAS STAMPONE, a legend in Democratic Party politics, a former ward leader, State Senator and Sergeant at Arms of City Council, has passed on at age 82 after a long illness. He distinguished himself as the leader of the 41st Ward. Among his many accomplishments was accumulating a collection of political memorabilia, buttons, which is reputed to be one of the finest in the Commonwealth. His recreation room is covered with buttons from political campaigns of bygone years to the present.

He is survived by his wife of many years DELORES, his daughter KATHY and his son-in-law TONY RADWANSKI, who is a former chief of staff in the City Controller's Office under JONATHAN SAIDEL. Tony has a fine singing voice in the style of Frank Sinatra. He and his father-in-law, who was a gifted harmonica player, performed at many ward affairs and in neighborhood bars. He was succeeded as ward leader by his good friend MIKE McGEEHAN, who is a State Representative.

His wake at the Galzerano Funeral Home, next to the rectory of the Church of the Blessed Virgin Mary, was extremely well attended. The crowd extended outside the parlor and around the building. Among those in attendance were the Chairman of the Party BOB BRADY; STATE SEN. MIKE STACK; Supreme Court JUSTICE SEAMUS McCAFFERY; COUNCIL PRESIDENT ANNA VERNA; COUNCILWOMAN JOAN KRAJEWSKI; and Municipal Court JUDGE FAY STACK and her husband MIKE.

Common Pleas Court JUDGE EUGENE MAIER and his charming wife LANA celebrated President's Day with a buffet dinner at their townhouse in the Fairmount part of Philadelphia. It was a night of laughter and good conversation.
Among those in attendance were AL DRAGON and his wife BARBARA; JERRY SCHANIE and his wife BETTY; Senior Common Pleas JUDGE RICARDO JACKSON; and well-known Philadelphia printer GENE JACOBS and his wife PHYLLIS.

This week marks the start of the first week for circulating nominating petitions for the various court vacancies, including the District Attorney's office. Among those running for the office of district attorney are SETH WILLIAMS, DAN McCAFFERY, MICHAEL TURNER, DAN McELHATTON and BRIAN GRADY. There is just, roughly speaking, one week left in February and three weeks to go in March. Soon the first day of spring will be upon us; nevertheless, be careful outdoors for black ice.

TO: UNKNOWN HEIRS OF LINDA F. SIMPKINS, DECEASED, MORTGAGOR AND REAL OWNER, DEFENDANT, whose last known address is 7808 Woolston Avenue, Philadelphia, PA 19150. THIS FIRM IS A DEBT COLLECTOR AND WE ARE ATTEMPTING TO COLLECT A DEBT OWED TO OUR CLIENT. ANY INFORMATION OBTAINED FROM YOU WILL BE USED FOR THE PURPOSE OF COLLECTING THE DEBT. You are hereby notified that Plaintiff PERSONAL INVESTMENT, INC., has filed a Mortgage Foreclosure Complaint endorsed with a notice to defend against you in the Court of Common Pleas of Philadelphia County, Pennsylvania, docketed to No. 081202357, wherein Plaintiff seeks to foreclose on the mortgage secured on your property located at 7808 Woolston Avenue, Philadelphia, PA 19150, whereupon your property will be sold by the Sheriff of Philadelphia County.

TO: UNKNOWN HEIRS OF ANGELO ROSSILLIO, MORTGAGOR AND REAL OWNER, DEFENDANT, whose last known address is 204 North 65th Street, Philadelphia, PA 19008. THIS FIRM IS A DEBT COLLECTOR AND WE ARE ATTEMPTING TO COLLECT A DEBT OWED TO OUR CLIENT. ANY INFORMATION OBTAINED FROM YOU WILL BE USED FOR THE PURPOSE OF COLLECTING THE DEBT.
You are hereby notified that Plaintiff COUNTRYWIDE HOME LOANS, INC., has filed a Mortgage Foreclosure Complaint endorsed with a notice to defend against you in the Court of Common Pleas of Philadelphia County, Pennsylvania, docketed to No. 081003353, wherein Plaintiff seeks to foreclose on the mortgage secured on your property located at 204 North 65th Street, Philadelphia, PA 19139, whereupon your property will be sold by the Sheriff of Philadelphia County.

IN THE COURT OF COMMON PLEAS, PHILADELPHIA COUNTY, CIVIL ACTION - LAW, Term No. 081202357. NOTICE OF ACTION IN MORTGAGE FORECLOSURE. PERSONAL INVESTMENT, INC., Plaintiff, vs. UNKNOWN HEIRS OF LINDA F. SIMPKINS, DECEASED, Mortgagor and Real Owner, Defendant.

IN THE COURT OF COMMON PLEAS, PHILADELPHIA COUNTY, CIVIL ACTION - LAW, Term No. 081003353. NOTICE OF ACTION IN MORTGAGE FORECLOSURE. COUNTRYWIDE HOME LOANS, INC., Plaintiff, vs. UNKNOWN HEIRS OF ANGELO ROSSILLIO, DONNA ROSSILLIO and JOSEPH ROSSILLIO, Mortgagor(s) and Real Owner(s), Defendant(s).

IN THE COURT OF COMMON PLEAS, PHILADELPHIA COUNTY, CIVIL ACTION - LAW, July Term 2003 No. 003150. NOTICE OF ACTION IN MORTGAGE FORECLOSURE. FEDERAL NATIONAL MORTGAGE ASSOCIATION, Plaintiff, vs. JOHN DOE and SUNG KIM, Mortgagors and Real Owners, Defendants.

TO: JOHN DOE, MORTGAGOR AND REAL OWNER, DEFENDANT, whose last known address is 1624 South 7th Street, Philadelphia, PA 19148. THIS FIRM IS A DEBT COLLECTOR AND WE ARE ATTEMPTING TO COLLECT A DEBT OWED TO OUR CLIENT. ANY INFORMATION OBTAINED FROM YOU WILL BE USED FOR THE PURPOSE OF COLLECTING THE DEBT. You are hereby notified that Plaintiff FEDERAL NATIONAL MORTGAGE ASSOCIATION has filed a Mortgage Foreclosure Complaint endorsed with a notice to defend against you in the Court of Common Pleas of Philadelphia County, Pennsylvania, docketed July Term 2003 No.
003150, wherein Plaintiff seeks to foreclose on the mortgage secured on your property located at 1624 S. 7th Street, Philadelphia, PA 19148.

Primary Dates

It's primary time again. If you didn't know, Feb. 17 was the first day to circulate and file nomination petitions, with the last day being Mar. 10. The earliest candidates can circulate and file nomination papers is Mar. 11, while the last day for a candidate to withdraw those nominating petitions is Mar. 25. The last day to register before the primary is Apr. 20. Also, the last day to apply for a civilian absentee ballot is May 12. The last day County Boards of Elections can receive voted civilian absentee ballots is May 15. Election Day is May 19, 2009.

WITH JAMIE FOXX, Oscar-winning actor, waiting to speak at City Hall press conference, Sharon Pinkenson confers with Mayor Michael Nutter. Pinkenson is director of the Greater Philadelphia Film Office. Photo by Bonnie Squires

Crime Fighters

THIS CITIZENS ACTION group, including Gregg Bucceroni and C. B. Kimmins, has attended prayer vigils on behalf of slain Philadelphia Police Officer John Pawlowski. Active in daily night street vigils in high-crime drug areas, they promote community-police harmony and better understanding in partnership with each other.

The Pennsylvania Legislative Black Caucus hosted a Black History Month celebration Tuesday in the Capitol Rotunda. State Rep. Ronald G. Waters, chairman of the caucus, opened the celebration, entitled "Honoring Accomplishments of African Americans in Pennsylvania." The prayer was offered by State Rep. Thaddeus Kirkland, former PLBC chairman. Dr. Olin Harris Sr., founder and executive producer of "Gospel Cavalcade Live" on The Touch 95.3 FM, was honored at the celebration.
The program also featured a tribute to Octavius Catto, a Cheyney University graduate, teacher, equal-rights advocate and leader of the Negro Baseball League, who was assassinated in October 1871 defending the newly won right of Black men to vote in the United States of America.

City Tourism Czars Eye African Americans
The Greater Philadelphia Tourism Marketing Corp. is marketing a new other stakeholders. So enthusiastic about the enormous amount of creativity happening in the region, the group dubbed themselves the Philly 360 Coalition — a name that reflects their commitment to giving potential African American visitors a 360 view of the city's modern and historic tourism offerings. Philadelphia's current marketing strategy has long proven to be successful in attracting African American visitors to the region. Today the city draws twice the national average of African American overnight travelers (10.5% versus 4.9%), according to Longwoods International.

The Committee of 70 may not have realized it, but it stepped on the toes of ward leaders of both the city's Republican and Democratic City Committees when it sent out an advertisement seeking voters to file for "Election Officer Positions." From day one, whenever the present system was set up, looking for and filing candidates for the election-officer positions of Judge of Election and Majority and Minority Inspectors has been the responsibility of the ward leaders. They, in turn, give that responsibility to their committee persons. For who knows their neighborhood better than the elected committee persons? The Committee of 70 is calling attention to the fact there is just a small window within which to get a petition for an elected position, gather the necessary signatures and then turn in the petitions, notarized, to the City Commissioners by Mar. 10. Failing to do that, the prospective candidate's name won't appear on the May 19 ballot.
It's also petition time for candidates for the State's Supreme Court, Superior Court, Court of Common Pleas and Municipal Court. That's why at any moment during the day, someone could well come up to you and ask you to sign a petition. Every political event will see a flock of petitions lying at the entrance hoping to get signatures. The Philadelphia Public Record's 10th annual Birthday Party will have a dedicated space for Democratic and Republican petitions at its party at Swan Caterers. Candidates are invited to bring them. The event starts at 7 p.m. The Committee of 70 hosted a "workshop" last night at Charles Santore Branch Library for those who expressed an interest. Its next one is at the Joseph Coleman Library, 68 W. Chelten Avenue, at 6:30 p.m. next Wednesday.

PLBC Hosts Black History Celebration

'70' Steps Into Party Territory

Jamie Foxx At City Hall

PGW Audit Shows Big Improvement
President and CEO Thomas E. Knudsen reported the Philadelphia Gas Works was pleased over the results of its most recent audit by the Public Utility Commission. The audit analyzed and evaluated PGW's management performance in all major areas and broadly approved that performance. The audit was conducted as part of the PUC's regular reviews of all regulated utilities under its jurisdiction. It was conducted by Schumaker & Co. on behalf of the Commission from October 2007 through September 2008. The final report to the Commission was filed in December 2008. That report followed an intensive review that included over 100 interviews of PGW staff, 850 requests for written data, as well as numerous group discussions. "This audit has been helpful to PGW in identifying areas that need further improvement,

Baby Boom In N.E.

His Pals Stick By Judge Lynn
MUSTERED at Italian Bistro for Judge Jimmy Lynn's campaign fundraiser were, from left, Michael Brady, distinguished attorney John Elliott, Lynn and C. J. Ray.
COMMONWEALTH Court, here comes Judge Jimmy Lynn! That's the hope of the Judge's friends who gathered to back him in Center City: from left, Jack Snyder, Richard Strohm, Guy Sciolla, Walter Ludwig and Annie Coughlin with Lynn.

HARRY T. LEECH, photographer for Democratic Ward Leaders Bob Dellavella of the 55th and Bill Dolbow of the 35th, holds his fourth grandchild, Colin John Leech, born last Wednesday. He came in at 8 lb. 6 oz. Congratulations to Harry and his family! Photo by Kevin Leech

"...but also in providing independent corroboration of PGW's continued progress and improvement," said Knudsen. In most areas in which the audit recommended change, he added, PGW had already identified the need for change through its own internal processes and has made and continues to make improvements. Other recommendations suggest new areas for improvement which PGW readily embraces. PGW has fully accepted 88 of the findings, accepted three in part, and rejected two others which are beyond the control of management. Founded in 1836, PGW is the nation's largest municipally owned natural-gas utility, serving half a million residential, commercial and industrial customers, something it continues to do in good fashion.

Three Bills Make Voting Easier
Voting will be made easier in Pennsylvania if three bills introduced by five State lawmakers, including Philadelphia Reps. Babette Josephs and Mike McGeehan, this week make it into law. Other sponsors include State Sens. Michael O'Pake and Daylin Leach, along with State Rep. Eugene DePasquale. Two of the bills would allow for voting before Election Day in the Commonwealth, and permit "no-excuse" absentee balloting, which would allow registered voters to apply for and cast an absentee ballot without having to present a reason. The third proposal would pursue an amendment to the Pennsylvania Constitution to specifically guarantee the absentee voting rights of military voters and allow the General Assembly to expand by law the excuses for voting by absentee ballot.
The bills are being introduced in response to similar initiatives in other states that have been successful in increasing voter turnout in elections, as evidenced in the balloting of this past November. Voters in 34 other states already have access to some form of "no-excuse" early voting, and Maryland will soon join them after voters in November approved a ballot question by a nearly 3–1 margin. Early voting could reduce long lines at the polls in years, such as 2008, when turnout is especially heavy. Long lines can affect the concentration and endurance of poll workers and discourage people from voting, especially those trying to cast a ballot before work or during their lunch hour.

Now Available For Sale: Villas @ Packer Park
Model Home: 2 BR, 2.5 BA, garage, corner location. Professionally decorated and fully furnished. Asking $399,900. Ten-year tax abatement. Four newly constructed, almost completed bi-level townhomes, delivery Spring '09. Sign now for special incentives. Starting at $339,900. 215-551-5100.
Resales @ The Reserve At Packer Park: 2009 W. Reserve Dr. 3 BR, 2.5 BA, interior approx. 1,600 sq. ft. on two levels. Driveway, corner lot, hardwood floors throughout 1st level. $389,900.

Laborers' District Council of Philadelphia and Vicinity
Union Labor... Building it right for a better and stronger community! 319 N. 11th Street, Philadelphia, PA 19107. Tel: 215-925-5327 • Fax: 215-925-5329.
Laborers' District Council Health and Safety Fund. Administrator, Richard Legree, Sr. Director, Juan Bacote. Management Trustees: James Vail and Steve Whiney. Remember – Do It Right, Do It Safe, Do It Union.

Backing Up The Controller

Donna Fends Off Foreclosures

It's Time To Learn Port Lingo

INTERESTED CROWD listens to how to survive mortgage-foreclosure efforts hosted by Councilwoman Donna Reed Miller. Chief of staff Michelle Lewis and Speaker Tamika Teffern field questions from crowd.
ASSURING Controller Alan Butkovitz, 2nd from left, of their support for his reelection campaign are labor leader Mike Fera, Gene Cohen and DC 47 chief Cathy Scott.

PUBLICIST Tommie St. Hill and George Pomerantz discuss upcoming campaign with Controller Alan Butkovitz at Vesper fundraiser.

DA Abraham Honored
DA LYNNE ABRAHAM was honored by the American Jewish Committee with the Judge Learned Hand Award at the Rittenhouse Hotel, at an event co-chaired by Eleanor Dezzi and Michael Sklaroff, Esq. From left are Sklaroff, Al Dezzi, the DA, and Eleanor & Chris Dezzi.

ALL SMILES is Controller Alan Butkovitz as he welcomes Sheriff John Green's Chief of Staff Barbara Deeley to his fundraiser at Vesper Club.

With dredging of the Delaware River getting started, looking at green lights all the way, Urban Engineers, Inc., a Philadelphia firm long involved in Port expansion, has released definitions of some familiar port-related terms for the average citizen. Michael J. Gabor, Director of Marine Engineering Services, said, "Most Port people take for granted the words and phrases they use to describe activity on the Delaware are known to all. That not being the case, we picked out some for non-Port users to remember, since the Port of Philadelphia will be growing in importance as the year progresses."

Dredging is one of those words. It means digging out muck and silt from the bed of the river's deep navigating channel.
Restricted Channel is a relatively narrow, natural or dredged water course. The Delaware River Course location is identified and restricted.
Channel Line describes the limits of a dredged or natural channel. In the Delaware River, the width of the channel can vary from 400' to 800'. The location of the channel and its width are depicted on nautical charts (maps).
Anchorage is an open area in the river, outside of the demarcated channel, which is of suitable depth and width for vessels to swing about at anchor.
The several anchorages in the Delaware River in Philadelphia are used for vessels awaiting approval to proceed to a berth or standing by for repair, inspection, fuel and supplies.
Berth is a location where a vessel can be safely moored, such as alongside a pier, within a slip between piers or at a fixed berthing structure, such as dolphins or ferry racks.
Pierhead Line is located inshore of the channel and denotes the limit to which a fixed waterfront facility may be constructed.

Howard Hosted Last Of Sadiki's Parties
PRESIDENT Barack Obama supporters crowded in Sadiki's Restaurant in N.W. Phila. as they saluted the inauguration of our 44th Commander in Chief.
'YES WE DID,' cheers crowd of Barack Obama supporters at Edgar Howard's inauguration party in Cedarbrook.
WARD LEADER Edgar Howard hosts inauguration brunch at Sadiki's Restaurant in N.W. Phila. Joining him are Jimmy Heinz, Judge Jimmy Lynn, Howard, Bob Coleman and Matt & Ozzie Myers.

Dan Kicks Off DA Campaign
DISTRICT ATTORNEY candidate Dan McCaffery started circulating petitions Tuesday at a kickoff party at Ancient Order of Hibernians Division 39 in N.E. Phila. Among the many joining were Ed Sweeney from Iron Workers Local 401 and Dan Santosusso from Teamsters Local 830.

UNION ROOFING • CITY WIDE SERVICE • ALL TYPES OF ROOFING
34th Anniversary, 1975-2008
• New Roofs • Repairs • Hot Asphalt • Rubber & Modified Systems • Shingles • Slate & Tile • Skylights • Gutters & Downspouts
Ask about our full 30-year guarantee. Licensed • Insured • Registered • Free Estimates • Roof Certificates. Emergency repairs 24 hours a day.
12260 Townsend Road. 215-464-6425 • 215-725-8815. Fax: 215-624-9263. WE DO OUR OWN WORK • NO SUBCONTRACTORS.

Seth Opens Office In Germantown
DA CANDIDATE Seth Williams officially opened campaign headquarters in N.W. Phila. at Germantown & Mt. Pleasant Avenues.
Williams said the office "will help reach out to a community that needs and deserves change." From left are Lana Felton Ghee, State Rep. Cherelle Parker, Williams, 22nd Ward Leader Vernon Price and J. Wyatt Mondesire.

NEWLY ELECTED State Reps. Vanessa Lowery Brown, center, and Kenyatta J. Johnson, back row, right, stand with employees of Keystone Mercy Health Plan who live in their districts. Brown and Johnson came to Keystone Mercy's offices because of their concern for 1.9 million Pennsylvanians – almost half of them children – on Medical Assistance (Medicaid). During their visit, they got to see how the Medicaid program works and to ask questions of Keystone Mercy staff who work directly with Medicaid recipients. Keystone Mercy, a member of the AmeriHealth Mercy Family of Companies, is a Medical Assistance managed-care health plan serving more than 300,000 people in Southeastern Penna.

Tartaglione Warns Scholarship Deadlines Are Near
State Sen. Christine M. Tartaglione urged high-school seniors and others hoping to attend college this fall to begin the process of applying for financial aid. "Spring can be a very hectic time for high school seniors and their families, so it's important to get a head start on aid applications," Tartaglione said. "With tax season and final exams coming up, applicants shouldn't wait until the deadline." May 1 is the deadline for filing the Free Application for Federal Student Aid. The deadline applies to all new applicants who plan to enroll in an undergraduate baccalaureate-degree program, those in college-transfer programs at two-year public or junior colleges, as well as all renewal applicants regardless of their educational program.

REENACTING Martin Luther King's famous march, this time on Woodland Avenue, S.W. Phila., community activist Paul "Earthquake" Moore leads youths – aided by banner-bearer Seth Williams.
Students can begin filing the applications now, Tartaglione said. Students and their families can find more information on financial aid for college – including links to file the application on-line – at Tartaglione's website. Students who are first-time applicants planning to enroll in a business, trade or technical school, a hospital school of nursing, or a terminal (nontransferable two-year degree) program at a community, junior or four-year college have until Aug. 1 to file the FAFSA to qualify for a State Grant award. Students with general questions about completing the FAFSA or the State Grant Program can also call PHEAA at 1-800-692-7392.

HOPES AND DREAMS. On MLK Day, over 25 PGW employees helped to prepare 100 KIPP Philadelphia Charter School students, including visiting students from KIPP in Houston, Texas, for their historical journey to Washington, D.C., to view the Presidential Inauguration. Pamela Thompson, PGW employee, working with KIPP Charter School students. Photo Credit: Martin Regusters

SCHOOL DISTRICT OF PHILADELPHIA
*A pre-bid conference and site tour will be held on February 25, 2009 at 10:00 a.m. at the Julia DeBurgos School, 401 West Lehigh Avenue, Philadelphia, PA 19133.
B-026 IT of 2008/09* Mechanical Contract - Various Locations, $700
B-026 IT of 2008/09* Mechanical Contract - Various Locations, $1,100

ADVANCE NOTICE
Auction: Real Estate and Antiques. Sunday, March 8th, 10 AM on site. Million-dollar inventory. 2 stores and warehouse full of fine antiques, art and collectables!!! 116-118 N. High St.
Millville, NJ.

Sealed proposals will be received by the School Reform Commission at the School Administration Building located at 440 North Broad St., 3rd Floor, Office of Capital Programs, Philadelphia, PA 19130-4015, until 2:00 P.M. on Tuesday, March 17, 2009.
B-025 IT of 2008/09* General Contract - Various Locations, $150,000.00. FEE: $200.00. New IT Core Site Room throughout the School District.
The School Reform Commission reserves the right to reject any and all bids and make the awards in the best interests of the School District of Philadelphia.

ADVANCE NOTICE
Wednesday, March 11, 2009: 10 Commercial Real Estate Property, Camden. 18 Large Home Center Auction. March 19: N.E. Phila. Corner Property.

Ringside With The Shadowboxer
Police Vs. Fire Bout on the Boulevard
It will be the Philadelphia Police vs. Philadelphia Firefighters in charity boxing matches this Sunday at the Penna. National Guard Armory on Roosevelt Boulevard. First bout is at 1 p.m., and a post-fight party will follow at the Veteran Boxing Association Clubhouse (2733 E. Clearfield Street, just off Richmond Street). SHADOWBOXER's sources say both teams have engaged in intense training sessions, so fans could be in for some really entertaining bouts.
SHADOWBOXER wants to extend his congratulations to Tony Wolfe on being elected to another four-year term as President of the Middle Atlantic Association for USA Boxing.
This past Thursday, SHADOWBOXER attended a social gathering for five-time women's boxing Champion Jacqui Frazier-Lyde at the Bottom of the Sea Restaurant on South Street. The daughter of "Smoking" Joe Frazier had the room filled with family and friends, including her husband "Big" Pete Lyde. Also in attendance were Councilwoman Jannie Blackwell and numerous members of Laborers Local 332, who are longtime supporters. Frazier-Lyde continues fighting, but in a different arena.
She now fights for judicial justice as one of Philadelphia's Municipal Court Judges.
SHADOWBOXER just got done checking out the brand-new Joey Giardello Statue Project website at joeygiardello.net and recommends all of you do too. Thanks to the partnership formed between the Veterans Boxers Association, Harrowgate Boxing Club, and the website PhillyBoxingHistory.com, the legendary Middleweight Champion's legacy will continue on for future generations.

AN INVITATION
The Public Record is not recession-proof. We've suffered, as have other newspapers, a loss in advertising revenue. To make it up so we may continue to serve you as well as we have in the past, we invite you to join the Readers of the Philadelphia Public Record as it Celebrates Its 10th Birthday and Honors Public Servant of the Year Joe Vento. At Swan Caterers, Water & Snyder Ave. in South Philadelphia, March 2, 2009, from 7 to 10:00 P.M. Full bar, international smorgasbord, entertainment; meet DA, Controller and judicial candidates. Ticket: $50.00. We hope you'll purchase at least two tickets. A table of ten is only $450.00. Those who have attended our annual birthday party will tell you it is a fun event. We are also taking ads for the special edition that coincides with the party and our anniversary. For information, tickets and/or advertising, call John David at 215-755-2000.

Walk-Ins Welcomed. A.J. Sbaraglia & Toni.

DISTRICT OFFICE for State Rep. Vanessa Lowery Brown was opened at 4706 Westminster Street in Mill Creek. Pictured there from left are law enforcement Officers Parker, Riley and Coulter of the 16th Police Dist.; seated is Lowery Brown.

CYBERSOFT OPERATING CORPORATION
"The Carpet Contractor II"

WOMEN VETERANS Laura Elam, left, and NAWV founder Cathy Santos congratulate State Rep. on her new constituent-service office. Photos by Cathy Santos

STATE REP. Vanessa Brown joins Faith Based of the 190th Legislative Dist.: from left, Rev. Paul Lyon, Rev. Brown, Joyce Abernathy, Imam Jamil and Rev.
William Hamilton.

Lowery Brown Opens Office

THE WORLD IS AT WAR IN CYBER SPACE! Are your computers ready for the war in your spaces? Is your business protected against viruses from within as well as from the outside? Do you have Baseline Management that can tell you when something is going wrong?
Hrs: Mon., Tues., Thurs., Sat. 10-5; Wednesday & Friday 10-6:30.
Time is Money! Can you afford to pay the enormous costs in time and losses due to NOT being prepared? Every time your computer is attacked, the man-hours spent on REPAIRS is a clear indication that time is money and there has to be a better way!
In the ever-changing world we live in, the hackers have become more knowledgeable, viruses have become more sophisticated and your business becomes more vulnerable!
Who Are We? CYBERSOFT has been the chosen defense for world governments for DECADES! CYBERSOFT has the BEST security software in the WORLD! CYBERSOFT makes the BEST security software on the MARKET!
SOLUTION: It's simple: you get a free consultation to analyze your problems, and your company will be fully secured!
Who Can Use Our Products: All businesses using UNIX, UNIX-like systems, MAC OS X, LINUX.
For more information: Call us at (610) 825-4748 or Email: frank@cyber.com http//

Would You Believe $35 for Dinner at Le Bec Fin?
by Len Lear
If there is such a thing as a silver lining to the current disastrous economic cloud, it is the fact that lots of businesses, particularly those selling non-essential commodities, have dropped their prices precipitously. For example, I would have said just a few weeks ago you'd have as much chance of seeing the Philadelphia Orchestra perform a concert with spoons, forks and accordions as you would have of getting a full-course dinner at Le Bec Fin for $35. I guess it must be time to buy tickets to the spoons, forks and accordion concert at the Kimmel Center, though.
That's because last week Georges Perrier, internationally renowned chef/owner of the iconic treasure, Le Bec Fin, 1523 Walnut Street, which for decades has been the city's most expensive restaurant, introduced a three-course, prix fixe menu for $35 per person. The only requirements are that you be seated before 6:30 p.m. for dinner Monday through Thursday, or between 9 and 10 p.m. According to Georges, selections on the menu will change every day, and there will be choices for each course. In addition to the new $35 menu, Le Bar Lyonnaise, which is located on the below-ground floor underneath Le Bec Fin, now offers free hors d'oeuvres and cocktails or glasses of wine for just $5, way below the usual prices, between 5 and 7 p.m. nightly. And there is now an express lunch for $15.23.

Whenever I think of Le Bec Fin, I am reminded of a bet I made in 1979 with a friend, Jesse Shelmire, who was then a recent MBA grad from Penn and a stockbroker for Smith Barney, and who today runs his own real estate firm in Dallas. I'm not even sure what we bet on, but we agreed whoever won the bet would take the other person and a companion to dinner at any restaurant of their choice in Philadelphia — and pick up the tab. Well, I was lucky enough to win the bet and quickly told Jesse, "We want to go to Le Bec Fin," which at the time was $60 per person plus wines for a six-course menu. (The restaurant was at 1312 Spruce Street, where Vetri is today.)

Mind you, I had never in my life been to a fancy restaurant up until that point and had never drunk any wine other than the cheapest jug available (usually about $1 or $2 a bottle). When my wife and I ate out, it was at a pizza shop or a cheap Chinese place ("one from column A and two from column B"). When I was growing up in a West Oak Lane rowhouse, my family would celebrate a birthday or holiday with dinner at the Oak Lane Diner. So when I told Jesse where we wanted to go, he exploded, "You must be kidding!
I meant that the winner of the bet would pick some normal, regular restaurant, certainly not Le Bec Fin." "But you agreed that it would be any restaurant in the city! There were no other conditions or limitations," I replied. "Are you trying to welsh on this bet?"

Well, Jesse reluctantly took us there, and that dinner basically changed our lives. I never knew anything about reduced sauces, dry-aged beef or decanted wine, and the food did not taste like anything I had ever put in my mouth before. On the basis of the server's recommendation, I ordered a bottle of Chateau Mouton Rothschild, 1969, for $100 (it would be well over $1,000 today), and although Jesse was in shock, the taste was sheer heaven. With this awakening of my taste buds, I knew I had to find a way to beg, borrow or steal so we could enjoy this type of food and wine again. In a circuitous way, that culinary epiphany led to my becoming a food writer, starting in 1982.

The last time I looked at a Le Bec Fin menu, about a year ago, the multi-course prix fixe that cost $60 in 1979 was $135, or much more for a chef's tasting dinner where the chef, not the customer, selects all the dishes. Now, however, not only has the three-course dinner for $35 been introduced, but there is also a six-course prix fixe menu for $55 per person and an à la carte menu with options ranging from $13 to $30. So although I have nothing but rage for the greedy Wall Street lowlifes (and members of Congress who basically deregulated the finance industry) who caused this depression, at least they made it possible (not intentionally) for ordinary consumers to have dinner at Philadelphia's palace of gastronomy, Le Bec Fin. Valet parking is available for $17 per person, or about half of the $35 prix fixe dinner. For more information, call (215) 567-1000 or visit.

Elephant Corner (Cont. From Page 10)
just a fence jumper.
Speaking of conservative bona fides, don't miss the opportunity to hear great American ED TURZANSKI give a speech entitled "Hey Republicans, Stop Your Belly-Aching, Snap Out of It and Get to Work!" this Saturday at 10 a.m., 1500 Walnut Street, 2nd floor Conference Room. The event is already packed with Elephant RSVPs, but MARC COLLAZZO and the Loyal Opposition will make room for you. Speaking of Marc, did you catch his Open Letter blasting GOV. ED RENDELL about the DRPA pet projects that cost you and me tens of millions? This guy has the right stuff! Give him a pat on the back next time you see him. Until next time, my "large and in charge" brothers and sisters.

Out & About: Mayor's Team Briefs Council
AT UNPRECEDENTED budget briefing session for City Council, Council President Anna Verna speculates about Nutter Administration analyses with Mayor's Chief of Staff Clay Armbrister.
COUNCIL Majority Leader Marian Tasco chats with Redevelopment Authority Executive Director Terry Gillen as the two prepare for budget briefing.

(Cont. From Page 10)
tribute to the attitude that leads to things like Officer Pawlowski's death. If people believe the police are out to get them, guess what they're going to do? Shoot to avoid getting shot. Right now, there is some serious mistrust between the Police and many of the city's citizens. If this mistrust continues, murders won't get solved, crimes will continue to go up, and Police will continue to be killed or shot at. I know that bringing that up may be unpopular, but it's the truth. Maybe it's something that Commissioner Ramsey needs to think about before getting in front of a live microphone and wishing for a suspect's death.

DON'T-OWN-THE-HOUSE! Please be very careful. Check with City Hall's Records Dept. to find out who is the LEGAL OWNER of the property you are about to purchase. Remember, don't give C-A-S-H!
Snooper's BIG STORY: Last week we all experienced winds up to 50 mph, and guess what?
Your TV, especially all those who have DIGITAL, had terrible trouble. Those of you who still have ANALOG transmissions HAD NO PROBLEMS. This is why DIGITAL TV makes no sense. Keep ANALOG!

(Cont. From Page 10)
will be more 'cuts' coming, as our budget problems keep growing." Wake up, Philadelphians, we are in serious trouble. Now the shame of it all is we still have Council People who don't care!
Snooper's LATE NEWS Bureau: Word has come to us that a very good friend and former State Senator, HON. NICK STAMPONE, died. He was so proud of The Great Northeast and all the wonderful people up there. He enjoyed The Councilwoman, Hon. Joan Krajewski, who was always there to help him. He was a professional HARMONICA PLAYER who just loved playing in the area's local CLUBS and BARS. Tony Carmen, local KARAOKE SINGER, said, "Nick Stampone was a great harmonica player and I enjoyed being on stage with him." The Senator will be missed by many of his political friends. The Public Record and The Snooper will also miss him. TO: Kathleen, Richard and Nick, Jr.: God Bless.
Snooper Updates (3-1-1): Hey Boss, apparently I was correct when I told you and all our readers this system STINKS. I am confused why The Mayor still defends this useless 3-1-1 system. They spent tons of money on it, and it still STINKS! That's monies we could have spent on LIBRARIES, ETC.
Snooper City Hall Bureau: Congratulations to HON. GUY SABELLI, Supervisor, Marriage License Bureau. They did it once again with their annual VALENTINE'S DAY special. I understand HON. PAMELA PRYOR DEMBE, President Judge, along with HON. MARSHA NIEFELD, the newly elected President Judge, Municipal Court, presided over these proceedings in Courtroom 653, City Hall. HON. HOLLY FORD, Judge, Common Pleas, was the "Supervising Judge," along with Judges from both the Court of Common Pleas and Municipal Court – 16 Judges altogether performed over 45 WEDDINGS. Another year, another great job too!
Snooper's Special Warning: This comes from a very interested source, HON. LYNNE ABRAHAM, District Attorney. BEWARE of anyone who is trying to sell you a house, especially if they ask you for CASH. Your answer is NO, NO! Don't sign any documents for purchase, because more than likely it's A PHONY. If you're foolish enough to sign one of these deeds, in reality, YOU-

Public Record Classifieds: BIG Deals Ads
Ad Sales Reps. Good pay. Call John David, 215-755-2000.
ADOPTION: LOVING, FINANCIALLY SECURE professional couple wishes to adopt newborn. Endless love, educational opportunities, many cousins. Stay-at-home mom. Expenses paid. Please call Rob & Nancy, 800-216-4823.
ADOPTION: Wishing to adopt newborn to nurture and adore. Will provide your baby with warm, loving, stable home. You will be treated with respect/confidentiality. Expenses paid. Please call Glenna, 1-866-535-8080.
AUTOS WANTED: DONATE VEHICLE, receive $1000 grocery coupon. Noah's Arc Support No Kill Shelters. Research to Advance Veterinary Treatments. Free towing, tax deductible, non-runners accepted. 1-866-912-GIVE.
NEED DOCUMENTS TRANSLATED? Call William Hanna, 267-808-0287. English - Arabic, French - Italian, Spanish.
BUILDINGS FOR SALE: POLE BUILDINGS: 24x40x10', $9,995. Includes 1-9'x8' Garage Door, 1-3' small Door. 30'x40'x10', $10,995. Includes 1-10'x10' Sliding Door, 1-3' Door. Fully erected. Maintenance free. 800-331-1875.
BUSINESS OPPORT.: 100% RECESSION PROOF! Do you earn $800 in a day? Your own local candy route. Includes 25 machines and candy, all for $9,995. 1-800-460-4027.
EDUCATION/TRAINING: ATTEND COLLEGE ONLINE from home. Medical, Business, Paralegal, Computers, Criminal Justice. Job placement assistance. Computer available. Financial aid if qualified. Call 866-858-2121.
EMPLOYMENT: We're looking for EXTRAS for FILM/TV. ALL types and ALL ages welcome. Pay is $100-
Call 1-877-572-7115 EQUIPMENT: SAWMILLS from only $2,990.00 Convert your LOGS TO VALUABLE LUMBER with your Norwood portable band sawmill. Log skidders also available. Free inform a t i o n : 1-800-578-1363-Ext300-N. FARM-LIVE STOCK: PA HORSE WORLD EXPO: Feb 26-March 1, Farm Show 2400 E. Somerset Street Philadelphia, PA 19134 Complex, Harrisburg. Hundreds of vendors, seminars, demonstrations. Theatre Equus-A Musical Equine Revue. Info: 301-916-0852 Phone: 215-423-2223 Fax: 215-423-5937 Drivers: A Steady Lifestyle! Top Pay, Great Benefits! No Experience? No Problem! Werner Enterprises 800-3462818 x140 SCHOOL DISTRICT OF PHILADELPHIA Sealed proposals will be received by the School Reform Commission at the School Administration Building located at 440 North Broad St., 3rd Floor, Office of Capital Programs, Philadelphia, PA 19130-4015, until 2:00 P.M., on Tuesday, March 10,29 (C) of 2006/07* Electrical Contract Martha Washington School $250,000.00 Fire Alarm Replacement 766 N. 44th St. FEE $100.00 *A pre-bid conference and site tour will be held at the project location, on February400-5225. Make checks payable to the School District of Philadelphia. The School Reform Commission reserves the right to reject any and all bids and make the awards to the best interests of the School District of Philadelphia. STATE EMPLOYEES: LEGAL NOTICE C.C.P for the County of Phila., September Term, 2009, Nos 001682 and 001685. Notice is hereby given that on 09/12/2008 the petitions were filed praying for a decree to change their names from Danielle Christy Marino to Daniela Christy Mead and Katherine Mead Blair to Katherine Blair Mead. The Court has fixed 02/27/09 at 10 am, Courtroom 285, City Hall, Phila., PA for hearing. All persons interested may appear and show cause if any they have, why the prayer of the said petition should not be granted. 
...who lives in suburban Pittsburgh and founded a business-to-business internet firm called FreeMarkets. Now a venture capitalist and conservative talk show host, Meakem has caught the political bug. After Specter's embrace of Obama's economics, Meakem quickly circulated a statement, boldly predicting, "There will be a Republican primary fight for Specter's Senate seat in 2010, and I am going to be actively involved in electing someone who will do what's right for Pennsylvania taxpayers, not the Washington lobbyists."

I've known Meakem for a number of years, and he would be an impressive candidate. Although a solid pro-life conservative, he served as campaign chair to Bill Scranton during Scranton's short run for Governor in 2006 because he liked Scranton's conservative business views and because he thought Scranton would make a stronger opponent against Ed Rendell than local Steelers star Lynn Swann. Right now, Meakem – who is Harvard-educated and served in the 1991 Iraq War – says he's not a candidate, but much will depend on whether better-known and better-funded conservatives step forward to challenge Specter in the months ahead.

The Dems Smell Blood

...area members of Congress: the aforementioned Schwartz and Congressman Patrick Murphy, from the 8th Dist., which is primarily in Bucks Co. Schwartz is no stranger to statewide politics. In 2000, she ran for the Democratic nomination for Senate to take on Santorum, making a lot of friends along the way. With 26% of the vote, the pro-choice Schwartz came in second to the pro-life Congressman Ron Klink of Pittsburgh, who got 40% of the vote. It didn't help that four other candidates on the ballot that year came from the East. Schwartz is an unapologetic advocate for women's rights and an indefatigable fundraiser, and if she gets in the race will be formidable.
Murphy is a relative newcomer to politics, having just won his second term in Congress in suburban Philly. The 35-year-old lawyer is the only Iraq War veteran serving in Congress, and he was the only Pennsylvania congressman to endorse Barack Obama in the Pennsylvania primary last year. During his first term, Murphy was a consistent critic of President Bush's handling of the war, and his close ties to the Obama administration could help him if he chooses to make a run for Senate. As if three Philadelphians weren't enough, there's a fourth potential candidate from that region, State Rep. ...

Marking Lincoln's 200th Birthday: THIS Manayunk delegation joined Councilman Bill Green at a City event honoring the 200th birthday of Abraham Lincoln. With Green, from left, are Mike Rose of Manayunk Brewhouse, Dave Armstrong of Navy League and Wally Littlewood of Littlewood & Sons Textiles.

With Arlen Specter under attack in his own party, there's no shortage of Democrats eyeing the 2010 Senate race. MSNBC's Chris Matthews would have been the most colorful of the bunch, but he has taken himself out of the race. I've known Chris since my Congressional days and I think he could have brought national visibility to the race, to say nothing of an incredible wealth of knowledge of government and politics.

That leaves the field to others, including State Auditor General Jack Wagner of Pittsburgh, who many think is well-positioned for the contest. A Vietnam veteran, wounded in combat, the generally conservative Wagner has a statewide pulpit to preach fiscal responsibility. Wagner is still toying with the Governor's race, and that has encouraged others to step forward for Senate. But in a crowded Democratic field for Senate, Wagner has got to be considered a strong candidate.

Joe Torsella, a 45-year-old Philadelphian best known for leading the effort to construct the marvelous National Constitution Center in Philadelphia, is hardly a household name outside of Southeastern Pennsylvania.
But earlier this month he became the first Democrat to declare his interest in running for the US Senate. Torsella is no stranger to politics, serving as then-Mayor Ed Rendell's deputy mayor for policy and planning while still in his 20s. A Rhodes Scholar and Phi Beta Kappa graduate of the University of Pennsylvania, Torsella has taken one stab at electoral politics, losing by just 2100 votes in the Democratic primary against Congresswoman Allyson Schwartz in the 13th Congressional Dist. in Montgomery Co. and Northeast Philadelphia. While Torsella seems intent on running, other Dems are toying with the idea, including two Philadelphia-

(Cont. From Page 1) ...of Republicans who say, as they did in 2004, that it's time for Specter to retire – and if he won't, it's time to defeat him. Could 2010 be the year of Specter's demise? Maybe, and maybe not.

The 2004 Republican primary showed both strength and weakness in Specter's base within his own party. Pat Toomey got 513,693 votes to Specter's 530,839 that spring, with Philadelphia and its suburban counties putting Specter over the top. The key that year was also strong support for Specter from his then-colleague, US Sen. Rick Santorum, who encouraged his conservative supporters to back Specter over Toomey. It was enough for Specter to win the nomination, and then to go on to defeat Congressman Joe Hoeffel by 10 points and nearly 600,000 votes. Specter has always been a stronger General Election candidate than he is in his own party.

Specter's support for Obama has spiked renewed interest among conservative Republicans in taking him on next year. Toomey would be the obvious candidate, but he seems to be more interested in running for Governor than Senator.
That leaves the field open to others like Peg Luksik, a conservative activist from Johnstown, who is no stranger to Pennsylvania politics, since she first made the scene by nearly defeating then-Republican Barbara Hafer for the GOP nomination for Governor back in 1990. In 1994, she ran as an independent for Governor and, again in 1998, she was the Constitutional Party candidate for Governor. This past year, she was the campaign manager for William Russell, a newcomer who took on Congressman John Murtha in the 12th Congressional Dist. Another interesting candidate is Glen Meakem, a millionaire entrepreneur...

Should Arlen Specter Worry?

Philadelphia Public Record Newspaper, published on Feb 19, 2009
Getting a runtime error in ContainerController

josh_on, May 12, 2011 3:53 PM

Using SDK 4.5.0.20967. The error is thrown on line 3182:

while (containerListIndex == -1 && floatIndex > 0)
{
    floatIndex--;
    floatInfo = _composedFloats[floatIndex - 1];
    containerListIndex = _floatsInContainer.indexOf(floatInfo.graphic);
}

floatIndex is 1 when it enters the loop, then it is decreased to 0, and null gets assigned to floatInfo when the code looks up _composedFloats[-1]. It then throws an error when it tries to access a property on floatInfo, which is set to null. Here is the error:

TypeError: Error #1009: Cannot access a property or method of a null object reference.
at flashx.textLayout.container::ContainerController/[C:\Vellum\branches\v2\2.0\dev\output\openSource\textLayout\src\flashx\textLayout\container\ContainerController.as:3182]

I am not sure exactly how I am triggering this - I am trying to discern that - but if you have any thoughts, please let me know.

1. Re: Getting a runtime error in ContainerController
josh_on, May 12, 2011 6:01 PM (in response to josh_on)

Here is an example of the bug; the yellow box in the TextFlow is an InlineGraphicElement. Click the red box at the bottom of the flow to start adding text. It crashes once the InlineGraphicElement is pushed out of the visible area of the container.
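The failure mode described above can be mimicked outside of Flash. The sketch below mirrors the quoted loop in plain JavaScript (TLF itself is ActionScript); the function name, the record shape, and the "crash" sentinel are hypothetical stand-ins for illustration, not anything from the TLF source:

```javascript
// Hypothetical mirror of the quoted TLF loop. composedFloats stands in for
// _composedFloats and floatsInContainer for _floatsInContainer; neither the
// function name nor the record shape comes from the TLF source.
function findContainerIndex(composedFloats, floatsInContainer, floatIndex) {
  var containerListIndex = -1;
  while (containerListIndex === -1 && floatIndex > 0) {
    floatIndex--;
    // With floatIndex entering at 1, the decrement makes this read
    // composedFloats[-1], which is undefined here and null in AS3.
    var floatInfo = composedFloats[floatIndex - 1];
    if (floatInfo == null) {
      return "crash"; // stand-in for the #1009 null-reference TypeError
    }
    containerListIndex = floatsInContainer.indexOf(floatInfo.graphic);
  }
  return containerListIndex;
}
```

With a single composed float the search underflows on the first pass, which matches the symptom in the thread: the crash only appears once exactly one inline graphic has been composed out of view.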
This was published using the SDK 4.5.0.20967.

Here is the same file published with HERO - which still has the other inlineGraphic alignment problem - but does not have the runtime error:

Here is the source:

package
{
    import flash.display.Sprite;
    import flash.display.StageAlign;
    import flash.display.StageScaleMode;
    import flash.events.Event;
    import flash.events.MouseEvent;
    import flash.system.System;
    import flash.text.TextField;
    import flash.text.TextFormat;
    import flashx.textLayout.container.ContainerController;
    import flashx.textLayout.container.ScrollPolicy;
    import flashx.textLayout.edit.EditManager;
    import flashx.textLayout.elements.DivElement;
    import flashx.textLayout.elements.InlineGraphicElement;
    import flashx.textLayout.elements.ParagraphElement;
    import flashx.textLayout.elements.SpanElement;
    import flashx.textLayout.elements.TextFlow;
    import flashx.textLayout.formats.TextLayoutFormat;
    import flashx.textLayout.formats.VerticalAlign;
    import flashx.textLayout.tlf_internal;
    import flashx.undo.UndoManager;

    use namespace tlf_internal;

    [SWF(width="500", height="700", backgroundColor="#FFFFFF")]
    public class SingleContainerTest extends Sprite
    {
        protected var tf:TextFlow;
        protected var em:EditManager;
        protected var um:flashx.undo.UndoManager;
        protected var _bg:Sprite;
        protected var _spr:Sprite;
        protected var _cc:ContainerController;
        protected var _init_fmt:TextLayoutFormat;
        protected var _btn:Sprite;
        protected var _playing:Boolean = false;
        protected var _count:int = 0;
        protected var _graph:Sprite;
        protected var _print_out:TextField;
        protected var _last_time:Date = new Date();
        protected var _last_five:Array = [];

        public function SingleContainerTest()
        {
            stage.scaleMode = StageScaleMode.NO_SCALE;
            stage.align = StageAlign.TOP_LEFT;

            var cw:Number = 200; // the container width
            var ch:Number = 600; // the container height

            _bg = new Sprite();
            _bg.graphics.lineStyle(.25, 0);
            _bg.graphics.drawRect(0, 0, cw, ch);
            addChild(_bg);

            _spr = new Sprite();
            addChild(_spr);

            _graph = new Sprite();
            _graph.x = cw + 10;
            _graph.y = 250;
            addChild(_graph);

            _print_out = new TextField();
            var fmt:TextFormat = _print_out.defaultTextFormat;
            fmt.font = "_sans";
            _print_out.wordWrap = true;
            _print_out.multiline = true;
            _print_out.width = stage.stageWidth - (10 + _graph.x);
            _print_out.x = _graph.x;
            _print_out.y = _graph.y + 10;
            addChild(_print_out);

            // define TextFlow and manager objects
            tf = new TextFlow();
            um = new UndoManager();
            em = new EditManager(um);
            tf.interactionManager = em;

            // compose TextFlow to display
            _cc = new ContainerController(_spr, cw, ch);
            //_cc.verticalAlign = VerticalAlign.BOTTOM;
            //_cc.verticalScrollPolicy = ScrollPolicy.ON;
            tf.flowComposer.addController(_cc);
            tf.flowComposer.updateAllControllers();

            // make a button to add Inline Graphic elements
            _btn = new Sprite();
            _btn.graphics.beginFill(0xFF0000, 1);
            _btn.graphics.drawRect(0, 0, 120, 30);
            addChild(_btn);
            _btn.addEventListener(MouseEvent.CLICK, btnClicked);
            _btn.y = 600;

            addMessage("1");
            addMessage("2");
            addMessage("3", true);
        }

        public function addMessage(msg:String, add_image:Boolean = false):void
        {
            // define elements to contain text
            var d:DivElement = new DivElement();
            var p:ParagraphElement = new ParagraphElement();
            var s:SpanElement = new SpanElement();
            s.text = msg;

            // add these elements to the TextFlow
            p.addChild(s);
            d.addChild(p);

            if (add_image)
            {
                var sp:Sprite = new Sprite();
                sp.graphics.beginFill(0xFFCC00);
                sp.graphics.drawRect(0, 0, 100, 20);
                var i:InlineGraphicElement = new InlineGraphicElement();
                i.source = sp;
                i.width = 100;
                i.height = 20;
                p.addChild(i);
            }

            tf.addChild(d);
            tf.flowComposer.updateAllControllers();
            _cc.verticalScrollPosition = _cc.getContentBounds().height;
            tf.flowComposer.updateAllControllers();
        }

        protected function btnClicked(e:MouseEvent):void
        {
            _playing = !_playing;
            removeEventListener(Event.ENTER_FRAME, onEnterFrame);
            if (_playing)
            {
                addEventListener(Event.ENTER_FRAME, onEnterFrame);
            }
        }

        protected function onEnterFrame(e:Event):void
        {
            _count++;
            if (_count > 100)
            {
                tf.removeChildAt(0);
            }
            addMessage("Message Number: " + _count + " " + randomString());
            printOut();
        }

        protected function printOut():void
        {
            var now:Date = new Date();
            var tm:Number = (now.getTime() - _last_time.getTime());
            _last_five.push(tm);
            if (_last_five.length > 10)
                _last_five.shift();

            var avg_tm:Number = 0;
            for (var i:int = 0; i < _last_five.length; i++)
                avg_tm += _last_five[i];
            avg_tm = Math.round(avg_tm / _last_five.length);

            var elapsed_str:String = "message: \t\t\t" + _count
                + "\ntime: \t\t\t\t" + tm + "ms"
                + "\navg of last 10:\t\t" + avg_tm + "ms";
            //trace(elapsed_str);
            _print_out.text = elapsed_str;
            _last_time = now;
            drawGraph(tm);
        }

        protected function drawGraph(tm:Number):void
        {
            if (_count % 5 == 0)
            {
                _graph.graphics.beginFill(0x0);
                _graph.graphics.drawRect(_count / 10, -Math.round(tm / 10), 1, 1);
                _graph.graphics.beginFill(0xFF0000);
                _graph.graphics.drawRect(_count / 10, -Math.round(System.totalMemory / 1000000), 1, 1);
            }
        }

        protected function randomString():String
        {
            var chars:String = "abcdefghijklmnopqrstuvwzyz ";
            var chars_len:Number = chars.length;
            var random_str:String = "";
            var num_chars:Number = Math.round(Math.random() * 100);
            for (var i:int = 0; i < num_chars; i++)
            {
                random_str = random_str + chars.charAt(Math.round(Math.random() * chars_len));
            }
            return random_str;
        }
    }
}

2. Re: Getting a runtime error in ContainerController
Jin-Huang, May 13, 2011 12:46 AM (in response to josh_on)

It can be reproduced with SDK 4.5 (tlf 2.0.232 in it). But your code works well without any runtime error when compiled with tlf 3.0.5 on my machine, which has not been released officially outside of Adobe. I'm not sure when the Flex team will add tlf 3.0 into their SDK.

3. Re: Getting a runtime error in ContainerController
Jin-Huang, May 13, 2011 1:03 AM (in response to josh_on)

I will ask my workmate whether we can release a 3.0 tlf next week. I'm not sure the time point is all right. Please wait for the reply.

4.
Re: Getting a runtime error in ContainerController
josh_on, May 13, 2011 12:50 PM (in response to Jin-Huang)

Thanks. Will this fix be back-ported into TLF 2.x? We are looking for a stable version of TLF 2 to get out our next release, scheduled for June 1. We would also like to make use of a common RSL swz file such as:

5. Re: Getting a runtime error in ContainerController
Jin-Huang, May 15, 2011 7:54 PM (in response to josh_on)

I'm trying to connect with the Flex guys. But I'm not sure of the result.

6. Re: Getting a runtime error in ContainerController
josh_on, May 18, 2011 11:49 AM (in response to Jin-Huang)

I cannot find a workaround for this bug. I have updated our code to use TLF 2 - but cannot deploy it because of this. We are in a bind. There are a growing number of complaints in this forum about this bug - I suspect it will be widespread. There is no version of TLF 2 available which is usable - I think it would be a great mistake not to back-port the fix for this bug into TLF 2.

7. Re: Getting a runtime error in ContainerController
Jin-Huang, May 19, 2011 12:26 AM (in response to josh_on) (1 person found this helpful)

We have realized our mistake and want to back-port the fix to the old version of TLF. It needs a lot of tests and processes. And we cannot control when the Flex team will pick up our build. TLF and Flex are not the same team. We are begging the Flex team to pick up a new TLF build these days... Picking up a new build also needs a lot of testing.

8. Re: Getting a runtime error in ContainerController
josh_on, May 19, 2011 9:47 AM (in response to Jin-Huang)

Thank you so much, that is good news.

9. Re: Getting a runtime error in ContainerController
josh_on, Jul 7, 2011 6:21 PM (in response to josh_on)

Has there been any progress on this?

10. Re: Getting a runtime error in ContainerController
Jin-Huang, Jul 10, 2011 8:34 PM (in response to josh_on)

TLF 2.1 has been released with several bug fixes but without new features. The Flex SDK may pick it up this autumn.

11.
Re: Getting a runtime error in ContainerController
josh_on, Jul 11, 2011 10:11 AM (in response to Jin-Huang)

IMHO it should be a priority to get the fixes to the runtime errors included in the SDK. I will try the 2.1 update as soon as I can.

12. Re: Getting a runtime error in ContainerController
talkingcat, Jul 15, 2011 2:54 PM (in response to Jin-Huang)

Does TLF 2.1 fix this bug? Where can I find TLF 2.1? Thanks

13. Re: Getting a runtime error in ContainerController
Jin-Huang, Jul 17, 2011 7:44 PM (in response to talkingcat)

1. Yes. 2. TLF 2.1 will be on this week.

14. Re: Getting a runtime error in ContainerController
josh_on, Aug 29, 2011 6:36 PM (in response to Jin-Huang)

Thanks, I see the latest TLF 2.1 files. It still hasn't been included in the Flex SDK - is there any indication that it will be? Also, I could not find an Adobe hosted swz file at: .swz I came up with that URL from the BUILD and BRANCH numbers in the TextLayoutVersion class - which correspond in the 2.0 builds, e.g.: .swz Is there any indication when TLF 3.0 will be included in the Flex SDK or in the IDE? Thanks

15. Re: Getting a runtime error in ContainerController
Jin-Huang, Aug 29, 2011 10:54 PM (in response to josh_on)

TLF 2.1 won't be included. TLF 3.0 will probably be in the Flex SDK next year.

16. Re: Getting a runtime error in ContainerController
josh_on, Aug 30, 2011 12:28 PM (in response to Jin-Huang)

Can you put up a signed swz file for TLF 2.1 - so that users can cache it?

17. Re: Getting a runtime error in ContainerController
Jin-Huang, Aug 30, 2011 11:41 PM (in response to josh_on)

I uploaded a SWZ to. PS: I don't think sourceforge.net can be an RSL server, because it redirects you to a new link when downloading and there is a time stamp within the redirection link. You may want to find a server for yourself.

18.
Re: Getting a runtime error in ContainerController
josh_on, Sep 5, 2011 10:34 AM (in response to Jin-Huang)

No free service will tolerate hosting this file - we have a 1 gigabit per second connection which would be saturated by this. I understood that the point of hosting these files on the Flash Player server was that Adobe was treating these swz files much like the player itself: something that can be downloaded once and cached and is useful for multiple third parties. One of the reasons that we adopted TLF in the first place was because, despite the fact that it had a large ActionScript overhead, that code would be centrally hosted by Adobe and used by many third-party sites. Is there any way that you can have this swz hosted at the regular location:...

19. Re: Getting a runtime error in ContainerController
Jin-Huang, Sep 5, 2011 8:10 PM (in response to josh_on) (1 person found this helpful)

Usually, the Flex SDK picks up one build of tlf and then uploads the swz to that server. We will have a try. But I prefer to upload a tlf 3.0 swz, because I think for now using 2.1 is meaningless. 1. There is still an RTE found in 2.1, which cannot be reproduced in 3.0. 2. In 3.0, there is only one simple feature, performance enhancements and dozens of bug fixes. We did not change the framework and APIs. Since both 2.1 and 3.0 have never gone through the tests of the Flex SDK, why not choose 3.0? 3. We are focusing on and selling our 3.0 and don't plan to make any changes to 2.1.

20. Re: Getting a runtime error in ContainerController
josh_on, Sep 13, 2011 12:00 PM (in response to josh_on)

Do you know the earliest date that the 3.0 swz will be uploaded? Could that happen before next year? Thanks.

21. Re: Getting a runtime error in ContainerController
Jin-Huang, Sep 13, 2011 7:04 PM (in response to josh_on)

We are trying to get the access to upload the rsl on that server. If we get the access, the rsl will be there soon. But I cannot promise you the time. I will tell you here if the state changes.

22.
Re: Getting a runtime error in ContainerController
bogdan_cs4, Sep 19, 2011 5:36 AM (in response to josh_on)

23. Re: Getting a runtime error in ContainerController
rr404, Sep 19, 2011 9:21 AM (in response to bogdan_cs4)

I'd like to +1 the questions about integrating tlf 2.1 (waiting for the 3.0 swz) into my project. In Flash Builder I have 3 projects: 2 library projects representing my appFramework (core and utils), and 1 appProject. In my library projects the framework linkage is external. - I tried to remove textLayout.swc from the flex4.5 linkage and add the one provided here: - Tried with and without the verify RSL digest option. So far no luck. I also tried, in a release build I did, to savagely paste textLayout_2.1.0.6.swz and rename it like the 2.0... one I had (I thought it worked for a second but unfortunately didn't). Any hints on a way to use the fixed lib instead of the 2.0? Thanks in advance

24. Re: Getting a runtime error in ContainerController
Jin-Huang, Sep 20, 2011 3:07 AM (in response to rr404)

In my library projects the framework linkage is external. - I tried to remove textLayout.swc from the flex4.5 linkage and add the one provided here:. zip .6.swz - Tried with and without the verify RSL digest option.

1. The rsl libraries in frameworks\libs are just copies of the libraries on the server. They are not linked in your runtime. 2. It should work if you change the link configuration of the SDK. But you will be redirected to another place when you try to download that swz... so the link shown on sourceforge cannot work. The URL for a SWZ should always be a direct link. Please wait for the RSL library on the Adobe hosted server, or put the RSL libraries on your own server.

25. Re: Getting a runtime error in ContainerController
Jin-Huang, Sep 20, 2011 3:11 AM (in response to bogdan_cs4)

bogdan_cs4 wrote:
If set as Runtime-Shared-Library, you need to change the URL in configuration file frameworks\flex-config.xml. 26. Re: Getting a runtime error in ContainerControllerbogdan_cs4 Sep 21, 2011 12:06 AM (in response to josh_on) I've copied the textLayout.swc file into the \frameworks\libs folder but i keep getting the same error. Only now I can't see the content of the ContainerController class. The stack trace only indicates that the error comes from the updateGraphics method. Are you certain that there isn't anything else to do in order to make the TLF 3.0 work in the flex SDK? Is there an Ant build neccesary? 27. Re: Getting a runtime error in ContainerControllerJin-Huang Sep 21, 2011 12:52 AM (in response to bogdan_cs4) I'm sure. Pls pay attention to my last reply and your configuration. Did you set framework linkage of project as Merge-Into-Code? Have you cleaned your project and re-compile? 28. Re: Getting a runtime error in ContainerControllerbogdan_cs4 Sep 21, 2011 1:21 AM (in response to josh_on) Sorry for that. My project is set to Runtime-Shared-Library. What URL in particular do I have to modify? <!-- TextLayout SWC --> <runtime-shared-library-path> <path-element>libs/textLayout.swc</path-element> <rsl-url></rsl-url> <policy-file-url></policy-file-url> <rsl-url>textLayout_2.0.0.232.swz</rsl-url> <policy-file-url></policy-file-url> </runtime-shared-library-path> Is it here somewhere? I'm new to flex so please bare with my silly questions. I really appreaciate your help Jin. 29. Re: Getting a runtime error in ContainerControllerJin-Huang Sep 21, 2011 6:23 PM (in response to bogdan_cs4) For now, there is no available rsl-url for 3.0 rsl library. We are applying for the access of to upload 3.0 rsl library. If you have your own server, you can put 3.0 swz on it and use your own url. The url must be direct. 30. Re: Getting a runtime error in ContainerControllerzyatkov Apr 12, 2012 2:32 AM (in response to josh_on) Jin-Huang thank you too much for your help. 
Could you please help me. I have made a chat where I used the textLayout component. I had an issue with a freezing scrollbar. Then I downloaded textLayout 3.0 and the issue was fixed. But then I had a new issue: at some point the chat is blank, wiped, and doesn't write any message. Before this happens I catch this error:

TypeError: Error #1009: Cannot access a property or method of a null object reference.
at flashx.textLayout.container::ContainerController/[C:\Vellum\dev\output\openSource\textLayout\src\flashx\textLayout\container\ContainerController.as:3461]
at flashx.textLayout.container::ContainerController/[C:\Vellum\dev\output\openSource\textLayout\src\flashx\textLayout\container\ContainerController.as:3152]
at flashx.textLayout.compose::StandardFlowComposer/updateCompositionShapes()[C:\Vellum\dev\output\openSource\textLayout\src\flashx\textLayout\compose\StandardFlowComposer.as:616]
at flashx.textLayout.compose::StandardFlowComposer/updateToController()[C:\Vellum\dev\output\openSource\textLayout\src\flashx\textLayout\compose\StandardFlowComposer.as:559]
at flashx.textLayout.container::ContainerController/updateForScroll()[C:\Vellum\dev\output\openSource\textLayout\src\flashx\textLayout\container\ContainerController.as:1191]
at flashx.textLayout.container::ContainerController/set verticalScrollPosition()[C:\Vellum\dev\output\openSource\textLayout\src\flashx\textLayout\container\ContainerController.as:1057]
at flashx.textLayout.container::ContainerController/autoScrollIfNecessaryInternal()[C:\Vellum\dev\output\openSource\textLayout\src\flashx\textLayout\container\ContainerController.as:1899]
at flashx.textLayout.container::ContainerController/autoScrollIfNecessary()[C:\Vellum\dev\output\openSource\textLayout\src\flashx\textLayout\container\ContainerController.as:1869]
at flashx.textLayout.container::ContainerController/mouseMoveHandler()[C:\Vellum\dev\output\openSource\textLayout\src\flashx\textLayout\container\ContainerController.as:2297]
at flashx.textLayout.container::ContainerController/[C:\Vellum\dev\output\openSource\textLayout\src\flashx\textLayout\container\ContainerController.as:2305]

I used Flex Builder 4.5.0. I am sure that I set all the properties correctly (Merge-Into-Code etc). So can I take the source code of this library? Maybe I should update Flex Builder etc. Please help us. Thank you.

31. Re: Getting a runtime error in ContainerController
Jin-Huang, Apr 12, 2012 7:44 AM (in response to zyatkov)

You can download the source code from the link above. Copy the .as file into your project and remove the swc (I mean you compile your application with TLF code, not the swc), and you will know exactly which line failed.

32. Re: Getting a runtime error in ContainerController
kevins_office, Jun 6, 2012 1:12 PM (in response to Jin-Huang)

In many of your posts you keep saying that you can just host the library on your own server. I've been trying to do that and I keep getting the following error...

Flex Error #1001: Digest mismatch with RSL. Redeploy the matching RSL or relink your application with the matching library.

33. Re: Getting a runtime error in ContainerController
Jin-Huang, Jun 6, 2012 9:59 PM (in response to kevins_office)

Without special settings, an application can only take advantage of an RSL with an Adobe certificate, not a test certificate. On SourceForge.net, the RSLs are all with test certificates. You can try the RSL on the Adobe server; I just uploaded them. There is a delay. Maybe you want to wait for several hours.

34. Re: Getting a runtime error in ContainerController
kevins_office, Jun 7, 2012 8:54 AM (in response to Jin-Huang)

I want to clarify: I've seen you mention a few times about the test certificate. But what does that exactly mean? Are you saying that all textLayout version 3 and higher are only compiled using a test certificate which will NEVER work in a production environment? If I download the .swz from the link above and host it on my server with my applet.swf, the general public still can not run the applet?
If that is true, if it is not possible to use textLayout version 3 in a completed, production-ready applet, then why is it even being offered for use? Granted, it was nice of your team to fix the bugs and get a new version ready, but if we can't use it... I feel frustrated because our project has already been coded using TextFlow v3, but if we can't offer it to the public then what was the point? All of that development was wasted, as now we will need to downgrade back to mx components. You, Jin-Huang, have been helpful, so don't take it personally; my frustration is towards the Adobe company and their inefficient structure.

35. Re: Getting a runtime error in ContainerController
kevins_office, Jun 7, 2012 8:59 AM (in response to Jin-Huang)

By the way... 11 hours after you have uploaded the files they are still not available. I also do not understand how there is a "delay" in the uploading process. Did you physically upload the files to the server, or did you submit the files to someone else who will upload to the server?

36. Re: Getting a runtime error in ContainerController
Jin-Huang, Jun 8, 2012 2:54 AM (in response to kevins_office)

I have also uploaded the rsl to. Do remember that the link of sourceforge.net cannot be the link for the rsl in the configuration file, because there is a redirection on that website. For the issue, I have communicated with related colleagues. Sorry for that delay. That server is an Adobe hosted server. One needs to be really cautious about uploading something onto an Adobe hosted server.

37. Re: Getting a runtime error in ContainerController
kevins_office, Jun 8, 2012 8:33 AM (in response to Jin-Huang)

Thank you for the 2nd upload location; I was able to download the files. Yes, my intention is to host the files myself, as I would not expect my project to rely on the servers of someone else outside of my control. However, I am still wanting to know about the code signing issue. Are these files signed with only test certificates?
Is the swf and swz able to be used by the general public's version of Flash Player?

38. Re: Getting a runtime error in ContainerController
kevins_office, Jun 8, 2012 9:25 AM (in response to Jin-Huang)

OMG, I'm about to go postal... I just tried running my project using the new textLayout_3.0.0.29.swz and the error is back. The same error that existed in TLF2, which was fixed in version 3, is now back when using the swz. How did that happen? Was the build 29 swz accidentally compiled with version 2 code? The error happens when using TextFlow in a Spark TextArea. If you include an image with EditManager.insertInlineGraphic(), an error happens once the graphic scrolls out of the visible area and into the scroll buffer. If I do not use textLayout_3.0.0.29.swz and only use the textLayout.swc from the error does not happen. Why do you think the error happens with the swz but not when using the build 29 source directly? Was there a mistake in building the swz? How can this be fixed? However, depending on your answer about whether the build 29 swz was compiled with a test certificate or not, all of this might not matter if we can not use the swz at the production level.

---------- Error Given In Flash Builder ----------

TypeError: Error #1009: Cannot access a property or method of a null object reference.
at flashx.textLayout.container::ContainerController/[C:\Vellum\branches\v2\2.0\dev\output\openSource\textLayout\src\flashx\textLayout\container\ContainerController.as:3182]
at flashx.textLayout.container::ContainerController/[C:\Vellum\branches\v2\2.0\dev\output\openSource\textLayout\src\flashx\textLayout\container\ContainerController.as:3080]
at flashx.textLayout.compose::StandardFlowComposer/updateCompositionShapes()[C:\Vellum\branches\v2\2.0\dev\output\openSource\textLayout\src\flashx\textLayout\compose\StandardFlowComposer.as:616]
at flashx.textLayout.compose::StandardFlowComposer/updateToController()[C:\Vellum\branches\v2\2.0\dev\output\openSource\textLayout\src\flashx\textLayout\compose\StandardFlowComposer.as:559]
at flashx.textLayout.compose::StandardFlowComposer/updateAllControllers()[C:\Vellum\branches\v2\2.0\dev\output\openSource\textLayout\src\flashx\textLayout\compose\StandardFlowComposer.as:...]
at ...[...\spark\src\spark\components\RichEditableText.as:2974]
at mx.core::UIComponent/validateDisplayList()[E:\dev\4.y\frameworks\projects\framework\src\mx\core\UIComponent.as:8999]
at mx.managers::LayoutManager/validateClient()[E:\dev\4.y\frameworks\projects\framework\src\mx\managers\LayoutManager.as:1033]
at mx.core::UIComponent/validateNow()[E:\dev\4.y\frameworks\projects\framework\src\mx\core\UIComponent.as:8077]
at Main::ChatWindow$/displayChatText()[C:\Users\User\Flash Projects\ChatBroadcast\src\Main\ChatWindow.as:38]
at Main::ServerListener$/msgFromUser()[C:\Users\User\Flash Projects\ChatBroadcast\src\Main\ServerListener.as:20]
at Main::RemoteClient$/msgFromUser()[C:\Users\User\Flash Projects\ChatBroadcast\src\Main\RemoteClient.as:12]

39. Re: Getting a runtime error in ContainerController
Jin-Huang, Jun 8, 2012 9:42 AM (in response to kevins_office)

Firstly, those newly uploaded swz & swf are signed with the Adobe certificate. So they are able to be used by the general public's version of Flash Player. For the error, I think it's your fault.
"C:\Vellum\branches\v2\2.0\dev\output\openSource\textLayout\src\flas hx\textLayout\container\ContainerController.as" is definitely the absolute path on your PC, right? You are compiling with 2.0 code. Please search in the forum for RSL configurations. You are supposed to change flex-config.xml, replace the swc and swf in SDK, replace the link for SWZ, and so on.
https://forums.adobe.com/message/4329956
Getting Familiar with the My Namespace - 24
Posted: Nov 21, 2011 at 9:30 AM - 28,098 Views

This video demonstrates the use of the My namespace in Visual Basic, which provides a shortcut to often-used classes in the .NET Framework Class Library. We demonstrate how to use the My namespace to retrieve information about the user's computer, search for files on the file system, perform file and directory manipulation, get command line arguments, work with application settings, and more. Download the source code for Understanding the My Namespace.

Comments:

- I use the My namespace all the time. It is a great thing to remember, newbies! Thanks Bob for the command line arguments and the ... I never knew about that!

- Bob, thank you very much for your clear, precise, informative lessons!!! Don in Florida

- @Don Friedman: Thanks Don! Glad they were helpful.

- Hello. First of all, thanks for a great series. One question: what happens at 12:31? The names shown in the console window are Brian and Chuck, but the names assigned to the settings were Bob and Steve (11:44). Brian and Chuck were assigned later, approx. 13:30...

- @Kari Holgarsson: Great catch. I think you caught me here pulling a sleight-of-hand... I actually ran through this exercise off camera once (a practice run). I think the values were cached somehow -- perhaps those values were cached in the output directory (/bin) somehow. I forgot about this, otherwise I would have probably re-recorded it. If you write the code yourself during this exercise, you will get the correct results. Sorry for the confusion.

- Ok, thanks for the explanation ;-) I will try the exercise myself. I was just wondering if some values could be stuck in memory (unable to update), as this could be very difficult to debug in more complex code.
http://channel9.msdn.com/Series/Visual-Basic-Development-for-Absolute-Beginners/Getting-Familiar-with-the-My-Namespace-24?format=smooth
Recently I put together a demo bot (to be shared in the near future) that looks up airline flight data. One of the steps asks the user to choose from a list of international airports – some 8,000 of them. It's routed to the user via the PromptDialog.Choice(…) dialog. The airports are simply passed as a list to the options parameter of the function. This works great in Visual Studio. However, once I deployed it out to Azure for actual testing, the process fell over (which is exactly why demo'ing all the way is important!). When the bot asked for the departure airport via the PromptDialog.Choice(…) dialog and the user responded, the bot simply reverted back to the original, introductory "Hello! Welcome to the flight bot!" question in the process. What?! What in the world?

Troubleshooting

Knowing that there are a ton of airports in the list, I first wondered if it was somehow related to that list – specifically the count. That didn't really make a ton of sense, because it's not like we were bumping up against an Integer limit or something. I really thought it was more likely an issue with the size of the message. However, I tried reducing the number of items to 50 anyway just to see what would happen. Plus, if it was an issue with the size of the message, changing the count would have an impact. Sure enough, it worked fine. I played around with it some more and discovered an upper limit of about 650 airports before things stopped working. Now that I knew that the problem was somehow directly related to the airport list, and it didn't really make sense for it to be related to the count, I focused on the message size. After all, the Bot Framework just runs on top of a Web API service (at least in the C# version), which would be subject to the various request message size limits. I tried reducing the amount of data that was attached to each airport record. That also had an impact – I could now send 1500 or so airports back and forth without issue.
At this point, I was confident that the problem was related to the size of the messages. My understanding of the Bot Framework is that the bot itself runs inside of Microsoft's Bot engine, which is ultimately rendered in a website (assuming the bot is running in a web page). The Bot engine makes calls out to our Web API service for the various interactions. As such, there are two web sites that could potentially be running into size restrictions. I only have access to my service, so I tried updating the system.web/httpRuntime/maxRequestLength of my web service to its maximum value. No dice. At this point, my assumption is that the message/state is being serialized back to the client, but that not all of the data is available (either because an exception was thrown or because it was cut off), and so the response just results in the dialog starting over. Since the limit is apparently inside of the Bot engine (maybe just via IIS config or something simple), and I don't have access to that, the next step is to try and implement an alternative.

The alternative

As far as I can tell, the real benefit to sending a list of options is that the Bot Framework will automatically attempt to match the user's response to the list of options and return that matching object. Luckily for us, the nice folks on the Bot Framework team provided the source code for the Bot Builder classes, including the PromptDialog class, so that we can go investigate how it works. That particular class is here:. If we make our way to the PromptChoice<T> class within that file, we can see that all that's really happening is that the TryParse function scores the response against each of the potential options and then selects the option with the highest score. Looks simple enough to reproduce. I ended up just creating a MatchHelper class that totally borrows the TryParse and ScoreMatch functions from the PromptChoice<T> class.
I modified the functions slightly so that they would accept a Func that allows them to search against particular properties on the option, rather than just using the .ToString() results to match. The resulting helper looks like this:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    namespace FlightBot.Helpers
    {
        public class MatchHelper
        {
            public virtual Tuple<bool, int> ScoreMatch(string option, string input)
            {
                var trimmed = input.Trim();
                var text = option.ToString();
                bool occurs = text.IndexOf(trimmed, StringComparison.CurrentCultureIgnoreCase) >= 0;
                bool equals = text == trimmed;
                return occurs ? Tuple.Create(equals, trimmed.Length) : null;
            }

            public bool TryParse<T>(IEnumerable<T> options, Func<T, string> propertyFunc, string text, out T result)
            {
                if (!string.IsNullOrWhiteSpace(text))
                {
                    var scores = from option in options
                                 let score = ScoreMatch(propertyFunc(option), text)
                                 select new { score, option };

                    // Keep only scored options and pick the highest score.
                    var winner = scores.Where(s => s.score != null)
                                       .OrderByDescending(s => s.score)
                                       .FirstOrDefault();
                    if (winner != null)
                    {
                        result = winner.option;
                        return true;
                    }
                }
                result = default(T);
                return false;
            }
        }
    }

Now that this utility is available, I can switch my code from calling Prompt.Choice to calling Prompt.Text and then sending the text-based results into the utility. The resulting set of calls looks like this:

    public async Task GetDepartureAirport(IDialogContext context, IAwaitable<Airport> argument)
    {
        PromptDialog.Text(context, GetArrivalAirport,
            "What is the name of the departure airport?",
            "I don't understand. What is the name of the departure airport?");
    }

    public async Task GetArrivalAirport(IDialogContext context, IAwaitable<string> argument)
    {
        string providedAirport = await argument;
        _departureAirport = MatchAirport(providedAirport);

        PromptDialog.Text(context, GetFlightDate,
            "What is the name of the arrival airport?",
            "I don't understand. What is the name of the arrival airport?");
    }

    private Airport MatchAirport(string airportName)
    {
        List<Airport> airports = GetAirports();
        MatchHelper matchHelper = new MatchHelper();

        Airport matchedAirport = null;
        if (matchHelper.TryParse(airports, a => a.Name, airportName, out matchedAirport))
            return matchedAirport;
        else
            return null;
    }

Working through this code: GetDepartureAirport asks the user for the name of the departure airport, the results from the user are returned to the GetArrivalAirport function, and they are then passed into the MatchAirport function, which uses our utility to parse the results. By using this alternative method, we completely bypass having to send the full list of airports to the user, and things are back to working as expected. Sweet!

Thoughts?

This was my way of solving my problem. How would you have solved it? Is there a better way that I just don't understand yet? Is there some kind of configuration setting that I didn't find? I'd love to hear your feedback telling me that I'm way out in left field and there's a simpler solution.
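For readers following along in a language other than C#, the scoring idea ports in a few lines. Here is a rough Python sketch of the same logic (the names and the sample airport records are mine, not from the Bot Framework): score each option against the input, keep the best match, and prefer exact and longer matches.

```python
def score_match(option_text, user_input):
    """Score user_input against one option: (exact, length) or None.

    Mirrors the C# ScoreMatch idea: a case-insensitive substring hit
    scores; tuples compare element-wise, so an exact match beats a
    partial one and a longer input beats a shorter one.
    """
    trimmed = user_input.strip()
    if not trimmed:
        return None
    if trimmed.lower() in option_text.lower():
        return (option_text == trimmed, len(trimmed))
    return None


def try_parse(options, key, text):
    """Return the option whose key(option) best matches text, else None."""
    scored = [(score_match(key(o), text), o) for o in options]
    scored = [(s, o) for s, o in scored if s is not None]
    if not scored:
        return None
    return max(scored, key=lambda pair: pair[0])[1]


# Hypothetical airport records, just for illustration.
airports = [{"name": "Austin-Bergstrom"}, {"name": "Boston Logan"}]
best = try_parse(airports, lambda a: a["name"], "logan")
```

Exact matches win because Python compares the (exact, length) tuples element-wise, mirroring the Tuple<bool, int> comparison in the C# helper.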
https://www.jonathanhuss.com/bot-framework-bot-resets-response/
Service Oriented HTML Application (SOHA) promotes clean and well-structured HTML code, and the initial SOHA page layout template HTML was based on the XHTML 1.0 Transitional standard. The XHTML 1.0 Transitional DOCTYPE helped to create well-formatted, XML-based HTML at design time, and some IDEs (like Visual Studio 2010 and Dreamweaver) have excellent IntelliSense or code hints to help identify potential HTML syntax errors while constructing the page. However, the XHTML standard is not where HTML5 is heading based on the WHATWG specifications, and the W3C has already disbanded the working group for XHTML2 and devoted more resources to HTML5. Since SOHA is still new and HTML5 is where the web is moving, it's time to make the basic HTML page template more future-proof and more HTML5-ready, while keeping all the benefits of Plain Old Semantic HTML (POSH) for valid, semantic, accessible, well-structured HTML.

The reality of XHTML is that it has a stricter syntax but gave developers almost no new features when migrating to it, and it's estimated that only 1% of the HTML on the entire web is XHTML-based. That's the major reason all browsers are very forgiving when rendering badly formatted HTML and need to stay backward compatible to deal with existing web content that is not XHTML-formatted, or even has no DOCTYPE at all (browser quirks mode). This is also an important factor that the WHATWG took into account in promoting HTML rather than XHTML: HTML allows tags like <img> and <br> to stay valid without closing tags, does not require tag attribute values to be wrapped in quotes, and so on. The idea is to go with a version-less standard that keeps evolving and always stays backward compatible with existing web pages.
This article provides a practical HTML5-based page layout template for valid, semantic, accessible, well-structured HTML. It has a simplified structure and less HTML code with meaningful semantic tags, and it works in older browsers (including Internet Explorer 6, Internet Explorer 7 and Firefox 2) as well as in modern browsers with HTML5 capabilities. Building the page structure from HTML5 semantic tags (like <header>, <section>, <nav>, <aside>, etc.) gives the page concise, future-proof, semantically meaningful markup for its contents that is much easier to understand and maintain. It also enables us to start using HTML5 today, while both browsers and the HTML5 standard are still evolving. A sample HTML page that leverages the semantic HTML5 page layout can be viewed here.

This HTML5 semantic-tag page layout template can be viewed as the evolution of the SOHA architecture's basic page HTML: it shifts away from XHTML and back to HTML to be more backward compatible, more lightweight, and more HTML5-ready. It promotes the practice of POSH; since semantic HTML gives a better markup hierarchy and meaningful groupings, search engines can potentially give a better ranking because of the semantic meaning. For example, when a screen reader, user agent, or search engine sees <div id="pageHeader">...</div>, it cannot tell that the DIV contains the page header (since those robots cannot derive meaning from a tag id); although the page header DIV usually holds site-wide general information, it appears as a regular DIV used for grouping, with no semantic meaning. If we change the page header container to <header id="pageHeader">...</header>, a screen reader, user agent, or search engine that reads it gets the semantic meaning of a page header, and can then process the information in a meaningful context for better content indexing or page ranking.
The above is just one potential benefit of POSH; there are lots of other posts discussing why we need semantic HTML. In a nutshell, web professionals who care about a site's long-term readability and maintainability, and who want to stay standards-based and future-proof, should embrace POSH for cleaner, lighter, well-structured and meaningful tags for HTML content. Since almost all HTML code is hand-crafted within the SOHA architecture, this improves the foundations of SOHA profoundly.

The question is: does an HTML5 semantic-tag-based page layout work in older browsers (which have no HTML5 support at all)? Even the current versions of modern browsers have only partial HTML5 support, so how can we make sure this HTML5-based page layout works across browsers and across platforms? Based on the WHATWG spec, the most frequently used HTML5 sectioning tags include article, aside, details, figcaption, figure, footer, header, hgroup, menu, nav, section, etc. In order to start using HTML5 tags now and still render and behave correctly across browsers, we also need some workarounds in JavaScript (an HTML5 shim/shiv for IE) and CSS (display rules need to be set for the new tags); as browsers gain more HTML5 support, we can gradually remove those workarounds. Since our goal is to provide a page template, we'll focus on the HTML5 sectioning tags, plus the changes to the DOCTYPE, META tags, the SCRIPT tag, and CSS.

Let's start with the page DOCTYPE. The DOCTYPE declaration of an HTML page tells the browser to render the page in standards mode; although it carries a DTD URI, browsers do not retrieve the DTD from the specified URL to validate the markup. If a page has no DOCTYPE declaration at all, like early (pre-late-90s) HTML content, the browser renders it in "quirks mode" for compatibility reasons.
A typical XHTML 1.0 Transitional DOCTYPE is listed below:

    <!-- XHTML 1.0 Transitional DOCTYPE, absent from HTML5 -->
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">

It's not terribly long, but I can never remember it and always rely on tools or the IDE to insert it for me. The HTML5 DOCTYPE, by contrast, is much simpler:

    <!-- HTML5 DOCTYPE -->
    <!DOCTYPE html>

Clean and simple, with no need to rely on tools to write it. As a matter of fact, the longer XHTML DOCTYPE worked the same way as the shorter HTML5 DOCTYPE: as long as the browser sees a DOCTYPE declaration, no matter what type of DTD is specified, it switches to standards mode to render the page. The browser never actually downloads the DTD to validate the HTML document; the DTD seems to matter only to HTML editors at design time.

Furthermore, other common tags are simplified in HTML5 too. First, the root HTML tag doesn't need the namespace (since HTML5 is moving away from XML); it only needs the LANG attribute. A classic English-content HTML root tag used to be:

    <!-- namespace is not needed in the HTML5 root element -->
    <html xmlns="" xml:

In HTML5, the root element can be written as:

    <html lang="en">

Additionally, some meta tags and the common script tag are simplified as well. A typical UTF-8 encoding meta tag used to be:

    <!-- long UTF-8 charset meta tag, absent from HTML5 -->
    <meta http-

The charset meta tag is simplified to:

    <meta charset="utf-8">

As for the script tag, it used to be:

    <!-- type attribute in script tag is not required in HTML5 -->
    <script type="text/javascript" src="javascripts/lib/jquery-1.4.4.min.js"></script>

Since JavaScript is the only scripting language in HTML5, there is no need to specify the type attribute, so it shortens to:

    <script src="javascripts/lib/jquery-1.4.4.min.js"></script>

The above covers the simplifications before the <body> tag in HTML5; now let's see how the page layout changes.
A typical HTML page has a page header, a page footer, and the page content in the middle. Within the middle content, at the top level, there may be three columns: navigation, content, and an aside. Within the content, the page may embed more sections; that is up to each page's specific content. Fig. 1 shows this top-level page layout visually.

Typically, <header> groups introductory or navigational aids. The header element is intended to contain the section's heading, and it can wrap a section's table of contents, a search form, logos, etc. At the bottom, <footer> typically contains information about its section, such as who wrote it, links to related documents, copyright data, and the like. In the middle of the page, the <nav> element represents a section of a page that links to other pages or to parts within the page, usually a section with navigation links. <aside> content is tangentially related to the content around the aside element and can be considered separate from that content, for example pull quotes, sidebars, or advertising.

At the very center of the page layout, <section> contains a logical or physical grouping of content in a generic document or application; for instance, a web site's home page could be split into sections for an introduction, news items, and contact information. The <article> tag is a self-contained composition in a document, page, application, or site that could be distributed independently. Examples: <article> can contain a forum post, a magazine or newspaper article, a web log entry, a user-submitted comment, an interactive widget or gadget, or any other independent item of content.
Here below is the skeleton HTML code that represents the above layout:

    <body>
      <div id="pageContainer">
        <header id="pageHeader"></header>
        <div id="contentContainer" class="clearfix">
          <nav id="pageNav"></nav>
          <section id="pageSection">
            <header class="sectionHeader"></header>
            <article class="sectionArticle"></article>
            <footer class="sectionFooter"></footer>
          </section>
          <aside id="pageAside"></aside>
        </div>
        <footer id="pageFooter"></footer>
      </div>
    </body>

You may lay it out with different code; for example, you can use a float layout for the middle row that contains <nav>, <section> and <aside>. I'm using <div> elements for grouping; the "clearfix" style makes sure their direct children lay out horizontally in one row, so we can avoid empty tags that exist purely for layout and carry no semantic meaning. I usually assign IDs to tags that have only one instance per page, and use classes for tags with potentially multiple instances. For example, the top-level <header> has the ID pageHeader, while a section header has the class sectionHeader.
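One way to see the machine-readability payoff of this skeleton is to walk it with a parser and pull the sectioning tags straight out, something a div-only layout cannot offer. Here is a toy check using Python's standard library html.parser against the skeleton above:

```python
from html.parser import HTMLParser

SECTIONING = {"header", "nav", "section", "article", "aside", "footer"}


class SectionLister(HTMLParser):
    """Collect each semantic sectioning tag with its id or class, in order."""

    def __init__(self):
        super().__init__()
        self.found = []

    def handle_starttag(self, tag, attrs):
        if tag in SECTIONING:
            attrs = dict(attrs)
            self.found.append((tag, attrs.get("id") or attrs.get("class")))


skeleton = """
<body>
  <div id="pageContainer">
    <header id="pageHeader"></header>
    <div id="contentContainer" class="clearfix">
      <nav id="pageNav"></nav>
      <section id="pageSection">
        <header class="sectionHeader"></header>
        <article class="sectionArticle"></article>
        <footer class="sectionFooter"></footer>
      </section>
      <aside id="pageAside"></aside>
    </div>
    <footer id="pageFooter"></footer>
  </div>
</body>
"""

lister = SectionLister()
lister.feed(skeleton)
```

The parser recovers the page outline (header, nav, section, aside, footer) directly from the tag names, which is exactly the information a search engine or assistive tool gets for free from semantic markup.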
This the technique found by Sjoerd Visscher, if a DOM element created by script with the same name as the tag, then IE starts to honor the styling, that's basically what the html5.js does, we can use conditional comments links to put it in just for IE: <span class="com"><!--[if lt IE 9]> <script src=""></script> <![endif]--></span> Non-IE browsers will treat the above conditional script tag as comment, while only IE when version lower than 9 will run the script. The following CSS rule will tell all browsers to render all our sectioning tags as block element: /*html5 semantics tags */ article, aside, figure, footer, header, hgroup, menu, nav, section { display: block; } In order to make the page layout CSS to be a sound foundation for real world web pages, here below are the CSS rules that have simple CSS reset and basic 960 layout (that centers then pageContent within browser window), plus the clearfix styles: /*html5 semantics tags */ article, aside, figure, footer, header, hgroup, menu, nav, section { display: block; } /* light css reset */ * { margin : 0; padding : 0; } h2, h3, h4, h5, p, ul, ol { margin : 0 20px; padding : .5em 0; } img { border: 0px;} /* =page level container */ #pageContainer { margin: 0px auto 0px auto; width: 960px; } #pageHeader { margin:0px auto 0px auto; width:960px; height:82px; position:relative; } #contentContainer { margin: 0px; padding-top: 10px; padding-bottom: 20px; min-height: 500px; } #pageFooter { margin: 0px auto; padding-bottom: 20px; width: 960px; position: relative; } /* Clear Floated Elements */ .clearfix:before, .clearfix:after {content: "\0020"; display: block; height: 0; visibility: hidden;} .clearfix:after { clear: both; } .clearfix { zoom: 1; } For more sophisticated HTML5 page template, you can reference HTML5 boilerplate. 
The sample HTML page is actually an improvement to the sample page provided in Web App Common Tasks by jQuery, it not only uses the semantic HTML5 tags discussed in this article, but also refactored the CBEXP JavaScript file as jQuery.sohabase.js to better fit into the SOAH architecture. By comparing the source code, you can see page layout and visuals styles are pretty much the same as before, scripting functions the same way too, while the HTML code is more concise and semantically meaningful. Semantic HTML5 tag page layout simplifies basic page layout and enables to start HTML5 tags today across browsers and platform, it has the full benefits of POSH and renders web pages in a more future-proof and standard way. It uses simple while meaningful and less HTML code to construct the skeleton of a page, much easier to create, understand and maintain, also keeps the flexibility of sizing, positioning and styling by CSS. Since HTML5 is where the Web is moving to, why not give it a try when you are building something new? This article, along with any associated source code and files, is licensed under The Common Development and Distribution License (CDDL) Firstly, nice article, a little heavy at first since I hadn't read your SOHA article but otherwise pretty good. I would also add that the proper use of tags for semantic meaning can assist in human readability also, particularily tags like <aside> or <figure> where one would normally place a specially formatted <div> - much easier to find. One tip for you: use of the #pageContainer <div> tag isn't required - you can use the <body> tag for basic CSS layout & formatting. Take note though: I haven't personally tested this in many older browsers, so it's worth double checking - but it would certainly help your example page semantics by further enhancing the ease of understanding the page at a glance. So... code time... 
my suggestion is rather than using the following: <body><br /> <div id="pageContainer"><br /> <!-- page content --><br /> </div><br /> </body> #pageContainer {<br /> margin: 0 auto 0 auto;<br /> width: 700px;<br /> } You could use this: <body><br /> <!-- page content --><br /> </body> body {<br /> margin: 0 auto;<br /> width: 700px;<br /> } for an easier page of code to understand. General News Suggestion Question Bug Answer Joke Praise Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
https://www.codeproject.com/Articles/146409/Semantic-HTML-Page-Layout?msg=3737014
CC-MAIN-2017-47
refinedweb
2,528
52.33
Capturing screenshots of websites with Python
August 19, 2015

If you google how to take a screenshot of a website, you will probably end up with an answer that depends on PyQt and WebKit. Sadly, PyQt is not a Python package that you can install easily via pip, and you will need a few other dependencies to be able to generate a screenshot of a site. This post describes an easier solution that depends only on Selenium and PhantomJS. Selenium can easily be installed via pip, and PhantomJS via your package manager or npm. On OS X, it's just a matter of doing:

    brew install phantomjs  # or: npm -g phantomjs
    pip install selenium

Unlike installing PyQt, this should work without problems. Once you have Selenium and PhantomJS installed, it becomes simple to write the small amount of Python code required to generate a screenshot of a website.
http://rachbelaid.com/capturing-screenshots-of-website-with-python/
CC-MAIN-2018-26
refinedweb
214
52.6
To All:

Can anyone tell me how to use the tell() function when accessing data on the Web? I know how to use it when reading a file from the hard drive, but I get an error using it after reading text via the Web and then trying to use it on that particular "file object." Part of the code I am using follows.

    from Tkinter import *
    import os
    from time import gmtime
    import re
    import string
    import urllib

    fwcURL = ""
    try:
        f_object = urllib.urlopen(fwcURL)
        fwcall = f_object.read()
        location = f_object.tell()
        print location
    except:
        print "Could not obtain data from Web"
        return

Here is the error I am getting, and I am not sure how to prevent it:

    Traceback (most recent call last):
      File "C:\python_pgms\plot_guidance\plot_guidance_fm_web.py", line 123, in Main
        location = f_object.tell()
    AttributeError: addinfourl instance has no attribute 'tell'

How can I prevent this error? I guess I could save the data to a file, then open the file and read it, THEN use the tell() (and for that matter the seek()) function like I am accustomed to doing successfully in the past. But I should be able to use it as is on the file object. Thanks much.

Henry Steigerwaldt
Hermitage, TN

_______________________________________________
Tutor maillist - Tutor@[...].org
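The addinfourl object returned by urllib.urlopen is a one-way stream, so it has no tell() or seek(). Short of saving to disk, one workaround is to read the whole body into an in-memory buffer, which supports both. Here is a sketch using io.BytesIO, with canned bytes standing in for the downloaded text (in the real script, data would come from urllib.urlopen(fwcURL).read()):

```python
import io

# Stand-in for: data = urllib.urlopen(fwcURL).read()
data = b"000 KWNS FWCTX\nFIRE WEATHER PLANNING FORECAST\n"

f_object = io.BytesIO(data)   # file-like, and seekable
first_line = f_object.readline()
location = f_object.tell()    # works: number of bytes consumed so far
f_object.seek(0)              # rewinding works too
```

After the read, tell() reports how many bytes have been consumed and seek(0) rewinds, just like a file object opened from the hard drive.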
http://aspn.activestate.com/ASPN/Mail/Message/python-Tutor/1537279
I have the code for this in this GitHub Repo.

The Problem

Given a string, determine if a permutation of the string could form a palindrome.

Example 1: Input: "code"    Output: false
Example 2: Input: "aab"     Output: true
Example 3: Input: "carerac" Output: true
https://dev.to/ruarfff/week-1-bonus-palindrome-permutation-147a
CC-MAIN-2022-33
refinedweb
234
60.14
LINQ can be used with SQL Azure in the same way it is used with SQL Server. We are going to use the School database again. To see how we can use SQL Azure with LINQ:

1. Create a console application. For the purpose of this walkthrough I am creating a console app; you can use the same approach in other managed applications.
2. Right-click on the console application project, select Add New Item, and choose LINQ to SQL Classes from the Data tab.
3. Choose the Server Explorer option and add a new connection. In the Add New Connection window, provide the information for SQL Azure.
4. Now we can write the usual LINQ queries to do CRUD operations against the database in SQL Azure. Here I am fetching all the records from the Person class.
https://debugmode.net/2011/04/08/sql-azure-with-linq/
CC-MAIN-2022-40
refinedweb
184
68.87
Mastodon bot framework built on Mastodon.py Project description Ananas What is Ananas? Ananas allows you to write simple (or complicated!) mastodon bots without having to rewrite config file loading, interval-based posting, scheduled posting, auto-replying, and so on. Some bots are as simple as a configuration file: [bepis] class = tracery.TraceryBot access_token = .... grammar_file = "bepis.json" But it's easy to write one with customized behavior: class MyBot(ananas.PineappleBot): def start(self): with open('trivia.txt', 'r') as trivia_file: self.trivia = trivia_file.lines() @hourly(minute=17) def post_trivia(self): self.mastodon.toot(random.choice(self.trivia)) @reply def respond_trivia(self, status, user): self.mastodon.toot("@{}: {}".format(user["acct"], random.choice(self.trivia))) Run multiple bots on multiple instances out of a single config file: [jorts] class = custom.JortsBot domain = botsin.space access_token = .... line = 632 [roll] class = roll.DiceBot domain = cybre.space access_token = .... And use the DEFAULT section to share common configuration options between them: [DEFAULT] domain = cybre.space client_id = .... client_secret = .... Getting started pip install ananas The ananas pip package comes with a script to help you manage your bots. Simply give it a config file and it'll load your bots and close them safely when it receives a keyboard interrupt, SIGINT, SIGTERM, or SIGKILL. ananas config.cfg If you haven't specified a client id/secret or access token, the script will exit unless you run it with the --interactive flag, which allows it to prompt you for the instance login information. (The only part of the input you enter here that's stored in the config file is the instance name -- the email and password are only used to generate the access token). 
Configuration

The following fields are interpreted by the PineappleBot base class and will work for every bot:

class: the fully-specified python class that the runner script should instantiate to start your bot. e.g. "ananas.default.TraceryBot"
domain¹: the domain of the instance to run the bot on. Must support https connections. Only include the domain, no protocol or slashes. e.g. "mastodon.social"
client_id¹, client_secret¹: the tokens that the instance uses to identify what client this bot is posting from/as. Will be used to determine what's displayed underneath all the posts made by this bot.
access_token¹: the access token used to authenticate API requests with the instance. Make sure this is secret; don't distribute config files with this field filled out, or people will be able to post under the account this token was created with.
admin: the full username (without leading @) of the user to DM error reports to. Can be left unspecified, but is useful for keeping an eye on the health of the bot without constantly monitoring the script logs. e.g. admin@example.town

¹: Filled out automatically if the bot is run in interactive mode.

Additional fields are specific to the type of bot; refer to the documentation for the bot's class for more information about the fields it expects.

Writing Bots

Custom bot classes should be subclasses of ananas.PineappleBot. If you override __init__, be sure to call the base class's __init__.

Decorators

In order for the bot to do anything, you should add a method decorated with at least one of the following decorators:

@ananas.reply: Calls the decorated function when the bot is mentioned by any other user. The decorator takes no parameters, but should only be applied to functions matching this signature: def reply_fn(self, mention, user). mention will be the dictionary corresponding to the status containing the mention (as returned by the mastodon API); user will be the dictionary corresponding to the user that mentioned the bot.
@ananas.interval(secs): Calls the decorated function every secs seconds, starting when the bot is initialized. For intervals longer than ~an hour, you may want to use @schedule instead. e.g. @ananas.interval(60)

@ananas.schedule(**kwargs): Allows you to schedule, cron-style, the decorated function. Accepted keywords are "second", "minute", "hour", "day_of_week" or "day_of_month" (but not both), "month", and "year". If any of these keywords are not specified, they will be treated like cron treats an *, that is, as long as the time matches the other values, any value will be accepted. Speaking of which, the cron-like syntaxes "*" and "*/3" are both accepted and will expand to the expected thing: for example, schedule(hour="*/2", minute="*/10") will post every 10 minutes during hours which are multiples of 2.

@ananas.hourly(minute=0), @ananas.daily(hour=0, minute=0): Shortcuts for @ananas.schedule() that call the decorated function once an hour at the specified minute or once a day at the specified hour and minute. If parameters are omitted, they'll post at the top of the hour or at midnight (UTC).

@ananas.error_reporter: Specifies custom behavior for reporting errors. The decorated function should match this signature: def err(self, error), where error is a string representation of the error.

Overrideable Functions

You can also define the following functions and they will be called at the relevant points in the bot's lifecycle:

init(self): called before the configuration file has been loaded, so that you can set default values for config fields in case the config file doesn't specify them.
start(self): called after all of the internal PineappleBot initialization is complete and the mastodon API is ready to use. A good place to load files specified in the config, post a startup notice, or otherwise do bot-specific setup.
stop(self): called when the bot has received a shutdown signal and needs to stop.
The config file will be saved after this, so if you need to make any last-minute changes to the config, do that here.

Configuration Fields

All of the configuration fields for the current bot are available through the self.config object, which exposes them with both field-accessor syntax and dictionary-accessor syntax, for example:

foo = self.config.foo
bar = self.config["bar"]

These can be read (to get the user's configuration data), written to (to affect the config file on next save), or deleted (to remove that field from the config file).

You can call self.config.load() to get the latest values from the config file. load takes an optional parameter name, which is the name of the section to load in the config file in case you want to load a different one than the bot was started with. You can also call self.config.save() to write any changes made since the last load back to the config file. Note that if you call self.config.load() during bot operation without first calling self.config.save(), you will discard any changes made to the configuration since the last load.

Distributing Bots

You can distribute bots however you want; as long as the class is available in some module in python's sys.path or a module accessible from the current directory, the runner script will be able to load it. If you think your bot might be generally useful to other people, feel free to create a pull request on this repository to get it added to the collection of default bots.

Questions?

Ping me on Mastodon at @chr@cybre.space or shoot me an email at chr@cybre.space and I'll answer as best I can!
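The cron-like field syntax accepted by @ananas.schedule ("*", "*/n", or a plain number, as described in the Decorators section above) can be illustrated with a small matcher. This is a sketch of the documented semantics, not ananas's actual implementation:

```python
def field_matches(spec, value):
    """Return True if a cron-style field spec accepts the given value.

    Supported forms, per the README: "*" (any value), "*/n"
    (multiples of n), or a plain integer. spec may also be given
    as an int, as in schedule(minute=17).
    """
    spec = str(spec)
    if spec == "*":
        return True
    if spec.startswith("*/"):
        step = int(spec[2:])
        return value % step == 0
    return int(spec) == value

# schedule(hour="*/2", minute="*/10") fires at minutes 0, 10, 20, ...
# of even-numbered hours:
print(field_matches("*/2", 14))   # hour 14 is a multiple of 2
print(field_matches("*/10", 30))  # minute 30 is a multiple of 10
print(field_matches(17, 17))      # like @hourly(minute=17)
```

An unspecified keyword behaves like "*": it matches every value, so the schedule is constrained only by the fields you give.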
https://pypi.org/project/ananas/
New-SmbShare

Creates an SMB share.

Description

The New-SmbShare cmdlet exposes a file system folder to remote clients as a Server Message Block (SMB) share. To delete a share that was created by this cmdlet, use the Remove-SmbShare cmdlet.

Examples

Example 1: Create an SMB share
This command creates an SMB share named "VMSFiles" and grants Full Access permissions to "Contoso\Administrator" and "Contoso\Contoso-HV1$".

Example 2: Create an encrypted SMB share
PS C:\>New-SmbShare -Name "Data" -Path "J:\Data" -EncryptData $True

Name ScopeName  Path    Description
---- ---------  ----    -----------
Data Contoso-FS J:\Data

This command creates an encrypted SMB share.

Example 3: Create an SMB share with multiple permissions
PS C:\>New-SmbShare -Name "VMSFiles" -Path "C:\ClusterStorage\Volume1\VMFiles" -ChangeAccess "Users" -FullAccess "Administrators"

Name     ScopeName  Path
----     ---------  ----
VMSFiles Contoso-SO C:\ClusterStorage\Volume1\...

This command creates an SMB share named "VMSFiles" and grants Change permissions to the local "Users" group and Full Access permissions to the local "Administrators" group.

Parameters

-AsJob: Runs the cmdlet as a background job. Use this parameter to run commands that take a long time to complete.
-CATimeout: Specifies the continuous availability time-out for the share.
-ChangeAccess: Specifies which users are granted modify permission to access the share.
-Description: Specifies an optional description of the SMB share. A description of the share is displayed by running the Get-SmbShare cmdlet. The description may not contain more than 256 characters. The default value is no description, or an empty description.
-EncryptData: Indicates that the share is encrypted.
-Name: Specifies a name for the SMB share. The name may be composed of any valid file name characters, but must be less than 80 characters long. The names pipe and mailslot are reserved for use by the computer.
-NoAccess: Specifies which accounts are denied access to the SMB share. Multiple accounts can be specified by supplying a comma-separated list.
-Path: Specifies the path of the location of the folder to share. The path must be fully qualified. Relative paths or paths that contain wildcard characters are not permitted.
-ReadAccess: Specifies which users are granted read permission to access the share.
-Temporary: Indicates that the share is temporary and does not persist beyond the next restart of the computer. By default, new SMB shares are persistent, and non-temporary.

Outputs

The Microsoft.Management.Infrastructure.CimInstance object is a wrapper class that displays Windows Management Instrumentation (WMI) objects. The path after the pound sign (#) provides the namespace and class name for the underlying WMI object. This cmdlet returns a MSFT_SmbShare object that represents the SMB share.
https://docs.microsoft.com/en-us/powershell/module/smbshare/new-smbshare?view=windowsserver2019-ps&viewFallbackFrom=win10-ps
Hello and welcome back to the channel. In this video you will learn the useReducer hook. It is an alternative approach to storing data inside a component, next to useState. As you might guess, the idea comes from the Redux approach with its reducers. So why do we need useReducer and how do we use it? It's essentially an alternative to useState; if you didn't see my video on useState, go check it first. Also, if you are familiar with Redux, you already know the concept.

Let's check an example. Say we want a component that will render an article. Normally we will fetch an article and store it in state, but additionally we need to store an isLoading property, so we can show the user that something is loading, and an error property, where we will store the error that we want to show if the request fails. This means that we need to create 3 pieces of state for this, or pack them all into a single object. When we have complex state or a lot of business logic regarding how our state changes, it's a good idea to use the useReducer hook.

import React, { useReducer } from "react";

function App() {
  const [state, dispatch] = useReducer(reducer, initialState);
  ...
}

Here we used the useReducer hook from React. As you can see, we need to pass 2 arguments into it: a reducer and an initialState. As a result we get back the state and a dispatch method.

At this point I should explain the Redux approach a bit, so it will be clearer what useReducer is doing. In Redux, when a component triggers some change like "user clicks on button" or "register request started", we dispatch an action. An action has a name inside and, optionally, some data. We also have a state, and this state reacts to our actions and changes itself. This way we have a single flow of data: our component doesn't listen directly to actions but just triggers them, and it listens only for data from the state. This is exactly the approach useReducer implements.

So first we need to create the initialState. It's the state that we want to change.
As we discussed previously, we have 3 properties there: isLoading, error, and article.

const initialState = {
  isLoading: false,
  error: null,
  article: null,
};

Now we need to define a reducer. This is just a function that will change the state according to the actions that we trigger. For now, let's create it so that it simply returns the state.

const reducer = (state, action) => {
  return state;
};

Now, what do we get back from useReducer? A state and a dispatch function. state is our updated state, which will always be up to date, and dispatch is the way we can trigger actions from our component. Let's console log the state first.

console.log("rerender", state);

As you can see in the browser, this is our initialState. Now we can try to change our state. Let's make this application as real as possible: normally fetching an article is an asynchronous operation, and we need 3 actions: start, success and failure. So for now, let's dispatch getArticleStart when we click on a button.

<button onClick={() => dispatch({ type: "getArticleStart" })}>
  Start getting article
</button>

We just call the dispatch function on click and pass an object with a type property. type is mandatory when we are dispatching actions. So we just said that on click we want to start getting the article. Now our reducer needs to change the state accordingly when we dispatch the getArticleStart action.

const reducer = (state, action) => {
  switch (action.type) {
    case "getArticleStart":
      return {
        ...state,
        isLoading: true,
      };
    default:
      return state;
  }
};

Here we wrote a switch on action.type. Inside, we can console.log the action and see that we are receiving every action we dispatch. Now we set the isLoading property to true when getArticleStart happens. As you can see in the browser, our state is updated with isLoading = true.

Now I want to make this example much more realistic, but for this we need the useEffect hook. I have already made a video on it, so if you didn't watch it, put this video on pause and watch it first.
I will link it somewhere here. What I want to do is make a real API request and fully react to it with our useReducer. First of all, we need to install json-server for this. It's an npm package which helps us create a fake API really quickly. Now we can access a specific article that we want to fetch by its URL. The next step is to install the axios library, because we need a third-party package to fetch data from the API.

npm install axios

And let's create an effect where we want to make our request.

import axios from "axios";

useEffect(() => {
  axios.get("").then((res) => {
    console.log("res", res);
  });
}, []);

Just to remind you: we perform all side effects, like API calls, inside useEffect, and we provide an empty array of dependencies as the second argument to say that we don't have any dependencies, so the effect will be called only once, after mounting the component. As you can see in the browser, we are getting a valid response from our fake API. Now we want to dispatch getArticleStart at the beginning of our fetch, getArticleSuccess when it has finished, and getArticleFailure if it failed.

useEffect(() => {
  dispatch({ type: "getArticleStart" });
  axios
    .get("")
    .then((res) => {
      dispatch({ type: "getArticleSuccess", payload: res.data });
    })
    .catch(() => {
      dispatch({ type: "getArticleFailure" });
    });
}, []);

As you can see, the only new thing here is that in the success case we provided not only a type but also a payload. We can actually provide any properties we want inside, but the standard name is payload. Now we need to react to these dispatches inside our reducer.

const reducer = (state, action) => {
  switch (action.type) {
    case "getArticleStart":
      return {
        ...state,
        isLoading: true,
      };
    case "getArticleSuccess":
      return {
        ...state,
        isLoading: false,
        article: action.payload,
      };
    case "getArticleFailure":
      return {
        ...state,
        isLoading: false,
      };
    default:
      return state;
  }
};

As you can see in the browser, our state from useReducer is updated after every action and is completely valid.
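Because the reducer is a pure function, you can also replay the same action sequence outside of React entirely, which makes it easy to test. A sketch in plain JavaScript (no JSX or React imports), reusing the reducer from above:

```javascript
const initialState = { isLoading: false, error: null, article: null };

const reducer = (state, action) => {
  switch (action.type) {
    case "getArticleStart":
      return { ...state, isLoading: true };
    case "getArticleSuccess":
      return { ...state, isLoading: false, article: action.payload };
    case "getArticleFailure":
      return { ...state, isLoading: false };
    default:
      return state;
  }
};

// Replay the action sequence that the effect dispatches on success.
let state = initialState;
state = reducer(state, { type: "getArticleStart" });
console.log(state.isLoading); // true while the request is in flight

state = reducer(state, {
  type: "getArticleSuccess",
  payload: { title: "React hooks for beginners" },
});
console.log(state.isLoading, state.article.title);
// false "React hooks for beginners"
```

Note that initialState itself is never mutated; each action produces a fresh state object, which is exactly what React needs to detect that the state changed.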
Now we can use the properties from state in the template however we want.

return (
  <div>
    <h1>React hooks for beginners</h1>
    {state.isLoading && <div>Loading...</div>}
    {state.error && <div>Something is broken</div>}
    {state.article && <div>{state.article.title}</div>}
  </div>
);

As you can see in the browser, everything works as intended.

So when should you use useState and when useReducer? It's a matter of personal preference, but I recommend considering useReducer if you have complex logic or lots of properties, say 10 and above. It also really helps, same as Redux, to move logic completely out of the component and just call plain actions.

If "React hooks for beginners" is too easy for you, I have a full hooks course of 8 hours where we create a real application from scratch. I will link it down in the description below.
https://monsterlessons-academy.com/posts/react-hooks-use-reducer-tutorial