Word count (Perl) - Other implementations: Assembly Intel x86 Linux | C | C++ | Haskell | J | Lua | OCaml | Perl | Python, functional | Rexx

An implementation of the UNIX wc tool in Perl.

<<wc.perl>>=
#!/usr/bin/env perl
use strict;
use warnings;
<<printwc>>
<<wc>>
<<usage>>
my $nfiles=0;
my $opts;
my ($tot_lines, $tot_words, $tot_chars);

Command line

First we read all options (arguments starting with '-'). Unrecognized options are treated as errors.

<<wc.perl>>=
my $arg;
while($arg=shift(@ARGV) and $arg=~/^-/) {
    $arg=~/^-[lwc]+$/ or usage();
    $opts.=substr($arg, 1);
}

If the user did not provide any options, we use the default, "lwc", so that the line, word and character counts are all printed.

<<wc.perl>>=
$opts or $opts="lwc";

The remaining arguments are file names. wc() is used to count the values, and printwc() prints the result.

<<wc.perl>>=
while($arg) {
    ++$nfiles;
    my ($lines, $words, $chars)=wc($arg);
    $tot_lines+=$lines;
    $tot_words+=$words;
    $tot_chars+=$chars;
    printwc($arg, $opts, $lines, $words, $chars);
    $arg=shift;
}

If no file names were given on the command line, we call wc() with "-" as the file name, so that stdin is counted. If there was more than one file, the totals are also printed.

<<wc.perl>>=
if($nfiles<1) {
    my ($lines, $words, $chars)=wc("-");
    printwc("-", $opts, $lines, $words, $chars);
}
elsif($nfiles>1) {
    printwc("total", $opts, $tot_lines, $tot_words, $tot_chars);
}
exit 0;

<<usage>>=
sub usage {
    print "Usage: $0 [-cwl] [<fname>*]\n";
    exit 1;
}

wc()

wc() takes one argument, the name of the file to count. If the name is "-", stdin is counted.

<<wc>>=
sub wc {
    my ($fname)=@_;
    my $lines=0;
    my $words=0;
    my $chars=0;
    open(FILE, $fname) || die "Couldn't open file $fname";
    foreach my $line (<FILE>) {

Counting characters is simple: we just use the length of each line. Lines are only counted when they end in a line-feed character; if the file ends without a line-feed, the last line is not counted.

<<wc>>=
        $chars+=length($line);
        $line=~/\n$/ and ++$lines;

To count words, we generate an array with the split() function and use the size of the generated array. To avoid extra empty words at the start and end of the array, we first remove all leading and trailing whitespace.

<<wc>>=
        $line=~s/^[ \t]*//g;
        $line=~s/[ \t\r\n]*$//g;
        my @w=split(/[ \t]+/, $line);
        $words+=@w;
    }
    close(FILE);
    return ($lines, $words, $chars);
}

printwc()

printwc() uses the $opts argument to decide which values to print. The file name is printed after the values.

<<printwc>>=
sub printwc {
    my ($fname, $opts, $lines, $words, $chars)=@_;
    $opts=~/l/ && printf("% 8d", $lines);
    $opts=~/w/ && printf("% 8d", $words);
    $opts=~/c/ && printf("% 8d", $chars);
    $fname ne "-" && print " $fname";
    print "\n";
}
http://en.literateprograms.org/Word_count_(Perl)
What makes a good AD structure?

Active directory structure
Well, it is broken... there is no structure. It's a hospital environment with three other locations on the same domain, and numerous departments and areas. I started to do a Visio diagram of how I think it would be most ideal, leaving room for growth later down the road, but it just looks like too many OUs at the third level, and I don't want to take it beyond 4 levels. At my first level I have the locations. At the second level of each I have OUs named servers, departments, and areas. Then at the third level I get a little more detailed with the departments and areas.

Active Directory - A Logical Structure
What came to mind when you mentioned there is no structure: a hospital with 3 locations on the same domain is what gets most IT guys confused about Active Directory. It is a logical structure that does not "need to" mimic [and more often than not should not mimic] the physical layout of the company. Mimicking the physical layout of a company invites problems because it limits flexibility in a growing company. Example: companies typically have a sales department, manufacturing, engineering, management, accounting, human resources. Under these department headings can be any number of "sub-departments". Under sales can be regional sales, local sales, and a branch off the top-level sales such as customer service. OUs can be created at the top level, such as a Sales OU, and then sub-OUs nested inside the Sales top-level OU. You can then collect users or computers in those OUs regardless of their actual physical location and have GPOs applied [or delegate admin]. There is flexibility in this design approach, and simplicity, because you're not limiting the design to match the physical layout. This holds true for forest/domain design as well, which is also a logical structure.

Is physical sometimes better than logical?
In our environment here we do more of trying to control things based upon where they are located. Our off-sites are different specialties than at the main campus, so there isn't much repeating of the departments across the locations. With the past companies I was with, a logical structure was good; control didn't need to be as granular, and everything was within two levels.

DNS and Active Directory
You can create an AD structure that mimics the physical layout. You can group computers into OUs that match the physical layout. There's nothing in the rule books that says you can't, only that it's "better" to go with a logical structure because it's easier to manage and more flexible.

AD Design
Remember that you can apply GPOs on a "per site" basis also. You talk of different physical locations... Will these be in different sites? (i.e. are they on different subnets? Do you have a DC at each remote location?) Second - it is advisable to keep the directory as flat as possible. Remember that you only NEED to create more than one domain if you require a different password/security policy for different users. Of course we are assuming that you will only need one tree in your forest! That is correct, isn't it? :) We create OUs named after our geographic locations for USERS only. Our computer accounts exist in OUs that reflect the role they have. For example, we have an OU for laptops, one for service PCs, one for kiosk PCs, one for general staff PCs, etc. I think you need to consider things in this order:
1. DNS namespace - get that sorted first!
2. Domain structure.
3. Site structure.
4. OU structure.
Give us a clue about your sites (physical locations), whether you are looking at multiple domains, and a very brief organisational structure if possible. Good luck! Spend the time carefully working through this, and don't hesitate to ask questions! It'll look great on your CV!! :)

AD design
There is already a domain in place here. I am just redoing the AD structure before we bring in the Exchange server. With it being a hospital environment, we like to have very granular control over things. The physical layout that I came up with works fairly well with our Group Policy design. As of right now only the main campus has servers; we plan on having servers at each location in time, mainly to help free up the WAN line. For the most part, things are on the same subnet, with the exception of the phones and a few other medical equipment devices. We like to have control over things based upon where they are, and in our opinion the placement of where people and computers are added would be easier in a physical layout, because they will only be using machines in that area. Coming up with a logical structure has been very difficult.

Simple is best
Sometimes defining the physical can lead to more problems and admin for yourself as departments grow. It's best not to break departments down into too many OUs, as it becomes a nightmare creating users/computers in the correct OU. When designing the structure, it is often best to try to reduce the number of physical departments into one OU, then use groups to allocate policy. It's often worth considering what policies you plan to enforce before thinking about the OU structure.

Define by GPOs
Personally, I like to define my AD by how I want to implement Group Policies. For example, if I keep all my workstations in one OU and my users in another, I can define machine policies and separate user policies. By being more granular, I can put those workstations and users into additional sub-OUs based on who needs which policies. I can define top-level GPOs that impact all users, and create more specialized policies to apply to the sub-OUs.

Same subnet
Earlier you stated "for the most part things are on the same subnet with the exception of the phones." Why are most things on the same subnet? Isn't that sucking up bandwidth? Different locations should be on different subnets, and these subnets broken down further for departments. This would free up the WAN line.
https://www.techrepublic.com/forums/discussions/what-makes-a-good-ad-structure/
Up to [cvs.NetBSD.org] / src / lib / libc / stdlib

Request diff between arbitrary revisions

Default branch: MAIN

Revision 1.9.56.1 / (download) - annotate - [select for diffs], Thu May 22 11:36:54 2014 UTC (5 years, 6 months ago) by yamt
Branch: yamt-pagecache
Changes since 1.9: +4 -2 lines
Diff to previous 1.9 (colored) next main 1.10 (colored) to selected 1.3 (colored)
sync with head. for a reference, the tree before this commit was tagged as yamt-pagecache-tag8. this commit was split into small chunks to avoid a limitation of cvs. ("Protocol error: too many arguments")

Revision 1.9.62.1 / (download) - annotate - [select for diffs], Sun Jun 23 06:21:06 2013 UTC (6 years, 5 months ago) by tls
Branch: tls-maxphys
Changes since 1.9: +4 -2 lines
Diff to previous 1.9 (colored) next main 1.10 (colored) to selected 1.3 (colored)
resync from head

Revision 1.11 / (download) - annotate - [select for diffs], Fri Apr 26 19:37:04 2013 UTC (6 years, 7 months ago) by w
Changes since 1.10: +3 -3 lines
Diff to previous 1.10 (colored) to selected 1.3 (colored)
Add commas in list.

Revision 1.10 / (download) - annotate - [select for diffs], Fri Apr 26 18:29:55 2013 UTC (6 years, 7 months ago) by christos
Branch: MAIN
Changes since 1.9: +3 -1 lines
Diff to previous 1.9 (colored) to selected 1.3 (colored)
add {at_,}quick_exit(3) from FreeBSD

Revision 1.9 / (download) - annotate - [select for diffs], Thu Aug 7 16:43:38 2003 UTC (16 years ago)
Branch point for: yamt-pagecache, tls-maxphys
Changes since 1.8: +2 -6 lines
Diff to previous 1.8 (colored) to selected 1.3 (colored)
Move UCB-licensed code from 4-clause to 3-clause licence. Patches provided by Joel Baker in PR 22280, verified by myself.

Revision 1.8 / (download) - annotate - [select for diffs], Wed Apr 16 13:34:45 2003 UTC (16 years, 7 months ago) by wiz
Branch: MAIN
Changes since 1.7: +2 -2 lines
Diff to previous 1.7 (colored) to selected 1.3 (colored)
Use .In header.h instead of .Fd #include \*[Lt]header.h\*[Gt] Much easier to read and write, and supported by groff for ages. Okayed by ross.

Revision 1.6.12.2 / (download) - annotate - [select for diffs], Fri Mar 22 20:42:28 2002 UTC (17 years, 8 months ago) by nathanw
Branch: nathanw_sa
CVS Tags: nathanw_sa_end
Changes since 1.6.12.1: +1 -1 lines
Diff to previous 1.6.12.1 (colored) to branchpoint 1.6 (colored) next main 1.7 (colored) to selected 1.3 (colored)
Catch up to -current.

Revision 1.6.12.1 / (download) - annotate - [select for diffs], Fri Mar 8 21:35:44 2002 UTC (17 years, 9 months ago) by nathanw
Branch: nathanw_sa
Changes since 1.6: +2 -2 lines
Diff to previous 1.6 (colored) to selected 1.3 (colored)
Catch up to -current.

Revision 1.7 / (download) - annotate - [select for diffs], Thu Feb 7 07:00:27 2002 UTC (17 years, 10 months ago)
Changes since 1.6: +2 -2 lines
Diff to previous 1.6 (colored) to selected 1.3 (colored)
Generate <>& symbolically.

Revision 1.6 / (download) - annotate - [select for diffs], Thu Feb 5 18:49:47 1998 UTC (21 years, 10 months ago)
Changes since 1.5: +3 -1 lines
Diff to previous 1.5 (colored) to selected 1.3 (colored)
add LIBRARY section to man page

Revision 1.5 / (download) - annotate - [select for diffs], Fri Jan 30 23:37:44 1998 UTC (21 years, 10 months ago) by perry
Branch: MAIN
Changes since 1.4: +5 -5 lines
Diff to previous 1.4 (colored) to selected 1.3 (colored)
update to lite-2

Revision 1.1.1.2 / (download) - annotate - [select for diffs] (vendor branch), Fri Jan 30 21:41:40 1998 UTC (21 years, 10 months ago) by perry
Branch: WFJ-920714, CSRG
CVS Tags: lite-2
Changes since 1.1.1.1: +4 -4 lines
Diff to previous 1.1.1.1 (colored) to selected 1.3 (colored)
import lite-2

Revision 1.4 / (download) - annotate - [select for diffs], Thu Dec 28 08:52:01 1995 UTC (23 years, 11 months ago) by thorpej
Changes since 1.3: +2 -1 lines
Diff to previous 1.3 (colored)
New-style RCS ids.

Revision 1.3 / (download) - annotate - [selected], Mon Nov 29 22:07:08 1993 UTC (26 years ago) by jtc
Use "Er" for argument to -width in the lists in the ERROR sections so that formatting is consistent.

Revision 1.2 / (download) - annotate - [select for diffs], Sun Aug 1 07:44:36 1993 UTC (26 years, 4 months ago) by mycroft
Branch: MAIN
Changes since 1.1: +2 -1 lines
Diff to previous 1.1 (colored) to selected 1.3 (colored)
Add RCS identifiers.

Revision 1.1.1.1 / (download) - annotate - [select for diffs] (vendor branch), Sun Mar 21 09:45:37 1993 UTC (26 years, 8 months ago)
Diff to selected 1.3 (colored)
initial import of 386bsd-0.1 sources

Revision 1.1 / (download) - annotate - [select for diffs], Sun Mar 21 09:45:37 1993 UTC (26 years, 8 months ago) by cgd
Branch: MAIN
Diff to selected 1.3 (colored)
Initial revision

This form allows you to request diffs between any two revisions of a file. You may select a symbolic revision name using the selection box, or you may type in a numeric name using the type-in text box.
http://cvsweb.netbsd.org/bsdweb.cgi/src/lib/libc/stdlib/atexit.3?r1=1.3
Created on 2014-12-09 01:27 by stockbsd, last changed 2014-12-14 08:58 by serhiy.storchaka. This issue is now closed.

In py3k, the following simple code will throw an uncaught exception when executed with pythonw:

    import warnings
    warnings.warn('test')

The problem occurs in the showwarning function: in py3k's pythonw, stderr/stdout is set to None, so the file.write(...) statement will throw an uncaught AttributeError. I think a catch-all except (deleting 'OSError') can solve this.

    def showwarning(message, category, filename, lineno, file=None, line=None):
        """Hook to write a warning to a file; replace if you like."""
        if file is None:
            file = sys.stderr
        try:
            file.write(formatwarning(message, category, filename, lineno, line))
        except OSError:
            pass # the file (probably stderr) is invalid - this warning gets lost.

Here is a patch. 2.7 is affected too.

1. py2 is unaffected, because stderr/stdout is not None in py2's pythonw and the catch works correctly.
2. As to py3, I prefer this patch:

    - except OSError:
    + except:

Please never use that, but "except Exception:" instead, to not catch SystemExit or KeyboardInterrupt. I'm afraid this will silence unexpected errors.

warnings_stderr_none.patch looks good to me. It can be applied on Python 2.7, 3.4 and 3.5.

New changeset d04dab84388f by Serhiy Storchaka in branch '3.4': Issue #23016: A warning no longer produces AttributeError when the program

New changeset 0050e770b34c by Serhiy Storchaka in branch 'default': Issue #23016: A warning no longer produces an AttributeError when the program

New changeset aeeec8a4b9b8 by Serhiy Storchaka in branch '2.7': Issue #23016: A warning no longer produces an AttributeError when sys.stderr

> + # sys.stderr is None when ran with pythonw.exe - warnings get lost
s/ran/run/

New changeset 70b6fe58c425 by Serhiy Storchaka in branch '3.4': Fixed a typo in a comment (issue #23016).

New changeset da1ec8e0e068 by Serhiy Storchaka in branch 'default': Fixed a typo in a comment (issue #23016).
Thanks Arfrever.
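Putting the thread together, a sketch of the fixed hook (reconstructed from the discussion above, not copied verbatim from the committed changesets) returns early when sys.stderr is None, as it is under pythonw.exe, while keeping the narrow OSError catch instead of a bare except:

```python
import io
import sys
import warnings

def showwarning(message, category, filename, lineno, file=None, line=None):
    """Hook to write a warning to a file; replace if you like."""
    if file is None:
        file = sys.stderr
        if file is None:
            # sys.stderr is None when run with pythonw.exe - warnings get lost
            return
    try:
        file.write(warnings.formatwarning(message, category, filename, lineno, line))
    except OSError:
        pass  # the file (probably stderr) is invalid - this warning gets lost

# with a real stream the warning is written as usual
buf = io.StringIO()
showwarning("demo", UserWarning, "example.py", 1, file=buf)

# with sys.stderr set to None (as under pythonw.exe) the call silently returns
old = sys.stderr
sys.stderr = None
try:
    showwarning("demo", UserWarning, "example.py", 1)  # no AttributeError
finally:
    sys.stderr = old
```

With this shape, warnings emitted under pythonw are dropped rather than crashing the program, and SystemExit/KeyboardInterrupt are never swallowed.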
https://bugs.python.org/issue23016
NAME
       usleep - suspend execution for microsecond intervals

SYNOPSIS
       #include <unistd.h>

       int usleep(useconds_t usec);

CONFORMING TO.

NOTES
       The type useconds_t is an unsigned integer type capable of holding integers in the range [0,1000000]. Programs will be more portable if they never mention this type explicitly. Use

           #include <unistd.h>
           ...
               unsigned int usecs;
           ...
               usleep(usecs);
           .

SEE ALSO
       alarm(2), getitimer(2), nanosleep(2), select(2), setitimer(2), sleep(3), ualarm(3), time(7)

COLOPHON
       This page is part of release 3.23 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
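As a quick way to experiment with usleep() without compiling a C program, it can be called through Python's ctypes on a POSIX system; this is an illustration, not part of the man page, and it assumes ctypes.CDLL(None) exposes the C library's symbols in the running process (as it does on Linux):

```python
import ctypes
import time

# on POSIX, CDLL(None) gives access to symbols already loaded into the
# process, which includes libc functions such as usleep()
libc = ctypes.CDLL(None)

start = time.monotonic()
libc.usleep(100_000)  # suspend for 100000 microseconds (0.1 s); usec must be <= 1000000
elapsed = time.monotonic() - start
print(elapsed)
```

The elapsed time should be at least the requested interval, rounded up by the system timer granularity.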
https://linux.fm4dd.com/en/man3/usleep.htm
Scrape Everything! Building Web scrapers with Python and Go

Diretnan Domnan

Table of Contents

- Introduction
- Recognizing web scraping opportunities
- Key components of a scraping project
- Packaging and Deploying your scraper
- Example Scraping Project with Python
- Example Scraping Project with Go
- Conclusion

## Introduction

The web we all know and love is a huge pool of unstructured data, usually presented in the form of pages that can be accessed through a web browser. Web scraping seeks to harness the power of this data by automating its extraction using well-selected tools of choice, saving us time and effort. In this article, I will guide you through some of the steps involved in writing web scrapers to tame the beast. The concepts explained are language-agnostic, but since I am transitioning from the dynamically typed, joyous Python syntax to the rigid, static Go syntax, this article will explain with examples from both sides. It then rounds off with a brief description of two projects written in Python and Go respectively. The projects in question achieve very different goals, but are both built on this awesome concept of web scraping. You will learn the following:

- Recognizing web scraping opportunities
- Key components of a scraping project
- Packaging and Deploying your scraper

🚩 A lot of the code snippets shown in this article will not compile on their own, as most times I will be making reference to a variable declared in a previous snippet or using an example url, etc.

## Recognizing web scraping opportunities

There is tremendous data online in the form of web pages. This data can be transformed from unstructured to structured data, to be stored in a database, used to power a dashboard, etc. The trick to recognizing web scraping opportunities is as simple as having a project that requires data which is available on web pages, but not easily accessible through standardized libraries or web APIs.
Another indicator is the volatility of the data, i.e. how frequently the data to be scraped updates or changes on the site. A strong web scraping opportunity is usually consolidation. Consolidation refers to the combination of two or more entities; in our case, it refers to the merging of data obtained from different sources into one single endpoint. Take, for instance, a project to build a dashboard displaying major disease statistics in West Africa. This project is a major undertaking and will usually involve several processes, including but not limited to manual data collection. Let us assume, however, that we have access to multiple websites, each having some small part of the data we need. We can scrape all the websites in question and consolidate the data into one single dashboard, saving time and effort.

Some caveats exist, however. One is that some websites have terms and conditions barring scraping, especially for the purpose of commercial use. Some sites resort to blacklisting suspicious IP addresses that send too many requests at too rapid a rate, as this may break their servers.

## Key components of a scraping project

There are some persistent components in every web scraping project:

- Manual Inspection
- Web Request
- Response Parsing
- Data Cleaning and Transformation

### Manual Inspection

To successfully create a web scraper, it is important to understand the structure of HTML pages. A website is a soup containing multiple moving parts of code. In order to extract relevant information, we have to inspect the web page for the pieces of relevance. Manual inspection is carried out before anything else, just to get a general feel for the information on the page. To manually inspect a page, right-click on the web page and select "Inspect". This allows you to peer into the soul, or source code, of the website. On clicking "Inspect", a console should pop up.
This console lets you navigate and see the corresponding HTML tag for every piece of information displayed on the page.

### Web Request

This involves sending a request to a website and receiving the response. The request can be configured to receive HTML pages, fetch files, set a retry policy, etc. The response received contains a lot of information we might want to use, such as the status code, content length, content type and response body. These are all important pieces of information that we then proceed to parse during the Response Parsing stage. There are libraries for this in Python (e.g. requests) and Go (e.g. net/http).

Here is an implementation in Python:

```python
"""Web requests using Python Requests"""
import requests
from requests.adapters import HTTPAdapter

# simple get request
resp = requests.get("")

# posting form data
payload = {'username': 'niceusername', 'password': '123456'}
session = requests.Session()
resp = session.post("", data=payload)

# setting retry policy to 5 max retries
session.mount("", HTTPAdapter(max_retries=5))
```

Here is the corresponding implementation in Go:

```go
/** Web requests using Go net/http **/
package main

import (
    "net/http"
    "net/url"
)

func main() {
    // simple get request
    resp, err := http.Get("")

    // posting form data
    resp, err = http.PostForm("", url.Values{"key": {"Value"}, "id": {"123"}})
}
```

Unfortunately, net/http does not provide retries, but we can make use of third-party libraries, e.g. pester:

```go
/** Retry policy with pester **/
package main

import "github.com/BoseCorp/pester"

func main() {
    client := pester.New()
    // set retry policy to 5 max retries
    client.MaxRetries = 5
    resp, _ := client.Get("")
}
```

### Response Parsing

This involves extracting information from the web page, and it still goes hand in hand with the manual inspection mentioned earlier. As the name suggests, we parse the response by making use of the attributes provided by the response, e.g. the response body, status code, content length, etc., of which the HTML contained in the response body is usually the most laborious to parse, assuming we were to do it ourselves. Thankfully, there are stable, structured libraries that help us parse HTML easily.

```html
<!-- Assume this is our html -->
<html>
<body>
  <p>This is some <b>regular <i>HTML</i></b></p>
  <table id="important-data">
    <tr><th>Name</th><th>Country</th><th>Weight(kg)</th><th>Height(cm)</th></tr>
    <tr><td>Smith</td><td>Nigeria</td><td>42</td><td>160</td></tr>
    <tr><td>Eve</td><td>Nigeria</td><td>49</td><td>180</td></tr>
    <tr><td>Tunde</td><td>Nigeria</td><td>65</td><td>175</td></tr>
    <tr><td>Koffi</td><td>Ghana</td><td>79</td><td>154</td></tr>
  </table>
</body>
</html>
```

For Python, the undeniable king of the parsers is BeautifulSoup4:

```python
import requests
from collections import namedtuple
from bs4 import BeautifulSoup

TableElement = namedtuple('TableElement', 'Name Country Weight Height')

request_body = requests.get('').text

# using beautiful soup with the 'lxml' parser
soup = BeautifulSoup(request_body, "lxml")

# extract the table
tb = soup.find("table", {"id": "important-data"})

# find each row
rows = tb.find_all("tr")
table = []
for row in rows:
    tds = row.find_all("td")
    # tds is empty for the header row, which contains only th elements
    if tds:
        values = [td.text for td in tds]
        table.append(TableElement(*values))

# print first person's name
print(table[0].Name)
```

For Go, there is no undisputed king of parsing in my opinion, but my immediate favourite has come to be Colly.
Colly combines the work of net/http into its library, so it can perform both the web request and the parsing:

```go
package main

import (
    "fmt"
    "strconv"

    "github.com/gocolly/colly"
)

type TableElement struct {
    Name    string
    Country string
    Weight  float64
    Height  float64
}

func main() {
    // table slice to hold data
    table := []TableElement{}

    // instantiate collector
    c := colly.NewCollector()

    // set up rule for the table with the important-data id
    c.OnHTML("table[id=important-data]", func(tab *colly.HTMLElement) {
        tab.ForEach("tr", func(_ int, tr *colly.HTMLElement) {
            tds := tr.ChildTexts("td")
            // tds is empty for the header row, which contains only th elements
            if len(tds) == 0 {
                return
            }
            // ParseFloat returns (value, error); errors are ignored here for brevity
            weight, _ := strconv.ParseFloat(tds[2], 64)
            height, _ := strconv.ParseFloat(tds[3], 64)
            newRow := TableElement{
                Name:    tds[0],
                Country: tds[1],
                Weight:  weight,
                Height:  height,
            }
            // append every row to the table
            table = append(table, newRow)
        })
    })

    // assuming the same html as above
    c.Visit("")

    // print first individual's name
    fmt.Println(table[0].Name)
}
```

### Data Cleaning and Transformation

If scraping the HTML tags somehow gave you the final form of the data you are looking for, then 🎉, your work is mostly done. If not, the next part is usually very integral and involves rolling up your sleeves and doing some cleaning and transformation. Text coming from within tags is usually messy and unnecessarily spaced, and might need some regular expression magic to further extract reasonable data.
```python
"""Extracting a content filename from Content-Disposition using Python regex"""
import re

# assume we already have a response
content_disp = response["Header"]["Content-Disposition"][0]
filename_re = re.compile(r'filename="(.*)"')
# group(1) holds the captured filename, without the surrounding quotes
filename = filename_re.search(content_disp).group(1)
print(filename)  # Avengers (2019).mp4
```

```go
/** Extracting a content filename from Content-Disposition using Go regexp **/
package main

import (
    "fmt"
    "regexp"
)

func main() {
    re := regexp.MustCompile(`filename="(.*)"`)
    content := response.Header["Content-Disposition"][0]
    filename := re.FindStringSubmatch(content)[1]
    fmt.Println(filename) // Avengers (2019).mp4
}
```

Furthermore, to clean data while easily melting, pivoting and manipulating its rows and columns, one could make use of Pandas in Python and its Go equivalent Gota:

```python
import pandas as pd

# using the previous table variable, a list of TableElement namedtuples
df = pd.DataFrame(data=table)
print(df.head())
#     Name  Country  Weight  Height
# 0  Smith  Nigeria      42     160
# 1    Eve  Nigeria      49     180
# 2  Tunde  Nigeria      65     175
# 3  Koffi    Ghana      79     154
```

```go
package main

import (
    "fmt"

    "github.com/go-gota/gota/dataframe"
)

func main() {
    // using the previous table variable, a slice of TableElement structs
    df := dataframe.LoadStructs(table)
    fmt.Println(df)
}
```

Refer to this awesome article on data cleaning with Pandas.

## Packaging and Deploying your scraper

### Packaging

This all depends on the use case of your scraper. You can determine your use case by asking questions like:

- Should the scraper stream data to another endpoint, or should it remain dormant until polled?
- Are you storing the data for use at a later time?

These will guide you in selecting the proper packaging for your scraper. The first question helps you consider the scraper as either a web service or a utility.
If it is not a streaming scraper, you should consider packaging it as a library or a Command Line Interface (CLI). Optionally, you can decide to build a Graphical User Interface (GUI) if you are into that sort of thing. For a CLI, I would suggest Python's argparse and ishell for Golang. For creating a cross-platform desktop GUI, PyQt5 for Python and Fyne for Golang should suffice.

NOTE 📝: If you are setting up a streaming-type scraper especially, you may have to apply a few tricks, such as rate limiting and IP and header rotation, so as not to get your scraper blacklisted.

The second question helps you decide whether to add a database into the mix or not. See Python's psycopg and Go's pq for connecting to a PostgreSQL database.

I have been able to rely on Flask for spawning simple Python servers, and using them to deploy scrapers isn't much of a hassle either:

```python
import requests
from flask import Flask
from flask import request, jsonify

app = Flask(__name__)

def scraping_function(resp):
    """Scraping function to extract data"""
    ...
    data = {"Name": "", "Description": ""}
    return jsonify(data)

@app.route('/')
def home():
    category = request.args.get('category')
    # dynamically set the url to be scraped based on the category received
    url = f"{category}"
    resp = requests.get(url)
    json_response = scraping_function(resp)
    return json_response

if __name__ == '__main__':
    app.run(port=9000)
```

For Golang, net/http is still the way to go for spawning small servers to interface with the logic.
```go
package main

import (
    "encoding/json"
    "log"
    "net/http"
    "os"
)

type Example struct {
    Name        string
    Description string
}

func scrapingFunction(r *http.Response) Example {
    // scraping function to extract data
    data := Example{}
    return data
}

func handler(w http.ResponseWriter, r *http.Request) {
    category := r.URL.Query().Get("category")
    if category == "" {
        http.Error(w, "category argument is missing in url", http.StatusForbidden)
        return
    }
    url := "" + category
    resp, _ := http.Get(url)
    scrapingResult := scrapingFunction(resp)
    // dump results
    jsonOutput, err := json.Marshal(scrapingResult)
    if err != nil {
        log.Println("failed to serialize response:", err)
        return
    }
    w.Header().Add("Content-Type", "application/json")
    w.Write(jsonOutput)
}

func main() {
    http.HandleFunc("/", handler)
    // check if the PORT environment variable has been set
    port := os.Getenv("PORT")
    if port == "" {
        port = "9000"
    }
    http.ListenAndServe(":"+port, nil)
}
```

To interact with the server, just use curl. If it is running on port 9000, the command to test both servers is shown as:

```
curl -s ''
{
  "Name": "",
  "Description": ""
}
```

### Deployment

Your scraper web application was never meant to live on your system. It was built to win, to conquer, to prosper, to fly (okay, enough of the cringey Fly - Nicki Minaj motivational detour). A PaaS such as Heroku makes it easy to deploy both Python and Golang web applications by just writing a simple Procfile. For Flask applications, it is much better to use a production-ready server such as Gunicorn; just add it to the requirements.txt file at the root of your application.

```
# Flask Procfile
web: gunicorn scraper:app
```

File layout for the Flask scraper:

```
|-scraper.py
|-requirements.txt
|-Procfile
```

Read "Deploying a Flask application to Heroku" for a deeper look into the deployment process. For the Golang application, make sure to have a go.mod file at the root of the project and a Procfile as well.
Using your go.mod, Heroku knows to generate a binary located in bin/ and named after your main package script.

```
# Go Procfile
web: ./bin/scraper
```

File layout for the net/http scraper:

```
|-scraper.go
|-go.mod
|-Procfile
```

Read "Getting Started on Heroku with Go" for more information.

## Example Scraping Project with Python

I was playing around with creating a reverse image search engine using Keras's ResNet50 and ImageNet weights, and made a small search page for it. The problem, however, was that whenever I got the image class in question, I would have to redirect to Google. I want to display the results on my own web application, I cried. I tried using Google's API, but at the same time I wanted the option of switching the search engine to something else, perhaps Bing. The corresponding script grew into its own open source project and ended up involving multiple contributors and numerous scraped engines. It is available as a Python library and has been used by over 600 other GitHub repositories, mostly bots that need search engine capability within their code. View Search Engine Parser on GitHub.

## Example Scraping Project with Go

I was visiting my favourite low-data movie download site, netnaija, but that day was particularly annoying. I moved to click and got redirected to pictures of b**bs. I just want free movies, I cried. I proceeded to create Gophie, a CLI tool which enables me to be the cheapskate I am without getting guilt-tripped: search and download netnaija movies from the comfort of my terminal. Scraping 1, Netnaija 0.

## Conclusion

In this article, you've come across some of the ideas that go into creating web scrapers for data retrieval. Some of this information, with a little more research on your part, can go into creating industry-standard scrapers.
I tried to make the article as end-to-end as possible; however, this is my first attempt at writing a technical article, so feel free to call my attention to anything crucial you feel I might have missed, or leave questions in the comments below. Now go and scrape everything!

PS ✏️ : I am not sure if it is legal to scrape Google, but Google scrapes everybody. Thank you

I love how you broke down everything in this article from html, colly and python. One thing that I surely relate to is the popups! Sites like netnaija have a lot of them and I'm glad you pointed this out. Good writeup Diretnan! Keep scribbling...

Thanks, that's a good overview of the basics. With scraping, things can get big and serious fast, and the codebase would get very big quickly. The majority of the work would be maintaining different scrapers/parsers for different websites that are always changing, etc. There's an excellent library/framework for creating scrapers (spiders) in Python: Scrapy. It takes a bit of learning and setup, but it's really, really powerful once you master the concepts. There are daemons like scrapyd, web admin interfaces like SpiderKeeper, etc., and these work quite nicely together. If you're serious about scraping then you'd need a proxy solution also. I've had really good experience with Luminati. They're expensive but the best. And then comes cracking the captchas and other advanced topics. So scraping is a big world of its own. Happy scraping! :)

Exactly!... things get complicated quickly but there are excellent libraries out there to help. Thanks for the helpful references too

Nice Article

I loved the way you wrote this article. Super good!!!!

Thanks!

Nice content
https://practicaldev-herokuapp-com.global.ssl.fastly.net/deven96/scrape-everything-building-web-scrapers-with-python-and-go-34i7
Asynchronous django: continued

I wrote a while ago about how you can add asynchrony to django. The last post is now only of historical interest; if you haven't read it, then don't read it. I wrote only about the django ORM, and the stumbling block there is accessing the database, but the approach can be generalized to any code containing I/O operations. What we want to get: in general, native support for asynchrony, in any form, and in this article – so that you can use the entire django API in an asynchronous context without changing anything. For example, make calls like await MyModel.objects.get().

In general, it turns out that adding asynchrony to django is essentially no more difficult than porting it to asynchronous once and for all. You can roughly imagine how it should be done: we take an asynchronous driver with an API roughly similar to django's drivers and replace, in the depths of the code, calls like driver.execute_sql(sql, params) with calls like await async_driver.execute_sql(sql, params)… Of course, we will immediately receive a SyntaxError, because you cannot use await in a normal function, and we will have to add async and await to the entire call stack.

The entire django API is perfectly applicable to the asynchronous version, with a little adaptation. For example, there will be no more lazy attributes, and access to such an attribute will only be possible if you previously fetched it from the database, for example, using prefetch_related or select_related (to access an attribute that requires a database query, you need a different API). In principle, the work is simple and even relatively small.

So, my point of view is that adding asynchrony (without breaking the old, synchronous API) is just as easy – perhaps with a little overhead. In general, there is such a thing in python as yield from, which is equally applicable in both synchronous and asynchronous contexts. Under the hood, of course, it doesn't change anything.
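To make the yield from idea concrete, here is a hypothetical, stripped-down sketch (none of these names come from django): the business logic is written once as a generator that yields "operations", and a thin synchronous or asynchronous runner decides how each operation is actually executed.

```python
import asyncio

def get_user_logic():
    # Written once; works under both runners below.
    row = yield ("SELECT 21",)
    return row * 2

def sync_fetch(sql):
    return 21  # stand-in for a blocking DB driver call

async def async_fetch(sql):
    return 21  # stand-in for an async DB driver call

def run_sync(gen):
    """Drive the generator with plain, blocking function calls."""
    try:
        op = next(gen)
        while True:
            op = gen.send(sync_fetch(*op))
    except StopIteration as stop:
        return stop.value

async def run_async(gen):
    """Drive the same generator, awaiting each operation instead."""
    try:
        op = next(gen)
        while True:
            op = gen.send(await async_fetch(*op))
    except StopIteration as stop:
        return stop.value

print(run_sync(get_user_logic()))                # 42
print(asyncio.run(run_async(get_user_logic())))  # 42
```

The point is that get_user_logic contains no async or await at all, yet the same code runs under both drivers.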
In my opinion, it should be used for projects that want to support both synchronous and asynchronous versions – even for those written from scratch, it seems to me. And for legacy code there is no question at all. So we have a pull request against the main django branch (by the way, pulling from the main branch has never led to conflicts). It, of course, only supports a small portion of the django API – of course, because there were no other goals. What do we see in it? A lot of yield from calls before functions – as I promised. SyncDriver and AsyncDriver are not actually drivers, but classes with helper functions. I was too lazy to rename them, and it's not very clear what to call them anyway. As you can see, they implement the function execute(operations). The fact is that we turned our code into a generator, and these same 2 classes, which are not drivers, request "operations" from it, one at a time, which need to be executed: functions, as they are, with parameters.

Where are the drivers themselves? This is also quite interesting. Here's where:

class SQLCompiler:
    a = Asynchrony()

    @a.branch()
    def execute_sql(self, result_type=MULTI, chunked_fetch=False,
                    chunk_size=GET_ITERATOR_CHUNK_SIZE):
        ...

    @a.branch()
    async def execute_sql(self, result_type=MULTI, chunked_fetch=False,
                          chunk_size=GET_ITERATOR_CHUNK_SIZE):
        ...

As you understand, these are 2 branches of the same function – synchronous and asynchronous – which use the driver. It's that simple, and you can have as many such branches as you want. By the way, asyncpg is not in the least compatible with dbapi – for no good reason, I'm sure.

Everything else is not that interesting. This is how, for example, I made awaitable querysets:

def __await__(self):
    return self.__iter__().__await__()

Or, for example, many of our functions contain yield from – what to do if our function is itself a generator (in django, yield is often used to create iterators)?
I found a simple solution: put the yield constructs inside another function (in my case, def iterator), and they won't conflict. You can also see the decorator @consume_generator. It turns the generator into a regular function (which will possibly return a coroutine). It is designed to be called from outside (read: by the user). In general, the curious will see everything there. That's all. Feel free to star the pwtail/django repository.
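As a rough illustration of the @consume_generator idea (this is my own sketch, not the code from the pull request), a decorator that drives a generator to completion and returns its final value might look like this; in the real code the yielded operations would be executed by a driver, while here they are simply discarded:

```python
import functools
import types

def consume_generator(func):
    """If the wrapped call returns a generator, run it to completion
    and return the value carried by its StopIteration."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        if isinstance(result, types.GeneratorType):
            try:
                while True:
                    next(result)  # a real driver would execute each yielded operation
            except StopIteration as stop:
                return stop.value
        return result
    return wrapper

@consume_generator
def fetch_count():
    yield "pretend this is an I/O operation"
    return 3

print(fetch_count())  # 3
```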
https://prog.world/asynchronous-django-continued/
- What's new in CherryPy 3.0
- Speed
- Config
- Tools
- Hooks
- Dispatch
- URL construction
- Autoreload
- WSGI improvements
- Logging
- Code inspection
- Redirection and Deadlock
- Drop privileges
- Native support for mod_python
- Multiple HTTP server support

What's new in CherryPy 3.0

This document only describes new features in CherryPy 3.0. A detailed "How To Upgrade" document is at UpgradeTo30.

Speed

CherryPy 3 is much faster than CherryPy 2 (as much as three times faster in benchmarks).

Config

_cp_config: attaching config to handlers

In CP 2, you could only specify "config" in a config file or dict, where it was always keyed by URL. For example:

[/path/to/page]
methods_with_bodies = ("POST", "PUT", "PROPPATCH")

It's obvious that the extra method is the norm for that path; in fact, the code could be considered broken without it. In CherryPy 3, you can attach that bit of config directly on the page handler:

def page(self):
    return "Hello, world!"
page.exposed = True
page._cp_config = {"request.methods_with_bodies": ("POST", "PUT", "PROPPATCH")}

This can be done at any point in the cherrypy tree; for example, we could have attached that config to a class which contains the page method:

class SetOPages:

    _cp_config = {"request.methods_with_bodies": ("POST", "PUT", "PROPPATCH")}

    def page(self):
        return "Hullo, Werld!"
    page.exposed = True

Separate configuration scopes

CherryPy 2 used a single config dict for global, per-application, and per-path config. CherryPy 3 separates these scopes in a couple of ways: First, and most important, cherrypy.config now only holds global config data; that is, config entries which affect all mounted applications. Each Application object keeps its own config in app.config. You must pass global config to cherrypy.config.update, and per-application config to cherrypy.tree.mount.
You may use a single config file and hand the same file (or filename) to both methods; put your global config in a [global] section to signal to cherrypy.config.update which entries to grab. Second, when a request is processed, these two config sources (global and per-application) are merged and collapsed to form a single config dict stored inside cherrypy.request.config. This dict contains only those config entries which apply to the given request; that is, per-path config. Note that when you do an InternalRedirect, this config is recalculated for the new path.

Configuration namespaces

In CherryPy 2, config entries were somewhat haphazard about their naming and scope. They were always inspected as late as possible, often multiple times, and their default values were locked away inside the source code. In CherryPy 3, all config entries (except "environment") are now prefixed with a namespace. When you provide a config entry, it is now bound as early as possible to the actual object referenced by the namespace; for example, CP 2's "stream_response" is now "response.stream", and actually sets the "stream" attribute of cherrypy.response. In this way, you can easily determine the default value by firing up a python interpreter and typing:

>>> import cherrypy
>>> cherrypy.response.stream
False

This also means that some objects (the Request class in particular) have grown a number of new attributes, to avoid the need for config.get(). Entries from each namespace may be allowed in the global, application root ("/") or per-path config, or a combination.

Custom config namespaces

You can define your own namespaces if you like, and they can do far more than simply set attributes. The test/test_config module, for example, shows an example of a custom namespace that coerces incoming params and outgoing body content.
If you need additional code to run when all your namespace keys are collected, you can supply a callable context manager in place of a normal function for the handler. Context managers are defined in PEP 343.

Tools

Using builtin tools

Filters are gone! In their place are Tools, which allow for much more flexibility. If your favorite builtin filter has changed to a tool, it's easy to convert your code. See UpgradeTo30 for a complete list of name changes. Instead of this:

[/docroot]
static_filter.on: True
static_filter.root: "/path/to/app"
static_filter.dir: 'static'

...use the "tools" namespace like this:

[/docroot]
tools.staticdir.on: True
tools.staticdir.root: "/path/to/app"
tools.staticdir.dir: 'static'

We can also use our new friend _cp_config (see above).

New and improved builtin tools

tools.proxy
This replaces and enhances the old baseurl_filter. The old way:

baseurl_filter.base_url = ""
baseurl_filter.use_x_forwarded_host = False

The new way uses the "tools.proxy" namespace instead.

tools.log_tracebacks
This replaces the CP 2 feature: "server.log_tracebacks".

tools.log_headers
This replaces the CP 2 feature: "server.log_request_headers".

tools.err_redirect
Turn this tool on to redirect all unhandled errors to a different page. Supply the new URL via tools.err_redirect.url. By default, this raises InternalRedirect. To use HTTPRedirect, set tools.err_redirect.internal to False.

tools.etags
A new tool for ETag validation.

tools.basic_auth
A tool for doing basic authentication. It takes a "realm" setting (a string) and a "users" dict of {username: password} pairs (or a callable which returns that dict). If authentication fails, 401 Unauthorized is raised.

tools.digest_auth
A tool for doing Digest authentication (RFC 2617). It takes a "realm" setting (a string) and a "users" dict of {username: password} pairs (or a callable which returns that dict). If authentication fails, 401 Unauthorized is raised.

Custom tools

You can make your own tools and register them to gain all the benefits the builtin Tools enjoy.
Usually, this is as simple as:

cherrypy.tools.my_tool = cherrypy.Tool('before_request_body', my_callback)

cherrypy.tools is an instance of _cptools.Toolbox. When you add your Tool to it, config entries in the "tools.my_tool.*" namespace automatically get passed to your callback as keyword arguments. See cherrypy._cptools for more examples.

Custom toolboxes

If you're building a framework on top of CherryPy, you might want to use your own toolbox to avoid conflicting with builtin tools. It's just a single line:

mytools = cherrypy._cptools.Toolbox("mytools")

This one line creates a new Toolbox and automatically registers the "mytools" config namespace.

Hooks

Tools use hooks under the covers. Each Hook has a "callback" attribute, and is registered at a "hook point" in a HookMap called cherrypy.request.hooks. As a request is processed, hooks are called at the following hook points: 'on_start_resource', 'before_request_body', 'before_handler', 'before_finalize', 'on_end_resource', 'on_end_request', 'before_error_response', and 'after_error_response'.

If you can't make a Tool, you can provide custom hooks in config by writing hooks.<hookpoint> = function, and the function you provide will be called at that hook point. If you want to do it in code (especially for a custom Tool, see above), use cherrypy.request.hooks.attach(point, callback, failsafe=None, priority=None, **kwargs).

Some Hook objects are "failsafe", which means that they are guaranteed to run even if other Hooks in the same hook point raise exceptions (if more than one fails, they are all logged, but only the last exception is raised). You can either set Hook.failsafe = True, or provide it as Hook(callback, failsafe=True). Additionally, you may be able to set callback.failsafe = True, in which case the Hook will automatically copy that value to itself.

Hook objects also have a "priority", in the closed interval of [0, 100].
By default, Hook.priority is 50, but you can change it (as with failsafe, above). This is a necessary evil to make sure that, for example, the encoding Tool's hooks run before the gzip Tool's hooks (if they were reversed, the request would almost certainly fail, because the encoding Tool was designed to operate on text output, not binary).

Dispatch

"Dispatch" refers to the way the framework looks up and calls application code. By default, CherryPy traverses a tree of objects to find a page handler that you've written. It then calls that function, passing any virtual path segments as positional arguments and any request parameters (form or querystring values) as keyword arguments. In CherryPy 2, this process was hard-coded into the core; to change it, you had to subclass the Request object. CherryPy 3 separates dispatch into a new "request.dispatch" object, which you can specify in config per-path. It must refer to a callable that 1) takes a path_info argument, and 2) sets cherrypy.request.handler (a callable that takes no arguments) and cherrypy.request.config (a flat dict containing all config entries that apply to the current request). There's a new MethodDispatcher and RoutesDispatcher in cherrypy.dispatch, too. Feel free to try them out.

URL construction

There's a new cherrypy.url(path) function which can be used to construct portable URLs for your application. It calculates new paths relative to the current SCRIPT_NAME (if you pass a path which starts with "/") or relative to the current PATH_INFO (if you pass a path which doesn't start with "/").

Autoreload

The autoreload feature has been completely reworked. In CherryPy 2.x, it would immediately start a second process (using os.spawnve(os.P_WAIT, ...)). This caused repeated confusion and complaints when applications would "mysteriously" run startup code twice. In CherryPy 3, the autoreload mechanism does nothing to the initial process; it simply replaces its own process when needed (using os.execv).
You can also now trigger this behavior yourself, outside of the autoreload file-checking logic, by calling cherrypy.engine.reexec. Finally, if your platform supports the HUP signal, a SIGHUP will automatically call cherrypy.engine.reexec (whereas SIGTERM now shuts down CherryPy). We've also borrowed an idea from TurboGears: engine.autoreload_match is a regular expression pattern (default .*) that you can change to filter which files are monitored.

WSGI improvements

WSGI server

The builtin WSGI server is now HTTP/1.1 compliant! It correctly handles persistent connections, pipelining, Expect/100-continue, and the "chunked" transfer-coding (receive only). It also now emits a custom WSGI environ entry: ACTUAL_SERVER_PROTOCOL. Clients can calculate min(SERVER_PROTOCOL, ACTUAL_SERVER_PROTOCOL) in order to determine which level of HTTP features to support. CherryPy applications can see this min() value in cherrypy.request.protocol. It also supports HTTPS/SSL! Just set server.ssl_certificate and server.ssl_private_key to the names of each file in your config. As always, the code in wsgiserver.py is usable anywhere, as it doesn't depend on CherryPy in any way. Feel free to use it with other WSGI stacks.

WSGI applications

cherrypy.Application objects are now WSGI applications, automatically. Whenever you call cherrypy.tree.mount(Root()), the "Root" object you pass is wrapped up in an Application object, and added to cherrypy.tree.apps. One big difference between CherryPy Application objects and a lot of other WSGI applications is that CherryPy apps usually know their own SCRIPT_NAME before being called. If you cannot or don't want to set this in stone, set app.script_name to None, and the Application will provide it from the WSGI environ['SCRIPT_NAME'] on each request. In addition, cherrypy.tree is also usable as a "WSGI application"; it acts as dispatching middleware to all mounted apps.
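For readers unfamiliar with the WSGI interface these Application objects implement, here is a minimal, framework-free example using only the standard library (the app itself is hypothetical): any callable taking (environ, start_response) and returning an iterable of byte strings qualifies.

```python
from wsgiref.util import setup_testing_defaults

def simple_app(environ, start_response):
    # A complete WSGI application in three lines.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from " + environ["PATH_INFO"].encode()]

# Exercise it without a real server, using wsgiref's testing helpers.
environ = {}
setup_testing_defaults(environ)  # fills in PATH_INFO, SERVER_NAME, etc.
captured = {}

def start_response(status, headers):
    captured["status"] = status
    captured["headers"] = headers

body = b"".join(simple_app(environ, start_response))
print(captured["status"], body)  # 200 OK b'Hello from /'
```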
WSGI middleware

In addition to mounting cherrypy.Application objects onto cherrypy.tree, you can also mount plain old WSGI callables, using cherrypy.tree.graft(wsgi_callable, script_name=""). Then hand cherrypy.tree to your WSGI server, and it will happily dispatch to both CherryPy apps and foreign WSGI apps. The profile module is now implemented as WSGI middleware, too. Use cherrypy.lib.profiler.make_app(nextapp, path, aggregate=False) to use it. If 'aggregate' is False, a separate profile dump will be made for each request. If True, all requests (for the same 'nextapp') will be aggregated together into a single results file. Finally, there's a new "pipeline" helper in cherrypy.wsgi. The config entry wsgi.pipeline = [(name, wsgiapp_factory), ...] will pipe the request through the supplied wsgiapps before handing it off to the CherryPy application. See help(cherrypy.wsgi.CPWSGIApp) for details. If you want to do it in code instead of config, write:

app = cherrypy.Application(Root())
app.wsgiapp.pipeline.append((name, wsgiapp_factory))
cherrypy.tree.mount(app, config={'/': root_conf})

Logging

CherryPy 3 now uses the standard library's logging module, which means you have access to its RotatingFileHandler, SocketHandler, SysLogHandler, NTEventLogHandler, SMTPHandler, and HTTPHandler (and other goodies). In CherryPy 2, log config was specifiable per-path (since it used very simple handlers). Now, there are separate error and access logs for each mounted Application (named "cherrypy.error.%s" % id(app)), as well as global error and access logs (named "cherrypy.error" and "cherrypy.access"). This naming scheme means that messages sent to "cherrypy.access.723863" will automatically also be sent to the global "cherrypy.access" log.
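The per-application logger names rely on the standard logging module's dotted-name hierarchy. The following standalone snippet (logger names borrowed from the text above; no CherryPy involved) shows how a record sent to a child logger reaches a handler attached to its parent:

```python
import io
import logging

# Attach a handler only to the parent, "cherrypy.access".
stream = io.StringIO()
parent = logging.getLogger("cherrypy.access")
parent.setLevel(logging.INFO)
parent.addHandler(logging.StreamHandler(stream))

# Log through a child named after a specific application id.
child = logging.getLogger("cherrypy.access.723863")
child.info("GET /index 200")

# The record propagated up the dotted-name hierarchy to the parent's handler.
print(stream.getvalue().strip())  # GET /index 200
```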
Code inspection

A lot of work has been done to make CherryPy 3 play nice with the interactive interpreter. If you don't know or can't recall how something works, or even what features are available, start with help(cherrypy), and work your way through the available attributes. Two items in particular need mentioning:

- The cherrypy.request and response objects are dummy objects, and exist only for your benefit when you write an application. The values of their attributes should be considered read-only, and are only intended to let you see default values easily.
- Tools have two faces. They have their own answers to help() that tell you how to use them as tools; if you want to see the docstrings for the functions they wrap, try help(tools.<toolname>.callable) instead. They do, however, copy the argument names of the wrapped function to themselves as attributes (all None), so you should be able to use dir(tools.<toolname>) with no problems.

Redirection and Deadlock

The cherrypy.request object now has improved support for InternalRedirect situations. First, on redirect, it creates an entirely new Request object, and sets Request.prev to point to the previous Request object. It also inspects the list of seen URLs at each redirect, and, if the new path + querystring has already been visited during this request, raises an error. This stops infinite redirect loops. If for some reason you want to visit the same path twice in a single request, set wsgi.iredir.recursive = True in config. You may also now raise InternalRedirect at any time during the run of a Request. In the past, you could only do so during the "before_main" hook and inside page handlers.

Each response object also has a time attribute (set to time.time() when created), a timeout attribute (default 300 seconds), and a timed_out attribute, a bool. Assuming cherrypy.engine.deadlock_poll_freq is greater than 0, a monitor thread will check if now > response.time + response.timeout; if so, it sets response.timed_out. This is checked at various places in the core, and cherrypy.TimeoutError
is raised if response.timed_out is True. Feel free to check it and raise TimeoutError in your own code's critical sections.

Drop privileges

There is a new engine.drop_privileges function which may setuid/gid and/or set a new umask, or raise NotImplementedError, depending on your platform. If you're on UNIX, you'll probably see engine.uid, engine.gid (names or numbers), and engine.umask attributes which you can set (from config, if you want). If you're on Windows, you'll only see the umask attribute. Other platforms may see none of these. Whatever happens, it'll get logged, so you know when it works, and it'll raise errors when it doesn't work.

Native support for mod_python

The popular "mpcp" module has been ... uh ... "embraced and extended" and is now included in the standard CherryPy 3 distribution as cherrypy._cpmodpy. Thanks to Jamie Turner for his ingenuity and generosity!

Multiple HTTP server support

The new cherrypy.server object can now control more than one HTTP server. Add additional ones via server.httpservers[myserver] = (host, port). This can be used to listen on multiple ports or protocols.
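To illustrate how a Toolbox-style config namespace can map entries onto a callback's keyword arguments, here is a pure-Python sketch of the mechanism described under "Custom tools" above — not CherryPy's actual implementation, and every name in it is invented for the example:

```python
def run_tool(toolbox, config, name, callback):
    # Collect "toolbox.name.*" config entries and pass them as kwargs.
    prefix = "%s.%s." % (toolbox, name)
    kwargs = {key[len(prefix):]: value
              for key, value in config.items()
              if key.startswith(prefix)}
    return callback(**kwargs)

def staticdir(on=False, root="", dir=""):
    # Toy stand-in for a tool callback: join the configured paths.
    return "%s/%s" % (root, dir) if on else None

config = {
    "tools.staticdir.on": True,
    "tools.staticdir.root": "/path/to/app",
    "tools.staticdir.dir": "static",
}
print(run_tool("tools", config, "staticdir", staticdir))  # /path/to/app/static
```

This is why registering a Tool on a toolbox is enough for its config entries to reach the callback: the namespace prefix alone determines the routing.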
http://cherrypy.org/wiki/WhatsNewIn30
Introduction

After you have successfully created a dynamic recommendation system, this time MAGE will teach you how to generate link predictions by using a new spell called node2vec. If you don't know what node2vec is or what node embeddings are, we've got you covered with two blog posts for deeper understanding:

- Introduction to node embedding - In this article, you can check out what node embeddings are, where we use them, why we use them, and how we can get embeddings from a graph.
- How node2vec works - After the first blog post, you should have an idea of how node2vec works. But if you want to fully understand the algorithm and its benefits, and check out how it works on a few examples, take a look at this node2vec blog post, which covers everything mentioned.

As already mentioned, link prediction refers to the task of predicting missing links or links that are likely to occur in the future. In this tutorial, we will make use of the MAGE spell called node2vec. Also, we will use Memgraph to store data, and gqlalchemy to connect from a Python application. The dataset will be similar to the one used in this paper: Graph Embedding Techniques, Applications, and Performance: A Survey. Don't worry, you are in safe hands: MAGE will guide you through dataset parsing, the creation of queries that will be used to import data into Memgraph, embedding calculation with the node2vec algorithm in MAGE, and the metrics report. Now let's get to the fun part.

Prerequisites

For this to work, you will need:

- The MAGE graph library
- Memgraph Lab - the graph explorer for querying Memgraph and visualizing graphs
- gqlalchemy - a Python driver and object graph mapper (OGM)

You can also try out MAGE on Memgraph Playground.
Contents

This is how we will set up our tutorial:

- Dataset and query import
- Splitting edges into test and train sets
- Run node2vec on the train set to generate node embeddings
- Get potential edges from embeddings
- Rank potential edges to get top K predictions
- Compare predicted edges with the test set

1. Dataset and query import

We will work on the High Energy Physics Collaboration Network. The dataset contains 12008 nodes and 118521 edges. MAGE has prepared a script that will help you parse the dataset and import it into Memgraph. After you have downloaded the dataset from the link above, you should see the following contents:

# Directed graph (each unordered pair of nodes is saved once): CA-HepPh.txt
# Collaboration network of Arxiv High Energy Physics category (there is an edge if authors co-authored at least one paper)
# Nodes: 12008 Edges: 237010
# FromNodeId    ToNodeId
17010   1943
17010   2489
17010   3426
17010   4049
17010   16961
17010   17897

The dataset description says it's a directed graph and that it contains 237010 edges. But earlier we mentioned it contains 118521 edges. Actually, both are true; it depends on your point of view. The graph in question is directed, but it contains edges in both directions: from node u to node v and from node v to node u, u⟶v and u⟵v. The direction means that author u co-authored at least one paper with author v. Since co-authoring goes both ways, we can act as if the graph is undirected with only one edge, u - v. The script below will create exactly 118521 undirected edges. So all is good. Phew.

We will import these 118521 edges and act as if they are undirected. The node2vec algorithm in MAGE accepts a parameter that controls whether to treat the graph from Memgraph as directed or undirected. Note: Memgraph only accepts directed graphs, but the node2vec algorithm saves the day for us in this case. Here is the function to parse edges from the file. It will return a list of int tuples, each representing an undirected edge.
FILENAME = "CA-HepPh.txt"

def parse_edges_dataset(filename=FILENAME) -> List[Tuple[int, int]]:
    with open(filename) as file:
        lines = file.readlines()
    edges: Dict[Tuple[int, int], int] = {}
    for line in lines:
        if line.startswith("#"):
            continue
        line = line.strip()
        line_parts = line.split("\t")
        edge = (int(line_parts[0]), int(line_parts[1]))
        if (edge[1], edge[0]) in edges:
            continue
        edges[edge] = 1
    return list(edges.keys())

We need to create Cypher queries from the given undirected edges. If you don't know anything about Cypher, here is a short getting started guide. You can also learn a lot about graph algorithms and Cypher queries on Memgraph Playground. We need to create queries from edges so that we can run each query and import data into Memgraph. Let's use the MERGE clause, which ensures that a pattern we are looking for will exist only once in the database after a query is run. That means that if the pattern (node or edge) is not found, it will be created. Now, let's create the queries:

NODE_NAME = "Collaborator"
EDGE_NAME = "COLLABORATED_WITH"

edge_template = Template(
    'MERGE (a:$node_name_a {id: $id_a}) MERGE (b:$node_name_b {id: $id_b}) CREATE (a)-[:$edge_name]->(b);')

def create_queries(edges: List[Tuple[int, int]]):
    queries: List[str] = ["CREATE INDEX ON :{node_name}(id);".format(node_name=NODE_NAME)]
    for source, target in edges:
        queries.append(edge_template.substitute(id_a=source, id_b=target,
                                                node_name_a=NODE_NAME,
                                                node_name_b=NODE_NAME,
                                                edge_name=EDGE_NAME))
    return queries

def main():
    edges = parse_edges_dataset()
    queries = create_queries(edges)
    file = open(OUTPUT_FILE, 'w')
    file.write("\n".join(queries))
    file.close()

if __name__ == '__main__':
    main()

The create_queries() function will return a list of strings. Each string represents a query we can run against our database. Note: you can import datasets through one of the querying tools. We have developed our drivers using the Bolt protocol to deliver better performance.
You can use Memgraph Lab, mgconsole, or even one of our drivers, like the Python driver used in this tutorial. We recommend you use Memgraph Lab due to its simple visualization, ease of use, export and import features, and memory usage. But here, we will use a Python driver in the form of gqlalchemy.

2. Splitting edges into test and train sets

Theory

First, we need to split our edges into a testing (test) and training (train) set. Let's explain why. Our goal is to perform link prediction. This means that we need to be able to correctly predict new edges that might appear from existing ones. Since this is a definitive dataset, there will be no new edges. In order to test the algorithm, we remove a part of the existing edges and make predictions based on the remaining ones. A correct prediction would recreate the edges we have removed. In the best-case scenario, we would get the original dataset back.

We will randomly remove 20% of the edges. This will represent our test set. We will leave all the nodes in the graph; it doesn't matter that some of them could be completely disconnected from the graph. Next, we will run node2vec on the remaining edges (80% of them, in our case something like 94000 edges) to get node embeddings. We will use these node embeddings to predict new edges. You can imagine this case as the Twitter network, where new connections (follows) appear every second, and we want to be able to predict new connections from the connections we already have. How exactly we will predict which edges will appear is still left to explain, but we hope that you understand the WHY part of removing 20% of the edges. 🤞

Practical

First, we need a connection to Memgraph so we can get the edges and split them into two parts (train set and test set). For edge splitting, we will use scikit-learn. To make a connection to Memgraph, we will use gqlalchemy.
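As a dependency-free illustration of the 80/20 split (the tutorial itself uses scikit-learn's train_test_split; this sketch only mimics its behavior with the standard library):

```python
import random

def split_edges(edges, test_size=0.2, seed=0):
    # Shuffle a copy of the edge list, then cut off the last `test_size` share.
    edges = list(edges)
    random.Random(seed).shuffle(edges)
    cut = int(len(edges) * (1 - test_size))
    return edges[:cut], edges[cut:]

edges = [(i, i + 1) for i in range(10)]
train, test = split_edges(edges)
print(len(train), len(test))  # 8 2
```

On the full dataset the same 0.2 ratio yields roughly 94800 train edges and 23700 test edges out of 118521.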
From the GitHub description of gqlalchemy: "GQLAlchemy is a library developed to assist in writing and running queries on Memgraph. GQLAlchemy supports high-level connection to Memgraph as well as modular query builder."

After we create a connection to Memgraph, we will call the two functions below in order to run a query. Such a query can be anything: getting edges, removing edges, or running a node2vec procedure.

memgraph = gqlalchemy.Memgraph("127.0.0.1", 7687)

def call_a_query_and_fetch(query: str) -> Iterator[Dict[str, Any]]:
    return memgraph.execute_and_fetch(query)

def call_a_query(query: str) -> None:
    memgraph.execute(query)

Okay, so to get the edges we need to make a query. With the connection we have, we will get the edges, split them into two sets, and then make queries (plural) to remove each edge in the test set from the graph.

edge_remove_template = Template(
    'MATCH (a:$node_a_name{id: $node_a_id})-[edge]-(b:$node_b_name{id: $node_b_id}) DELETE edge;')

def get_all_edges() -> List[Tuple[gqlalchemy.Node, gqlalchemy.Node]]:
    results = Match() \
        .node(dataset_parser.NODE_NAME, variable="node_a") \
        .to(dataset_parser.EDGE_NAME, variable="edge") \
        .node(dataset_parser.NODE_NAME, variable="node_b") \
        .execute()
    return [(result["node_a"], result["node_b"]) for result in results]

def remove_edges(edges: List[Tuple[gqlalchemy.Node, gqlalchemy.Node]]) -> None:
    queries = [edge_remove_template.substitute(node_a_name=dataset_parser.NODE_NAME,
                                               node_a_id=edge[0].properties["id"],
                                               node_b_name=dataset_parser.NODE_NAME,
                                               node_b_id=edge[1].properties["id"])
               for edge in edges]
    for query in queries:
        call_a_query(query)

def split_edges_train_test(edges: List[Tuple[gqlalchemy.Node, gqlalchemy.Node]], test_size: float = 0.2) -> (
        List[Tuple[gqlalchemy.Node, gqlalchemy.Node]], List[Tuple[gqlalchemy.Node, gqlalchemy.Node]]):
    edges_train, edges_test = train_test_split(edges, test_size=test_size, random_state=int(time.time()))
    return edges_train, edges_test

This will be
the “main” part of our program. We want you to notice a few things here:
- When getting all edges with a query, instead of an edge object we got two nodes (gqlalchemy.Vertex objects); one represents the head and the other the tail of the edge, but we will treat the graph as undirected.
- The split_edges_train_test() function accepts these edges and splits them into a train and a test set.
- We received objects, but it will be easier to work with the id property of each node, so we map our list of edges to a list of int tuples, where one pair represents one undirected edge.

def main():
    print("Getting all edges...")
    edges = get_all_edges()
    print("Current number of edges is {}".format(len(edges)))
    print("Splitting edges in train, test group...")
    edges_train, edges_test = split_edges_train_test(edges=edges, test_size=0.2)
    print("Splitting edges done.")
    print("Removing edges from graph.")
    remove_edges(edges_test)
    print("Edges removed.")
    train_edges_dict = {(node_from.properties["id"], node_to.properties["id"]): 1
                        for node_from, node_to in edges_train}
    test_edges_dict = {(node_from.properties["id"], node_to.properties["id"]): 1
                       for node_from, node_to in edges_test}

3. Run node2vec on the train set to generate node embeddings

Theory

After we have removed edges, we need to run the node2vec algorithm. Node embeddings will be calculated from the train set of edges only. To repeat: we will get embeddings for every node, but we will only use a certain amount of edges (80%) from the original graph to compute them. If a new node were to appear in the graph, we couldn't predict anything for it, since we wouldn't know it exists yet. We can only make predictions for the nodes we have in the graph.

Practical

Here we will call the node2vec query module to calculate node embeddings. There is a procedure called set_embeddings() in the node2vec module, which we will use to set the embeddings as properties in the graph.
So even if we lose power on the computer, we will still have those embeddings, since Memgraph acts as an in-memory database. Node2vec has some crucial hyperparameters like num_walks and walk_length. Setting them to higher values makes the algorithm run longer, but we should get better predictions as long as the embeddings don't overfit the data we have. Another thing we need to handle is setting proper p and q parameters. Since we are dealing with a collaboration network, we will try to predict connections inside natural clusters, and we can obtain clusters by sampling walks in a more DFS-like manner. If all these terms sound confusing to you, we suggest checking out the blog post on node2vec where we explained them. 💪 If we were to set the node2vec params in a more BFS-like manner, so that hyperparameter p is smaller than hyperparameter q, we would be looking for hubs, which isn't our intention.

# NODE2VEC PARAMS
is_directed: bool = False
p = 1  # return parameter
q = 1 / 256  # in-out parameter
num_walks = 10
walk_length = 80
vector_size = 100
alpha = 0.02
window = 5
min_count = 1
seed = int(time.time())
workers = 4
min_alpha = 0.0001
sg = 1
hs = 0
negative = 5
epochs = 5

def set_node_embeddings() -> None:
    call_a_query("""CALL node2vec.set_embeddings({is_directed}, {p}, {q}, {num_walks}, {walk_length},
    {vector_size}, {alpha}, {window}, {min_count}, {seed}, {workers}, {min_alpha}, {sg}, {hs}, {negative})
    YIELD *""".format(
        is_directed=is_directed, p=p, q=q, num_walks=num_walks, walk_length=walk_length,
        vector_size=vector_size, alpha=alpha, window=window, min_count=min_count, seed=seed,
        workers=workers, min_alpha=min_alpha, sg=sg, hs=hs, negative=negative))

def get_embeddings_as_properties():
    embeddings: Dict[int, List[float]] = {}
    results = Match() \
        .node(dataset_parser.NODE_NAME, variable="node") \
        .execute()
    for result in results:
        node: gqlalchemy.Node = result["node"]
        if not "embedding" in node.properties:
            continue
        embeddings[node.properties["id"]] = node.properties["embedding"]
    return embeddings

And this is our main part. After the node2vec query module finishes its calculations, we can get those embeddings directly from the graph, which is awesome.

def main():
    test_edges_dict = {(edge[0].properties["id"], edge[1].properties["id"]): 1 for edge in edges_test}
    # Calculate and get node embeddings
    print("Setting node embeddings as graph property...")
    set_node_embeddings()
    print("Embedding for every node set.")
    node_emeddings = get_embeddings_as_properties()

4. Get potential edges from embeddings

Theory

And now to the most important section ⟶ edge prediction. How do we predict edges exactly? What is the idea behind it? We expect nodes that have similar embeddings, yet no edge between them, to form a new edge in the future. It's as simple as that. We just need a good measure to check whether two nodes have similar embeddings. One such measure is cosine similarity. Image 3 above explains cosine similarity, the measure that calculates how similar two vectors are: it is essentially the cosine of the angle between the two vectors. Notice that node embeddings are themselves vectors in a multi-dimensional space.

Practical

So for every pair of node embeddings, we will calculate the cosine similarity to check how similar the two embeddings are. The problem with 12,000 nodes is that there will be around 72 million pairs (72,000,000), which means that an average computer with 16 GB of RAM would die at some point (open up a Chrome tab if you dare). To fix that, we will only hold a maximum of 2 million pairs in memory at any given point in time, and we will run a sorting step to keep only the top K pairs. What is this top K number? We will answer that shortly; it's related to the precision@k measurement method.
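As a small aside (not part of the tutorial's code), cosine similarity between two embedding vectors can be sketched with NumPy like this; note that the tutorial's own calculate_adjacency_matrix scores pairs with the raw dot product instead:

```python
import numpy as np

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|)
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Vectors pointing the same way score ~1.0, orthogonal vectors score 0.0.
print(cosine_similarity([1.0, 2.0], [2.0, 4.0]))  # ~1.0 (same direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0 (orthogonal)
```

Because node2vec embeddings of structurally close nodes point in similar directions, pairs with a similarity near 1 are the candidates for new edges.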
def calculate_adjacency_matrix(embeddings: Dict[int, List[float]], threshold=0.0) -> Dict[Tuple[int, int], float]:
    def get_edge_weight(i, j) -> float:
        return np.dot(embeddings[i], embeddings[j])

    nodes = list(embeddings.keys())
    nodes = sorted(nodes)
    adj_mtx_r = {}
    cnt = 0
    for pair in itertools.combinations(nodes, 2):
        if cnt % 1000000 == 0:
            adj_mtx_r = {k: v for k, v in sorted(adj_mtx_r.items(), key=lambda item: -1 * item[1])}
            adj_mtx_r = {k: adj_mtx_r[k] for k in list(adj_mtx_r)[:3 * PRECISION_AT_K_CONST]}
            gc.collect()
        if cnt % 10000 == 0:
            print(cnt)
        weight = get_edge_weight(pair[0], pair[1])
        if weight <= threshold:
            continue
        cnt += 1
        adj_mtx_r[(pair[0], pair[1])] = get_edge_weight(pair[0], pair[1])
    return adj_mtx_r

5. Rank potential edges to get top K predictions

To calculate the accuracy of our implementation, we will use a well-known precision measure called precision@k. Some nodes (their embeddings) will be more similar than others, meaning the cosine similarity value will be larger. Let's say our manager arrives and says: give me the top 10 predictions. Would you give him the pairs with lower or higher similarities? The best ones, of course. The same principle applies here: we take the top K predictions and evaluate our model on them. At every point, we remember how many correct guesses we have had so far, and we divide that number by the number of tries so far. This is how it works for precision@k: the first one is easy, 1 guess / 1 try. For the second one we have 1 guess / 2 tries. The rest is clear.
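The running-ratio logic just described can be sketched on toy data (edges invented for illustration; the tutorial's compute_precision_at_k applies the same idea to the real predictions):

```python
def precision_at_k(predictions, ground_truth, k):
    """precision@i for i = 1..k: correct guesses so far / attempts so far."""
    scores, correct = [], 0
    for i, edge in enumerate(predictions[:k], start=1):
        if edge in ground_truth:
            correct += 1
        scores.append(correct / i)
    return scores

# Toy data: first guess is right, second is wrong, third is right.
preds = [(1, 2), (3, 4), (5, 6)]
truth = {(1, 2), (5, 6)}
print(precision_at_k(preds, truth, 3))  # 1/1, 1/2, 2/3
```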
def compute_precision_at_k(predicted_edges: Dict[Tuple[int, int], float],
                           test_edges: Dict[Tuple[int, int], int], max_k):
    precision_scores = []  # precision at k
    delta_factors = []
    correct_edge = 0
    count = 0
    for edge in predicted_edges:
        if count > max_k:
            break
        # if our guessed edge is really in the graph
        # this is due to a representation problem: a (2,1) edge in an undirected graph is saved in memory as (2,1)
        # but in the adjacency matrix it is calculated as (1,2)
        if edge in test_edges or (edge[1], edge[0]) in test_edges:
            correct_edge += 1
            delta_factors.append(1.0)
        else:
            delta_factors.append(0.0)
        precision_scores.append(1.0 * correct_edge / (count + 1))  # (number of correct guesses) / (number of attempts)
        count += 1
    return precision_scores, delta_factors

Here is the main part of the code:

# Calculate adjacency matrix
print("Calculating adjacency matrix from embeddings.")
adj_matrix = calculate_adjacency_matrix(embeddings=node_emeddings, threshold=0.0)
print("Adjacency matrix calculated")
# print(adj_matrix)

print("Getting predicted edges...")
predicted_edge_list = adj_matrix
print("Predicted edge list is of length:", len(predicted_edge_list), "\n")

print("Sorting predicted edge list")
# We need to sort predicted edges so that the ones most likely to appear come first in the list
sorted_predicted_edges = {k: v for k, v in sorted(predicted_edge_list.items(), key=lambda item: -1 * item[1])}
print("Predicted edges sorted...")

print("Filtering predicted edges that are not in train list...")
# taking only edges that we predict will appear, not ones already in the graph
sorted_predicted_edges = {k: v for k, v in sorted_predicted_edges.items() if k not in train_edges_dict}
# print(sorted_predicted_edges)

print("Calculating precision@k")
precision_scores, delta_factors = compute_precision_at_k(predicted_edges=sorted_predicted_edges,
                                                         test_edges=test_edges_dict,
                                                         max_k=PRECISION_AT_K_CONST)
print("precision score", precision_scores)

with open("../results.txt", 'a+') as fh:
    fh.write(" ".join(str(precision) for precision in precision_scores))
    fh.write("\n")

6. Compare predicted edges with the test set

from matplotlib import pyplot as plt
import numpy as np

#tribute to
with open('../results.txt', 'r') as fh:
    lines = fh.readlines()

parsed_results = []
for line in lines:
    values = line.split(" ")
    parsed_list = [float(value) for value in values]
    parsed_results.append(parsed_list[:2**5])

stddev = np.std(parsed_results, axis=0)
mean = np.mean(parsed_results, axis=0)
print(parsed_results)

x = np.arange(1, len(parsed_results[0]) + 1)
y = mean
error = stddev
plt.plot(x, y, 'k-')
plt.fill_between(x, y - error, y + error)
plt.show()

After running our code a couple of times, we can plot the results. Since we didn't take any features into account and only worked with the graph structure when doing link prediction, we can say that our results are good. They could be a lot better, but for 16 edges we get a precision of around 70%. MAGE is satisfied at the moment.

Conclusion

So that's it for the real-time link prediction tutorial. We hope you learned something and that we got you even more interested in graph analytics. If you got lost at any point during the tutorial, here is a link to the GitHub repository for link prediction with MAGE.
https://memgraph.com/blog/link-prediction-with-node2vec-in-physics-collaboration-network
Closed Bug 836102 Opened 8 years ago Closed 8 years ago

Ion Monkey: Avoid double math in base.js's Math.random() replacement

Categories (Core :: JavaScript Engine, defect)
Tracking () mozilla21
People (Reporter: sstangl, Assigned: h4writer)
References (Blocks 1 open bug)
Details
Attachments (1 file)

The following Math.random() replacement from v8's base.js uses double arithmetic:

> Math.random = (function() {
>   var seed = 49734321;
>   return function() {
>     // Robert Jenkins' 32 bit integer hash function.
>     seed = ((seed + 0x7ed55d16) + (seed << 12)) & 0xffffffff;
>     seed = ((seed ^ 0xc761c23c) ^ (seed >>> 19)) & 0xffffffff;
>     seed = ((seed + 0x165667b1) + (seed << 5)) & 0xffffffff;
>     seed = ((seed + 0xd3a2646c) ^ (seed << 9)) & 0xffffffff;
>     seed = ((seed + 0xfd7046c5) + (seed << 3)) & 0xffffffff;
>     seed = ((seed ^ 0xb55a4f09) ^ (seed >>> 16)) & 0xffffffff;
>     return (seed & 0xfffffff) / 0x10000000;
>   };
> })();

Since the effective bits beyond 32 are discarded, the 0xffffffff (or |0) should propagate. This is worth about 3% on v8-regexp.

Looking quickly into this, we do use integer arithmetic. The problem is
1) We don't specialize the loadFixedSlot of "seed" to an integer loadFixedSlotT
2) The code does StoreFixedSlot after LoadFixedSlot. This can get removed. See bug 801872.

Okay, scratch that previous comment. I'll look tomorrow when I'm more awake ;).

There is a simple fix for this. I recently added a pass that identifies computations that overflow and are truncated, and forces them back to integer computations. It requires identifying int32->double conversions and removing them. The problem here is that operations like |seed + 0xfd7046c5| involve seed getting coerced to double because the other operand is a double. Since the constant 0xfd7046c5 is a double, and not converted from an integer, we don't convert the operation to an integer.
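As an illustration only (not part of the bug or its patch), the hash above can be transcribed to Python to watch the 32-bit masking at work; JS's unsigned shift >>> becomes a plain >> here because the masked values are always non-negative:

```python
def jenkins_prng(seed=49734321):
    """Robert Jenkins' 32-bit integer hash, transcribed from the base.js
    replacement above.  Returns a closure producing floats in [0, 1)."""
    mask = 0xffffffff

    def rand():
        nonlocal seed
        seed = ((seed + 0x7ed55d16) + (seed << 12)) & mask
        seed = ((seed ^ 0xc761c23c) ^ (seed >> 19)) & mask
        seed = ((seed + 0x165667b1) + (seed << 5)) & mask
        seed = ((seed + 0xd3a2646c) ^ (seed << 9)) & mask
        seed = ((seed + 0xfd7046c5) + (seed << 3)) & mask
        seed = ((seed ^ 0xb55a4f09) ^ (seed >> 16)) & mask
        return (seed & 0xfffffff) / 0x10000000

    return rand

rand = jenkins_prng()
print(0.0 <= rand() < 1.0)  # True
```

Since every step is masked back to 32 bits, the whole computation fits in integers, which is exactly the property the JIT wants to exploit.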
Adding a simple (in pseudo-code)

virtual bool MConstant::isBigIntOutput() {
    return typeIsNumber() && frac(number) == 0 && log_2(number) - 32;
}

basically, if the constant is a double, and all of the fractional bits are zero, then we want to return the number of bits past the 32nd bit that would need to be set in order for the number to be represented exactly. There may also be some magic required to truncate the double constant into an integer constant at compile time.

Running this benchmark 1000000 times gives:
Without patch: real 0m1.882s
With patch: real 0m0.031s

Assignee: general → hv1989
Attachment #708086 - Flags: review?(mrosenberg)

Comment on attachment 708086 [details] [diff] [review]
Truncate constants

Review of attachment 708086 [details] [diff] [review]:
-----------------------------------------------------------------

I'll add testcases for the issues I found here.

::: js/src/ion/MIR.cpp
@@ +340,5 @@
> +  if (js::ion::EdgeCaseAnalysis::AllUsesTruncate(this) &&
> +      value_.isDouble() && isBigIntOutput())
> +  {
> +    // Truncate the double to int, since all uses truncates it.
> +    value_.setInt32(int32_t(value_.toDouble()));

Of course this should be: ToInt32() instead of an int32_t cast

::: js/src/ion/MIR.h
@@ +710,5 @@
> +  double value = value_.toDouble();
> +  int64_t valint = value;
> +  int64_t max = 1LL << 53;
> +  if (double(valint) != value)
> +    return false;

if (valint < 0)
  valint = -valint;  // This should get added to handle the negative case.

Comment on attachment 708086 [details] [diff] [review]
Truncate constants

Review of attachment 708086 [details] [diff] [review]:
-----------------------------------------------------------------

The patch mostly looks good. You may want to lower the limit from 1 << 53 to 1 << 33 for now...
It will handle all of the cases present in the Math.random() replacement, and won't involve handling incredibly annoying cases such as:

var x = 0x123456789abcd;
x = x + x; // x = 640511947003802
x = x + x; // x = 1281023894007604
x = x + x; // x = 2562047788015208
x = x + x; // x = 5124095576030416
x = x + x; // x = 10248191152060832
print(x+1)     // prints 10248191152060832
print(x+1 | 0) // prints -248153696, with optimization and without the previous print, will print -248153695

Since we cut off after 20 operations, limiting the size of the constant to 33 bits should prevent us from ever reaching the magical 2**53 plateau. Or you can figure out how to limit the number of additions to a lower number when one of the immediates present is well over 2**32.

Attachment #708086 - Flags: review?(mrosenberg) → review+

Follow-up bug to do this better:
Status: NEW → RESOLVED
Closed: 8 years ago
Flags: in-testsuite+
Resolution: --- → FIXED
Target Milestone: --- → mozilla21
https://bugzilla.mozilla.org/show_bug.cgi?id=836102
ok, I had looked on the internet and found one form of running an exe file from a py script; it goes like this.

import os
os.system("c:/windows/iexplore.exe") # Or wherever it lives

Problem is, whenever I run the script with the appropriate path loaded in, I get the following:

Traceback (most recent call last):
  File "C:\csuite\startup.py", line 14, in <module>
    choises(choise)
  File "C:\csuite\startup.py", line 11, in choise
    os.execv("C:\csuite\Re-Codon\rcodon.exe", cho
TypeError: execv() arg 2 must be a tuple or list

I originally got some kind of error involving a batch or runnable file, so I switched to this because I looked up the os module in Python, and apparently os.execv will do the job better for me because it requires a path....

os.execv("C:\csuite\Re-Codon\rcodon.exe")

but when I run the script like that, I get the following...

Traceback (most recent call last):
  File "C:\csuite\startup.py", line 14, in <module>
    choises(choise)
  File "C:\csuite\startup.py", line 11, in choises
    os.execv("C:\csuite\Re-Codon\rcodon.exe")
TypeError: execv() takes exactly 2 arguments (1 given)

I don't know what those 2 arguments are; how do I find them out? Here is the Python script just in case you need to look at it....

EDIT: The purpose of this little snippet is for a software suite I'm designing. I made a little program that encodes the standard alphabet into the language of DNA, and there's another that goes in reverse. Anyway, I compiled both with py2exe and put the dist folders in one folder called CSuite. Basically, this script will serve as a start screen: when you type 1 or 2 and hit enter, one of the two programs will pop up.

BTW: Everything is in console form; I plan to make it a UI in the next release.

import os

print "Welcome to the DNA codon encoder, and decoder."
print "Plese select the program you want to use from the list and press enter."
print "If you would like to use the encoder, press 1."
print "If you would like to use the decoder, press 2."
choise = input("So, what program would you like to use?")

def choises(choise):
    if choise == 1:
        os.execv("C:\csuite\Re-Codon\rcodon.exe")
    elif choise == 2:
        os.execv("C:\csuite\Codon\codon.exe")

choises(choise)
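For what it's worth, a hedged sketch of what the two arguments to os.execv mean; the command here runs the Python interpreter itself as a stand-in for the .exe paths above:

```python
import subprocess
import sys

# os.execv(path, args) REPLACES the current process: `path` is the
# executable and `args` is the argv list handed to it; by convention
# args[0] is the program name, e.g.  os.execv(prog, [prog]).
# A launcher menu usually wants to keep running after starting a
# program, so spawning a child with subprocess is friendlier.
exit_code = subprocess.call([sys.executable, "-c", "print('launched')"])
print(exit_code)  # 0 means the child exited successfully
```

Note also that Windows paths in ordinary string literals need raw strings (e.g. r"C:\csuite\Re-Codon\rcodon.exe") or doubled backslashes, since sequences like \r are interpreted as escape characters.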
https://www.daniweb.com/programming/software-development/threads/69642/running-a-program-from-within-a-py-script
We are trying to build dbus-1.2.26, but looking at git, the same problem exists with 1.4.x. RHEL 4 and other slightly older Linux systems don't have inotify, so dbus uses dnotify and builds dir-watch-dnotify.c. Unfortunately this does not contain a bus_set_watched_dirs function, so linking fails. This naive patch allows it to build, but I am doubtful that it is correct:

--- dir-watch-dnotify.c 2009-09-08 16:42:18.000000000 +0000
+++ dir-watch-dnotify.c 2011-01-11 18:06:48.645838358 +0000
@@ -32,6 +32,7 @@
 #endif
 #include <dbus/dbus-internals.h>
+#include <dbus/dbus-list.h>
 #include "dir-watch.h"
 #define MAX_DIRS_TO_WATCH 128
@@ -91,3 +92,16 @@
   num_fds = 0;
 }
+
+void
+bus_set_watched_dirs (BusContext *context, DBusList **directories)
+{
+  DBusList *link = _dbus_list_get_first_link (directories);
+  while (link != NULL)
+    {
+      bus_watch_directory(link->data, context);
+      link = _dbus_list_get_next_link (directories, link);
+    }
+}
+
+

Patch is much too naive. This commit should probably not have been backported to the 1.2 branch without also ensuring that dnotify support was also included. Will try to find time to work on a complete patch

Created attachment 41932 [details] [review]
patch to update dnotify watch

This is a little better; make check reports a memleak, but I can't find it:

Echo service echoed string: "Test echo message"
Sending HelloFromSelf
Sent HelloFromSelf
Received the HelloFromSelf message
Reply from HelloFromSelf received
Shell echo service echoed the command line
./bus-test: checking for memleaks
1 dbus_malloc blocks were not freed
./bus-test(_dbus_print_backtrace+0x1c) [0x48b5fc]
./bus-test(_dbus_abort+0xd) [0x485f31]
./bus-test(_dbus_warn+0x105) [0x471d83]
./bus-test [0x43b2db]
./bus-test [0x43b33c]
./bus-test(main+0x1d5) [0x43b513]
/lib64/tls/libc.so.6(__libc_start_main+0xd7) [0x2a958d61d7]
./bus-test(readdir_r+0x42) [0x4144da]
Process 13922 sleeping for gdb attach

RHEL 4 is pretty old; are Linux systems with dnotify but not inotify still interesting?

I'm tempted to say the solution should be to remove dnotify support in favour of "if you don't have inotify, you have to pkill -HUP dbus-daemon", since virtually nobody is going to be exercising the dnotify code path.

*** Bug 66238 has been marked as a duplicate of this bug. ***

(In reply to comment #0)
> RHEL 4 and other slightly older linux don't have inotify

At this point I would tend to categorize RHEL 4 as "extremely old Linux". inotify is present in Linux 2.6.13 (2005) and according to Wikipedia, RHEL 4 left "production" status over a year ago. If the other D-Bus maintainers (particularly those who work for Red Hat...) don't object, I would be inclined to remove dnotify support altogether: in general, code that isn't tested doesn't work. Chengwei Yang seems to be interested in fixing or deleting all our dead code and rarely-used code paths, which seems like a good opportunity to do this :-)

(In reply to comment #5)
> If the other D-Bus maintainers (particularly those who work for Red Hat...)
> don't object, I would be inclined to remove dnotify support altogether: in
> general, code that isn't tested doesn't work.

Fine by me; certainly at this point there is no way we'd rebase a new version of DBus onto RHEL4 - any patches would be backported.

Created attachment 81598 [details] [review]
[PATCH] dir-watch: remove dnotify backend

(In reply to comment #7)
> [PATCH] dir-watch: remove dnotify backend

Applied for 1.7.6. No more dnotify \o/
https://bugs.freedesktop.org/show_bug.cgi?id=33001
#include <SIM_Query.h>

This class provides an interface between a SIM_Data and the Houdini expression language. It also helps in building the tree view of a simulation.

Definition at line 24 of file SIM_Query.h.

Simple constructor.

Destructor for this class. Reimplemented in SIM_QueryCombine.

Fills the provided array with the names of all valid values that can be passed to the getFieldFloat function. This includes all versions of the field names with appropriate suffixes.

Sets the result pointer to a UT_OptionEntry subclass that depends on the data type of the field. It may be any of the UT_OptionEntryImpl classes. It is up to the caller to free this result pointer.

These functions use getFieldRaw and process the result to generate a nice string or single float value as required by the expression library.

Fills a UT_InfoTree structure with all our data.

Give SIM_QueryCombine special access to our protected methods.

Definition at line 127 of file SIM_Query.h.
https://www.sidefx.com/docs/hdk/class_s_i_m___query.html
The Java Object class is basically the parent class of all classes. In other words, it is the topmost class of Java, and by default every Java class extends it. The Object class belongs to the java.lang package, which is included by default. If a Java class does not extend any other class, it is a direct child class of Object; if it extends another class, it is derived from Object indirectly. Therefore we can easily use all the methods of the Object class. The Object class has many built-in methods that we can use to fulfill our requirements. Check our other Java tutorials and examples.

Object Class Declaration

public class java.lang.Object

Class Constructors

The Object class has only one constructor.

Object()

Object Class Methods

The Java Object class has many built-in methods. These methods are very commonly used in programming, but we often don't know which of them belong to the Object class. Here is a list of the methods of the Object class; in this tutorial, we explain each one with an example.

1.) protected Object clone()
Parameters: Does not accept any parameter.
Return: Returns a copy of this object.

2.) public boolean equals(Object oj)
Parameters: Accepts the object we want to compare.
Return: Returns the boolean value true if the given object matches, otherwise false.

3.) protected void finalize()
The garbage collector calls this method when garbage collection determines that there are no more references to the object, so it is invoked automatically to perform cleanup.
Parameters: None.
Return: NA.

4.) public final Class getClass()
Parameters: None.
Return: The getClass() method returns the runtime Class object of this object.

5.) String toString()
The java.lang.Object.toString() method is very important and commonly used in Java, because it converts any Object to a String.
Parameters: NA
Return: This method returns the String representation of the Object.

6.) void notify()
The java.lang.Object.notify() method is used in Java multithreading, though it belongs to the java.lang package. It wakes up a single thread that is waiting on this object's monitor; if many threads are waiting on this object, one of them is chosen to be awakened.
Parameters: NA
Return: This method does not return anything; it only performs the wake-up described above.

We hope you like the post. If you have any doubt, please comment below so we can improve the content.
https://www.developerhelps.com/java-object-class/
I am getting an error I do not understand; mind you, I'm new to coding, so this may be a simple error.

#include <iostream>
using namespace std;

int main()
{
    //Initialise Fahrenheit
    float Fahrenheit = 95.0f;
    //Initialise Celcius
    double Celcius = float (Fahrenheit - 32)*0.5556;
    cout << float Fahrenheit << "F is equal to" << double Celcius << "C" << endl;
    cin.get();
    return 0;
}

cout << float Fahrenheit << "F is equal to" << double Celcius << "C" << endl;

You want to write

cout << Fahrenheit << " F is equal to " << Celcius << " C" << endl;

You can't add type names when using variables. Once you define a variable, you just use it by its name. By the way, casting a float to float is superfluous, and I fail to see the need to mix doubles and floats. Just use double over float unless you have benchmarks to prove you need the smaller type.
https://codedump.io/share/uIp1ToRDetX4/1/type-name-not-allowedunexpected
>> bus in Boston, every building site, and countless stores carry signs saying "Boston strong". It took two morons with two pressure cookers. In a sense, yes, they won. It is interesting to note that terrorism has also been successful in stoking existing tensions within the US domestic political sphere. For instance, the environment of increased fear that has resulted from terrorist attacks (e.g. Boston Marathon bombings of 2013) has contributed to the heightened focus on the gun control debate, an issue that incites conflict between liberals and conservatives. One would wonder if the terrorists are intentionally seeking to inspire fear to generate a wedge between US political leaders, and "divide and conquer," so to speak. Or perhaps it is more to, "divide and stalemate," since the heightened tension mainly results in challenges to efforts to compromise and ultimately inaction on a spectrum of issues. Additionally, as the "gridlock" in Congress only serves to further encourage the executive branch to consolidate power, the fear inspired by potential terrorist attacks has a deep impact on the balance of power within the US government. When this balance is upset, there is the risk of government overreach (e.g. drone strikes, NSA surveillance, executive orders), and subsequently, the alienation of the US citizenry. In this way, the terrorists are even generating a wedge between US political leaders and the general populace. Are the terrorists winning? Hard to say. Certainly they are clever strategists. ''One would wonder if the terrorists are intentionally seeking to inspire fear to generate a wedge between US political leaders, and "divide and conquer," so to speak'' - Don't wonder, read the ''project'' document from the ikhwan. Read the ''isna'' document detailing the process of ''settlement'' of the continent of North America. The answer is yes, they are. ''To destroy its' miserable house from within''. Oh good. 
Now I get to feel smart for extrapolating sans facts. And I'm too lazy to read those. Unless of course you'd like to post links and make my life easier. Destroy from within... That's obvious enough. It's just rather ironic that even the leaders that tell us that to doubt the government is to support the terrorists don't account for how their own behavior accomplishes the same ends. Also, why not address the balance of power argument more specifically rather than point me to primary sources? I'm curious to hear what you think PG, not to confirm how smart I am by reading supporting documents. Any choice quotes to share with the lazy reader? - The project. - The ''isna'' document. English translation begins on page 16, preceded by a list of organisations the missive is targeted to. The ''isna'' document can be said to be a refinement of ''the project'', being 9years further into the 100 year ''plan''. Both are far better understood with a knowledge of islamic i.e. koranic and hadith/sunna/islamist doublespeak, but you will recognise the tactics from what IS occuring right now. - ''The Project'' is aimed at supplanting those in authority, those with any kind of governmental or judicial or civic power, with islamists. Demonising the State of Israel, pushing the palestinian cause...They've been staggeringly successful in ''western'' countries, but recent events in Egypt have caused the ikhwans' mask to slip somewhat. '…'' - Page 21 of the pdf ''isna'' file. ''Every stratagem of war'' - Koran surah 9 - al tauba - the repentance, verse 5. The ''ayat al sayf'' - the verse of the sword, from mohammeds' finally-descended surah, and therefore the final instruction to his followers as regards islamic supremacism. To a muslim, the koran IS the word of their god allah, relayed to mohammed through the angel ''jibril''...unchangable and unchallengable by any muslim. Its' demands are eternal. 
They include terrorising, lying to defend/promote islam (submission), persecuting the unbeliever, oppressing women, making child abuse lawful, slavery lawful, islam over all. True, mohammeds' early days were somewhat more ''peaceable'' and conciliatory towards others, but those verses are superseded - abrogated by later more violent verses. Which mirrors islamists' attempts to portray islam as a ''peaceful'' religion, when in fact its core values, its fundamentals, are anything but. You can find out for yourself, you don't have to take my word for anything. What do I know, eh? As far as giving up Freedoms to beat terror goes, I'm in. There's no other choice. Sure it's uncomfortable, and the media love to play on the downside, on the inconvenience and invasion of ''privacy'', but they don't broadcast the failed or intercepted terror plots so much, do they? They don't labor on the successful activities of our security services which *have* prevented tragedies, in the main.
Already read as much as I can on various global topics including islamic teaching. Do hope that is ok with. Not sure that anyone can truely look at anything in a full and objective manner utterly divorced from their surroundings but again I try to. Not entirely sure where you are going with that. But I assume that you are a far more balanced, fair and informed character than I am. So please enlighten me.. I'm not here to disabuse you of your ignorance, Mr. Marcus. You can do that for yourself. Because firstly, it would take far too long, and secondly, you wouldn't believe much of what I said anyway, as from your original post you seem to consider the ''war on terror'' to be an instrument of control and oppression in the hands of your government, and not a legitimate concern. I think you have answered the original article's question there old fruit. I can't say as I'm as well versed in all the literature that you clearly spend much of your time pouring over. I hope you enjoy it and find it enlightening. I think we are both as ignorant as each other. Perhaps I'm naive in worrying about the consequences of the "war on terror" on our own society. Perhaps your hate filled vision is a more secure one. I'd rather admit to my failings, carrying on hoping for a better outcome. I'd also like to point out that the knife cuts both ways. We've slaughtered more of your hated enemy than they have of us over the course of the last millenium. But I'm sure that's not a message that the tea party will transmit to the tinfoil hatted faithful. Lawls - so you don't get the response you demand, and feel that demonising me is a legitimate response to that? No, you are OBVIOUSLY not well-versed in islamic teaching and practice, or you at least are now pretending not to be, so your original claim of having ''read all I can'' is rubbish, yes? there's no need to answer that. Yes, you are naive, you'll hear no disagreement from me on that count. 
I, however, have studied the subject to great depth, and for you to accuse me of naivete is laughable, really :) '' We've slaughtered more of your hated enemy '' - and who may that be, Mr. Marcus? please enlighten me, are you accusing me of hating ALL muslims? Sounds to me like you are. I've already addressed that pathetic accusation, and there is no need to repeat myself for your benefit. Sounds to me like you are merely trotting out the same tired old cliched responses that ALL the ignorant trot out when faced with their own ignorance. Know what I mean? '? '' - the answer's no to all the above then, eh?

Your objective, bias free judgement is most fascinating PuckGreenman. I never accused you of naivete. If you could calm down for two minutes you'd notice that I was accusing myself of such a thing. But that would undermine the rant now wouldn't it. Anyway please do accept that I'm sure you've read far more than me on this subject. My original statement of reading all I can was simply that. I try to read all that is possible for me to read in the time given me on a number of subjects. But again that would undermine your rant. Do feel free to run yourself into paroxysms of superior moral outrage at all of us poor mortals. I'll sit back and enjoy the show. In the meantime as someone who has personally witnessed acts of terrorism and nearly lost my family to a bomb I'll still hold the line that I worry about the consequences of all of this on our society.

You ain't heard me rant, yet. ''I'll still hold the line that I worry about the consequences of all of this on our society.'' - Worry all you like, it won't do any good. ACTIONS are what is needed, at the very least educate yourself on the subject, which was my original point. Then *maybe* you will understand why we all must give up certain Freedoms.
As much as the present and previous administration take pains to deny that we are at war with ''islam'' - truth is ''islam'' is at war with us, and always will be, until either we submit(aslama - from which we get ''islam'' which means SUBMISSION), or islam is exposed and dealt with for what it is. Your choice.

I think by definition terrorism thrives on the fear far more than any single event. I live in a free society therefore anything can happen at any time. I'm not one to trade the freedom for an all powerful unaccountable government.

Yes, the terrorists are winning. Demographics are on islam's side. Islam has deeply penetrated america, europe. Practically no one in the west has any clue of the history of the prophet who led armies lopping off heads en masse, peace be upon him. Do you doubt what I say? Search "Banu Qurayza." Mo was killing folks en masse, leading armies. That's your prophet, taking heads, peace be upon him. People in the west are largely clueless, nonjudgemental, know-it-all but know-not-much, and predominantly aimless besides wealth accumulation and recreation. (Speaking broadly, the west has lost any focus). In a military sense, governments in the west are chasing their tail because they are in denial of the challenge which they will face. Think of the multi-trillion dollar return (in losses) which some muslims got out of some plane tickets and some boxcutters. Then consider the further trillions in losses in the afghanistan and iraq wars, and the high oil price tax paid for a decade plus due to 911, iraq and afghanistan. An entire generation was deployed into a foreign land at great cost, to be blown up and severely damaged. Al Qaeda is a flag. How hard is it to make a flag?
Look at the population in which Al Qaeda, Wahabbism, Salafism, The Taliban, Jabhat/Jamma al (Your name/cause/event here) develop, from indonesia/malaysia/phillippines to india/pakistan/afghanistan to chechnya/dagestan/syria/iraq to somalia/sudan/egypt/libya/tunisia to mali/nigeria/morocco to spain/france/germany/netherlands/sweden to the united states/canada. What is the common thread? All of those areas have had repeated terrorist attacks, from a growing islamic population. You could take down all of the members of Al Qaeda, and the next year there would be a million more raising the same flag/reconstituting. This is the reality the leadership sees but does not acknowledge because it would be unpopular to acknowledge "You won't be safe and it will not be over anytime soon." Meanwhile, the area in which a christian, a hindu, a buddhist, an atheist, a jew, or even a shia, can travel is getting smaller. Sunni Islam actively converting and spreading, by the sword or the word, just like the prophet who led armies taking heads, taking advantage of know-not-much, nonjudgementalist western inclusiveness. Unless you work at an embassy or you're a high profile potential target, you do absolutely nothing. Statistically, the chances of getting caught in a terrorist attack are way lower than getting run over by a car back home. We are prone to fear, but we don't have to succumb to it: The real reason that some debt-strapped "western" nations are temporarily shutting down their foreign embassies is to deal with their own domestic fiscal problems, but the action is conveniently spun to their own media outlets as a "terrorist threat". The US federal government already put their staff on various lengths of unpaid leave. They are just doing the same with their foreign embassies and vilifying a virtually non-existent "enemy" at the same time. Also by advising of a foreign threat the Americans can encourage their citizens to take their summer holidays domestically. 
They would rather Americans spend their money at Disneyland than Dubai.

"It’s a dangerous world, and taking precautionary measures is part of the new reality." Is it a dangerous world? It seems that Western governments are the most dangerous force to the bulk of Western citizens, not terrorists. America's congress single-handedly (with executive office assistance) created the world financial crisis which has surely caused many more deaths than any and all terrorists in the past 20 years. The world is a dangerous place if citizens of each country of the world do not monitor closely, and promptly fire their leaders, legislators and jurists when they are shown to be lacking.

"America's congress single-handedly (with executive office assistance) created the world financial crisis" - what an ignorant statement!!! I guess Greece, Italy, and Spain, etc.'s economic woes were all due to America's Congress. And the reason (at least in the US) for having greedy, power hungry do-nothing politicians is because of an ill-informed, non-voting citizenry.

Bin Laden got lucky in New York because US airport security was nonexistent at the time. Al Qaeda is an idea, and spending billions of dollars fighting such a disparate group won't change the idea - it is simply a virulent form of Arab nationalism which cannot find an outlet in its home region due to the lack of a modern political state which would allow contrary opinions. Simply fearing and hating all Muslims and abusing the entire Islamic world is parading ignorance which doesn't alter a thing. The US government approach has been over a decade of murderous stupidity which has entrenched opinion and created many of the enemies the US now has. Lastly, if the religious types care to look into the matter, both Islam and Christianity share a great deal of the tales told to keep simple folk down on their knees - Ramadan/Lent being just one good example.
''Ramadan/Lent - being just one good example.'' - Umm...maybe if Christ and subsequently Christians had used the period of lent to assail caravans in the Holy Land and to re-build and strengthen his/their armies you could possibly have a point...but the similarity you claim both begins and ends in your own paradigm, and has nothing whatsoever to do with reality.

Ok we get it, enough already. We get it that you believe you have the inside line on what the true intent of Islam is. You've also explained it in excruciating yet condescending detail to several people here. That doesn't make it necessarily any more true. What a religion says does not necessarily translate into what people do. The pope might say don't use birth control and around the world plenty of people still buy condoms. I don't reckon the vast majority of muslims plan to subjugate the west. I suspect they will develop their economies and then go buy flat screen tvs to watch movies on (Religiosity has a strong correlation with low GDP). But feel free to correct me in your usual way :)

Terrorists won ... Republicans & Fox news hacks yelling "Benghazi!" help them. After all the circus with Benghazi, any organization that threatens to so much as fart past an embassy will provoke a massive shutdown. I can't even imagine how scared Americans would be without their 13 carrier groups and crap. Scared much?

I guess the State Department will be damned if they do, damned if they don't. At least from news reports, it appears we have a specific actor, if not a course of action. The new deputy leader of Al Qaida, who is dual-hatted as the chief of Al Qaida in the Arabian Peninsula (AQAP), allegedly received a command of sorts (to the extent Al Qaida central can give commands these days) to conduct a high-impact attack on one or more Western targets in the Middle East.
According to Members of Congress briefed on the matter, intel analysts seem to think this is more than the usual dialogue between Al Qaida and friends, so the U.S. shuttered most of the Embassies that are in the country most threatened by AQAP or otherwise couldn't entirely rely on the local security forces to disrupt or resist a potential AQAP-organized attack. As part of the "no double standard" policy, if the State Department has information pointing to threats to both its employees and to the general public, then State has to make the public aware of the threat as well, which explains the widely distributed travel warnings. There are some Embassy closings that don't seem to make much sense (like Bujumbura), but I think the closings seem to be a pretty rational action. After all, not only do Embassies have to worry about protecting their own, they also have to balance the risk posed to local and U.S. nationals visiting the Embassies for consular services. Many employees may find shelter behind today's fortress-like Embassies, but visitors are much more exposed. Perhaps a more interesting question is why Al Qaida is returning to its roots in the Arabian Peninsula instead of Iraq, the Levant, and South Asia.

Is it that the terrorists have won, or that the American government would prefer to keep their citizens in a state of fear so that they can justify things like the NSA? I am not a conspiracy theorist, but let's not forget that though threats may be real, the government very much benefits by "crying wolf" as well, so long as they don't do it too much.

Study islam(submission) for yourself, until YOU understand what it teaches and represents, and you will be in no doubt that it is a VERY REAL danger to the whole world, and the WHOLE WORLD will have to deal with it sooner or later.
Burying your head in the sand of conspiracy theorists' rhetoric is a gift to the islamists...But of course, you cannot see how that may be true, as you don't know enough about it. Do you?

@Puck Greenman you are ridiculous lol I really hope you come to your senses at some point in time. Have you ever been to a mosque in the US? I guarantee you'll be treated with more kindness and respect than you ever have been because Islam teaches muslims akhlaq (you're so well versed on Islam that I'll assume you know what that means - but for everyone else: it's manners - and that you surely know how crucial it is to have akhlaq if you consider yourself Muslim). If I judged all atheists as assholes who like to argue with people and be rude, and I judged all Catholics by the Spanish Inquisition and all Buddhists by the massacre happening in Myanmar, we'd have a problem. Currently, that's the type of logic you and the rest of America use. The blatant discrimination is evident.

It is too much to say that al-Qaeda has won because American tourists are now asking themselves these nerve-racking questions. Mistakes happen. Is it too much to say this post belongs in Gulliver?

Tourists? I highly doubt more than a few thousand tourists are in all of these countries combined, and the ones at the center of the "crisis" have pretty much zero. The USAF evacuated literally every US citizen from Yemen, including embassy staff, on one C130.

~ "All war is deception." ~ "If you are far from the enemy, make him believe you are near." ~ "The ultimate in disposing one's troops is to be without ascertainable shape. Then the most penetrating spies cannot pry in nor can the wise lay plans against you." - Sun Tzu

Bukhari:4.268 "Allah's Apostle said, 'War is deceit.'" hateful koran 8:30 "Remember how the unbelievers plotted against you (Muhammad). They plotted, and Allah too had arranged a plot; but Allah is the best schemer."
- The ayat al sayf - the verse of the sword, taken from the most latterly ''descended'' surah of the koran, and representing the paedophile warlord ''prophet'' mohammeds' final and most authoritative instructions to his followers. *every stratagem of war* I agree with you - these ancient religious books are very violent. It is actually the same god with different names - Yahweh, God and Allah. He is clearly quite a violent gent. In fact,. On the link between violence and religion - these guys were drunk most of the time. The issue is were the "revealed books"." Yahweh/God/Allah was a drunken being (high on Manna). Sorry, but you're wrong - the islamic ''god'' ''allah'' is NOT the same God worshipped by Jews and Christians, although muslims and many others claim it is, and the ignorant are only too willing to fall for the bullcrap. It is a standard islamic hi-jacking attempt...The islamic ''god'' is nothing more than a continuation of the PAGAN arab god ''al-ilah'', worshipped by the pagan arabs at the time of mohammed, and co-opted by him into his own brand of monotheism. ''Al-ilah''(the god) was the foremost of 360 worshipped by the pagan arabs, represented by the hajr al aswad - the black stone of the kaab'a - mohammed merely continued its' worship in another guise. You can check the facts for yourself, I don't expect you to believe me, as you are obviously anti-theist, and will, no doubt, disagree, as it ruins your claim that ''they're all the same'', yes? Nice try though :) I suggest you actually study ISLAM if you want to know what ISLAM(submission) represents, and where it originates, rather than merely choosing the ''cop-out'' path of the devout anti-theist, eh? What I believe is irrelevant, as is what you believe. It is what ISLAMISTS believe that we will ALL have to deal with sooner or later, regardless of our own theology or lack of. p.s. See my post below. You are a prime example of that wilful ignorance. 
No offence :) It is the same gent - the term is in use throughout the Middle East. The main common link is the "chief ancestor" - i.e., Abraham. ". (Even the Arabic-descended Maltese language of Malta, whose population is almost entirely Roman Catholic, uses Alla for "God".) Arab Christians for example use terms Allāh al." "As Hebrew and Arabic are closely related Semitic languages, it is commonly accepted that Allah (root, ilāh) and the Biblical Elohim are cognate derivations of same origin, as in Eloah a Hebrew word which is used (e.g. in the Book of Job) to mean '(the) God' and also 'god or gods' as in the case of Elohim, ultimately deriving from the root El, 'strong', possibly genericized from El (deity), as in the Ugaritic ’lhm "children of El" (the ancient Near Eastern creator god in pre-Abrahamic tradition)". There are various scholarly books that document this issue in great detail, and give the precise linkages between the 3 Abrahamic religions. The 2 books that I have read are: - The Evolution of God, by Robert Wright. - The Revenge of God: The Resurgence of Islam, Christianity and Judaism in the Modern World, by Gilles Kepel. As an agnostic who has traveled widely my view is that Allah remained leaders are required, who can go beyond the narrow confines of ancient religious books. In this process, the nation-state (which is only a recent, and very temporary, invention) will have to cede power to multilateral institutions. That is a probably the only way to restore sanity, and to build a rational system of global governance that is secular and just. So you think that you can legislate, or force the 5+BILLION people on the planet who have ''Faith'' to give it up, eh? Good luck with that. Anti-theist supremacists, lol. Some of you are just as dogmatic as those you claim superiority to. And you're still wrong. There are 7.1 billion people on the planet - the number of people who actually believe in God/Allah/Yahweh are, at best, 1 billion. 
The rest of the people of the planet are quite content to lead their lives without belief in pagan gods and antiquated customs. Your conspiracy theories are quite absurd. And so you think that the ONLY ''Faiths'' there are are those of ''God, allah and Yahweh''? What about Hinduism, Sikhism, Buddhism, Daoism, Paganism, Wicca, etc., etc., etc.?? YOUR ignorance of fact is quite absurd too, person with the ridiculous moniker. I have no belief in conspiracy, I can PROVE my case. Shame you can't even prove you know the first thing about the multitude and variety of the Worlds' Faiths, eh? Try getting out more. The 3 Abrahamic religions (particularly Christianity) have been responsible for most of the violence in the past few centuries. The non-Abrahamic religions have not been expansionist in their behavior to the same extent. My point was that the Abrahamic religions are the ones that need to modify their behavior drastically. To do so will require creating a more rational and secular system of global good governance. no, your point was that there were ONLY 1.7billion people of Faith on the planet, it's quite clear what you meant, person with the ridiculous moniker, and wriggling away from your statement will just serve to expose your paucity of genuine arguments further. And your ignorance of anything outside your sphere of experience i.e. not among those you hate. So you also believe in a ''global'' empire then? And what will you do to those who do not wish to be part of your regime? Read my comment carefully - it was: "There are 7.1 billion people on the planet - the number of people who actually believe in God/Allah/Yahweh are, at best, 1 billion." Strong multilateral organizations are not a "global empire" - in fact, they are the antithesis of empire - such organizations erode the excessive power of the nation-state. Global policing should not be left to super-powers - e.g., the USA. The reason is arbitrary and tainted decisions, and inept execution. 
I stand by my comment that "the 3 Abrahamic religions (particularly Christianity) have been responsible for most of the violence in the past few centuries". '' 1) 6) Jews - 14,826,102 And... a) Others - 814,146,396 b) Non-Religious - 801,898,746 c) Atheists - 152,128,701'' taken from here; Obviously you can choose your own sources, but you'll get similar results. Seems you're just talking garbage and anti-theist supremacist cant, eh, person with the ridiculous moniker? ''Strong multilateral organizations are not a "global empire" - So that's a refusal to give me a straight answer to a straight question then? You call it ''strong'', I call it hypocrisy, and you just as bad as those you claim superiority to. Most of the Hindus, Muslims, Christians and Jews that I know are non-practicing - i.e., religion is not important in their lives at all. Regarding my other point on global good governance - it is clear that you did not understand my comment. An inter-connected planet in an era of globalization needs global institutions. These institutions would need to be secular in order to gain worldwide acceptance. It is a very simple formulation regarding a path of progress that is inevitable, given the nature of the world that we live in. The consequences of not creating such institutions is global chaos. Your question was meaningless ("And what will you do to those who do not wish to be part of your regime?"). Since there was no intended "regime", there is no answer for this tautological and illogical question. Your fundamental claim is wrong. BILLIONS wide of the mark, in reality. Obviously you cannot accept that, so you wriggle and attempt to change the parameters, to subdivide, in order to lend yourself some credibility. Fail. ''Most of the Hindus, Muslims, Christians and Jews that I know '' - Do you even understand how lame that statement is? How, far from lending your ridiculous claim credence, it just shows what an isolated and self-centred view of the WORLD you have. 
Ask someone to explain it to you, 'mkay? You spout unproven garbage, and you know it. There's no further point discussing anything with you, 'cos as we all know; anti-theist is anti. I agree - there is no point in discussing anything. You are clearly a religious fundamentalist who believes in weird conspiracies - e.g., " - one of your paranoid comments. You must be living in a really rough neighborhood?? ''We have been reminded that the terrorists' most accessible weapon is fear.'' - Not true. The ''terrorists''' most accessible weapon is WILFUL IGNORANCE. Those who bury their heads in the sand regarding the ETERNAL islamic desire to dominate, regarding the teachings and requirements of islamic supremacism, regarding the VERY REAL and ONGOING aims of the islamists are their most accessible weapon. By that I mean the lamestream medias' and far too many political and personal beliefs relating to ISLAM(submission) that are completely wrong, and held by those not willing to even examine the truth of islamic teaching i.e. ''islam(SUBMISSION) over all''. The ETERNAL aim - not just a year, or the next election, or even a single lifetime. Eternal. Until ''they'' win. Until the WHOLE WORLD submits. Wake up. "We have been reminded that the terrorists' most accessible weapon is fear." . Actually, the closures are a defensive position by the Obama administration after Fox continues to politicize the Benghazi attacks. Are we reminded of the previous DiA post? " It’s a dangerous world, and taking precautionary measures is part of the new reality." Err no, it is a less dangerous world. For proof I direct your attention to The Economist of 20 July 2013 in which the fall in crime over the past couple of decades is demonstrated convincingly. We live in a safer world. It is only when the topic of terrorism comes up that rationality is thrown out of the window and, otherwise sober media outlets, see the sky falling. To that extent the terrorists have won. 
Two questions: a) What's the chance that top Al Qaeda operatives are discussing major operations on the sort of channel that is subject to NSA hacking? b) Even more importantly, if they WERE in fact using such a channel, what's the chance that the NSA would announce this fact to the world? (Surely, such a "leak" would compromise "intelligence sources & methods" FAR more directly and critically than ANY of the materials released by Manning or Snowden.) Conclusion: clearly, this is a U.S. intelligence psy-op, hoping to stem the rising tide of public misgivings concerning the "Total Information Awareness" Surveillance State by "exhibiting its utility" in intercepting "enemy chatter", while stirring up widespread, unfocused fear and anxiety at the same time. The sheer beauty of the scam: if one or more bombings actually occur, anywhere in the entire world (as is always fairly likely in any given week), they can claim to have predicted them (and given us a "heads up", however vague and unhelpful)...and if nothing happens, they will SURELY claim that the Total Surveillance State has prevented it. Are we really THIS gullible? "Do you avoid big attractions like the Eiffel Tower and Arc de Triomphe and steer clear of Montmartre in favour of, say, the underexplored 17th arrondissement? Do you hole up with a bag of croissants in your hotel and stick the kids in front of the TV?" _ So now victory is measured by the extent they've inconvenienced upper middle class Americans abroad? I guess to some missing the Eiffel Tower after a transatlantic flight is tantamount to the PTSD we've imposed on countless innocent families with out incessant drone attacks throughout the region. However, to me its an obscene suggestion. According to some civil libertarians this is what the intercepted conversation sounded like: al-Zawahiri: We will bomb the government building. al-Wuhayshi: What's that clicking noise? al-Zawahiri: Oh that's the NSA. 
al-Wuhayshi: Shouldn't we like stop talking about this then? al-Zawahiri: No, it's okay. They're on our side. They're going to use this conversation as a reason to tighten security. al-Wuhayshi: Wait, why do we want the Great Satan to tighten security? Doesn't that just make our job harder? al-Zawahiri: No, because our goal isn't to drive the Great Satan out of our lands or to drive the Jewish occupiers into the sea or to institute a worldwide caliphate but to make people wait longer at airports. If fliers can't carry shampoo onto planes, we win. al-Wuhayshi: I'm beginning to question our cause.

Yeah, I remember a story a while back of a smart NSA or CIA guy who, while giving a tour to a reporter and demonstrating America's safety, asked the reporter if he wanted to see the transcript of Osama Bin Ladin's latest satellite phone conversation. After that (and before 9/11) Al Qaeda was reported to have stopped using sat phones. Which is to say, I'm a little astonished at this story and have to wonder if saying into a satellite phone that there will be an attack isn't a brilliantly efficient alternative to attacking.

Terrorism is very confusing. False threats seem like they would nominally be effective. The spate of shootings by mentally disturbed people also makes clear there isn't a high barrier to that sort of incident. Then of course we have the sniper from all those years back and the fake anthrax letters. Would the expression, the tail is wagging the dog, be appropriate here? [Not to ignore what must be good work to prevent airplane hijackings/bombings]

What if the terrorists are a bit smarter than we think? What if they're trying to destroy our society - our freedom, not just our economy? What if the point of all of this is that it comes just at the time that we're outraged by the evidence of massive government overreach in their surveillance programs? What if the intent is to make us accept the surveillance rather than respond with outrage and demands for change?
What if that's really what al Qaeda wants - for us to turn into a surveillance society, to make it easier for us to turn into a tyranny in the future? I'm sure my post is going to spawn the usual "false flag" government conspiracy paranoia, but that is not at all my point. My point is that I fear that al Qaeda is smart enough to use our fears to manipulate us into destroying our own freedoms.

Or maybe it's even deeper reverse double Trojan horse counter psychology. They're actually pro-America and they're doing this all to raise awareness of civil liberties and begin a conversation about the security state. Or maybe they just wanna kill Americans. Occam's razor.

I'm not a big conspiracy theory guy. My suspicion was based on some things bin Laden said about his hopes and intentions with 9/11. Whether al Qaeda is still trying to pull off something like that, or if it's become *just* "wanna kill Americans", I do not know.

There is a reason it is called Terrorism and not Happyism. I'm sure they want to kill some Americans. But failing that, making us hide in our embassies, disrupt our plans and inject some fear into the 24 hour news cycle is a passable consolation prize.

Or maybe they just want to raise Al Qaeda's profile. Two guys with a phone sending the most powerful country in the world into a hissy fit is pretty good advertising.

Terrorism is the use of terror to effect change in the behavior of your enemy. Al Qaeda seems to be pretty good at creating attacks that can do this whether they succeed or fail. I would say that, since the London bombings, their failed attacks were more effective than their successful ones in affecting our behavior.

Why the hell would any of these Islamic groups care about Americans' freedom? They've explicitly stated their goals -- to cleanse the holy land of kaffirs, institute a global Islamic caliphate, implement Sharia, and live (while forcing others to live) according to the Koran.
And they've done precisely this whenever they've gained control of a country/territory -- see: Al-Shabab, the Taliban, Al-Qaeda, Hamas, etc. They couldn't care less about Americans' freedom. They just want the Great Satan to die or leave the Middle East entirely, and by extension, to leave the Jews to die. They also want to remove Muslims they disagree with, whether moderates or members of opposing sects. Al-Qaeda's greatest victims by far are Shia Muslims, not westerners.
Introduction

The following is a real C# code review. While it's not an enterprise scale application, it will give you an understanding of the type of things that are assessed as part of a code review.

Background

The sample application is a simple rogue-like game I built in a few hours. For the original article that demonstrates how I built it click here. I have kept all the code in the one single file for the moment, mainly to demonstrate the code review process. By being able to compare just 2 files I feel this is the easiest way to observe the review and the implementation of the review. Future versions will include a properly designed project structure.

Code

The code is available on codeplex at. If you are interested you will also see the development in progress including: bugs, issues, tasks and releases. The original program RRRSRogueLike0.1.cs I built is included as well as the version that includes the implementation of the review RRRSRogueLike0.2.cs.

The Review

The following are the individual items that were raised during the review:

When the monster gets created the number of live monsters gets incremented. The dungeon subscribes to the "ImDead" event of the monster, which gets fired in the die method. The event handler decrements the number of live monsters. Similarly, the dungeon should subscribe to the "ImDead" event of the player and set a boolean to indicate if the player is dead or not. Then your IsGameActive property is doing a very simple evaluation:

public bool IsGameActive
{
    get { return playerAlive && liveMonsterCount > 0; }
}

This is more efficient than querying every object when the property IsGameActive is accessed.

I think the working out of where the player is should be done in the player class. Can't a player move outside of a dungeon? What happens if I'm using a superhero player? Then my move is actually +=2 in the direction of my choice, but that can't be accommodated when the dungeon is in control.
So the player class should have methods: GoSouth, GoNorth etc. which do the appropriate location modifications. The player should expose a location property to give grid coordinates. The dungeon's job is to take the user input and convert it to the GoNSEW call, then ask the player where it is and work out if the move is allowed.

Implementing the Review

I implemented most changes as I reviewed them in detail and agreed they were worthwhile doing. However, I did not implement number 8 because I basically ran out of time and given the limitation of item functionality. I will be looking to implement some changes in the next version based on this recommendation.

Viewing the Changes in Code

It's worth having a look at the first version of the code RRRSRogueLike0.1.cs before you look at the new version. You will notice the new version of the game contains Refactor comments like:

//Refactor 14: Add the new Game Manager Class

This indicates we are making a change that refers to the review item that Xhalent raised in his review. You should see these for each of Xhalent's original review items with the exception of no 8.

Running the Code

You can download the source from codeplex as described above, or you can paste either of the code files into a C# console application and just add a reference to System.Drawing.

In Summary

©2017 C# Corner. All contents are copyright of their authors.
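Taken together, the review items above suggest a player that owns its own movement and raises an "ImDead" event, with the dungeon subscribing and keeping simple counters. The following is only a sketch of that design, not the actual codeplex source; member names that the review does not quote (Location, Register, OnMonsterCreated) are my own invention:

```csharp
using System;
using System.Drawing; // the project already references System.Drawing, so Point is available

public class Player
{
    public event EventHandler ImDead;            // the dungeon subscribes to this
    public Point Location { get; private set; }  // exposes grid coordinates, per the review

    public Player(int x, int y) { Location = new Point(x, y); }

    // Movement lives in the player, so a "superhero" subclass could override
    // these to move +=2 per turn without the dungeon needing to know.
    public virtual void GoNorth() { Location = new Point(Location.X, Location.Y - 1); }
    public virtual void GoSouth() { Location = new Point(Location.X, Location.Y + 1); }
    public virtual void GoEast()  { Location = new Point(Location.X + 1, Location.Y); }
    public virtual void GoWest()  { Location = new Point(Location.X - 1, Location.Y); }

    public void Die()
    {
        var handler = ImDead;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}

public class Dungeon
{
    private int liveMonsterCount;
    private bool playerAlive = true;

    // The simple evaluation the reviewer asked for: no querying every object.
    public bool IsGameActive
    {
        get { return playerAlive && liveMonsterCount > 0; }
    }

    public void Register(Player player)
    {
        player.ImDead += (sender, e) => playerAlive = false;
    }

    // A monster would raise a similar ImDead event; its handler just
    // decrements the counter rather than re-counting live monsters.
    public void OnMonsterCreated() { liveMonsterCount++; }
    public void OnMonsterDied()    { liveMonsterCount--; }
}
```

With this shape, the dungeon's move handling reduces to translating keyboard input into a GoNSEW call and then checking `player.Location` against the map.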
http://www.c-sharpcorner.com/UploadFile/dommym/a-real-code-review-in-C-Sharp/
Over the past few months, I've been contacted by a good number of readers who have had problems downloading our guides, who can't see the login buttons, or whose comments aren't loading; and in 99% of cases, it's because they're running one of these plugins – AdBlock, NoScript, or Ghostery – which I shall hereby refer to as the "trifecta of evil". Here's why.

AdBlock

Matt has already written an extensive article on why the AdBlock plugin is destructive, but I want to throw my own opinion in here too. For those of you who don't know, AdBlock silently removes all advertising and social buttons. Apologies if you think my definition of free is defective, but you're arguing over semantics and kind of missing the point. What makes me angry about the AdBlock plugin is that the author – while happy to destroy our revenue stream – is also profiteering from the very same free content model by asking for PayPal donations when the plugin is installed. Talk about hypocrisy. If you want online content to all be premium priced, then go right ahead and continue using AdBlock. Ultimately, you need to remember that if everyone cheated the system like AdBlock users do, the Internet would only exist behind paywalls.

NoScript

But the Internet has very much moved on and evolved from those early days. Browsers aren't as vulnerable as they used to be. Moreover, Javascript is an integral component of modern HTML5 standards, and jQuery – the most popular Javascript framework – has pushed web interfaces far, far beyond pages full of images, links and tables. The modern Internet must have Javascript. So when you use NoScript, from a user perspective you're going to find a whole host of features that don't work as expected. In an ideal world, websites would be able to degrade all of their advanced functionality for users without Javascript, offering some kind of no-JS alternative.
In the real world, we're limited in what we can do by working hours and budgets – and really, why should we support you if you're not willing to support us by displaying ads?

Ghostery

I hadn't heard of this until recently, but Ghostery appears to be the ultimate do-not-track plugin. It tells you exactly what companies, ad networks, and tracking services are being downloaded from a site, and allows you to selectively enable them. It presents users with two types of cookies ('trackers') – those downloaded directly from the site (such as WordPress remembering you're logged in) – and so-called "3PES", or third-party elements. The latter are any cookies from ad networks, analytics platforms, and user behavioural trackers.

On the one hand, I think it's important that users are educated about what's going on behind the scenes on a site. Ghostery maintains a "know your elements" glossary of all the known tracking scripts and the companies they belong to – it's comprehensive, and I applaud it. But educating people and blocking them are different things, and given that the majority of users simply leave it blocking everything, the end result is exactly the same as NoScript or AdBlock – users enjoying our content without creating revenue.

So how much can these companies actually "track" your web usage? Well, for one, they certainly aren't able to see what you're doing in other tabs, other windows, or general Internet searching. They only keep a record of sites in their network which you've browsed to. If company X puts a cookie on the New York Times and MSNBC sites, and you browse to both those and Wikipedia, it only knows about the two on which it was placed. In other words, they can't tell that your other tab is open on Asian Hotties or cheatonmywife.com... Heck, there's even a bus-stop ad campaign in the UK that only shows itself when a female walks by. How's that for targeted?
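The "company X" scenario above can be illustrated with a toy simulation: a third-party tracker only "sees" visits to pages that actually embed its tag, because the browser only sends the tracker's cookie when a resource is loaded from the tracker's domain. All names below are hypothetical; this is an illustration of the scoping argument, not any real ad network's code.

```javascript
// Toy model: a tracker can only record visits to sites that embed it.
class Tracker {
  constructor(embeddedOn) {
    this.embeddedOn = new Set(embeddedOn); // sites carrying the tracker tag
    this.profile = [];                     // what the tracker can record
  }
  visit(site) {
    // The cookie is only sent back when the page loads a resource
    // from the tracker's own domain, i.e. when the site embeds it.
    if (this.embeddedOn.has(site)) this.profile.push(site);
  }
}

const companyX = new Tracker(["nytimes.com", "msnbc.com"]);
["nytimes.com", "msnbc.com", "wikipedia.org"].forEach(s => companyX.visit(s));
console.log(companyX.profile); // only the two sites that embed the tracker
```

In this simplified model, the visit to wikipedia.org never reaches company X's profile, which is the point the article is making about tracking scope.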
Scare tactics are part of the problem: conspiracy theorists who believe the government is watching them now believe the Internet tracking companies know their every move too. Trouble is, a lot of people without technical knowledge on the subject believe those scare tactics. Now the Internet knows you're secretly into big ladies smothered in whipped cream, and you can be sure they're going to use it against you.

Basically though, it comes down to this – we provide thousands of articles, free book guides, and a community-driven technical support service – and in return, we ask that you don't block adverts. Now, I realise of course that I've only presented one side of the argument here. I'll admit right now that when you throw social networks into the mix, we may have serious privacy concerns – because suddenly, all this data can be traced back to you and not simply to an anonymous user. I'll leave it to another time, or another author, to present that side of the argument though. And just for the record, we won't be locking you out of the site if you decide not to support us by removing ads. We may show a little message asking you not to do it, but we will never lock you out. Do you disagree completely with what I've said? Feel free to vent your frustrations in the comments. Or do you agree with me, and think the whole do-not-track movement is crazy?

Image credit: Devils from Shutterstock

Experimented with this article with AdBlock disabled and Ghostery set to just report on trackers. It took 3x longer to load. 3 times, wtf! And Ghostery detected 30-50 trackers. I've been running AdBlock for years so I might be out of the loop, but is this really the best an ad-supported website can do? I know pageload times are inversely correlated with conversions, so if you do figure out how to monetise this in another way, consider toning down on the ads.

This is a debate that always concerns me and as such, I would like to intervene.
First of all, I must say, respecting the author and his point of view, that I agree completely with some of the criticism that has been made of the tone of this article; even for a personal opinion, it's pretty much in a blaming posture instead of a constructive one. Besides, in my country we have a saying, "The customer is always right", because in the end, if you can't secure your customers, they'll just fly away to somewhere else where their needs are met.

Actually, from the customers' point of view, most of us couldn't care less about the income of this website if there is any chance that we might be tracked and/or profiled, and if the ads give us a worse experience. Also, most of us just don't feel like we are in control of our own privacy and our own *personal* information. If that choice had been given to everyone since the beginning, we could make a more selective decision. But it wasn't, so we block, because the tracking companies didn't care about telling us what they were collecting and tracking; so we don't know, and it is better to prevent by default. Yes, we may block all the tracking and advertising technologies by default. And that's not our fault. You can't blame us for wanting to preserve our identity and/or remain anonymous; it should be a right to do so, because if control over all the information collected were given to the customer in the first instance, we would feel much more rested, less likely to be biased by conspiracy theories, and probably more receptive to cooperating. But greed has led the advertising companies to *impose* this model rather than *propose* it, because then only the awareness of a minority could counteract their strategies. That is the power of knowledge: they knew how to collect data from us, and we, in our majority, had no clue that such a thing was being done or how to avoid it, and for years that worked just fine to the detriment of the customer. They never had the customers' best interests above their own profit thirst since the very beginning.
This model is all about profit. Unfortunately, our rising awareness is hurting the publisher, who provides us this free and useful content; yes, I agree that one has to make a living. But hey, you did agree to adopting this model, didn't you? Weren't you aware that people could potentially get uncomfortable with the current model? Did you not know that you were complying with denying people's right to control their own data before any other entity could? If you didn't, you should have. If you did, well, blame yourself now. You bet on this branch; now the outcome is not what was expected. Well, that is business, isn't it? Why do you blame the companies that do what should have been done since forever: give customers back control of THEIR OWN data? Maybe if that had been done primarily by the advertising companies, with regard to our best interest, such addons wouldn't even exist. Once again, instead of being proposed, it was imposed on us, and imposed in an increasingly cumbersome, painful and heavy way, with bigger, animated, resource-hog banners, at the expense of the user experience, which made even those who wouldn't care so much about tracking find a much better experience online with AdBlock Plus enabled. Once again, you can only blame yourself, because you agreed with following this model; you bet on it as the best way to sustain your website, so you had to be aware that so much bombardment on the users' screens could turn out to be a real pain. I disagree completely with the author when he argues that there would be no Internet without the ad system. Of course, I can't make the counterfactual, but the Internet, as a global network connecting everyone, is a very privileged medium with so much potential. The Internet would, in my perspective, still exist, but built on a different model, that's all. I don't think it would be dumped just because one model couldn't thrive.
And that's precisely what is happening now: people became aware and don't like it anymore, and once again, blaming us is the last thing you should do, because we will just go somewhere else where ads are not intrusive, complete control over information is provided on the user side, and good articles still come out. That is what the market is demanding. It is competition. You couldn't exploit our lack of understanding about tracking and profiling forever. And yes, we do use Facebook and other social networks, but do you really believe that someone behaves the exact same way when his activity is associated with his real-life identity? Even if one has no clue about tracking technologies, on Facebook I know that if I comment on something, my face is there. If I like something, my face is there; if I like a product, my recommendation to my friends is there, with my face. Will I like redtube on my Facebook? Probably not. And even for dummies, it is not hard to presume that even private activities, such as chatting, will remain saved remotely. We do use Facebook, but all in all, if FB doesn't safeguard our privacy, they are providing a really bad service, and as such, it can't be used as an excuse to be tracked all around the web as well.

my god what drivel. Writers, 99% just cut and paste and call it an article. for my money these add-ons don't go far enough, I look forward to the day I can block blogs/*cough* news articles/poor content web sites and the like permanently from even showing up in search engines. advertising is the death of the internet, with every fat ad-man extending his greasy fat hand and gasping his ad-man mantra of "more money" or "pay me"; they should be given 30 lashes daily. These people are no more used car salesmen than, well, used car salesmen. I find it laughable that you consider these companies/people are paying your salary.
perhaps you should move on to a used car salesman career, as telling readers that these legitimate and useful tools are taking food off your table is something you would hear down at the 'lot', followed by a chuckle of laughter. I install these and more on every single computer I can get my hands on. Sir, it may pay your wage, but it wastes my time, and I find it offensive that these must be used on any reputable website. sorry, but the days of getting easy advertisement money are long gone.

it's a little ironic you have 10 !@#$ing tracking beacons on this very article. and lots of laughs about it affecting your livelihood. you're lucky you made one red cent doing nothing but typing words onto a virtual network. ad revenue is for suckers. try farming, then blogging.

Try web developing, then commenting. I myself host a couple of sites that use Google AdSense. I am writing very specifically targeted articles, and about 85% of my visitors do not see my ads. I think that's okay though, as I value people's integrity higher than my personal profits. Personally I only use AdBlock, AdBlock Plus and Do Not Track Plus. DNT+ I think does most of what NoScript and Ghostery do, but without disabling all Javascript (thus allowing me to use this comment section, for example).

Boo freaking hoo. The way the world wishes to view the web doesn't fit my current business model. Here's a free tip: Adapt or die, ***hole. Don't tell people how they should be rendering web pages.

Thanks for telling me about Ghostery!

You're welcome, Elliot. As a registered user though, the site will stop working for you if you activate it; you'll no longer get points, and you'll be unable to claim anything from the rewards page. Jus' sayin'.

>Hating on NoScript because apparently JavaScript is now the safest language on earth
JavaScript isn't the only reason we use NoScript; many of us use it because there are MAJOR security flaws in Flash and other embedded libraries.

Ever heard of Flashblock?
For a geek you sure do miss a lot of blatant flaws. 1) Flashblock only blocks Flash; NoScript blocks scripts. 2) If a legitimate website (take MakeUseOf, for example) was hijacked and malicious script injected into the homepage, imagine the amount of damage that could cause without NoScript or other plugins to prevent it. 3) Flash is not the only asinine security hole out there. 4) >James is a keen gamer with a passion for the iPad 5) >gamer with a passion for the iPad 6) >gamer 7) >iPad 8) >Geek

Followup: This site is so hilariously bloated it isn't funny, and any way to lessen the impact that has should be welcomed. Seriously, an HTML document should NOT have 13,612 lines of code unless you're doing something ridiculously huge where size is not an issue, such as a database.

Did you consider most of that is COMMENTS? Your last point makes no sense. Are you saying "Geek" as a derogatory term? Because I'm not the one who puts "Install Gentoo" as their name. Talk about the pot calling the kettle black.

I do not use AdBlock, NoScript or Ghostery because ads don't really annoy me. But I will stop visiting your site from now on. Everyone who tries, or just thinks of trying, to disable ad blockers with those funny little scripts does not respect my freedom, and I will in no way support such a person. Have a nice day.

Hello, James! My name is Ben Kite. On the few occasions when I watch television, I use the bathroom on commercial breaks. When I browse the web, I like to optimize screen space (and save myself a headache from flashers and talkers) by using AdBlock. I also render pages more quickly with JavaScript disabled, and I preserve my privacy by managing cookies. Your article has not convinced me to change any of my aforementioned behaviors. However, I invite you to ask the gentlemen at MakeUseOf to block my IP address, to prevent me from stealing content in the future. Thanks! - Ben

TV companies still get paid regardless of whether you go to the loo or not.
See the difference? FYI, we don't block users, and have explicitly stated that.

Hmm, I'm reading this article and writing here without seeing any filthy ads. Thank you, AdBlock!

Users who don't want to see ads tend not to click them, saving themselves bandwidth. Idk about you, but I never purchased anything via spam or an ad. If I want to add 5 inches to my penis instantly, risk free, I will visit a tube site; after all, it's the internet. They are never relevant. The only kind of advertising I enjoy and find relevant is the kind the smart companies use. Amazon, for example: the suggestions when going to the shopping cart tend to be right on the money. You're about to buy an RPG video game for X system, and they show you RPG games for X system. Not miracle cures for scalp cancer. When they do things like "buy this with your order and we'll knock off 10%", now that is smart practice. I block everything unwanted because it is just that: unwanted. People are full of wants, and advertisers just gotta smarten up on how to address the wants they have, w/o trying to make them want something generic. It's progress. Whatever can't adjust should die. I know greedy ISPs with all their data-capping plans are loving it. How many TBs are saved every second because videos aren't on an unwanted or unexpected autoplay on one of my 40 tabs somewhere? As for tracking, if unsought, again, screw that too. On any level. Your traffic is what makes your site. It's not all down to your talents, your advertising partners' wellness, etc. People should listen when they say they are tired of the boots on the tires, or they will rip them off themselves and resent everyone for it. The increase in use of these programs should be telling that the way it is now... is not the way it can stay much longer.

Advertising works, regardless of whether you think it does or that you're one of the special few it doesn't affect.
And contextual ecommerce advertising is only relevant to ecommerce sites – it isn't even advertising, and certainly wouldn't support a site like this.

@ted didn't say advertising doesn't work; he said it's unwanted. Advertising may well work, but then so do blackmail, sarin gas and shoplifting. That they work does not make them any more desirable for the majority.

Read the first line: "Users who don't want to see ads tend to not click them". This is what I'm referring to, because it's BS.

Just because we block your crap doesn't mean we don't support you. We don't always look at billboards while driving down a road; in fact, many of us ignore them completely. So why should online be any different? I'm looking at this at the basic level, but I do it to protect myself, not to stop your ads. Find a safer medium to show me your stuff that can't be used to give me spyware/adware.

I understand all your reasons; however, I think you are missing a very important point. Once a tracker is able to connect behavioral information about your browsing habits with some info which allows personal identification, like an IP address, email, etc., then this knowledge itself is very valuable personal information. It really does not matter whether you find it paranoid or not to assume that personal information can be misused. The thing which matters is that by using these ad services, you help these companies collect personal information WITHOUT asking users whether they agree or not. So it is fair for users of your site to BLOCK this unauthorized collection of data. So it is you who should break this circle. Either use ad companies who do not collect personal information and inform users about it (asking them to unblock ads), or clearly state that if a user is not happy with tracking, s/he should not use your page.
Jakub, the fact is that a lot of websites now carry not just picture ads but video ads, which begin to play automatically, and not just one; depending on the website there can be quite a lot cluttering up the webpage. Now, I don't have a big problem with using ads, but videos, even small videos, drain my internet speed, so if I'm downloading something I can see my speed decrease from 150kb/s to 50kb/s, which significantly lengthens my download time. There's also the fact that many of these adverts start to play straight away and cannot be paused; you have to wait 15 seconds for them to end. Now, you say the creator of AdBlock is trying to do the same thing, aka profiteering. The very key difference here is that he is saying "if you like it and use it, why not give me a tip", not "here, wait 10 seconds and then I'll show you what you want to see". Now think of this: 10-30 seconds watching an ad every time you want to do something on the internet eventually adds up to a hell of a lot of someone's time. I don't use AdBlock because, like you said, I don't like making people lose money, but I donate significantly more money than I generate through ads; as a matter of fact, I never click on ads. It's something I think a lot of businessmen forget: people like donating to something they enjoy or are a part of, but people don't like to be forced to pay (it's human nature). Hence why a lot of videogame reviewers, online mechanics and other video creators get significant numbers of people sending them valuable stuff, and why wishlists are so popular these days. Here's my final word: people hate the concept of money, but people enjoy the concept of value; that's why people prefer to give item-oriented gifts at Christmas rather than money. Because if you give an item of value, it feels as if you participated in the event, because you can say "I gave them that or this", whereas with money it becomes a statistic in a sense.
If only people would build a model of payment around this concept, but that is living in an idealistic world.

As a passionate web developer, graduate of an arts degree, and huge fan of all forms of creativity and innovation, I am a huge supporter of sites being able to advertise to their users. I totally value, respect, and encourage it. I do not approve of stealing content. However, I use AdBlock. I installed AdBlock after the fourth time I had to re-install Windows due to a virus that got to my computer through a compromised Flash ad on a website that I wanted to read and support. I would totally allow ads on every website on the internet to be displayed in my browser if they weren't a security risk, but they are, and I'm tired of getting burned for it. On sites I read often and know display safe ads that don't use Flash, I disable AdBlock. After years of resistance, I began trying out NoScript. Naturally, as a web developer, I love JavaScript. While I develop a non-JavaScript fallback in every project I can, I understand that most of the best behaviors on sites today rely on JavaScript, at the very least for user experience enhancement. However, I have gotten devastating viruses at least three times by clicking a link to a site that immediately began downloading malware through JavaScript. Although I run a constant antivirus with a trusted-sites database and a constant malware filter, I'm just tired of biting my nails every time I open a link to a new domain. Again, this is still a security risk for me. I'm trying out Ghostery for the first time today, mostly out of curiosity as to how often I'm tracked online (as it's turning out, A LOT). I'm okay with services I trust tracking me, but I'm not particularly thrilled about a program I installed on my computer tracking my behavior off of one site and onto another.
I love analytics programs and their ability to help me improve a user experience, but using someone else's personal property to spy on their behavior, and then using it to advertise to them, seems kind of shady and invasive to me. How about at least asking for my permission before doing that? All in all, I don't like disabling aspects of my technology, and I don't like anything that impedes online innovation, but security comes before that for me. After that, I'd like a company to respect me before they ask for something. You want to keep reminding me about a service I looked up? Probably fine with me, as long as you ask me first. Wanna keep doing it without asking me every time? How about asking me once before you do that? You're asking people to respect what you do, and how you support what you do, and that's great. If you want me to do that for you, respect me and how that impacts me. Ask me before you do stuff with my computer. Make sure your site isn't infecting my computer and putting me at risk. Do those things, and I'll disable my security for your service. If your advertising partner can't guarantee quiet, unanimated, non-annoying ads, can't you find a new advertising partner? I definitely would like to see this kind of article push back harder on the advertisers than the users; it's our computers and eyeballs you're asking to leverage to make money here.

That's a great point Kevin, and the approach I would recommend. I have a strict policy on my own blogs with regards to advertising, and I take the time to approve each and every one. Ultimately I'm just a writer and developer here, but I'm sure my bosses are listening and will take that to heart.

So, you can't make money from people like me? Tough. I have no intention of helping anyone make money on the internet. I refuse to make any purchase from any sponsor of any website I visit. Aside from that, I try to block any tracking of what I do online.
Not just for corporate reasons, but because 99% of what I use the internet for could be considered illegal, so I make every effort to prevent the man from getting in my face about stuff. So, how else would I use the internet freely without trackers and ad blockers? Or is it your opinion that people like me shouldn't be allowed online at all?

99% of what you use the internet for is illegal? Wow, that's quite an admission to make. I'm afraid I don't really have a response to that.

Mr. Bruce - You are wrong. The companies that are your "revenue stream" create parasites. These parasites are invasive to privacy and other fundamental liberties. Moreover, privacy laws differ internationally, and not everyone across the globe is provided with adequate protections. Further, you are culpable for helping trackers invade these liberties. Finally, I acknowledge the slight deference you give Ghostery in your article, but Ghostery empowers Internet users with the ability to protect their fundamental liberties and should be applauded. You are wrong because the companies that provide you with revenue create parasites; parasites that you, Mr. Bruce, invite into the intimate recesses of people's homes. These parasites, cookies, beacons, pixel gifs, etc. (collectively, trackers), feed off of a user by gobbling up and recording the data trail a user creates while on the Internet. This is not a symbiotic relationship, as marketing trackers help themselves to users' own computing resources while they compile and transmit the users' personal data. When done, they give nothing back.
Generalizing, these trackers record a user's IP address, track the webpage that a user just navigated from, track the webpage that they ultimately navigate to, record the fields a user populates with text while on the website, record the links that a user mouses over or clicks, record comments a user leaves on a website (and even the feedback they leave on another's comment), keep track of the duration of a visit to the website, transmit this information to where it is inadequately stored, create a "dossier" about the user's online activities for target marketing, and ultimately sell this dossier to others. Contrary to what you may urge others to believe, many of these trackers, once inserted, also continue to do this on every webpage a user visits on the Internet - even after the user has left the website that originally inserted the tracker. One might ask, "What's the harm in having a user's IP address? It's not like it's their name and address." I will assume you are a U.S. citizen (and also discuss U.S. liberties below), but I am unclear where you live or where your webpage is located. So, anyway, what harm? Well, in the United States there are no federal laws that adequately prevent an Internet Service Provider (ISP) from giving out or selling the personally identifying information attached to an IP address to a private entity. Once a user is personally identified and connected to the dossier of their online activity, the already present harm your article disregards intensifies. Trackers are invasive and harmful to fundamental American liberties. First, these trackers are inserted on a user's computer without any notice or consent, and I consider that to be a trespass to one's personal property. Second, similarly, these trackers can build up on a user's computer and degrade needed computer resources like RAM, processing power, and bandwidth, because they are all being used by the trackers to record and transmit data.
Trackers might also have compatibility issues amongst one another when competing for resources, and cause a user's computer or software to become unstable or crash. I consider that to be conversion. Third, I, like many others, happen to find privacy, also known as the freedom to be left alone, free from intrusion, to be extremely important, and do not appreciate eavesdropping or prying, whether the offender is a public or private entity. It chills freedom of speech and deters seeking out new ideas or educating oneself about the unpopular ideas of another... in other words, it inhibits the exchange that is supposed to occur within a free marketplace of ideas, because new or radical ideas are likely not vocalized (and conversely not asked about) when someone untrusted is watching. Fourth, it hurts one's right to speak anonymously. For example, in writing this comment to your article, I will not put my real name down, nor will I use a valid email address. I do this because I prefer to remain anonymous. Should one of your revenue streams read this comment that they recorded with their tracker and decide they want to identify me by getting my personal information from my ISP and publishing it, then they can do so and destroy my right to speak anonymously. A lack of the ability to speak anonymously also chills free, open, and candid speech. Fifth, it has an uncomfortably high potential to diminish a U.S. citizen's right to be free from unreasonable searches by the government. You see, U.S. citizens are protected here because the government must get a warrant to search an area in which one has a subjective expectation of privacy that society would find to be reasonable.
If these tracker intrusions are pervasive and Americans are complacent about their snooping, then we speak with a voice that says we don't believe that our computers or Internet activities are private - ergo, the government would be empowered to search them warrantlessly, as such a search would not be objectively unreasonable. As you can see, the harms your article minimizes are serious. I recognize the EU, if that is where you are located, has just begun enforcing new privacy laws to protect and limit the information trackers may gather. From this I infer the EU embraces the liberties discussed in the previous paragraph. However, this also raises another issue. The inconsistencies in privacy laws internationally mean that those residing within the EU have federal laws that safeguard their privacy, while other countries remain vulnerable to unwarranted intrusions. Add-ons like Ghostery provide people whose governments have yet to pass legislation affording them the commendable governmental protections offered elsewhere with equal or greater protections. You are culpable for the harm trackers cause because you allow them to be inserted onto your readers' computers in exchange for a payment. You act as though you are entitled to this revenue stream that you have been receiving for betraying your readers' trust. But you go further. You have the audacity to write an article complaining that there is a trifecta of evil that is robbing you of the money you earn for selling your readers' liberties. You brazenly urge your readers to believe that Ghostery, which empowers them with the choice not to have their computers invaded by parasites, is evil because you can no longer sell their privacy. You think Ghostery is evil because it shrinks your paycheck a bit? Well, whoop-de-do.
Fundamental liberties like privacy, speech, anonymity, being free from trespass or conversion, and guarding the integrity of our protection against unreasonable searches and seizures are at stake, and are worth far more than a few extra dollars in your pocket. Countless brave people have died defending these liberties, and I take offense at your complaint that you're not getting enough for selling them. How much do you make on each user's computer that you infect? 0.5 cents... if that? Is that the value you would place on your own privacy? On your family's or children's privacy? Your views are so revolting I need to shower the ick off. I can respect your desire to earn the best living possible, but I have no respect for your desire to sell these fundamental liberties to do so. Ghostery blocks the trackers that you, Mr. Bruce, invite into our homes. It allows users to contact your revenue stream directly and tell them their uninvited snooping is not welcome on their computers. I applaud the developers of Ghostery, their continued nonprofit work, and their patriotism – so should your readers.

Thanks for your input. Unfortunately, you've demonstrated a complete lack of technical understanding as to what these "trackers" are capable of, and it's exactly this kind of scaremongering by users such as yourself that really annoys me. I don't blame you, though. Fair enough, you make some great points – but when you start talking about the things cookies and tracking bugs can do, it just detracts from the rest of your points. Please, learn a little more about the technologies from a realistic perspective, and not from the conspiracy theorists' websites you're clearly learning this stuff from now. I'm from the UK, by the way. We have sufficient privacy laws to protect our personal data already: no ad tracking systems are currently in breach of them. Also, I'm salaried; my paycheck is unaffected. Sometimes things are about more than just your personal situation, though.
Hope you enjoyed the shower. Mr. Bruce (I believe you said you are also muotechguy) - I appreciate your fast reply, but my account of what these trackers are capable of is not scaremongering - nor has the origin of my knowledge come from any conspiracy theorist website. Additionally, as explained in my original post, I am concerned about the collective situation of privacy and not my own personal situation as you believe.

I know what trackers can do from case law. For example, one case details how a persistent cookie collected "names, addresses, telephone numbers, email addresses, dates of birth, genders, insurance statuses, education levels, occupations, medical conditions, medications, and reasons for visiting the particular website." (In re Pharmatrak (2003) 329 F.3d 9, 15; see generally Deering v. CenturyTel (2011) U.S. Dist. LEXIS 51930 (D. Mont. May 16, 2011).) Another case, one that might enhance your own technical understanding of trackers, and that can aid others generally on the workings of the internet and how trackers are inserted, collect information, and transmit that information in disregard of consumer approval or their expectation of what type of information advertisers collect, is In re DoubleClick Inc. Privacy Litigation (2001) 154 F. Supp. 2d 497, 500-507.

My technical understanding is reliable, accurate, and a demonstration of real-life litigation over information trackers obtain - they are not products of conspiracy theory. I don't claim to know more about technology than you, but it is clear you are either uninformed about the scope of what the trackers you advocate for are capable of, are in denial, or are simply spreading falsities for your own financial gain. In regards to my concern with the liberties of persons across the globe, I would simply refer back to my original post, where it plainly demonstrates my concerns have nothing to do with my personal situation.
Many reliable peer-reviewed articles can be found about this as well. (See Matthew Keck: Cookies, the Constitution, and the Common Law: A Framework for the Right of Privacy on the Internet (2002-2003) 13 Alb. L.J. Sci. & Tech. 83.) Forgive the sloppy citations, this was written hastily, but I hope you review some of those materials.

Thanks for the links C, obviously I was being hasty in judging you there. I've reviewed them, and the only truly alarming one was the first, really. However, I'd like to clarify that a persistent cookie is NOT capable of doing what is outlined in that case. A cookie is a text file, no more and no less. Cookies can be used to identify sites you visit because the tracking companies are integrated with them, and with those sites only. If you visit a site that doesn't contain the tracker's code, it cannot do anything with that cookie, nor can the cookie do anything by itself. It certainly can't record information you type into forms, the address bar, downloads, links, or anything else.

That said, the technology described in that case is clearly on a malware level - it talks about JavaScripts and Java applets (the latter of which would be blocked by most systems), which did indeed covertly intercept various bits of data. Please be aware though that this is NOT the norm - this is considered malware - and this is why there was a legal case. To use this as evidence to claim all tracking companies do the same is quite absurd, basically.

The CenturyTel ISP case is also quite worrying, but it's not related to ads per se; rather, the ISP was altering the users' web traffic - diverting ad calls to a rival network and such; intercepting packets to read personal information. This is just a downright dirty trick, but again, not something that the typical advertising network or tracking script is even capable of - you need to either be providing ISP services, or providing public wifi for your customers, for example.
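To illustrate the earlier point that a cookie is just key-value text, here's a minimal sketch of parsing a cookie string of the kind a browser stores. The names and values are invented for illustration; this is not any real tracker's cookie:

```javascript
// A cookie, as stored by the browser, is nothing more than text like this.
// The names and values below are invented for illustration.
const rawCookie = "uid=abc123; last_visit=2013-02-01; theme=dark";

// Reading it back is plain string splitting - there is no executable code
// inside a cookie, only whatever text the setting site chose to store.
function parseCookie(raw) {
  const jar = {};
  for (const pair of raw.split("; ")) {
    const eq = pair.indexOf("=");
    jar[pair.slice(0, eq)] = pair.slice(eq + 1);
  }
  return jar;
}

console.log(parseCookie(rawCookie).uid); // "abc123"
```

A tracker can only read back what it previously wrote, and only when the browser sends the cookie to that tracker's own domain; capturing form fields or keystrokes requires script running on the page, which is exactly what NoScript blocks.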
I'm no lawyer of course, but although it's a dirty trick to intercept and manipulate your users' traffic, it did explicitly say they would in the TOS...

The third case against DoubleClick (now owned by Google, I believe) looks to be testing the legality of the whole cookie issue and tracking in general, and if I'm reading this correctly the case was dismissed. Please correct me if I'm wrong, but this particular case didn't reveal anything untoward, I don't think. Thank you for continuing the discussion C, I really do appreciate the input here. I can only go so far in discussing legal cases of course, but I hope I've understood most of what you've outlined. I agree that there's a lot of nefarious activity out there, but I think it's wrong to punish authors because of the actions of a few bad ad companies (and ISPs).

I run a weekly woodworking class. I have a guy who pays me so he can root through my students' houses to find personal information, while the students are at my class; then with all the stolen information he sends their home addresses to advertising firms who bombard my students with junk mail, including letters loaded with EMP technology to disrupt my students' home electronics, and secret cameras so the advertisers can look at my students when they're watching TV or in the shower. My woodworking class is free, but I have to get paid. When my students complained to the police their houses were broken into, I put a letter on the noticeboard telling them they're evil for complaining and a guy's gotta get paid; if they go to the police again there will be "consequences".

What a truly ridiculous analogy. Does your woodworking class employ 20 full-time workers? How does it pay them? In the real world, you'd charge your students, or in this little fantasy world you've made up you'd take the model we do: if you come to the class, please have a look at this 2 minute commercial first.
If people started sneaking into your class without watching the commercial, you'd be unable to pay your workers and your class would disappear.

I use the complete "trifecta". The reason? The ads are annoying, and often intrusive. The scripts that you claim you "need" often pull dirty tricks like timers that fire at just about the time I am aiming at the "close" button, suddenly changing to an "I accept" button, or a link to a boner pill site. Tell me why those scripts are needed for a site to function correctly. Tell me why some script window pops up when my mouse is detected as I move it to leave the site. No, sir. Those things are most definitely not wanted or asked for! If more people blocked this crap, maybe the practice would be abandoned. Also, a site's statement in a TOS that I somehow implicitly agreed to is not enough for me. Maybe if these TOS were less loaded with legal double-speak, I would consider them more valid and legitimate. As they exist now, they are meaningless.

I use all 3 evils of the trifecta, and speaking from my personal standpoint the reason why is simple. I do not like being marketed to. On your obviously biased side I can understand why you think they might be "evil" because they hurt your bottom line. But it's not something I choose in order to economically undercut you or any other person profiteering from an advertisement-based business model. Asking people not to adblock, in itself, I have no issue with. That involves you making a plea to people and allowing them to decide. But writing slosh articles slandering work that people put their honest time into... the only goal of which is to provide CHOICE. Braindead.

I'd bet that the vast majority of "trifecta" users are highly technical. They know how to enable ads and scripts when they want to enable ads and scripts.
These are people that almost certainly would not click on your ad if they could see it, and may quickly abandon your site if your advertising is too obnoxious, or your tracking is too obtrusive. Ultimately, these users add value to websites in other ways, like contributing to the discussion. Your impressions may look lower than they actually are, and you may have to sell advertising for less money. But you *should* sell advertising for less money, because (at least for traditional advertising) this segment has an ad click-through rate of approximately zero. There are effective ways to monetize this group, but a traditional text ad or banner ad model is not one of them. And the idea that it's somehow unethical to opt out of tracking is absurd. I work in the web marketing industry, so I know exactly how creepy (and arguably unethical) these tracking tools can be. Using tools like NoScript to disable tracking is not an aggressive act by the user. It's a defensive response to an aggressive act by the producer. That's not to say that there's anything wrong with tracking. As an analytics junkie, I depend on it. But there's plenty of data out there, and my greed for data isn't great enough to justify being a dick to the end user.

I hear you. And I don't. I find ads annoying and ignore them all. It's nice to have them blocked. But I understand the concern of those who are using the ads for income as a recompense for providing the content. But ads (unless blocked) are visible. I can choose not to click on them. I can choose not to see the content they are linked to. But ads are not all that people are concerned about. Since installing Disconnect (you didn't mention that plug-in here), Blogger blogs stopped loading correctly in my browsers. The header would disappear and the log-in buttons were gone, etc.
Today I tested some things and found out that Disconnect was blocking 18 (count 'em, EIGHTEEN) "Google requests" which were apparently hidden in the banner of my friend's blog. It's that kind of subtle (that's a nice word for "sneaky"), undetected, hidden tracking that many of us are disturbed by. I don't want Google tracking my browsing, even if it's only on a particular site, and ESPECIALLY if they do it secretly. Sorry. It's the equivalent of someone following me around their store noting everything I look at and for how long, what color, trying to surreptitiously measure my shirt size, etc. It's just wrong. Yeah. Talk about "evil."

And you know those 18 scripts were "trackers", do you? Not just random bits of JavaScript that aided site functionality, like font loading and jQuery? And what makes you think even the tracking scripts were tracking *you*? Have you considered perhaps they track page load time from your country in order to identify bottlenecks and such, or the type of browser most prevalent? Would you then like all of this to be declared in the header, and for a popup to confirm you're okay with it before going ahead? On every single webpage, a new popup, asking for permission to load scripts essential to the page functionality? And then a separate popup to ask if you're OK with the page metrics one? And another for advertising! Excellent.

Nice sarcasm! I won't try to match it. Hey, I'll be honest and say "No, I don't 'know' that those 18 requests are trackers." I'm not a web designer or programmer. That said, let's look at this logically. Do you really think Google, of all people, cannot code a web site so that it doesn't require all that just to display a banner, fonts, etc.? WordPress, Twitter, news sites and EVERY OTHER SITE I VISIT display just fine for me with AdBlock, Ghostery and Disconnect running on Safari and Firefox or Chrome. And why does the Blogger banner show up SOMEtimes but then get blocked again?
Doesn't make sense (unless, maybe, Google is collecting information to sell. But no, wait, we know they would never do that!) And the guy who wrote Disconnect used to work for Google (and AdClick... I think). Is he really so unskilled as to write his plug-in so that it blocks sites' legitimate requests to display fonts, etc.? I doubt it. Plus if his plug-in were to make sites unviewable as a general rule, then he would not get many loyal users. And just because technology allows sites to learn so much about the user (for whatever reason) doesn't make it right (much less comfortable) for them to do so. My analogy here would be Radio Shack (or the hardware store here where we work) who ask for my name and address, etc., when I made a cash purchase. Sorry, but they don't really need that. But at least in that case, I can decline. They aren't doing it by scanning the magnetic strip on my driver's license as I walk through the door, unbeknownst to me.

Thanks! I'll explain it though. Twitter hosts scripts on their domain. Google hosts them in one central place - it doesn't store a different copy of the scripts for every Blogspot blog; that would be an incredible waste. Those are being blocked by that plugin for being "third party" - i.e., hosted elsewhere and therefore presumed not needed. We actually use the Google-hosted plugins too - this ensures they load from your local cache instead of reloading from ours. It's to keep the internet moving along without wasted bandwidth. I don't know the Disconnect plugin precisely, but I suspect it's of the same ilk as Ghostery or NoScript. As for erratic loading behaviour, it has nothing to do with collecting info - it's just the order in which load events are fired. Some scripts wait for the page to load first so the content is readable; then they add in functionality. Sometimes when these can't load, random things happen. Remember, as designers we don't plan for things to not be loaded. Do you use any store loyalty cards at all?
Or a credit card? Unless you only pay in cash, your life is already intricately tracked.

Hardly needed. Just make it the norm that ads come from subdomains such as ads.somesite.com, trackers from track.somesite.com, etc.; then users could exercise choice easily without a myriad of buttons for every page. Why not? We're always told it's about choice by advertisers, so why not offer choice in a way that doesn't require an arsenal of plugins and the knowledge to configure them? Since an awful lot of scripts are indeed trackers, and advertisers/trackers do their level best to disguise what their scripts do, is it any surprise people assume the worst rather than the best? It doesn't take a tinfoil hat or fear of government to detest the laissez-faire approach businesses take to users' privacy.

Thanks for pointing out NoScript and Ghostery, I didn't know about them. I have been using AdBlock for a while. After reading this I installed NoScript and Ghostery. It is pretentious to think that it is OK to just do whatever you want with people's computers. I actually believe that there should be a law that would make these three types of add-ons built into browsers, and they should all be enabled by default. At the very least, make it so people are given an easy-to-read heads-up display of everything that is going on with their connections. Seriously, if I want to go shopping for cars on the internet, I will. I don't need anyone to help "guide" me there, and I sure as hell don't need someone probing my surfing habits. It really is privacy invasion.

Yeh, a new law for TV sets too that automatically blocked out commercial breaks. That would be really successful, wouldn't it? Oh, until the TV networks all died. Except the non-ad-funded networks, which in the UK would leave you with the BBC - 5 or 6 good quality TV channels and a dozen radio channels.

Your sarcasm reeks of desperation and your argument doesn't actually add up. You realise the BBC is PAID FOR by taxpayers, right?
It's not just "free". I have to pay ~ £100/year. If you think restricting the choice of consumers is a good thing and that everything should be based on a subscription model, then... I can't even write that here.

Right after I read this, I went and downloaded all 3 of these plugins. They are very useful for when browsing free porn sites. If your content is actually worth a damn, then shouldn't people be willing to pay for it? Most people aren't actively aware that your precious content only comes at a hidden price. I mean, what's next? Devious trojans in all your stuff? It's your prerogative as a "content provider", right?

Devious trojans, eating babies... who knows where we'll go from here, eh? What if people can't afford to pay? Do you think, perhaps, we made a conscious decision to keep the content free deliberately so that everyone can access it?

I find it interesting and very much hypocritical that you would rail against these add-ons but advocate the use of UnoDNS. Sites like Netflix and Hulu have location restrictions due to the license agreements they enter into with content providers. You ask people to accept the ads on your site so as not to hurt your income, but then turn around and advocate the use of a service to circumvent content restrictions, potentially harming the ability of content distribution services to license content, thereby hurting their income. Am I missing something???

Explain how me paying for Netflix harms their revenue, please.

By circumventing the location restrictions required by the content providers you are endangering Netflix's ability to obtain licenses from the content providers. If Netflix cannot obtain content they cannot make money. What part of this isn't clear? You are showing people how to circumvent a restriction that enables Netflix to license content and thereby make money.

You realise that if I get on a plane, and go to America, that also works to "circumvent restrictions". Is that somehow different?
Apart from which, Netflix themselves don't unlock the content to me, and therefore don't break the terms of their licence. If the content providers strike a licensing agreement with UnoDNS, then we can talk again. Right now, no one is breaking any terms of any licenses; no one is losing any revenue. That's the difference. You're drawing a parallel where there is none, so this discussion is pointless. If you want to argue against the point I made in this article, go ahead. If you'd like to jump on over to the UnoDNS giveaway and argue about its legality and impact on content providers there, then go ahead. The two are unrelated however, and I'm a little sick of silly analogies in the comments so far. Why don't you just accuse me of skipping ads using a DVR and be done with it?

Well technically, as consumers, we're breaking the agreement when we watch DVDs from outside our regions or bypass geolocation tools. But we're still paying for it, so morally we're compensating them for their work. It's (or seems to be - read on) greed and arrogance that makes them think it's OK to sell movies in one place and not another, or make people in some regions deal with worse versions at the same or higher prices. They set up this system decades ago when physical books and VHS tapes were the norm, and now they're stuck with contracts to local distributors. "You did it to yourselves." - paraphrasing Mass Effect's Shepard.

Dear Algernon(s), perhaps it's time to change the income source of your meaningless sites then, and if abandoning the raping of naive users means your shit site going down, maybe that's just evolution, too. I don't respect your pathetic, undereducated, unskilled income anyway, which merely pollutes the internet with cheap commercial propaganda, moralising idiocy, and clearly uninformed opinion. Readers should recognise the trivial second layer to your efforts, namely that you are trying to do the exact same as your advertisements.
Like them, if you can only support your existence by advertisement itself, 99% of the time not even creating any actual product associated, no one will miss anything by letting your business die, as again, you don't create anything. You can't stop this, and trying to imply that users installing rapist-non-compliance software on their very own computers is somehow immoral is at least as pathetic as the existence of some apparent idiots who willingly give up their rights to privacy based on this assumed morality of zero content. Finally, I also have to ask, how many registrations did you create to agree with yourself?

Tell us how you really feel. Come on, don't hold back! :P

Err, not exactly a conspiracy theory. Just Google "UK email monitoring" and it's the first link on the BBC from April 2012. And it's not even the first time the mainstream press has mentioned it. Nor is the UK the only ones asking for this. You almost have to be living under a rock to be using the Internet and not know about this.

Not sure what relevance this has to ad blocking...

Your web page is filled with adverts that use a lot of our 10GB monthly allowance, and takes a lot longer to load without AdBlock.

A) Your ISP needs a kick in the... well, it needs to be replaced. Unless you're on a cell phone or dial-up, that's insane. 10GB is possible to go through in less than an hour on many cable modem connections. I assume you live in Australia or somewhere like that where the government deliberately has policies that lead to a technologically backwards implementation of the network. B) Flash Block is very effective at blocking video/audio ads while leaving banner ads alone. It also blocks Flash drive-by exploits. :P

I use Ghostery. My primary purpose there is not to block ads, but because I don't want companies building up profiles of my browsing. You sneer at the notion that governments might want to watch people.
The current internet snooping bill being proposed in the UK parliament is evidence that governments would love to have more access to our browsing habits. Our government's objective is to keep records on every website we visit, and the date/time and recipient of every message we send somebody. But apart from the growing concerns around government motivations, I think we'd all be concerned if we found private investigators rummaging through our paper recycling bins in our back yards to find out what we've been buying, so as to build up a profile of our buying habits. I don't really see why our online activities are any more public property than our offline ones. (Now counting down to the moment someone drags out the "nothing to hide? nothing to fear" argument.)

I have little idea whether or not, or how much, to be personally concerned by companies tracking me. But I don't think that nearly enough legal protections exist in the US or Europe to ensure our data is not abused in the future. We're on a slippery slope, and no honest person can say they know where it ends, as we're still learning. So I think it is very good for people to be wary and keep this debate open, until our legal rights catch up with our technology. Stating that people are evil if they hold a different opinion is not a smart way of keeping the debate open.

There are other revenue streams coming along that are much more direct than advertising. Flattr.com, as an example, allows people to pay you directly if they like your content. The AdBlock developer's donation model is also perfectly valid, and there's nothing to stop content providers offering a donation button. Or perhaps some content providers worry that they'd have to provide much better quality content before people were prepared to pay for it?
Ultimately, if you feel passionately that you have to fund your website through traditional advertising, I'm sure you could find a way of checking to see that a web bug has loaded and, if it hasn't, perhaps obscure your own content with a message saying "Please turn on advertising and commercial surveillance before browsing this site".

This is the first time I've ever visited your site (because someone linked to your screed about "the trifecta of evil") and it will be the last as well. You complain about AdBlock, NoScript and Ghostery destroying the Internet. No. They are not. They are negatively impacting your bottom line. Two different things. I'm not interested in your content-farm garbage or your advertisers. I don't give a rat's ass about your revenue stream. You seem to think that without your crappy site, I lose something. I've been on the 'net for over 20 years and I haven't lost a thing by never visiting this site -- and not a fuck was given. In conclusion, I'm taking my browser (with two of the "trifecta of evil") and going elsewhere. And if I ever see a link to your site, I'll remember not to visit. Have a nice day!

Content-farm garbage, eh? Glad you like the site!

Well, I see this site got rather popular with this article. I don't think I'd ever heard of "makeuseof" before, either, but when I saw Ghostery included in this "trifecta of evil" I realized it was just what I needed to block all the crap that was getting through AdBlock Plus and NoScript. It's amazing how fast pages load when all that extra crap is blocked. I just can't work up much sympathy for somebody whose livelihood is based on selling my browsing habits to all manner of sleazy ad networks.

Right away I can tell there's something amiss when authors refer to their writing as "content" and my reading as "consuming". Clearly such authors work in the advertising business.
What's not clear is how they purport to have the expertise to create all this "content" that is supposed to be worth my while to read--I mean "consume"--and, if they did have the expertise, why I should trust them, since the main goal of their writing is to sell me the stuff displayed in the ads they so desperately want to show. I don't think there's a whole lot of love or money lost, anyway, because by and large the people who buy stuff from the ads this guy wants to display aren't going to be sophisticated enough to install these ad-blockers or know or care what goes on behind the scenes of those ads on the internet anyways. I for one use NoScript because I don't want arbitrary scripts and programs and all kinds of other spyware and adware to run on my computer without my permission. I use AdBlock Plus because I don't even want to load all this bloated JavaScript in the first place. And I really resent being tracked from one unrelated site to another by all manner of web bugs/trackers/analytics crap. No, what's far more likely to break the web is that it'll slowly become so overrun and bog-slow with advertising, adware, malware, spyware, content farms, analytics, and affiliate networks that no one will be able to find anything useful on it anymore. Just like what almost happened to email before decent spam filters were invented.

This site has ads? Who knew?
Statistics for Ad Muncher v4.93.33676/4132 Beta
Adverts removed: 10,391,480
Bandwidth saved: 305,925 MB
Counter started: October 20, 2001

Well, thanks for being a loyal reader!

Curious about the author's actions and beliefs... I imagine they have a DVR. Do you watch the commercials when watching a recorded show? Should we all watch commercials on recorded shows?

Yes, I have Windows Media Center to record TV, and YES I watch the ads. Irrelevant analogy though; the TV companies get paid regardless of whether you use a DVR/PVR or not. We don't. See the difference?

One of our sites shows 140 pageviews in Statcounter, but when we use server-side stats, the real number is 2200 pageviews per day! So that is 2060 pageviews (or 94% of pageviews) that are blocking Statcounter. They will also block our adverts. While we do not want them to click on the adverts, or even look at them, we do expect them to allow the adverts to display. We are a free site and NEVER make our costs in advert (or any other) revenue. If all visitors had adverts displaying, we would make more than our costs and could expand.

Yet most people are worried about SOPA and PIPA? While I too disagree with these insane and heavy-handed laws, I see our site losing money, and I have to pay money from my own pocket just so that the AdBlock and NoScript users can view our website without adding any value to our site. The internet is the wild wild west. How many people try to attack your server every day, and how many are actually reported?

Put simply, it is not fair to take free content while blocking adverts that are on the site. Let's say you go to a talk where you get a free radio for listening to the talk, but before the talk begins, you take a radio and walk out. That is similar to what adblock users are doing. If you do not like the adverts on the site, go elsewhere. We will be blocking ALL adblock and noscript users from our site using custom code. They are simply not welcome on our website, as we all work for free and pay out of our own pocket for the server, yet 93% of people take the content without contributing towards our site.

Do you know what the answer is? Paywall! And that is funny, because adverts seem to have become so popular because users were willing to have adverts displayed so that they could access content without paying. Now they are blocking it, so back to a paywall!
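For what it's worth, the "custom code" for detecting ad blockers mentioned above usually works by planting a bait element and checking whether the blocker collapses it. A rough sketch follows - the class names are illustrative (many filter lists match names like these), and this is only one common approach, not the exact code any particular site uses:

```javascript
// One common ad-blocker check: insert a bait element styled like an ad
// and see whether the blocker hides or removes it.

// Pure decision logic, kept separate so it can run anywhere:
function looksBlocked(baitHeight) {
  // Ad blockers typically collapse matched elements to zero height.
  return baitHeight === 0;
}

if (typeof document !== "undefined") {
  const bait = document.createElement("div");
  bait.className = "adsbox ad-banner"; // names many filter lists match
  bait.style.height = "1px";
  document.body.appendChild(bait);
  setTimeout(() => {
    if (looksBlocked(bait.offsetHeight)) {
      console.log("Ad blocker detected - could show a polite appeal here.");
    }
    bait.remove();
  }, 100);
}
```

Note the flip side raised throughout this thread: users can and do defeat such checks, so in practice this works better as a polite appeal than as a hard wall.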
Yet these people would be the first to complain about poor content, even though they are preventing the site from making a better income and thus being able to spend more time on the content. People are idiots. Just look at all the stupid comments that have been made thus far on this issue. They are merely content thieves. They want to have their cake and eat it. I would rather throw the cake in their face :D

Thank you, Dave. Your stats sound quite disproportionate, but you're clearly getting a larger percentage blocking ads. I suspect we are too, but we haven't gone to the bother of checking. What's really funny? Everywhere will switch to a paywall, then 10 years later, a new "free model" will arise, where instead of paying for the content, you just have to view some advertising! How forward-thinking! You throw that cake, Dave!

The thing with NoScript is, if you use it correctly, it simply prevents scripts from running until you whitelist the page (something I do whenever I figure out a page is legit). So the only people who SHOULD be losing revenue are people who make garbage, content-less sites or websites with no reason to exist other than to bother you. I do understand that a lot of people probably use it to stop ads, however.

Load of bollocks, mate. I also run ad-supported websites, and nonetheless have AdBlock installed myself. I for one support the user having a choice of whether or not to see ads on mine or any other sites. Excuse me for not putting greed before freedom. And by the way, I think you will find that many users do not necessarily mind the odd ad in a discreet corner of the page. It's when they can't see the content for all the spam that people get pissed off.

Santello - It is your constitutional right to act like a fool. It is your choice to have your ad-supported website bring in $0 in ads and pay all the hosting costs yourself. The fact is that none of this has anything to do with greed.
Perhaps your content is worth nothing - which is probably the case. As a journalist, my content is worth something. People come to my site because they'd like answers to questions and I've written good articles to help them. The price for admission is something any reasonable person would pay - to see my sponsors' ads and know that these people are helping me pay for my site. Now if that is too much "inconvenience" for you, then take your browser elsewhere and get out of my site. If you want free lunch in my restaurant then you have to listen to a 15-second pitch. But if you just want the food for free and leave me with no money, then I'd tell you that you're foolish, because then all these restaurants - where you don't have to pay any money and just listen for 15 seconds - are all going to go out of business. In the end, large companies will probably do away with the web as we know it. They will have some type of javascript that will run in a browser or application not subject to browser tracking laws. People are fools. They don't know a good or reasonable thing until it's gone.

I think it's worth considering how web browsers and web servers actually work. Whenever you load a site that loads content from Facebook (for example), your web browser sends a request to Facebook including (naturally) information on which site you're viewing. How does it do this? There are multiple ways of doing it, but a simple way is to encode it in the URL of the request. The App ID is assigned by Facebook to each web site that is integrated into Facebook's social graph. Facebook receives and logs this information as well as your IP address - even if you're not logged into Facebook! If you're a Facebook user, then Facebook can easily de-anonymize this information (to some extent) by matching your IP address against any Facebook logins from the same address, enabling them to match web browsing logs to Facebook users.
Of course, if you're logged into Facebook, the de-anonymization step isn't necessary. Whenever you load a page that runs Google Analytics, your web browser sends a request to Google that results in logging the site and specific page, time, IP address, browser type and other things. This information is later provided back to the site, of course, but is also available to Google itself. Like Facebook, Google can easily de-anonymize data by matching IP addresses against logs of Google searches (which, as the AOL search log leaks demonstrated, are often very personally identifiable) as well as logins to Gmail/Google+/Google Chat/Google Talk, etc., from the same address. Of course, if you're logged into Google, they don't need to do that. Not everyone thinks it's necessary that Facebook and Google maintain a log of every other website they visit, so they use Ghostery to block Facebook and Google from any page which isn't actually on facebook.com or google.com, respectively.

Yep, I acknowledged that Facebook and social networks can be an issue, and I know how they work because we've implemented social logins here. And I totally get why you have a problem with that; but realistically, what do you think Google does with that? They use it for advertising. That's their business. They don't have a team of private investigators who look at your web history and flag you as a potential terror suspect; they don't dig into your sordid porn interests; and they don't send an automated Gmail to your wife if they think you're cheating on her. Sure, it's the principle of the thing, not what they *actually* do with the information, that counts, but I personally have a problem with fighting something just for the sake of it. The remarkable thing is that most people who don't want Facebook tracking their web history are also quite willing to post intimate details, likes, interests and photos there every day. As the joke goes - it's the best intelligence-gathering tool the CIA ever invented!
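The IP-matching de-anonymization step described a few comments up amounts to a simple join between two logs. A minimal sketch, with entirely invented data, assuming the tracker keeps a request log and the same company keeps a login log:

```python
# De-anonymization by IP matching, as described in the comment above:
# join a tracker's (ip, site) log against a (ip, account) login log.
# All addresses, sites and accounts here are invented for illustration.

tracking_log = [
    ("203.0.113.5", "recipes.example"),
    ("203.0.113.5", "medical-questions.example"),
    ("198.51.100.9", "news.example"),
]

login_log = [
    ("203.0.113.5", "alice@example.com"),
]

# Build a lookup from IP address to known account.
ip_to_user = dict(login_log)

# Attribute each third-party request to an account where the IP matches.
attributed = [
    (ip_to_user.get(ip, "<unknown>"), site)
    for ip, site in tracking_log
]

print(attributed)
```

The first two visits end up tied to alice's account; the third, with no matching login, stays anonymous. This is of course approximate in practice (shared and dynamic IPs), which is why the comment says "to some extent".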
Should it be up to us webmasters to remove FB Like buttons, though? Or should it be up to Facebook to stop tracking just because we have the button? We're at a disadvantage - severely - if we remove it. Why are we the ones being punished?

Let me explain to you the one simple reason that I use AdBlock... tracking. Do you know how creepy it is for me to shop for OTC medication on Amazon, then see it advertised when I visit MakeUseOf (yes, it actually happened on this site)? Yes, I know that I can install the "do not track" plugin or cookie, but I'd literally have to install one for every advertising network. Considering that every website, including this one, uses that type of advertising, I don't see myself as having much of a choice. I do donate to a select few websites, and I donated to AdBlock. I *attempted* to justify my usage of AdBlock; now it's your turn to attempt to justify your usage of advertising networks such as DoubleClick (I know it is widely used, but that doesn't make it acceptable).

Every ad network tracks, not just DoubleClick. If they didn't, they wouldn't pay as well, and you'd need more ads to make the same revenue. Personally, I don't find it "creepy" that things get advertised to me after searching for them. I'd much rather see a relevant ad than some BS lose-weight or "how my grandmother made $500 in a day working from home" ad, which is what you'll get if they aren't personalized. That's a personal preference, though, and you're entitled to think that way too; just bear in mind that if everyone did as you do, half the web would disappear.

Exactly the same whining we got when popup blockers appeared. Oh, the content will disappear! Oh, it's unfair! Too bad. It's my browser, my computer and my privacy. The advertisers will take everything they can. They will make pages unreadable with their Flash popovers. They will slow down our browsers with their stupid Java ads. They will cause seizures with their strobing ads.
They will scream in our ears with their autoplaying video ads. They will annoy everyone with their endless streams of popups. And they will complain when eventually people have had enough and click the block button. Tough luck. Regarding user tracking, you're already in big trouble: as of a few days ago, opt-out tracking is plainly illegal in the EU; tracking is now mandatory opt-in. And even if you ever get the law changed so you can have your opt-out, people will still opt out and you will be forced not to track them.

The solution is easy. Create an advertising service that respects the user. Not like Google, pretending to respect the user but spying on them behind their backs - earnestly respect the user. Limit your ads to text and static imagery. Do not force the sites in your network to include your JavaScript. Do not accept any dynamic content, like Flash, Java or JavaScript, from your advertisers. Do not accept ads for shady crap like those phony lotteries and similar scams. Do not track users across different sites. Maybe you'll be able to convince the blockers not to block you. If you do not use JavaScript, NoScript will not block you, and if you do not track users, Ghostery will not block you. You may even be able to work out a deal with the AdBlock people; they are not against ads per se, but they do want the upper hand in the ad arms race. And ultimately, as long as your content shows up on the user's computer, they will decide what they see and what they block, and you'll never win that arms race. Either you will adapt or you'll go the way of the dodo.

The point of AdBlock is largely the sheer volume of sites that use incredibly tacky, and at times noise-making, advertisements. Some people will choose to view the ads of those who decide not to be obnoxious with them. As for NoScript - NoScript isn't a "turning off" of JavaScript. It's control of JavaScript. It's that simple. I whitelist the sites I trust.
And if you think that JavaScript can't be used today to do some malicious stuff, then you don't know what you are talking about.

"We believe strongly in a free content model – whereby we provide free, high quality, full content to you with no restrictions – in exchange for showing you advertising. Apologies if you think my definition of free is defective, but you’re arguing over semantics and kind of missing the point." This isn't quite so. "Free" also means free as in freedom, which is part of what AdBlock tries to promote. The Internet wouldn't fall apart if advertising disappeared. It was a free exchange (in both senses) before the advertisers showed up.

"The modern Internet must have Javascript." No. The "Internet" is just a set of data exchange protocols with machines that follow them. The Web sits on top of it, and does many things (like letting me read this article) very well without JavaScript's interference. JavaScript is useful for a certain class of web apps, but the Internet doesn't depend on it.

"So how much can these companies actually “track” your web usage? Well for one, they certainly aren’t able to see what you’re doing in other tabs, other windows, or general Internet searching." Unless those sites happen to display ads from Google's advertising network, which controls almost half of the global advertising market.

"By far the easiest way to keep your private browsing actually private is to keep one particular browser, a portable thumbdrive version perhaps, to do all those browsing needs in." This presumes that the only reason to care about privacy is to keep "those browsing needs" secret - which is hard to take seriously.

I agree with you, partially. I use luakit for my web browsing, which doesn't come with an ad blocker (yet), as it is basically just a Lua engine bound to WebKit. That said, I do use Privoxy, with requests sent via Squid (to reduce the load on Privoxy, since Squid is a caching proxy).
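Privoxy can't read AdBlock list syntax directly, so feeding it EasyList-style filters means translating them first. A hedged sketch of one way such a conversion might look, handling only the simplest '||domain^' filters; the sample list and output format are simplified assumptions, not real EasyList data:

```python
# Hypothetical converter from simple AdBlock domain filters
# ("||domain.com^" form only) into Privoxy-style block patterns.
# Comment lines, exception rules (@@) and element-hiding rules (##)
# are skipped rather than translated.

def adblock_to_privoxy(filters):
    """Turn '||example.com^' style filters into Privoxy patterns."""
    patterns = []
    for line in filters:
        line = line.strip()
        # Skip blanks, comments, element-hiding rules, and exceptions.
        if not line or line.startswith("!") or "##" in line or line.startswith("@@"):
            continue
        if line.startswith("||") and line.endswith("^"):
            domain = line[2:-1]
            # A leading dot matches the domain and its subdomains.
            patterns.append("." + domain)
    return patterns

easylist_sample = [
    "! *** EasyList (invented sample) ***",
    "||ads.example.com^",
    "@@||goodsite.example^",   # exception rule: ignored here
    "example.org###banner",    # element-hiding rule: ignored here
    "||tracker.example.net^",
]

rules = adblock_to_privoxy(easylist_sample)
print("{ +block{converted} }")
for r in rules:
    print(r)
```

A real converter has far more filter grammar to deal with (regex options, resource-type flags, whitelists), which is presumably why the next commenter calls their own script imperfect.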
I use a bash script that I wrote to convert AdBlock lists to Privoxy rules, and I use all of the EasyList lists. It isn't perfect, but I haven't spent a great deal of time perfecting it either. I suppose at a later date I will.

Now, I don't disagree with advertising or even targeted advertising. I don't like being tracked from site to site, but I do understand that some websites rely on advertising, and I do notice the 'please add us to your whitelist' replacements for ads (such as on Wowhead). If I visit a website regularly, I'm more than happy to allow them to receive revenue for my visits, so long as the extra page elements don't increase my load too much or get too annoying (those annoying Flash ads, for instance). I think you'll find a lot of people are actually in this middle ground. The number of crazy 'no trackers, no ads' people is a lot smaller than you think, and the aforementioned plugins are leaning towards allowing non-intrusive advertising, which some people have already written articles against.

Why don't you let people use their computers as they wish? That's the kind of thing big corporations like Apple are being called evil for. If someone doesn't want to run JavaScript, that's his loss. If someone filters the content his browser displays, that's completely his choice. Come to think of it, I'd love to hear what you think about filtering content for children. Is that evil too? What if one of your precious ads was naughty and got blocked because of that?

Demanding visitors look at ads is much like demanding a donation. And when you think about it, the people who hate ads probably wouldn't click on them anyway. And even if you get paid for just displaying the ad, the ad is only useful to the advertiser if it actually results in a sale. You need to think of ads as a currency. Ignored ads reduce the amount of money paid for advertising.

How is this in any way related to filtering content for children?
You're completely diverting from the real issue, creating a straw man to prove your point. Of course I have nothing against parents using filtering software to prevent access to inappropriate material. With regard to inappropriate ads - any ad network worth dealing with has explicit settings to prevent those. And when you look at actual behavioural data instead of just randomly theorizing, you'll find the kind of people who block ads are no less likely to click on them than other users; moreover, ads are not necessarily about clicks - some simply serve to strengthen a brand. The fact remains that when you use an ad blocker, you consume content and you don't give back. If everyone did that, the content would cease to exist. It's a very simple equation. This site is supported by ads; if you enjoy the content, please don't block the ads.

"How is this in any way related to filtering content for children?" Both are content filtering. You shouldn't force people to load content they don't want. The whole-package-or-nothing systems are unethical. I also hate software that comes with Yahoo toolbars and shit like that (but at least that's optional). "You consume content" - I don't consume content. Anyway, I understand the need for advertising and I don't blame you for it. It's an unfortunate necessity. Hell, I use ads on my website, but I don't go around telling my viewers how much of a prick they are for filtering them out. I'd be a bloody hypocrite if I did, wouldn't I? AdBlock Plus is a wonderful tool, though. And it even prevents commercials on YouTube (commercials that force you to wait are the worst). You'll also be glad to know that AdBlock Plus now allows non-annoying ads by default.

You would be a hypocrite if you said that, but that's not relevant at all: I don't use it, for the reasons in the article above. Also, what makes you think you don't consume content? You read this article, didn't you? There, you "consumed" it.
There's a vast difference between big ad companies infecting the internet like a virus and sitting back to watch the bags of passive income roll in vs. smaller sites funding their bare existence (never mind the data they sell on to fourth and fifth parties). When adverts are placed appropriately within a decent layout and well-designed site, they don't harm the flow of a website - just like in a "free" printed newspaper. Showing adverts in such a manner is largely acceptable, and hey, if you want to sneak in a cookbook advert whilst I'm looking at a recipe article, then kudos to your smart ad server. When sites start putting adverts at the top, side, middle, bottom, bottom-corner popup/layover, do-not-leave, click-stuff-now, stop-reading-your-article-and-look-at-this-advert, then, really - pull out all the ad-blocking software you can find.

Well said, and I completely agree.

Well, I feel for you. But as a user who has to pay for bandwidth, here are a few of the things which will drive me away from your site FOREVER = NO AD REVENUE.
1. The overlay window which asks if I want to sign up. That thing is a guaranteed "no thank you", and a 50% chance I never see your content.
2. Music - don't care what your page says, I am gone!
3. Things which pop up, sometimes up and down, like the Meebo bar at the bottom. It is in the way and annoying.
4. Ads which keep running their little GIF movie over and over.
You may be worried about your salary, but with a hundred thousand sites to choose from, if you want me to see your ad, do it without ramming it down my gullet. Remember, the mouse is still in my hand and I can get away from you faster than your page will load. The short of it is: want us (users) to see your ads? Be nice.

The "trifecta of evil" exists because YOUR web designer is an *** (opinion).

Okay, so I turned off both AdBlock and Ghostery on this site just to see what it looks like, and I am annoyed by all the ads.
I read someone say not to go to the site rather than block the ads, but just think: if everyone who didn't want to see your ads didn't come, then you'd be in the same boat as if most people used a blocker, so that wasn't good advice. Not to mention that Ghostery says there are 45 trackers... that is disgusting. It makes me SICK. I don't want to be tracked, I don't want to see ads because it thinks I like cars, I don't want to see that single woman who DOESN'T really live near me, or this one ON YOUR SITE: "is he cheating on you?" No, no, no. It might be costing you money, "taking food off your table," but what makes you think anyone on the internet cares when "pirating" constantly goes on? You say the internet will be trapped behind paywalls? NO it won't, you know why? Because no one is going to pay to see these websites - and you'd be out of business. I have NEVER seen a good ad on the internet that I wanted. There's a big difference between TV ads showing cookware on Food Network and what internet ads do, because I never see the "cookware" on a food website. It's always just junk. Plus I don't want ads because I DON'T WANT TO BUY ANYTHING.

Protip: don't look at the ads. I don't know how so many people apparently have trouble with this. On TV, ads are an unavoidable menace that eats up 50% of the running time of a show. On the interwebz, ads usually take the form of easily ignorable banners to the side. Not wanting ads because you don't want to buy anything is the same sort of logic as not going to a restaurant because they have the option to serve salad. It doesn't affect you in any way whatsoever, and by making yourself known you come across as a dick.

I use AdBlock. I am blocking ads on my browser. I don't want to see them. How does that impact your livelihood?

My job is maintaining this site.
If the site no longer made revenue through advertising - which it wouldn't if everyone blocked ads as you do - then we would no longer be able to run the site, and it would close, laying off 8 or 9 full-time staff in the process, along with our entire team of writers.

I DON'T want your damn adverts. There is far too much advertising anyway. To hell with your adverts. People like you should be in leg irons performing hard labor. We're setting the web back 10 years because parasites like you have spent the last 10 years ruining it. Look around you, idiot - capitalism is dying. Soon the homeless mobs will come for those still supporting plantation slavery. If you really gave a damn about "making a living" you would say NO to cooperating with the corporations and governments bleeding you dry. But since you are a hypocrite, you play along with evil and bully the little guy.

Congratulations, I already use Adblock Plus and NoScript; you've now convinced me to download Ghostery. So you accuse 'some group' of fear-mongering over personal data and online behavior, yet you call this article's perfectly safe-to-use plugins the 'trifecta of evil'? AVG recently offered a 'Do Not Track' feature, and your site is a perfect example of why I love these additions. Facebook (+analytics), Google+, Twitter and LinkedIn all harvest information simply from me being here, where I arrived simply by clicking a link in a Google search, or a link I came across in another article. You never gave me the option to be tracked by these external companies.

The reason ad blockers and ghosting plugins exist is that internet content generators have been completely blind to how annoying and intrusive many of these ads are, and more importantly, how they often invade our privacy by tracing our behavior without consent. The internet did not start out this way; there is no reason why we should blindly accept this invasion of privacy.
By all means put a header on this site saying you would prefer me to leave if I have an ad blocker, but this is not something I can decide on before I actually get to your content, just as I can't select whether or not you let Facebook, LinkedIn, Twitter or an exotic variety of ad networks track me. Otherwise, put in some effort. You sound like the music industry when MP3 came out. Either proactively deal with marketing and ad revenue independently, or work with blocking software to set up advertising guidelines for what does not get blocked.

I have Adblock Plus and I use a predetermined list. I would never have installed or even searched for a way to stop ads if it weren't for intrusive and questionable ones. I put up with ads trying to con me into clicking them by pretending to be something I might be interested in. I could even tolerate the ones that took over the whole page and hid the close button, and the ones that blasted you with stupid sound at five times the volume of everything else. But when a very graphic adult ad randomly started blasting at work on one of my many work-related open search results while I was listening to something work-related, I almost lost my job, and that was the last straw. I do block almost all ads. Advertisers, you should concentrate more on attacking these unethical ad sites and spreading information on what we can do! I'll gladly turn off the resource-hungry app if we can ditch those!

I think this is just nonsense. If you've got an advertiser who is serious, then you integrate his ads on your page and no ad blocker would even be able to block them. These ads are reviewed by you or your team, and you know in advance they are not annoying or anything you wouldn't want to bother your viewers with. By using ad networks that go for the max, you create a bad, bad user experience, and ad blocking is a gift from heaven to get rid of it. Think about it.
Ad networks want maximum clicks -> hence they go for the ads and options that will distract the user as much as possible.

Thx. I lol'd.

>Moreover, Javascript is an integral component of modern HTML5 standards
But that's fucking wrong. Much of HTML5 is intended to reduce the use of client-side JavaScript for DOM interaction. Stopped reading there.

Reduce, not replace. Any modern site has HTML5, CSS and JavaScript.

Let's face it, you don't run this blog out of the kindness of your heart, you do it to make money. Just admit it and move on with life. You mention how add-ons are causing all kinds of hell and affecting your profits, but what about people tricked into visiting your site because you make use of plugins designed to game them, such as "All in One SEO"? Are you really saying that these poor individuals who inadvertently clicked your site links MUST view the ads you force down their throats? You, sir, bring hypocrisy to new levels.

Most people, I would think, use NoScript for security reasons. To say that JavaScript isn't a threat is laughable; it's an important component in attack vectors. AdBlock is also used for anti-tracking. They're both reactions to the downsides of the internet: loss of privacy and security threats. That can hurt websites' revenue, and that's a shame, but for those who already use such protections, going without them is not practical; there's a reason they do. What the system lacks is trust - trust that our privacy will still be there even though advertisers are so tempted by our every detail. The current debate on how to remedy these two conflicting aims will hopefully yield a solution that enables the continuation of a freely accessible web for all.

Seriously? Whining about people not looking at your ads is like a street musician complaining about people not putting money in his jar.
Nobody asked you to set up there; if you make money doing it, good for you, but no one is obliged to help you make money doing it. They aren't being evil if they don't help; they're being neutral.

HuwOS +1. Absolutely agree with you on that!!!

With my 25 years of experience as a system administrator, I will tell you that active content is killing the Internet. All malicious activity comes out of ActiveX, Flash, Java and JavaScript. Period. There is absolutely no need for active content to see what you wrote or to see images. Passive content doesn't dig deeply into my computer and DOESN'T USE MY RESOURCES WITHOUT MY PERMISSION to trick me into buying something that I really don't need. By the way, how about if a plumber came into your home and SILENTLY installed some special device in your toilet that tricked you into pooping every single second, 24 hours a day, just because he also wants to make a living fixing your overloaded toilet? Is that OK with you? "Moreover, JavaScript is an integral component of modern HTML5 standards" - remember my words: in the near future we will see a new type of "super-duper special web-antivirus that will hunt malicious JavaScript and slowdown computers more and more" :)

You're a sysadmin with 25 years of experience, and you just said "super-duper special web-antivirus that will hunt malicious JavaScript and slowdown computers more and more". Why do I doubt your credentials?

The analogy is terrible, but if you insist: how would you feel, as the artist or musician in question, if someone came along and put the lid on your tip jar, preventing you from getting tips? Or if someone was standing there shouting "Don't give this guy tips! It'll just encourage more buskers, and kill off the legitimate music industry"? Like I said, it's a terrible analogy, but since you raised it...

I unblock ads on websites I support... such as you guys!
I agree with the author on the point that it hurts revenue generation, but many people also use AdBlock to block ads or scripts that block the loading of the webpage.

Pure irony. The whole point of advertising is to psychologically manipulate people into buying products they don't want. This is pretty close to "evil", yet the article attempts an inversion to suggest that avoiding this is the moral failing. As for stealing, I know this was intended as hyperbole, but the article does strongly suggest the author thinks that way. The fact is sites have chosen a business model whereby they offer free content in the hope that ads will net some user interest and indirect revenue. Viewing the ads is in no way part of the business contract with the user, and blocking them is no more stealing than skipping a page of ads in a newspaper or accepting a free sample of food with no intention to buy the product. Blocking ads is the modern equivalent of changing TV channels during a commercial break. NoScript temporary exception made so I could post a comment :)

Not a fan of marketing in general then, are you? Not really much I can say in response to that, but I just wanted to let you know I do read all your comments. I don't recall saying it was stealing, actually - merely that if everyone did that, we wouldn't be able to survive and the site would be shut down. In just the same way that if everyone had a magic ad-blocking box for their TV, the advertisers wouldn't pay for shows and free-to-air TV would cease to exist. More to the point though, Loco, why don't you tell us how you would run the site? How would you pay 20 full-time writers and editors, and cover server costs to handle hundreds of visitors every second? I'm not in control of the monetization myself, but I'm sure my boss would love to hear your ideas, genuinely. We are always open to reader feedback and ideas, and I can assure you we take it all very seriously.
The responsibility lies in the hands of the people showing us these advertisements. Don't tell us "just deal with the annoyance, we need money", because (as is becoming more apparent) we'll just say "No, we don't have to." I wouldn't mind advertising if what was being advertised was at all related to my interests ("Hot [your age here] year old single females in [your town here]" doesn't count). Unfortunately for you, I have absolutely no interest in supporting Train's newest album or buying Resolve carpet cleaner. If you're going to track everything I do on the internet, you might as well use it to send me ads about things I've Googled. If I'm going to be honest, the idea of demanding payment for writing an internet blog is laughable. Why should you have to get paid so that other people can hear your opinion? Most people do that for free. I like your scare tactics, though; a lot of people are going to believe that the internet will just disappear and their browsers will just be white screens if they don't remove AdBlock.

Let me rephrase that second part: "I would absolutely LOVE advertising, if it were at all related to my interests."

In that case, let me suggest something crazy. Turn off your ad blocker for a month to let it gather data about you. Now, do you suppose that having gathered data about your interests, the ads shown will be more or less related to your interests? Also, you're obviously not a regular reader if you believe MakeUseOf solely exists as a platform for me to voice my opinions. I would suggest you read around the site before making blanket statements like that. I also never suggested the internet would disappear for using these plugins, nor would we ever choose to block users at this particular site. However, if you are blocking JavaScript, and modern features rely upon JavaScript, then it should be patently obvious that said features won't work.

"James Bruce" and "muotechguy" are the same person, ha ha. Wonders never cease to amaze me.
A "good cop, bad cop" tactic to get people to disable their add-ons, ha ha. I reckon your website's advertisers are threatening to pull the plug on MakeUseOf, hence the rhetoric about so-called "evil add-ons". The arrival of AdBlock, NoScript and NotScript (Chrome) has taken control from websites and given it back to users, resulting in websites having a "hissy fit", claiming theft of content that they made public, ha ha. Using aliases on the forum to manipulate the outcome is a really desperate act to win the argument. Call me "tinfoil-wearing" or "paranoid", I don't really care. My duty is to protect my computer from these intrusive ads. If it turns out you are connected to eHow or the SEO sites that Google has banished from its search engine, I'll do the same with my hosts file. PS: Is Tina also your avatar?

Never denied being the same person, haha! I have two accounts, one with admin privileges to run the site, and one with author privileges to write articles. It's not really that funny, but sure, OK. I ought to comment with only one, but sometimes I forget to log out, and it ends up being posted under my admin account. Haha, such a conspiracy. Tina is not me though ;). She has the enviable job of reading every single comment posted, moderating for offensive language, and reminding us when a comment goes unanswered. None of our advertisers are threatening to pull the plug on anything - though I don't actually deal with that side of things. I am, however, being quite vocal to our boss about changing the advertisers, because I've listened to everyone's feedback and tend to agree that low-quality advertising is detrimental - we should be doing more to curate our brand, and consequently the ads we display. But please, do continue to make assumptions about me.

I was highly surprised by your opinion. Then I read this: "James is a keen gamer with a passion for ipad boardgames." And I wasn't surprised anymore.

I'm not sure I follow.
Are you implying I'm biased because I run a website? My revenue has been largely unaffected, actually. I'm also a web user, so I think that actually makes me less biased than you; presumably you're just a consumer?

Though I see your point of view on the subject at hand and somewhat agree with you, my main concern falls on commercials that you are forced to watch, such as on YouTube channels. I used to have zero issues with advertisements in the past, whether they were on the sides of a webpage or a small pop-up in a video. They were always easy to click closed or easy to ignore. But you have to admit that they've gotten way out of hand, and because of this it's forced me to get an ad blocker to deal with this ongoing nuisance of being forced to watch advertisements before entering a page or watching a video. Perhaps if all web programmers stood up for the people and forced the advertisements themselves to be instantly closeable, or at least to require a close button after a maximum of 2 seconds, then most of us would probably not care enough to bother with an ad blocker. Why? Because let's face it, being forced to watch the same commercial over and over again from start to finish while watching numerous videos gets old pretty quickly. It also makes me 100% sure never to buy the product being forced down my throat, or buy anything from that company, out of sheer spite and anger at the company for pushing me to watch things I had no intention of buying in the first place.

The thing with YouTube ads is that the average time spent watching a YouTube ad is roughly 20-40 seconds per, let's say, 7-20 minute video. This ratio is FAR better than anything TV or radio can offer. idon'tknowwhypeoplecomplainaboutyoutubeads.jpg

Thank you for convincing me to install Ghostery. If a company cannot make money without surreptitiously exploiting my personal information, then I won't feel very bad about it when it fails.
My privacy is at least as valuable as their bottom line.

I personally use 2 of the 3 - for many, many reasons. I'd prefer donating MakeUseOf some money (yes, using PayPal) to seeing ads. People are generous, especially if they feel part of a community. Ask for money (like the "evil" people do), or be creative and find other ways. But in any case, thanks for the nice opinion article! I don't agree, but I appreciate it!

It would just be terrible if you had to do like other people and work hard for your money. Wouldn't it...?

If you think this type of work is easy, then maybe you're in the wrong job.

I have Adblock Plus and NoScript enabled right now. This 'trifecta of evil' could quite easily be compared to a toothbrush, toilet paper and soap. Thus, the 'trifecta' is bound to cause a stir in the bacterial community (as pictured elegantly by James Bruce above).

Wow. You really know how to make a well-reasoned and logical argument, don't you?

I'm terribly sorry, but is that meant to be ironic in some way? If that is the case, could you please elaborate on how ads could be considered anything other than annoying off-topic clutter clouding the user experience and page content?

No, I don't care to respond in any other way, I'm afraid, not when you begin a conversation with as eloquent a personal insult as you just did.

I use "the trifecta of evil" on all my devices (PCs, laptops, even my phone); now the internet is faster than ever. You don't care about user experience, you only want to make more money off my clicks. Sorry, pal, maybe you should consider another approach to ads. I recommend to everyone I know to use these apps. Website ads' days are over. A new internet has been born. Deal with it.

That would be the internet situated behind paywalls then, I presume?

Thanks, Mr. Bruce! I was not aware of Ghostery until reading your article. It's a great addition to my other Fx add-ons!

You're welcome. Just think - if the site didn't exist, you would never have heard about it!
Yet if everyone ran it, the site wouldn't exist, and you wouldn't have heard about it, so the site would exist. Waah, my brain is exploding from the paradox~!

Thank you for telling me about these amazing add-ons. I just installed all of them and they work great!

I'm always amazed at people who so conveniently ignore the difference between voluntary and involuntary. PayPal is voluntary. Stuff popping onto my screen that I don't want is involuntary. Wasting my time by making me deal with crap is involuntary. You're a fool if you think that getting rid of ads will destroy the internet. It may destroy your job, one hopes, but there are other ways to pay the piper. Some are here now, some need a little more implementation research to make real, some aren't thought up yet. But forcing people to have to deal with crap they don't want popping onto their screens is the wrong way to go. It's always been the wrong way to go. Also, of course, after a certain point (name recognition), advertising doesn't work, as a 30 million dollar Pepsi research project proved in the early 1980s. Pepsi learned it can't advertise its way to beating Coke, and Apple learned, and showed the world, that it's products that make people buy, not brilliant ads.

I don't mean to be rude, but using Apple as an example here was a terrible idea. Apple's success lies in brilliant marketing much more than in actual product design. A simple comparison with competing products shows that more than half of the price of each Apple product is the brand itself. I mean, look at the iPad: they managed to create a brand new need for tens of millions of hapless consumers.

Offended for being called hapless, but regardless - thanks for your input, "poverty stricken"

It bothers me a lot that you claim these add-ons "break the web." Foremost, it is the responsibility of any web developer to make sure each page degrades gracefully. If that doesn't happen, it's simply a matter of poor web design practices.
The real problem is how pervasive this mentality has become. It isn't just your website that thinks "Users should view the content this certain way, or we can't be responsible for what happens." That mindset is flawed, and doesn't help promote progress or standards in any way. That being said, it isn't really your place to tell the user what s/he should or should not be doing with their browser and their web experience. It's not your business. If you really want to put out free content, then it should be on your dollar, not mine, or it isn't really free. Claiming that I'm not really paying anything by having to view ads is ridiculous. It takes extra time to load all of those worthless, malware-vector spam images. It takes time to load a lot of heavy JS. Time is money, and my time is just as valuable as yours. The problem with these ads and malicious tracking cookies and JavaScript/Flash/Java exploits is that they are underhanded, and take advantage of unwitting users without any background in web technologies. They didn't sign up to be tracked or monetized, and, even if they did, in most cases didn't understand the full implications of doing so. This is a pervasive social problem, and you are only fueling it. You shouldn't claim things are free when you are really just hiding the real price. If you can't figure out how to do that, or believe that it gives you the right to violate your users' freedoms and privacy, maybe you shouldn't be doing it. The thing is, if your content is ad-supported then it isn't free. I guess that would make you a liar now, wouldn't it? I find it funny that you don't seem to realize that the web has been around longer than ads and spam have. If you like these things, more power to you, but it seems some of your users have more sense. If you are unable to support the bandwidth costs of your "free" content, then maybe you should find yourself another line of work.
Or perhaps you could wise up like some of your betters and quit pretending this "free" content is some kind of altruistic service. I would say the incendiary nature of this article seems successful though. Too bad you can't seem to be satisfied with users who like more control over their web experience than "content creators" would like to force on them. I'm perfectly satisfied with my web experience; perhaps you should look into how you can be too, because these plugins and workarounds are not going to go away. They are going to become more prominent as the general user becomes more tech-savvy. Even without any of these, literally all of your revenue could also be blocked with a simple hosts file and stock browser settings in any major browser. 10/10 nice trolling.

Don't call me a liar, please. Kindly go look up the dictionary definition of free. One of the meanings is "without cost". Is there a monetary cost to you for reading the content? No? Well, it's free then. The net evolved significantly because it was able to have money injected into it from ad companies. Without the ads, we would be stuck with the web as it was 10 years ago - unprofessionally made, ugly, and badly written pages posted by the geek elite. I should know, I was one of them.

You're right about needing to adapt, of course. I do realise that these plugins are here and not going away, which is why my next article is on the topic of "how to deal with adblockers", so perhaps you'd be interested to read that too. Bear in mind that I don't own this site though, I simply work here. I don't have any control over the revenue sources, but I am crying out for changes and imploring my bosses to look at other sources such as sponsored posts (i.e., something you can't block). This seems to be in line with what you're suggesting?

The problem is that it's a two-edged sword. So on one hand, many websites would never have survived this long or even appeared in the first place.
But on the other, the web is turning into a large, mindless shopping mall overflowing with too much superficial content and too many users looking for a quick buck. So there's a fair chance the internet will end up choked by the very people who helped it grow in the first place.

I don't want the malware that publishers insert in ads. Or defective ads selling scams, or some personal info profiling disguised as a "free" product.

Deceptive ads are annoying, and we always ask for them to be removed as soon as they appear. Malware actually in ads is a myth though, and any modern browser is a good defence against it.

Would you please explain how "malware in ads is a myth"? This would be a good article for you to write, as many people block ads because of the malware issue. I'm sure that Flash plug-ins now using policy-based protection can alleviate some malware (I believe Adobe uses their own definition of a 'sandboxed' Flash plug-in). But it would be a disservice to your readers if you make a blanket statement that malware in ads is a myth without some info and/or data to back up such a claim.

Perhaps some other writers can take up the task; personally, I'm sick of being flamed for posting anything related to advertising. Perhaps.

It's a bit disingenuous to claim you get flamed for posting 'anything about advertising.' Your article is titled: 'AdBlock, NoScript & Ghostery – The Trifecta Of Evil [Opinion]'.

I was referring to my other article, on how to deal with the adblock problem (intended for site owners), which people jumped on without actually reading it. In hindsight, "myth" is the wrong word to use. "Extremely unlikely to make it worth my worrying about, ever" would have been better.

U mad capitalism? Why don't you get a real job?

Oooh, because writer and web developer is not a real job?
Excellent and well-reasoned argument, sir; I commend you on the thought you put into that.

If nobody will pay you for it then no, it's not a real job. And since the whole point of this article is to whinge that you can't pay the bills with what people are willing to pay you, well...

I get paid just fine, and people are willing to pay me what I charge; I don't pay the bills for this particular site. This was not an article about MakeUseOf; it was an article about the internet in general and how adblock is very much harming it.

If advertisers weren't such assholes, creating ads that slide over the stuff I'm reading, play horrendously annoying sounds in the background, or redirect me to another site, I never would have downloaded AdBlock + NoScript in the first place. The advertisers brought this on themselves and at this point, they can just deal with it.

True as that is, it's not the advertisers that suffer - it's the site producing content.

It's not hypocrisy by Adblock Plus to ask for donations, since advertising and donations aren't the same goddamned thing. Here's an idea: why don't you switch to asking for donations so we don't have to sit through another bitchfest like this, driving us away from your site and making sure we never click your ads?

Here's an idea - because it doesn't pay the bills. Thanks for playing "wheel of monetization" though, you've been a great contestant!

"My view of the world starts with the end user... It's important the advertising model doesn't scare the user." - Google's Eric Schmidt

Today's advertising model involves intentionally degrading the user's experience - the user only wants to see the text of the article, the ads want him to see only the ads. And it's not just displaying ads, but tracking the user's behavior through "anonymous user data", a concept which has been debunked.
Furthermore, as users we don't have any control over what "anonymous" data you collect, what you do with it after you collect it, or how long you keep it, nor do we have any way to correct errors. Furthermore, you have no legal responsibility to safeguard any of this data. On top of that, all web advertising follows this model: "if you see an inappropriate advertisement, report it and maybe we'll take it down." When that ad involves running arbitrary Flash or JavaScript code inside the user's browser, and subrequesting arbitrary third-party domains, that's not good enough. Right now, the advertising and datamining industries are reeling; for years they have silently exploited end users for profit, with no accountability or recourse on the part of users - and they took full advantage of this. Now users are realizing they don't have to tolerate that anymore, and instead of innovating solutions they're bemoaning the exercise of choice by consumers. I think you're right - a lot of sites are going to end up behind paywalls, and I think people will pay. And lots of people will get the same thing for free somewhere else.

A couple of details you got wrong: Ghostery is now OWNED by the advertising companies, and uses THEIR OWN OPT-OUT SYSTEMS to block tracking cookies. It is impossible for this add-on to interfere with downloads, or displaying comments. It sets opt-out cookies that any user can set themselves by manually going to each advertiser's website and clicking opt out; except some of us like to clear our cookies and cache sometimes. You should also note that Adblock Plus, by default, now allows some advertising, which the ABP authors have checked and found to be unobtrusive, that is, polite. JavaScript is not required for most things on the internet. In fact, most of it goes into "widgets", tracking, and login functions.
What you don't seem to understand is that *I don't want an account on your site, for the same reason I block your ads.* Which is why things such as Disqus and OpenID are so popular; and in fact, on a site which uses those, I can selectively allow that while blocking things like Google Analytics and Twitter. Anyone who leaves everything blocked in NoScript isn't using it right.

Ghostery is owned by Evidon, and they provide privacy solutions to companies. They are not an advertiser, so I have no idea where you got that idea. It also doesn't use OPT-OUT systems, because no such thing exists. It blocks cookies, that's all. And you're right, anyone who leaves everything blocked in NoScript isn't using it right. Which is exactly what 99% of users of these plugins do. Anyway, I'm inclined to agree with you that the types of adverts need to change, and website owners need to be more selective in who they work with. Expect changes around here.

Excuse me. I got a little hot under the collar. I had Ghostery confused with TACO, another anti-tracking extension.

>entitlement
stop saying that word....I think bloggers feel entitled that they can make blogging they're job. Why don't you contribute in a semi-meaningful way instead of talking about things people are already talking about....

Writing is a very small part of my job; the majority of my time is spent developing and maintaining the site. Blogging alone rarely pays anyone's wages. (*their*)

Hilarious...broken model? it's the consumer's fault THE CONSUMER NEEDS TO TAKE RESPONSIBILITY. and sadly I don't have NoScript so this page got one extra view from me. Sorry, I promise never to waste my time clicking a link that leads to this website.

Agreed. You, the consumer, need to take responsibility.

Asking for donations is not the same thing as user tracking, ad-targeting, SEO spammers, and mining every bit of information you can about people just to try and force their money out of their pockets.
Because all of these things actively hurt you, because the big scary corporations know what sort of things you get up to on the interwebz. BTW, it's not money out of YOUR pockets, but out of the advertising companies'. Wouldn't you like to be taking from the rich and giving to the poor? The donation model is the best model. Drop the ads, ask for donations.

And you would know this from having actually operated a large-scale website, I presume?

Everything that needed to be said about this asinine article appears to have been said. Adblock stays on. Enjoy being an elitist by attempting to tell me how MY experience is SUPPOSED to be on the internet, because I certainly won't be coming back to read about it. Enjoy having to fork over credit card info to go to any website once everyone catches on to your way of thinking, John.

Those are totally different things. Showing cooking ads during cooking shows is like showing car ads on car websites. Tracking what websites you visit is like your TV tracking what shows you watch, reporting them to the studios and then having them show car and cooking advertisements just to you.

Ok, point taken. And I'm inclined to agree, somewhat. I vet all the advertisers on my own site ipadboardgames.org, and intentionally disallow anything not iPad/iPhone related. It's a better model, and I'm looking for us to make changes around here.

Thanks for the reminder, I've been meaning to download NoScript on this computer but keep on forgetting.

The real irony is when you will eventually start deleting and blocking comments because people aren't eating up your bullshit and instead are telling you off with valid arguments.

(a) That isn't irony, it's just deleting comments. (b) We haven't deleted any comments just because they don't agree with us.
(c) If your comment went into moderation, it's because it contained a link (which we vet to ensure it isn't random malware spam - yeah, that task requires someone to be paid too, you know?), or some offensive language.

Adding to what James said: We do occasionally edit or delete comments that are abusive or use foul language.

NoScript is absolutely essential for websites that are not familiar. NoScript + common sense gives almost perfect protection from viruses. That said, ghostery+noscript+adblock user here. Suck it.

Started off as a good comment. Ended up a total douche. Awesome.

If you don't think people should use your site, view your content, without also viewing your ads, then don't let them. If you try to control how users use your site, you're going to drive them away. It is your job to adapt to the ever-changing climate of online activity, and the user's job to create it. If you don't want to adapt, then go right ahead and die like the rest. If you're not willing to do anything about the problem, and still whine about it, then you're part of the problem.

Repeating myself again here. I am one writer, and this is an opinion post; this is not a corporate view and there are 20 other writers here. I have explicitly stated on many occasions that we will not block the adblock users. Ok?

>Fights against people blocking your site from loading up scripts they don't want loaded
>Has a privacy-breaching Facebook plugin on the page
Yeah, no. You don't deserve to be pitied at all.

If I do not block ads, but do not click on them and do not buy their products, I am then by extension hurting the advertisers just as much as the site owners, because the money which they paid to put up the advertisements is being wasted. If I bought everything which was advertised to me, the internet would become a very expensive place. Advertising cheapens one's content and should be blocked at all times. I tune them out in real life. Should I look at all billboards too?
Upboated

That point has been countered numerous times already; sorry, not repeating the lengthy argument yet again.

Until ads respect my privacy and freedom, I shall block them. If you do find an ad provider that does just that, please notify your readers and I, and hopefully others, will unblock your ads.

... Working on it.

Here's an interesting suggestion... Get a real job.

B'awwwww why won't they let me spam them with ads for my shitty blog?! Moderated links, I'll make a point of not accepting those.

You criticize AdBlock for using a PayPal button? Why don't you add a PayPal button and stop the ads too?

Because it wouldn't pay nearly as much, and many people around here would lose their jobs.

Have you tried getting a real job?

LOL. No, I like to sit here in my underpants arguing with random net users all day. (If you don't realise, this comment was a joke.)

I like how you didn't bother addressing people who use text-based browsers, which thereby completely block any image ads. Are they somehow ripping off your "content"? Any writer who refers to themselves as a "content" provider is obviously not writing because they love it. I find it hard to be lectured by someone on their high horse about "content" when they themselves have no love for the medium. You degrade the works of actual artists and writers when you refer to it as "content". Furthermore, you seem completely detached from reality. This is what the market wants and someone is facilitating a need for that market. People (not consumers, /people/) want to view sites without annoying ads or potentially unsafe ads. People want this. The market will be facilitated whether you like it or not.
Staying behind the times means your business model will dry up and you will simply blame others for changing while you stay the same. Get with the times. Figure out a new way to make money. Stop clinging to old business models. Not everything gets to remain the same. And for the love of Odin, stop calling works "content".

Bravo. Free markets rule everything; content providers will be left in the dust.

Less than 0.01% of users still use a text-based browser. Is your entire point based on a simple dislike of the word "content"? That isn't really something I can respond to. Books are content, TV shows are content. "something that is to be expressed through some medium, as speech, writing, or any of various arts: a poetic form adequate to a poetic content." Anyway, you are somewhat right. We need to adapt and create other revenue streams. Hopefully we can survive, and you can continue to make use of our great free *content* and guides.

This seriously sounds like a YOU problem. I am astounded by the audacity of "content" providers who think that just because they decided to maintain a web page, then all of a sudden they have the right to tell their visitors what they should and should not be consuming. Everyone out there wants to get rich farming data from their consumers; they want to leave the doors wide open, getting paid for everyone who crosses the threshold, yet still complain when some people decide to walk away with the stereo. It's pretty sad when you can insult your consumers in this manner, completely ignoring the positive impact our sharing of your articles, our connection to you via social media and our feedback can have on a site. You seem to forget that the only reason this site and others like it are even remotely viable is your viewer base. It's not your articles, your snazzy design, or your giveaways. You get X number of people to cross that threshold a month. If you provide content that they find acceptable, they will come back, tell their friends, etc.
Most people aren't savvy enough to block all ads or tracking scripts, so your complaint seems more akin to the BS "we are losing money on MP3s" argument made by the RIAA. Digital content is infinitely reproducible. Other than the initial cost of production and the finite costs of distribution, it has NO inherent value. Your articles can and do get you mileage far beyond YOUR bandwidth. So while I generally get the foolish and insulting point you were trying to make, I do not sympathize with you when it comes to SOME of your users deciding they don't want visual spam to read a blog post. You want revenue? Ask for subscriptions, period. I suspect you aren't confident enough in your offerings to do so, and that is not an insult. Not EVERY site has subscription-worthy content. But the bottom line is, so long as your door is left open, you have to accept the fact that your stuff is FREE; reading an ad or clicking a link is a COURTESY, not an OBLIGATION. You would do well to remember that.

Well, glad you value our articles and free guides so much. Your other points have already been responded to multiple times in other comments, so I apologise for not infinitely repeating myself.

As someone else already pointed out, our valuation of your content has nothing to do with not wanting to be spammed with BS ads to read it. You want support? Ask for support; charge a small fee for your content. Granted, that would mean your content would have to be 100% original, which it most certainly is not. Just like every other blog out there, you comb the internet for materials to post and discuss. I wonder how much revenue you share with the sources of your material? Oh wait, you generally LINK to those sources, thus increasing traffic and stirring up interest; hmmm, why does that sound familiar? And there is no need to repeat yourself, you don't have a leg to stand on so long as you are leaving the door wide open.
Make good content, provide something more substantial than a rehash of the rest of the web, and people will (and do) respond. tutsPlus has ALWAYS used a subscription model. Some content free, some content behind the door. They have a fantastic collection of materials and I have in the past had NO problem shelling out cash to view it. I believe you give your readership far too little credit in that respect, so as I said earlier, this is a YOU problem, not an US problem. From the comments in this thread, YOU are the one looking at this bassackwards. People like a clean viewing experience; if you really want our help in maintaining the site, step up the game, work out an actual revenue model and stop complaining about how your users view the web. Personally, a random post here and there is not something I would pay for, but then again I am not your primary audience; your guides are generally too basic for most of the people savvy enough to block ads and disable scripts. The answer is out there; bashing your users and generally acting like a tool when they call you on it, well, that isn't part of the solution.

As I quite clearly said earlier, we CHOOSE not to go with a premium subscription model, because we want the information to be available FOR FREE. Do not presume to dictate your preferred business model to us - it isn't relevant that YOU prefer to pay for content; there are hundreds of thousands of visitors to this site daily who do very much appreciate the fact that we offer material for free, offer free technical support and free guides - and accept quite willingly that this means viewing ads along with it. If you wish to call them "non-tech savvy" then kindly take your elitist entitlement attitude elsewhere.

First of all, I said no such thing. I can assure you without a doubt, having been a web developer for more than 10 years, MOST PEOPLE are not tech-savvy enough to realize such things exist. That is a FACT, not an insult. Take your head out of your butt.
"hundreds of thousands of visitors to this site daily" Then EXACTLY what are you complaining about? SOME of your users block ads; MOST of your users wouldn't have a clue these things existed if you didn't make posts about them. Regardless, the whole point of this post and the other one that brought me here is nonsense. You want to help your users? Make a post detailing the common mistakes that can lead to missing content elements. Explain how they can fix some of the issues caused by improper use of these tools. Oh wait, that would have made TOO much sense on a site supposedly devoted to giving its users information. You could have easily informed your users AND made your point about supporting the sites you visit without being a d-bag and insulting the people kind enough to visit.

Choice is a wonderful thing, no? You've chosen to publish content without a paywall. With that decision you've agreed to allow readers to consume content as they see fit. Like it or not, that includes the option of stripping ads. Even if one accepts your (rather naive, imo) dismissal of security and privacy concerns, there still exist numerous valid reasons to use these products. Your concerns might be taken more seriously if not for the astonishing abuse of readers' trust and resources committed by this very site. On this comment page alone I count twenty-nine (29!) separate JavaScript requests. This is both unnecessary and unacceptable. Even on a desktop, I find such waste borderline offensive, but imagine the effect on smartphones with their less powerful processors, smaller screens, limited battery life, capped data plans, etc. When supporting the authors via advertiser networks also means considerable bloat, you make the decision that much easier. The web's openness will always be a hindrance to this revenue model. All other ad-supported media uses a standardized format. We needn't worry about completely separate policies when changing television channels.
Things aren't so simple in internet-land. Consequently, some choose to block the worst offenders, and many others become collateral damage. Finally, you dislike empowering NoScript's author with deciding which ads are "non-intrusive," yet you've decided for everyone that JavaScript is mandatory and Flash is not? Well done, sir. Even if you find everything I've said unconvincing, consider this: The choice of how to publish is yours and yours alone. Own it.

When you start paying for my bandwidth then maybe you can have some say about what is sent over it. Until then you are just going to have to learn to deal with it.

Sorry, but have you given a thought to *our* bandwidth? We pay for that too - bandwidth is a two-way street.

You don't seem to understand. I'm writing this here because you mentioned paying for bandwidth, and you mentioned it in a tone implying that people accessing the FREE content on this site is somehow detrimental to your well-being (income). You should realize that when you're not making money, you change your business model, not your consumer. You've been looking at this the wrong way, and from the other comments I've seen, you're apparently too self-righteous about your own opinion on the matter to notice it.

I also happen to be the only vocal one around here about actually changing the revenue model, so when you do see changes you can thank this particular self-righteous writer and developer for said changes.

Have I mentioned how very profoundly I do not care about your whining about "lost revenue"? Have you considered that *your model* is broken and not the browser tools? I happily employ all three of the above. Eat my analytics.

Maybe it's your attitude that's broken? Enjoy the free content, anyway.

I don't care about your stupid money. This is my computer, my browser. "The Trifecta Of Evil" reminds me of the silly snappy words warmongers find to turn people against someone and take them down with support. Talk about "scare tactics"...
Do you TiVo your TV shows and sit through the ads? Do you sit and watch the advertisements in the cinema before the film, or do you choose to enter the theater room only 10 minutes later? Do you allow advertisements in your physical mailbox? Do you subscribe to every newsletter with ads from every site you have an interest in? So... why diabolize tools that give users the ability to navigate the internet without getting tracked or spammed or slowed down by useless bandwidth usage? If you had advertisements that were just an image and a URL link, no one would block them, would they? Why is it the user's problem and not the system's? Why don't you blame the ad companies, or try to get revenue from alternative paths, instead of blaming the same people that make your site viable for revenue? And yeah, a lot of people don't configure them much, but a lot do, and it works fine for the websites one visits regularly; so why not make a damn article explaining how to unblock those for cherished websites instead of pounding on the software and the users? Cheers

1. I don't have a TiVo. I have a Windows Media Center PVR, and yes, I watch the ads.
2. I always sit through the trailers/ads on the rare occasion I go to the cinema.
3. I don't allow unsolicited flyers for which I am getting nothing in return, but I accept them if they're coming *with* the free newspaper.
4. Not sure of the relevance of a newsletter and adblock, but I am signed up to quite a few newsletters from eBay, Amazon, and Victoria's Secret 0_o
5. I am "diabolizing" the tools because they are killing the *free* internet. If you wish to pay for all your online content, go right ahead and continue to slaughter websites with your ethical crusade against evil advertisers.
6. An ad that is just an image and a link is neither targeted nor effective; the effect of losing revenue is the same.
With regards to why we don't try alternative routes - that's because, as I've already explained a million times, we believe the content should be free of monetary cost to the user. If you would like to pay us a premium subscription in exchange for being able to use the adblock plugin, go right ahead.
7. We already have an article on how to whitelist sites. Surprise - this wasn't it. In fact, we have over 15,000 useful tutorials and articles, all free to view and ad-supported.
Does that sufficiently answer your questions? Thanks for reading.

LOL, sure ya do...

Which point exactly are you doubting? I don't appreciate being called a liar.

Your definition of free is wrong. The price of something is what you give up to get it. If I'm giving up bandwidth, time, and patience to read your articles, they are not truly free. As a consumer, why should I pay a higher cost for something if I don't have to? If other markets exist where I have a dominating strategy, then why should I even bother with others? There is no reason. For that matter, you aren't even paying attention to WHY users are moving in this direction in the first place. Furthermore, if this "trifecta of evil" were such an issue, you probably wouldn't be complaining about it on your blog; you'd be out finding another job. Suck it up and learn how markets work, and stop bitching because your business model doesn't fit the demand of the consumer.

For the sane bunch: in addition to the absolutely essential Ghostery and NoScript, also consider PrivacyChoice TrackerBlock, User Agent Switcher, and Referer Control. You can check out Greasemonkey; you will need to write some JavaScript (really simple, but there are also thousands of readily available scripts for download). Fingerprinting is still an issue, with installed fonts (Flash and JavaScript both reveal ALL your fonts) being the main source of entropy, and there is of course your actual IP, so proxying is also worth considering.
Tor offers an excellent solution, while a very low-profile one is anonymouse.org. Check how much junk there is to track, trace, and fingerprint you:

panopticlick.eff.org
browserspy.dk
analyze.privacy.net
anonymitychecker.com

then check and compare your results with Tor enabled, or even by just visiting these links through anonymouse.org. Of course you can also just let your head be used as a wastebin for arbitrary ads to generate some pathetic income for bums like the author of the above article.

LOL. How many sheets of aluminium did you use in that tinfoil hat?

The author of this article clearly has absolutely no clue about network security, or even consumer-targeted marketing indeed.

Oh, please enlighten me as to the evils of targeted advertising.

I've listened to the arguments about adblocking. I'm very shocked you would try to ignore the dangers of the internet for a quick buck! Malware is a myth? JavaScript is safe in modern browsers? What planet are you living on? For a so-called tech site spouting this rubbish, it makes me wonder if any of you have even basic common sense. It seems that you scrape content from other sites too! Do you share any revenue with those websites you have scraped from? I couldn't care less if your blogger site goes behind a paywall.

I meant that malware in ads is not a myth.

There are new add-ons for Firefox and Chrome: Disconnect.me, and Facebook.disconnect.me, Google.disconnect.me, Twitter.disconnect.me. They block any of those services from tracking you. Google them, or go to the Chrome add-ons page and install them. Same for the Firefox add-ons page. Enjoy.

I never knew what ad blockers were before these, and would sometimes click on the ads on the other postings on this website. Thank you for the information, and I'll make sure everyone I know uses these great programs!

Wow. I didn't know about Ghostery. Thanks for letting me know about its existence. Installing it now! :)

You're welcome.
Adverts, and the tracking they all do, are against freedom. Also, using programs made by advertising companies (like Chrome) is against freedom, and therefore malware.

...using Chrome is against freedom. Well, that's a new one.

Not really. Google is the largest advertising company in the world. Not everybody thinks freedom requires selling their personal data to the highest bidder. And yes, I'm typing this on Chrome, and that's a Gmail address in the email field above. Haw! But I'm aware of what I'm getting (and my main computers' browsers and primary email are from other providers). Does anyone still believe Google gives away "free" stuff just to be NICE? Now who's naive? Versus browsers that seem to do quite well based on the unobtrusive donation model, Chrome is a long, long way from "freedom."

I'm sorry you're not getting money from ads. Well, I don't really care, actually. Having a safer browsing experience is more important to me than your 'salary'. Get a real job and stop complaining.

This really assumes that we are all out to get you. I use NoScript; it has saved my bacon on many occasions and generally makes browsing the Internet safer and faster. You have no right to make me download your JavaScript, Flash, Java or any other plugin. I have no idea how secure your site is or how competent your developers are. Don't worry though, I've whitelisted Google, so you'll still get the analytics. I also use Adblock, but I only blacklist ridiculously annoying adverts. You should probably research this sort of thing more before coming out with blanket statements and painting your readers as immoral.

You should probably research the actual threat of internet-borne malware before refuting my blanket statements without actual reasoning or counter-argument. Here's an idea: download Chrome. It sandboxes every site you visit, with no access to the OS or your files. Malicious anything is a whole lot less scary then.
(Ooooh, but there was this one hack, that no one ever actually used, that did manage to break through, so go ahead and use that one time to justify everything you say.)

This is all nonsense. NoScript and Adblock protect you from malicious and intrusive scripts and advertisements. To the author: you complain about losing revenue... Well, I don't know, maybe you should have gotten a better job than blogging about irrelevant browser plugins. Like, say, I don't know, line cook at McDonald's.

Are you trying to spam us with that name of yours? Well, sorry to tell you, but we've disabled URL linking on names.

...It's funny, because in the "Similar stuff" section, I saw this: Use Adblock To Block Online Ads and Malware (). Ads and JavaScript are huge security holes. Why shouldn't I block them?

It's not really *that* funny. This was an opinion piece, and there are 20-odd writers here. Others have posted tutorials on it in the past, and opinion seems to be divided straight down the middle between those of us who believe it destroys the web, and those who just don't care.

You should direct your annoyance at advertising companies and ones that use tracking. I for one use Adblock Plus to prevent viruses from annoying event-driven advertisements; however, Adblock Plus and other blocking add-ons are working to permit adverts which do not distribute viruses or cookies, and do not annoy or track the user. All I can really say is to use a company that does not have a bad history; Google AdSense's text ads are not blocked.

AdSense tracks you more than anyone: only they have your complete web search history, YouTube preferences, everything. Don't think that just because you have web history turned off means they don't track you. They do; they just keep the information to themselves. Ads which don't track don't perform well. They don't pay, simple. You know the kind of ads they end up being? Scammy, low-paying, get-rich-quick, lose fat now, aunt makes $70/hour doing nothing...
If those are the kind of ads you'd like to see more of, then by all means continue blocking.

"If those are the kind of ads you'd like to see more of, then by all means continue blocking." Er, the whole point of blocking ads is that you don't see them at all, much less "more of" them. So thanks, I think I WILL continue blocking.

You cannot compare ads on this site to TV. Watching a cooking program and getting ads for Jamie Oliver's new book is fine, as would be seeing an ad on this site for Dell or PC World; the ads should be random. TV channels don't know the other channels I'm watching, so why should you? If I decide to show my friend an article on the latest app, I don't want him seeing ads inviting me to join GayDar, or my partner seeing ads for engagement rings. And if you're watching a drama on TV rather than something specific, they don't know what you're interested in; they show you what's currently available, and that's what online companies should be doing. Not putting cookies on my computer without my permission and building up a profile of my interests. I don't want people to know them. Yes, I use Facebook, but I've never once clicked Like on a page, because my interests and thoughts are my own. Besides, show me an ad for something new, and I'd probably click it; show me an ad for something I've already been looking for, and I've probably already found something cheaper or better.

TV ads do the same kind of profiling. They sample audiences to find that people watching Jamie Oliver are also switching over to that gardening show afterwards, so they show gardening ads. Your problem just seems to be with anyone knowing anything about you specifically, rather than you as a stereotype, but that's not something I can argue against. However, your last statement about showing you "new" ads vs. something you're already interested in is just plain false. There are years of data to back up the personalization tactics used.
You think you're so special as to be unaffected by it?

Years of data may be great for TV, but the internet changes every day, as you should know. If these ads were interesting and non-intrusive, then the uptake of these add-ons would be much less. And no, I don't think I'm special; it's based on experience. I search Google for replacement bits for tools, find they're £3 a blade but eBay has 5 for £10, so I get those; then for the next 3 days, all I have is scrolling ads going through every tool I've already bought, available from another company. That just doesn't work any more. Shops are closing down because people are more savvy on the internet and can find things as cheap as possible, and get cashback too. I'm also a movie buff, yet I never see any ads for movies, which I would definitely click; they're more interested in something I'm going to buy instantly. But people want to be informed about a product, which they can buy in their own time after finding a good deal.

"If these ads were interesting and non-intrusive, then the uptake of these add-ons would be much less." I don't think that's true at all. People don't consider using an adblocker because they find ads intrusive or uninteresting, even if that's their default excuse. They do it because it's easy, and they can. Download a plugin, hit activate, hey presto, ad-free web. It's human nature to take advantage of something where possible with minimum risk. Why do you think file sharing and torrents took off in such a huge way? Some might valiantly say it's because the music/movie industry has "failed to adapt to modern technology", but really, who are we kidding? It's because you can get something for free, with minimum risk. Most people wouldn't have bought the stuff they download anyway, but if they can grab it for free, they will. Human nature. Nothing more.

You know what, I don't see movie ads either. Perhaps movie marketers realised that they aren't that effective online?
When ads aren't in your face, people are happy browsing with them there. Take Facebook: they're blended in with the rest of the page; they don't open up in the middle and expect you to actually close them. That's when people seek a way to stop them, and it just so happens the add-ons block everything. It's like everything in life: people put up with little annoyances, until one day it's right in your face and you just snap. So blame the ad company for crossing the line, and making an ad that has more bells and whistles than the page you're viewing.

And using Facebook as an example first, I know they're still not innocent, with their "Like" button and comments box on every page on the internet. I can see that one day they will start putting a different kind of targeted ad on user profiles, under the guise that friends have the same interests: "Kevin listens to..." with an ad for GayDar Radio, even though I never leave a comment through Facebook on that site, because I wouldn't want people to know.

There are some great movies out there which never get advertised, so they flop. Yet when they make a new movie and they WANT it to be big, even though no one has ever seen it, they advertise online, but it happens so, so little. Take Avatar for example: it made billions because they got off their ass and advertised. I know it was on the news and TV a lot, but I'm sure online ads played a big part.

This article was very informative. It's sad to know that turning on ad-block reduces your revenue. But turning it off introduces a lot of irrelevant stuff to the page. Also, if I am right, loading those ads also increases my bandwidth usage. :( So I guess leaving them on would be the right option.

Such minute bandwidth use, unless you're still on a dial-up connection or 3G mobile.

Hell, I seriously thank Tim Berners-Lee for NOT thinking like you do when he released the first HTML code. But I do thank you for pointing out Ghostery, which I didn't know, and am really happy to use now.

Hi there!
I disabled Ad-Block here. I didn't like what I saw, so I'm switching it back on and moving on (by that I mean unsubscribing from the mailing list, and no longer visiting MUO). Why? Since I now understand how much my use of ABP on this site adversely affects the contributors, I think it's the only honorable thing to do... and I'm not ready to give up using ABP!

It really isn't the end user's responsibility to tell a website owner when an ad appears on their site that breaches their standards. Whilst I accept that on a large site it's probably unfeasible for the owner to vet all ads, expecting users to do so isn't acceptable! Also, I don't think it's up to us end users to design a workable business model for authors/websites to get paid. I think when one does appear, I and others like me will accept (and buy into) it. For now though, the current model doesn't suit me. As mentioned, I really do respect the right of authors to be paid for their work, so I sincerely hope that a mutually acceptable solution can be found. So long, etc.

We'd rather you stay and continue commenting, even if you are blocking ads. Hence why I explicitly said we would never block the ad blockers.

Why is it unfeasible for a large website to vet its ads? I don't see porn ads or scams when I'm watching the news. Why should I see malware, trojans, scam sites, etc. when I visit websites?

When advertisers quit allowing viruses and malware to infect computers through their ads, and sites with ads reimburse me for lost data, lost time and damaged hardware after a virus attacks my system, I'll quit using blockers. Until you can guarantee that, I will keep on blocking.

Major ad networks screen their ads extensively. Viruses through ads are quite a thing of the past, and it's about time we lose that myth. The main delivery method was JavaScript, which has also been hardened since those dark days.
If you used Chrome, it would be inconsequential, since tabs are given a sandbox; there's no chance of a virus ever getting to your OS. Furthermore, would you expect MS to pay compensation for your loss because their OS allowed virus code to run? How about the browser?

In the past year or so I've seen a dozen or so well-respected sites that have been marked as malware by Chrome due to serving malware through their ads. Thankfully I have Adblock, Ghostery, NoScript, and I block the ad domains at the hosts level, so I wasn't infected.

I disabled for this page, took a look, then enabled again. You are fighting a losing battle. Adblock is the best add-on available to man; the internetz become unusable if it is turned off. Never in a million years, unless advertising stops being so bloody intrusive.

The internetz would become non-existent if it was all blocked.

Funny how the internet existed and thrived before ads.

Yeah, and piracy is killing music too, right?

I choose to leave them all enabled and will continue to do so. Stocking your fridge is not worth sacrificing my personal security.

Most ads from large networks do not attack you or contain spyware, and if you used a secure browser, the few exploitative ones on dodgy sites wouldn't be an issue anyway.

Is that so? Can the governments you comply with take that to the bank (or get you back if it ain't so)? That's false at least 10 ways, not even counting samizdat, the content and ad delivery networks vs. the things themselves, and that most sites and browsers have dodgy moments several times a year (as opposed to just when they recommend IE8. As opposed to bat with thorn, a really useful, easily personalized information tool.) I also note the closeness of your hail to Chrome's sandbox to Google's payment of $270k finders' fees on vulns in the sandbox. Where's that version string, before the patch or not? If your government (employees, contractors, either side of G20, etc.) can't suffer the ads (e.g.
visit Forbes.com on breaks) because of those resource nuisances, is that content free? Try loading 3 tabs of Forbes.com and/or Facebook (if those qualify as large?). Using all your cores yet? Do you still have to click more to see the whole article, making it still less free to read and more distasteful to consider the ad booking's association with the article, which editors used to (and sometimes still) do? Okay, now what part of those ads (script/image/metadata/etc.) does your firewall/proxy at work let through? Any notion? If you ask, do you get a used issue of the Paris Review free for your trouble? Do checks at the ad sources at o-o.2340-97.likely.youtube.com et al. bring up traceable affirmations of the scripts' rampant use of bandwidth (Movies of a credit card at two beaches! Just my kind of drek, sure, yeah, beach movies. Sure. The article is on mass gay burials in the Sudan, so good connection!) and computing resources in your favor, or reek of distrust of ad impressions (clickfraud, etc.) rather than honestly thoughtful oversight and regard for the publication at hand?

Was this run through Google Translate? I'm sorry, but I find what you've said to be completely incoherent. Apologies if you're a non-native speaker. The bit I did understand was something about Forbes and Facebook being processor-intensive, to which I say I currently have open 3 browsers, each with over 20 tabs in them from various sites, including Facebook. The only time I've ever heard reports of a website using massive amounts of processor power was when the user was running ADBLOCK, which caused some kind of endless loop or memory leak.

Hmm, I hope you do not get annoyed with this question, but how much does a site like yours earn?

No idea. But just so you know, I disabled URL linking, so your SEO-spammy 'name' won't work here, whether this was an automated comment or not.
"We believe strongly in a free content model – whereby we provide free, high quality, full content to you with no restrictions – in exchange for showing you advertising." Please don't take this the wrong way, but this sentence demonstrates that there are clear gaps in your understanding of the concept of a "free content model". It suggests to me that you don't realize that *having to deal with advertising is a cost*. Commercials are a cost. Pop-up ads are a cost. Anything that distracts from desired content is a cost. The "free content model" applies only to content which is *actually free*, i.e., without cost.

If you can't afford a site on your own external income, that is not an issue for your readers; it's yours. In a perfect world we would all like to be paid, stay-at-home internet bloggers, but most of us can't be, and that's fine. Just like people can't walk around being philosophers anymore expecting to get paid. There are a lot of jobs that simply aren't fiscally feasible; that's just the way things are. No shame in that.

The point is, people will always have a choice. If they want to be free of ads and trackers and all that stuff, they have the right and freedom to control what is presented to them on their own computers. If your content is good enough, maybe enough people will accept certain costs to access it. But the point is, people should always have a choice, and labeling these blocking add-ons as "evil", and by extension implying that their users are, is like getting mad at Wikipedia because fewer people use Britannica.

Call it an "ad-supported model" then; I'm not here to argue the semantics of what you call "free". The dictionary has many definitions, one of which is "without charge". We don't charge for content; therefore it can be termed free. You're simply being pedantic by arguing that ads are a cost. Anyway, let's hope someone doesn't demean your job to the status of "financially not feasible".
Feel free to paint my words in any light you desire ("pedantic", a "semantics" debate, etc.), but it doesn't change the fact that ads are most definitely a cost. Sure, any single ad isn't much, but over dozens of page loads and several sites visited each day, the increased load time and distraction can add up. Put most simply: if they aren't a cost, why are people blocking them?

And yes, if someday my job becomes fiscally infeasible, then I will change as needed. It may be frustrating, annoying, and I might fall on hard times, but life isn't always fair. I find no difficulty in accepting that and moving on, whether that means looking for alternative models (e.g., offering free content but paid perks to my users) or finding a more fiscally sustainable profession. I may even decide to make my entire website ad-supported, even though I may lose some visitors in the process. But the point is, I would never call someone "evil" for blocking my ads, nor would I advocate for a policy which removed the freedom of choice from everyday people just to make my personal lifestyle more sustainable. Let every man and woman decide for him or herself, I say, and if they want to block ads, by God let them have the freedom to! :-)

I didn't call "someone" evil; I called the plugins evil. I believe users are misinformed and misguided in using them. And at no point did I say you couldn't have the freedom to choose. In fact, I explicitly stated we will not block anyone who chooses to use those plugins. This is an opinion piece, set out to try and convince people and inform them. Information is the key to making a personal, ethical choice.

Heh; financial investment is steadily both unfeasible (no solicitation and all that) and done, like making the blocker bits. As opposed to the browser plugin, JavaScript, and user script options; take leisure to back up your opinions (eeeviiiil! A term dating baaack to the time when a Common Commercial Code blackened the land...)
with the usual tracking data, cite some public sources (not everypony can toss $20k at GlobalData every month), etc.

Calling you out for using the word "free" is not pedantry. You're deliberately ignoring an important distinction. Let me put it this way: if you truly operate your site "for free," then how are you making any money off of it?

Actually it is pedantic, because there are two meanings of the word and you're simply choosing the one that you prefer. Let me put it this way: how much did you PAY to read any of the content on this website?

I disabled the trifecta of doom for your domain only. Wow! This page lit up like a Christmas tree. Personally I think the great content on this website is definitely worth having that Christmas tree show. It is a *very* small price to pay. If it weren't for the article on this very page, that is.

Dear James, I'm so sorry for posting in two places; this was not my intention. I was going to reply to 'Anomaly', but my reply wandered off from that internal topic. Hope it's all right.

Hi Salantrax. Thank you for taking the time to write such an elaborate response, and I did indeed read it all. However, I believe your first argument is a bad analogy, and here's why. Linux is a free *alternative* to Windows. While everything you have said about it being more powerful for users who wish to take the time to learn etc. is completely true, it's also illogical to draw an analogy to adblock. Rather, I would say: if there were a website out there, similar to MUO, to which users contribute high quality content completely for free, and the hosting costs are presumably covered by an anonymous benefactor, or some kind of free torrent 'hosting' is enabled, then the analogy would be good. Rather, adblock is akin to downloading Windows and using an activation hack to prevent Microsoft from being remunerated. It is taking content without giving back to the author, plain and simple.
I could just as easily say: "The foundation of generating revenue by purchasing Windows licences is users being stupid and lazy enough to not know how to use an activation hack. The default option with Windows is to pay!"

****

So then we get into the issue of how to generate revenue, and that's a more valid discussion. This website doesn't run on Apache, because the performance is terrible once you scale beyond X requests per second. We use NGINX (which I grant is also free) combined with dedicated servers (at a cost of at least $1000/month, though I don't know specifics), and a separate Content Delivery Network which also costs per TB of data transferred. The bottom line is, a website needs money to operate, so the OSS model is simply not valid.

As for making the content premium: that's a moral decision our boss made, and it's his prerogative to make, not yours. He decided that it was better to let users access our content for free, so that anyone on the internet could make use of it, not simply those who can afford yet another monthly deduction from their salary. It's not a case of "is this content worth it?"; some people simply can't pay, and certainly don't wish to after spending god knows what on a PC, a net connection (and Windows!). So we have chosen the free content model, and we expect users to comply with that. If you don't, we're not going to block you, but that's an ethical decision you have personally made to prevent us from being paid your percentage of the bill. You went to the restaurant, you ate with everyone else, and yet you didn't pay. Everyone else picked up the bill for you instead.

As for newspapers, personally I never did buy them, because I believe strongly in a free content model, as opposed to paying monthly subscriptions for it. Perhaps this is a difference of opinion, but I would rather get my news for free, from anywhere, in exchange for viewing ads, than pay one particular news outlet (and then another, and another).
Donating to users for insightful comments is an interesting idea, and we kind of do run a similar system. If you were logged in right now, you would be generating points on your account. The same goes for sharing content. Right now I'm developing the next stage, where those points can be exchanged for competition entries and software licences (and possibly even real-world goods at some point). So yes, we do (or rather, we will) give back to users who contribute more than others.

Anyway, thank you again for your comment. As I say, we certainly won't block you for the decision you have made not to give anything back remuneratively, just as I would not presume to tell someone who runs a pirated copy of Windows that it's wrong. You can be your own judge of that.

Thank you muotechguy, for answering despite my somewhat hostile attitude in the previous post! :)

I answered in part to James' earlier statement that OSS is free mostly *because it is inferior software*. This is simply not true, which you also say, and statements like James' make me a bit angry. *He makes this an argument for requiring people to pay (or watch ads) for "quality content", because quality content can never be free of charge and free of ads.* (This is also not true. And whenever I encounter a web page that is truly free in this way, I am encouraged to act in the same way and/or donate!) That is why I felt a need to draw the rather vague (but in many ways valid) analogy.

As for running a big web server like MUO: I agree, it probably does cost a bit of money. But that is not *my* problem, as harsh as that may sound. Shifting this liability to the users is completely irrational. If something on the web server does not work, it is the administrator's problem.

"Rather, adblock is akin to downloading Windows and using an activation hack to prevent Microsoft from being remunerated." Not at all. That is an illegal activity!
However, it can never be illegal (as far as I can tell) to opt out of downloading part of a web page. We can opt out of running JavaScript if we think there's a security issue with the page. We can opt out of displaying Flash elements (James readily suggests that you do this, because "they are annoying". What about Flash ads? Don't some websites depend on *those* for income? Now *that's* hypocrisy if I ever heard it.)

What I'm mainly attacking in this article is the awful idea that users should feel bad about blocking ads. That's ridiculous. If you want people to whitelist MUO, then you should ask this of your users, not begin by suggesting they are doing something morally wrong by not doing it. That's akin to stating that not donating to something worthwhile is morally wrong. This article only hurts MUO, but its scope extends to the internet as a whole. That is too arrogant to go unanswered.

What's wrong with the ad-based revenue model is that people don't know how *not* to download the ads. The default option has always been to "download all". Hopefully, with adblock, the internet will be more OSS-oriented (in a vague way, I agree) in that it is the user who makes an active choice to support a website they like, be it through donations or by whitelisting it ad-wise.

I thank you for not blocking me, despite my choice. However, I want you to feel fully entitled to block me from this website based on my blacklisting you. That is entirely your choice. I only suggest, then, that you are clear about your policy with new users before you block them, should you ever choose to do so (which you have already stated that you will not do). Thank you for an otherwise great website.

PS: As far as web server administration goes, you, muotechguy, clearly have more knowledge on the subject, and I acknowledge this fact. Note, however, that I did not suggest that MUO runs on Apache specifically.
My point is that MUO runs on open source software, perhaps not mainly because it is free of charge, but because OSS generally spawns more secure and stable software for the end user. In that sense, my guess was correct.

Sorry Salantrax, I didn't mean to deceive here, but muotechguy is me, James. I am the developer for this site. I have two accounts for admin and authorship, and I forget which I'm logged into sometimes.

- Flash ads are unnecessary in this day and age. Websites do not rely upon them, but sometimes they are served by ad networks. I'm happy to block those, and that's not hypocritical, because it's not blocking advertising on principle; it's blocking a dangerous and outdated technology.

- You seem to imply that it's somehow dysfunctional that it costs money to run a website ("it doesn't work"?). I don't really follow that. It's a "problem" for our admin to deal with; it's just a fact. Bandwidth isn't free; storage isn't free. You're welcome to try and maintain a major website in your free time for free, but if you had any experience of web technologies, you wouldn't really be questioning this point.

- I didn't say OSS software was inferior. I think I said Linux sucks, and will never reach the mainstream. But again, software and websites are completely different things, and the model for one does not necessarily equate to another. By your reasoning, everything in the world should be free, made by donations and people volunteering their time. That sounds remarkably naïve.

- Illegality and ethics are not the same thing. While it may not be illegal to block ads, it is certainly unethical. My point is that this analogy is far better than comparing it to using Linux.

- "People don't know how to block ads, so the ad revenue model is wrong" just isn't logical; sorry, I can't respond to that. "People don't know how to use torrents, so charging for software is wrong." See how silly that sounds?

- Users should feel bad about blocking ads here.
And we will be rolling out a whitelist plea very soon that will permanently show itself to adblock users (not blocking content, though). However, this was an opinion piece, and as such it represents my opinion on a topic, not an official plea from MUO. It should not be taken as such; it is quite clearly labelled opinion, and does not necessarily represent the viewpoint of MUO or any of the other writers.

The last point is critical. Please don't take this article as anything other than a personal rant from the developer of this website, who makes it his full-time job to maintain it, help out users with queries, continually write content, keep the website online, and make the user experience better; and is paid for that.

I see! :) No problem... They both seem like nice people.

About Flash ads: I'm sorry, you are definitely right. :/ I somehow got attached to your argument about them simply being *annoying*, but you definitely have some valid tech arguments for blocking them too. I take my hypocrisy statement back.

"- You seem to imply that it's somehow dysfunctional that it costs money to run a website..." No, I've said nothing of the sort. Of course it costs money. That does not mean a website in general can't be provided without ads. And that is why "the trifecta of evil" can't possibly kill the Internet, or even hurt it noticeably. I may be too radical, but I think the world will be a lot nicer when people know when and what they are contributing to on the interweb.

It might kill *your* site though, if you can't find a viable alternative to cover your costs. I would sincerely be sorry for such a thing to happen (although I don't visit here much), but again, that's part of another discussion. What I'm saying is that any financial problems are not the users' responsibility; they are the server admin's problem. Neither is it the Adblock Plus author's fault; that software was probably bound to happen. What do you mean by "profiteering from the same free model"? He is not!
He only asks for donations. He doesn't blame anyone for not viewing his ads. I'm not sure he even has ads. I don't see how he acts hypocritically in any way, regardless of what one might think of the effects of his software on the Internet. "- I didn't say OSS software was inferior. I think I said Linux sucks, and will never reach mainstream." I was referring to this statement: "Sure, there are some people who will provide content for free, just as there are some who work to provide Linux and free apps there. There's also a reason why most users don't use Linux…" Here is where I picked up the OSS analogy. You clearly imply that free (truly free) content can't be better than paid content. This is not true, and had to be addressed. However, one can have opinions about Linux itself, absolutely. OSS can't finance a server infrastructure by itself, since there is no money involved. I was making a subtle hint that you should give what you can, and enjoy what others give to you, because there *is* content on the internet that is truly free of charge (e.g. Wikipedia). On the other hand, I'm not stupid. I suspect many sites would die without ad support. I say "so be it". Future websites will need to learn not to rely on ads, or be appreciated enough that users are prepared to pay the extremely small price of whitelisting them. "By your reasoning, everything in the world should be free, made by donations and people volunteering their time. That sounds remarkably naïve." No! I haven't said anything like that. It is each and everyone's choice to provide information free of charge. But when you claim to do so, you cannot at the same time flame the users for not paying you by viewing your ads. That, I would say, is not ethical. Default behaviour should be to block ads. If a website asks for a whitelist, and handles the ads well, that's another story. My model of a free site would go something like this: 'Hello! This is my site.
Everything here is free of charge, and you may (or may not) redistribute it as you like. Currently, we have problems financing the server costs, because a lot of people have realized they do not need to download ads (and never needed to). Therefore we ask you for a small contribution in the form of a donation, and/or an Ad-block whitelist of the entire site (which is cheaper!). Thank you!' If this doesn't work, then yes, I suggest you make members pay. This is not a demand, it's simply a suggestion. - "People don't know how to block ads, so the ad revenue model is wrong" No, I think I just put that part (what you interpreted as the above line) a bit out of context. This was part of my personal opinion: The ad revenue model itself is not wrong, ethically. What's ethically wrong is how the end bill is presented to the user. Users are defaulted to contribute, whereas I think all contribution should be an opt-in feature. In this context, the model has been flawed, because while it has always been theoretically possible to block ads, it has not been easy enough for an ordinary user to do so. This is what I think Adblock et al. are going to change, if/when some of the major browsers implement it by default. As you say, I may have taken this a bit too far by blacklisting MUO. I somehow got the impression that you were representing MUO, and I don't want to support any website with this attitude regarding user responsibility. I do respect your opinions otherwise. The problem with opt-in is that nearly all users will not opt in. They will leave it on the default of nuking everything, and the ad model fails. It's human nature to take without giving back if you believe you can get away with it. Not everyone takes the time to educate themselves as you do, or whitelist sites you enjoy. You are the minority, and this is the tragedy of the commons. I hear ya loud and clear. It does suck. Nothing we can do about it. Good luck anyway, because you're doing a great job on this site.
You're whitelisted now, of course. Thank you ;) I block everything all the time. No sympathies. If services are worthy I pay for them. 'Nuff said. You can thank the big boys - FB, Google, etc etc for this happening. If I want to buy something, guess what, I search for it. Never ever once have I ever made a spur of the moment purchase from an ad on someone's website, nor taken a suggestion from forced advertising. Those programs which stop this madness are free because those who made them were just as sick and tired of it as we all are. Action/Reaction. Cause/Effect. Causality. The best you as smaller websites could do is either take donations like Wikipedia does, or work with the programmers to fit it in in such a way as to not bother us. In looking at Ghostery there are 12 bugs trying to capture my information. I find that disturbing. When a disturbing person comes near my house I ask them to leave. When they don't I pull the shotgun out. FB & Google and the others have removed themselves from my shotgun's aim. So Ghostery and the others put them back into the crosshairs. Deal with it. We do not wish to be bothered by nonsense. When I want shoes I'll tell Google to find them. But this impulse purchasing crap has got to stop. Yet again, we find the fallacy that you're "unaffected by advertising" and "never click on them anyway, so what's the point". Other commenters have explained that idea in numerous places, so I'm not going to repeat them here. Also, we're not small, thank you very much! I understand your argument that you deserve to be paid for your work, but... Accepting donations is a humble alternative to taking in revenue from advertisers. This model, by definition, cannot be defined as "profiteering". I don't think any entity should belittle the donation model until it tries it. If I weren't in a day-to-day struggle to pay my rent, I would be happy to pay for more services on the web.
I wonder how many years it will be until we have to opt out of individual genome tracking. I'm so happy to allow AdBlock for Chromium to help me block the ads on HULU.com. Somebody still hasn't figured out how to prevent commercials from being exponentially louder than the show you're trying to watch. When I have the capital and the necessity to make a purchase, I know where to look to make that purchase, most likely because I've heard through WORD OF MOUTH that a product or service is worthy of my own hard-earned currency. I don't need any commercials reminding me that Home Depot, for example, exists. Sheesh, I was in your group with all the others; however, the fact, pure and simple, is that people and entities buy stuff that they are familiar with, they buy stuff that they hear and see about. For instance, how did you get hired? Were you sitting on the potty one day when a leprechaun popped into your bathtub and said "hey bub! You gotta job ... go here and get paid!"? Naw, I bet you made a resume "translation: advertising flier/packet" or filled out an employment form "translation: subscription to a banner exchange or profile directory", but that is not where it stopped. Then you had to let your prospective employers "translation: your customers" know about your advertising. But that didn't impress them, so you had to dig down deep and inform them more about yourself and how you would benefit their company "translation: the sales pitch". Finally, after countless, if not innumerable, attempts, somebody said yes "translation: purchased your product - 'you'". Unfortunately, after you sold your time to one customer, several others finally got the same idea but were too late to make the deal.
Now consider what would happen if the company you worked for stopped advertising "translation: telling their prospective employers, the consumers, what they can do for them" and gradually lost their entire revenue stream "translation: money generated from consumers purchasing goods and services" to some other company or individual that actually wanted the business. Yep, that's right, you'd be let go; you would have to remarket yourself all over again "translation: marketing". Not only that, your company would probably fold. So, that's not going to happen, because the company you work for is going to continue to advertise, call on potential customers, make the sales pitches, so everyone in the company can make money and pay their bills and otherwise enjoy life. Now, consider what happens when a sales pitch "translation: banner ad/commercial/flier etc" stops converting "translation: bringing in new or continued sales". Yes, that's right, we website owners discontinue it. In the case of the over-arching analogy here, that'd be you in your company. Once your company attributes a loss of sales and/or revenue potential to you, after a probationary period (if you are lucky) "translation: the consultation", if there is no improvement you are let go "translation: we get rid of the advertiser". Then the company starts the process all over again to attract another advertiser "translation: employee". And you will start the process all over again and make another resume "translation: advertising packet". Granted, I've simplified this and it might be more complex than this, but nah, I don't think it is. You can find me - Al. Or we design less intrusive ads, and thus there will be less to block. It's not ads from Microsoft or Apple sites that are the problem. It's the run of the mill sites that think it is OK to host ads that actually detract from the user experience. If your ads get more of my attention than your site, I am blocking you, as simple as that.
If I wanted Dawn dish soap, I would have gone to dawn.com. "Run of the mill" ads don't pay the bills, simple as that. Tough titty. It's not the customer's fault that your revenue model doesn't work. Sounds like your problem is with your advertising partners. If I see bottom of the barrel ads because I'm "not using the internet right," that doesn't exactly make me want to go around getting blasted by more and more bottom of the barrel ads until supposedly, according to you, they might somehow get better (but also creepier). The ad-based model turns readers into a commodity and ad companies into your customer. It's very clear from this article where your loyalty lies. If you cared about the reader's experience, you would take steps to correct it instead of browbeating and guilt-tripping them into changing their behavior so that their personal information is easier to sell. And don't kid yourself -- the type of article a (supposedly anonymous) user reads IS personal information, and you are selling it. The articles are essentially bait. If you can't "pay the bills" without being intrusive about the kind of information you collect and obnoxious about the type of ad you display, either get better at your job or find a new one. That's what the rest of the world has to do if the boss tells us our performance isn't cutting it at work. Ad blockers are a sign that your readers AND your advertisers find your performance lacking. Shape up or ship out. That's how work WORKS. I agree, the ads suck. My own sites have much better advertising; this is not my site. You'll be pleased to know I am being very vocal about changing them here though. Thanks for introducing me to Ghostery, this add-on is really awesome. Another great add-on and reason for the Mozilla Firefox browser! Thanks! Doh!
Maybe create a selective blocking addon based on all 3 of these, blocking badness but not blocking select ads & scripts from an online-kept repository list of URLs, & not letting users selectively block (all) ads/scripts. Too much bother; no one would adjust the defaults. And who would maintain the list? You can be sure they'd start accepting bribes from ad companies to allow their network by default. It's a nice idea, but ... We got tired of seeing a 40 SECOND AD to watch a 20 second video... that's just ridiculous... blame YouTube. What on earth does that have to do with static banner ads? The only reason me and a million other people got AdBlock was because of annoying video ads playing before the actual videos on YouTube... blame YouTube for doing this... when AdBlock came along it was like a saviour to us... we didn't care about ads all over the page.. but the video ads is what killed it... and you can blame YouTube for taking it too fucking far... Correct me if I'm wrong, but you only get paid when someone clicks an ad, right? And as I never do, my ad block makes no difference one way or the other. That's wrong. Read my responses elsewhere in the comment thread please, I don't like repeating myself endlessly. You're only the 10th person to say that. Well, I always whitelist the pages I like in both AdBlock and NoScript.. the problem here is that if I disable them totally the Internet becomes totally unusable, because of all this Flash s*** on 95% of pages.. my PC simply cannot stand all of these ads/eyecatching gadgets (that are not useful for anything), RAM and CPU is feeling pain in bones.. So, surprisingly, for me and my PC, YOUR advice would kill the Internet, not those two mentioned above.. Anyway, I disabled them on your site, because of the great content you have here.. :) Flashblock is a separate plugin, and I completely understand Flash kills your PC - mine too. By all means, block Flash. It's a dead technology (and yet people still whine about iOS not supporting it?
How sad for them.. ) I just turned off GlimmerBlocker to see what ads I am missing. I'm sorry, but most of these ads are crap, especially the ones at the top of the page from Google. Do you really depend on these?? As I already said on numerous occasions, you see crap ads because you always have adblock enabled. With no tracking on you, you get shown the lowest common denominator from bad ad networks that pay terribly and have ugly, flashing, deceiving ads. If you actually used the internet properly, you would have ads customized to your interests (as I do, and many others). It's your own fault that you see crap ads. FYI, we took steps this morning to block those low quality ads and restrict the types of ads that can show. Hopefully you'll see less of them, if you are kind enough to whitelist us. I didn't know about Ghostery. Thanks, installing it now! ;) hahaha I have all three installed :) Hahahah you sneaky sneakster you! I read this article as sympathetically as possible (I don't want to see anyone's livelihood crushed) and have followed the replies with interest. Thank you for providing the perspective and forum. I nevertheless, having done so, still feel the matter is extremely cut and dried: if the best motivation a site/author can provide for users to willingly submit themselves to--(i) the perceptual/mental overhead of ignoring ads, or (ii) the overhead of continually re-educating themselves on how much risk s/he may be incurring by exposing her/his browser to ads--is that they ought to do so because that's how writers get paid, I can't blame the user for choosing not to. It's very simple: people will always act in what they perceive to be their own best interests. You could possibly demonstrate that blocking ads is not in users' best interests, but if that's really true, I think the onus is on you to figure out how to make that obvious to them, rather than castigating them for taking what strongly appears to be the easy way out of incurring that overhead.
If you're considering this essay to be that demonstration, I'm sorry to say that I still feel, like many before me in this thread, that the business model as it currently exists seems like a bad deal for both writer and reader. And "if you don't like the business model, you come up with another one" isn't a legit response to that: my optimal model is currently to block your ads, and until there is a different transaction model that leaves an even better taste in my mouth, I'm sticking with it. "I think the onus is on you to figure out how to make that obvious to them" The easiest way is to show users what content they would receive if everyone used adblock, i.e. nothing - block them completely out of the site. You say that as though it's some sort of punishment, like you'll teach those rascals a lesson; but maybe you're right - maybe blocking them out would be a better tactic than berating them. I guess what I'm seeing the question distill down to is: what would you rather your customers be saying to themselves when they leave your site after reading one of your articles: 'Wow, that was an excellent experience that I look forward to repeating,' or, 'Gee, that was a great article; too bad I had to suffer through all that BS to get to the good stuff - oh well, at least I Did The Right Thing.' I'm sure you're tired of defending your perspective. I want to reiterate that I agree with the ethics you're building your case from; I just don't see it as adequately practical to expect users to Do the Right Thing just because it's The Right Thing to Do. There has to be a more effective incentive than that--if the threat to put your content behind a paywall works, more power to you--I for one do pay for digital content from two periodicals, the NYT and the New Republic. And again, as long as you're the one berating people for not Doing the Right Thing, it does seem to me that it is your responsibility to advance a better model, not curtly invite your readers to.
In other words, as long as readers can continue to get your content despite using adblockers, it's going to continue to be your problem. All the same, I do wish you luck and don't expect to get something for nothing. I don't know how successfully the paywall model is working for the NYT or the New Republic, or anyone else. I don't know if that's the solution. I appreciate your perspective and hope neither you nor anybody else gets screwed out of a living while we as a culture adapt to the technology. I try not to block ads if I can help it, but every once in a while, obnoxious, obscene, or downright dangerous injections will make me whip out Opera's built-in content blocker. Can't say much in this case since I'm browsing mobile, but I haven't consciously noticed the ads so I'd consider them non-intrusive and thus perfectly fine. The creator of AdBlock is hypocritical because he asks for a donation? I don't get that. You're comparing a one-time donation request to a constant display of ads. Those two things are the same to you? Believe me, I understand your point. In fact, I don't use AdBlock any more either. But when you make a statement like that - a personal dig that is not even based in logical reasoning - your argument definitely loses a good chunk of credibility. He uses a similar free content model with donations instead of ads, yes. It's still the same content model. He couldn't rightly display ads to you, could he? So yes, I still find it hypocritical. Happy to make money himself while destroying other people's ability to make money. If he believed everything on the internet should be free, why ask for donations? His work is somehow so special that it deserves payment and ours doesn't? "Happy to make money himself while destroying other people's ability to make money." ============================= Hypocritical, no. Capitalistic, yes. "If he believed everything on the internet should be free, why ask for donations?" I don't remember him saying this.
"His work is somehow so special that it deserves payment and ours doesn't?" That's the thing though - he DOESN'T feel entitled to payment. If people like his service, they can choose to donate. A site with advertisements will make money whether the consumer feels it deserves to or not. There's no choice there. Well, not unless you use adblock ;) While I can understand the rant and point of view, I am not sure I can completely agree. I use adblock. I don't micromanage sites, whitelisting some and not others. I don't compare results of blocked/unblocked sites to see how they look. This is the same way I deal with newspapers. The paper costs $.50 because they put paid ads into the middle of the paper. Do I leave the ads where they are even though they block my view? No. Do I make an effort to look through them first and then read the paper? No. I get the newspaper, take all the ads and throw them out, then read the paper. Is that breaking the newspaper industry? I don't know, but I do know if I was required to keep the ads in the paper where the Tribune put them, I would stop reading the paper. I also don't look at junk mail even though that subsidizes regular mail. I suppose this argument exists all over. Am I required to watch the 30 minutes of trailers, ads, promos and logos before the movie starts? This helps advertise and sell other products so the poor celebrities and studios make money. Am I required to watch them all even the 14th time I watch Superbad? Movie theaters lose money on tickets but make up for it by selling a 5 dollar soda; am I cheating somebody out of a living if I don't buy one? Websites serve the content. How I digest it is up to me. A sincere question here - is it preferable to not view the site at all? Business and ad revenue wise, would MUO (or any similar site) prefer me to not view their site if I use adblock?
I ask because I would actually be more apt to oblige in this way than to start micromanaging ad-blocking and whitelisting on a site by site basis. If I visit 100 sites per day I don't want to manage them all and change them as their advertising schemes change. But MUO has been good to me, and if you asked me to not mooch anymore I would probably oblige. For the sake of full disclosure - I am a tin foil hat, bunker in my backyard, pay cash, prepaid phone type person. I am de-googling myself now, won't touch Facebook, never use my real name, delete all history... You actually rip out the ads from newspapers? That's pretty hardcore… No, it's not preferable for you not to view the site at all. That's why I've stated explicitly that we will never block content from those who do choose to block our ads. We would rather have you as a loyal but unsupportive reader than not at all. I am actually in a state that starts with WA... oh, I have said too much. At work I can only control so much, like my name and email. My company is pretty big and our traffic gets routed through multiple states; sometimes I am an "M" state, sometimes a "C". I don't cut out the ads but I do throw all the stuffed ones away without even looking - same with my postal mail. I don't buy sodas at the theater, I FF past the previews in movies and walk away during commercials on TV. All stuff that makes money for the provider. I understand this was an opinion piece by one writer and am not saying I will stop reading or anything. I was just curious regarding the pay structure of me coming here with adblock (I don't use the other two) on. You said some ads pay by impression. So if I come here and the ads load but are hidden, do you still make some money? Are people running the trifecta of evil a net loss to MUO, or do they still contribute a tiny tiny amount through impression ads and site rank boosting the value of your advertising space? BTW - Have you now typed way more in comments than you did in the original article?
Does Aibek pay by the article or by the words/comments? I am thinking that may turn into a more important rant for you. Thanks for engaging and taking this from an interesting article to an interesting conversation. You're right. I should have noticed that. Being British, I assumed that was the name of your ISP, not a company you work for. I guess that's as good as using a VPN in effect ;) I'm afraid I don't know exactly which ads pay by impression or how much revenue is generated exactly, as I'm not privy to that. It's also difficult to pin down if we lose money per blocked impression with regards to hosting and content costs, but I suspect there's still value in people coming here to read our articles - maybe they then pass them on to a friend who does become a loyal reader without blocking ads; maybe they mention our article or link to us on a different site or their own blog; or maybe they contribute something interesting to the comments, such as yourself. There's definitely some value in that I think, but nothing we could put into numbers. I suspect I've now typed more in the comments than in the original article, yes! For the sake of disclosure, we do get paid for responding to comments, but not by the word (or I would be extremely rich by now!). This ensures that people with valid concerns and questions do get them answered instead of just feeling like they're shouting into the wind. Thanks for hanging around. Omg, do you buy stuff from the supermarket, or do you make everything yourself? Ie, if you can't grow it, slaughter it, or knit it yourself you don't get it? I'm thinking when you are at the store and you want Yorkshire Pudding or Spotted Dick, you buy the one from the manufacturer that advertises everywhere vs the one in a plastic unmarked nondescript undescribed bag near the register. Oh my, did you buy a stick of gum today? Why did you do that when you could have chewed on a twig?
Sounds like somebody wants to be advertised to when they see fit vs all the time. Oh, and your comments are advertising for your cause "product" that you're touting "selling". Maybe the guy that makes AdBlock, NoScript or Ghostery could come up with a product called, say, "comment blocker or no talk or ...". By the way, Ben, not to feed your tin-foil hat theories, but I should advise you that if you want to truly anonymise yourself, you need to start browsing behind a VPN. As it is, your IP address is recorded with every comment you make, so I know for instance that your ISP starts with "A" and you're located in a state that begins with "I". I won't name the rest, but it's easy to obtain from your IP. Just search here for "VPN" and you'll find many articles on the topic. I agree, these blockers don't kill the internet; like Adblock (I don't use it), NoScript (on Chrome it works too well, it blocks everything, so I don't use it). NoScript was good on Chrome, and now Ghostery is used on all the web I use. Some blockers are good; with others, you add one and then add another one that will work. Read what it offers and ask around for good suggestions on what you want to block. There are a bunch of reasons that I use Adblock Plus and NoScript (many of those reasons have already been described by other commenters), but I won't mention them in this comment. I will, however, address this statement you make about NoScript: "when you use NoScript, you're breaking the Internet" I am of the opinion that if the average user installed NoScript, it would do more harm than good. However, if one knows how to properly configure it, NoScript can be one of the most powerful tools available in the realm of ad-blocking/privacy addons. FYI, I have not added MUO to the AB+ whitelist, because I cannot stand the ads on this site. I also have blocked the annoying banner telling me to turn off my adblocker. FYI #2: This site introduced me to the wonders of NoScript and Adblock Plus.
For that I am truly grateful. Thank you MakeUseOf! Oooh, blocking the banner that only appears to adblock! Clever! But wait, there's a banner for NoScript too, and that can't be blocked. Can it? Sneaky… I'll have to work on that. There's a banner for NoScript too? I never saw it. BTW, I normally do disable adblock for sites that ask for it to be disabled. In MUO's case though, I couldn't stand the Flash ads. Text ads are mostly fine in my opinion, except for the ones that are placed inside posts. Addendum: one no-brainer use for AdBlock (and NoScript) is protecting from viruses and malware. In NoScript's case, protecting from vicious JavaScript code. I have installed Adblock Plus on all my friends' computers and all my family's computers. As a result of installing it, not one of them has gotten a virus. Adblock Plus is essential for the experienced computer veteran and non-experienced user alike. My point of view is: one should request the visitor to disable ABP or other addons. That is the best way to deal with it. I did the same for "makeuseof.com" after seeing a message from the website. Just because you are good, you can't claim others are good. So these 3 may be evils for you but good for society. Thank you. Sreedhar. No, I don't use loyalty cards - and no, I don't click on ads - why? Because I view a site for the info it provides and not for the products advertised. Just like I don't read ads in the newspaper, I gloss past them and view the articles - same for magazines; all those pages of ads are simply lost on me, I flick past them until I get to the article that interests me. Anyway, that's just me - I recognize that adverts help provide the articles that I read and websites I view, that's the current model - however, as mentioned earlier, I wouldn't mind paying to get the info without the ads. I respect your view on how this affects you as a working man. However, we are consumers; we need to take precautions! Companies tracking down our usage information doesn't simply involve cookies.
They create a profile, just like Facebook (it contains all your data, likes or dislikes, companies you purchase from, etc.), as well as Twitter - and I'm sure you are well aware that the government records every single tweet being made, as part of the law, and that Facebook is working with the CIA to take down anything that appears hostile to them. There are many companies involved with one another, all for "revenue" purposes. Are you telling me this is ethical?... Point being, the internet isn't safe, and with these two bills, "SOPA & PIPA" - which were "illegally" passed - nothing gets better. We want to protect our privacy as consumers, and it is our choice whether we want anybody behind the screens learning what we do on the web. I know this is an opinion piece, and it's interesting to see the different viewpoints. No doubt this debate will continue across the internet for a while yet. I find it interesting, the viewpoint that those who use adblocking plugins (I'm raising my hand..) are "hurting" the author's livelihood and the internet. But shouldn't that be directed at all those who don't read and don't click the ads? Isn't not reading the ads the same as preventing the ads from appearing? The results are the same, aren't they - in both cases the adverts are ignored, the sponsor's message has gone to waste? Websites should have banners with "..if you don't read and click on these ads - you're ruining my career and depriving my family of shelter and sustenance!" As for the "tracking" - which ad blockers also prevent - the adverts and websites don't say .."by the way, we're tracking your movements across the internet and recording the data - hope you don't mind!" All the tracking and related technology is done by stealth - it isn't openly advertised or publicized on adverts or websites, is it? Non-techies would have absolutely no idea what's happening. No, you're wrong. Many ads pay by impression.
The very fact the ad appeared on your screen, regardless of whether you clicked it or not. Even if they don't pay by impression but by click, the impression figures are used to bargain a better deal. We can guarantee ad networks at least a million impressions, so we get a better deal. The idea that many users will simply ignore ads is irrelevant, and wrong. You may *think* you don't click on ads, but you do. Especially if they are personalized. It's nothing to be ashamed of, and I used to think exactly the same way. "Meh, what's the point of advertising on the web at all, it's not like anyone ever clicks on them unless they're really stupid!". That's just false though. Furthermore, actively encouraging users to click on ads will get you banned from most major ad networks, as a principal violation of terms of service. As for tracking - do you use clubcards? Store discount cards / loyalty, whatever? Do they have a big block of text on the front that says "we record everything you purchase, when you purchase it, how you pay, in order to advertise our other goods and services to you and gather data on all our shoppers"? No, they don't. You're right though, of course - consumers do need to be aware of this form of tracking and be technologically informed as to *exactly* what is being recorded - otherwise we end up with conspiracy theories - just browse through the other comments here to see examples. That's why I applaud the efforts of Ghostery to maintain the database and describe what each tracker does. But blocking them by default is not the answer. I think a lot of commenters miss the point that this article is flagged as an opinion. Having said that, I bet my candy *** that anyone running an independent site at the size and demand of MUO would use one or more ad systems on that site. You don't run a site like MUO on a $10 all-inclusive unlimited hosting provider.
Reliable and powerful hosting costs money, running a site that size takes a lot of manpower, and having new content on a daily basis takes a lot of writers' time. All the tracking is only natural when using ads. You don't want the geek located in Europe to view ads about a nail studio in San Francisco, CA. You want to show him ads relevant to both the site's content and the visitor's interests. That's what (most of) the tracking is about. Be my guest using Adblock, Ghostery, NoScript and whatnot... It's your right to get the best web experience possible. I myself have a hosts file with a bunch of Google/DoubleClick servers in it. Just be aware of who you are hurting, for good AND bad.

If you want to get an idea of who's tracking you, try installing the Collusion add-on in Firefox. This is a screenshot of what happens when you visit makeuseof.com: every dot is a company tracking you. I'm not so much against ads on websites, but I hate (and I do mean hate) being tracked. I hate shopping agents for the reason someone earlier in the comments remarked: it makes me dumber if I'm only presented with more of the same. When I become a member of a site they ask for my name, address and birthday, among other things. They also track my behaviour. Granted, they say it is anonymous tracking, and for most of us that tracking is at best an inconvenience. Knowing who we are is of little consequence. The company doing the tracking won't be able to exploit us more if they put a name and a face on the "trackee". But that is today. As a user I have to look to the future. I cannot predict the future. If I could, I could have predicted the law in my country that nowadays demands providers log my internet activity. I only know that now the government has this huge database of logged internet traffic - "only to catch terrorists, mind you". Storing that information is expensive, so to make it more cost-efficient maybe they will decide to use it for other purposes. I feel like I'm on a slide.
I don't know which government might have the power years into the future in all the countries I "visit". By refusing to be tracked I feel somewhat safe right now. Sorry if it cuts into your livelihood. I wish it didn't, because you write good stuff. And you're right - it does make me sound like a "tin foil hat", but I saw the Berlin Wall come down and read the stories about the STASI.

We aren't denying or hiding anything about the number of ads here. I believe Ghostery lists 26 trackers in total, about 6 of which are core to the site's functionality, and 20 of which are from ad networks and behavioural tracking. It's great that you're not against ads, but the fact is that random ads don't generate enough revenue. The only way to make enough is to track users and personalize the ads displayed. Anyway, we won't block you for the choice you make. Perhaps if we added a donation button you would consider donating?

You're whitelisted. You're the first one I ever did this to, and I only did it because you asked nicely and debated cordially. Besides, as I stated earlier, you write useful stuff. Now Google knows I was in here. :) No, I have to be honest: I don't donate. I read a recent survey done on an ad-free porn story site. It revealed that it's mostly women who donate. Maybe you might consider making a subscription type of deal where one can pay to avoid ads. Other sites do this. Or you could make articles that require micropayments. The best guides at 10 cents a pop? I have one of your Linux guides for free, but I'd have paid to get it. You're not the only site that has to hustle. The newspapers fight to earn these days, mostly because they started out having all free content and now they can't turn back the clock and have people start paying. If you crack "the code" regarding payment you will become richer than Gates. :)

I really appreciate that Nalk, and you can be sure the bosses are reading that too, to consider all options.
Just for reference, here are two screenshots of the same MUO page with and without adblock on. I think it is pretty clear why people are using adblock...

...because it removes the advertising.

Yes. I use adblock because the internet is unbearable without it, and after reading this article I've whitelisted makeuseof.com.

Thank you.

I only recently discovered this site. I was excited, but after reading this article I don't think I'll be coming back. Point form to convey all my thoughts:
- As others have said, the tone is childish and whiny and will alienate many readers.
- Enabling ads will do nothing for most people that use adblock, as they wouldn't click ads anyway.
- Adblock has been around for *years*. It's not new, and plenty of tech-savvy sites seem to survive. This is your domain, so you need to learn how to deal with it (not just complain).
- You sound like a greedy beggar. It's just like going shopping in a store and a clerk saying "hey, can you just buy something and give me some money?" You could have easily been more strategic about solutions without directly complaining like this (e.g. an educational blog post about how online advertising works, or how about a donation drive? Give away prizes?).
- Privacy. It's an ongoing battle between consumers and marketers. Ads are EVERYWHERE in our lives (not just online). It gets annoying, and of course the issues of privacy invasion, malware and identity theft are all top concerns of consumers. Marketers *are* devious (how much does FB do that non-tech-savvy users know about?). There are precedents to consider and this will never go away.
- Google makes tens of billions of dollars in revenue every year, all from ad revenues.
- Why not more data/information/research notes? E.g. how many of your users have it installed?

Final note: the fact is, ads work on non-tech-savvy users. Your site is tech-savvy. Deal with it. See you ...
never.

Ads do not only work on non-tech-savvy users, and the idea of "well, I don't click on ads so blocking is fine" is a false belief. I'm tired of arguing that. I'm sorry if you find my tone childish. I respect your opinion though, and please bear in mind this was clearly marked as "opinion". This does not necessarily represent the views of the other authors, and discounting the other 10,000 posts here because you find one disagreeable is like hating China because a Chinese kid once bullied you. An educational post will come next week, teaching users how to whitelist sites they wish to support. This is not that article, and was not pitched as such. We give away prizes all the time, as you will see if you take a cursory glance at the featured posts bar at the top. I should note: entry requires JavaScript and liking our Facebook page. It won't work if you block it. Google does make tens of billions from ad revenue every year... and employs lots of people. Not sure I follow your point here.

With all due respect, your ads are not unobtrusive. I turned off all the ad blocking I use and went to MakeUseOf to see what is there. I can't even concentrate on what I try to read for all the flashing, wiggling nonsense. That stuff is so objectionable that I would gladly never go to the site again (a great loss to me, because your site is great) if it meant I'd have to look at it. If you don't want people who don't cotton to those ads to come to your site, you should say so right up front. I'll stay away. How about this alternative: give me a way, say, once a week, to go somewhere and look at ads for 5 minutes or so and record that I did it. That way I could "pay" for using the site and not be disturbed while reading your fine content. You might think I wouldn't do it, but I would (and you shouldn't let me in if I don't). There are ways to get this done without hitting people up the side of the head with ads.
You saw flashy, wiggling nonsense because the ad networks show you the lowest-paying advertising they have. I don't see those, by the way, because I don't block trackers. I see relevant ads - one for broadband internet, one for web servers, and one for some famous SEO tools. Your idea of a specific ad page is interesting, but would sadly be against most ad network policies, and would generate little to no actual revenue for us. It's a nice thought though, so I appreciate that. You're right, there are alternative models - like sponsored posts? Regardless, we won't be blocking you, so please continue to enjoy. ;)

I don't know about killing the internet, but why don't sites keep a particular area for showing ads? Go and visit sites like way2sms and site2sms - you just can't navigate properly on them. Maybe the internet is free, but we have to pay for data plans, and these ads use a lot of data... So instead of forcing users to see ads with popups and blind links, they could show ads at the side of the page, and we would not use adblock.

Ad zoning reduces the revenue for precisely that reason: users find the ads very easy to ignore. Popups are certainly annoying, and we don't do those here. I'm referring to this website in this article, not others.

Can I assume that the authors of MakeUseOf do not fast-forward through advertisements when watching DVR'd television programming? Doing otherwise would seem to be hypocritical.

I don't. But that's because I don't watch DVR'd TV. ;)

I don't, personally. I can't speak for every author though.

This piece is said to be written by British street artist Banksy. It's his two cents' worth.

I'm down with that. Ban advertising in public places, by all means. I find billboards to be obnoxious, and who the hell gave them the right to show themselves to my face? What do I get from it? Nothing. It's bloody rude, is all. It's an onslaught of random noise I just don't need.

You're missing the point though. You choose to come here.
You choose to receive our articles, and in return, we advertise to you. That's how it works. This website would not function as it does without advertising. Simple. It is a good quote, though, don't get me wrong.

You know, you essentially redacted your entire article with this comment. If you don't like billboards and other public displays of advertising, quit going to the city, get rid of your phone and throw away your computer. Also, stop buying newspapers, periodicals and other things that inform you about new products. And when you go down to the store, just pass it right on by so you won't have to read any of the labels or in-store merchandising. In fact, just walk right up to the Queen and ask for your welfare check and a free bag of breadsticks. Something does not fly here... I have a better idea: all website owners, especially those that got on board with stopping PIPA, SOPA, OPEN and ACTA, should rewrite their site structures so that all these adblocker softwares simply cause their sites to stop working. The real travesty is that people have somehow been brainwashed in the last 60 years to believe that needs can be met without advertising. If that were true, you wouldn't need to hear a politician talk, and the politician wouldn't need to be elected, as they wouldn't have to advertise (translation: campaign) what it is they will do for you if elected (translation: purchased). Oh, wait - most of the republics resemble dictatorships now, with the freedoms of their peoples being stripped left and right a thousandfold. Are we all FROGS in slowly boiling water?

I didn't negate the sentiment of my article, actually. Public advertising is obnoxious - those billboards do not provide a service to me. TV? Yes, sure - I accept advertising, I get to see shows. Simple. Magazines? Sure; even the paid ones supplement income with advertising. Billboards? They can go to hell. What free service do they provide to me? What right do they have to shove that crap in my face?
What do I get in exchange? Flyers through the mail? No thank you. I didn't ask for those, and I don't get anything in return. I have nothing against advertising in general, obviously. It's the unwarranted advertising that pisses me off. As for adblockers - I could make the site black out for those users, but I don't think that's a good solution. Far better to educate than disenfranchise.

Yes you did! "If you don't like billboards and other public displays of advertising, quit going to the city, get rid of your phone and throw away your computer." The city makes money off of that billboard. If you don't like it, don't go to the city. That is your whole argument, and it is BS!

You create and share. We the readers don't owe you squat. I have never clicked on a single ad, nor paid for a single movie/TV show/video game/piece of software/etc. Information and knowledge and archives and anything non-physical are FREE. I will continue to block all I can. I don't give a lick about your revenue stream.

I used to use adblock until I realized it was preventing me from seeing things I wanted to see. The pages also looked blah.

First of all, I love everything MUO, but I have to disagree with your hypocrisy statement. Advertisers, with their many roundabout ways of sending me popups, loading extra windows that I can't close, etc., are offensive to me - something I didn't ask for. Adblock, on the other hand, is a program I asked to download to rid me of the offensive. Asking for a donation to assist in helping me do that is not hypocrisy. I have, however, uninstalled it, as it was slowing my page loads down a lot, so today I'm receiving the offensive ads, and I'll wait and see how long before reinstalling if the ad nazis veer too far from tolerable.

This is such a frustrating article. I happily disabled adblock on MUO when the banner requesting me to do so first appeared, but the aggressive undertones in your article and responses to comments made me consider turning it back on.
In all business, you have to work within the system that exists. You can complain about the system as much as you want, but the system doesn't have sympathy for you. It's up to you to find a way to make a profit. Don't get angry with another guy for finding a way to make a profit. I'll keep your ads running, because I generally like MUO. But this article is hogwash. Selectively blocking JS and ads is also huge for bolstering security.

This site whines often about this. I find it funny, considering all of the torrent- and piracy-related articles here. All good information on this site; it's in my top 3 tech sites. But as for the whining about revenue, we simply do not care, and it's presented in an obnoxious way, opinion or not.

I'm sorry you find me obnoxious, and we appreciate being ranked in your top 3 tech sites. Bear in mind, this is an opinion article. There are 20 writers here, and more in the past. We don't have a single voice, and often disagree. As the developer though, I would ask you to support MUO by whitelisting it in adblock.

I haven't used those since I switched to Chrome. I got tired of having to constantly unblock scripts. I've since switched over to a pre-made hosts file. I don't have anything against ads, but far too often there are so many on the page that I can't read the content of the article.

Do you make the hosts file block yourself, or is it something everyone can use? How aggressive is it?

I just started with the MVPS hosts file. Then I used the Google Chrome Developer Tools Network tab to identify any ad-serving domains and manually added those to the bottom. It took a little bit of time, but right now it's coupled with a Squid proxy and has been working great. Very rarely do I encounter problems, and when I do, I just change my internet settings to not use the Squid proxy. (The Squid proxy is set up on another computer and has a hosts config option.) You can find all kinds of different hosts file setups with some Googling.
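For readers unfamiliar with the technique, the hosts-file approach described above can be sketched roughly like this. The domain names and output file here are illustrative placeholders, not a vetted blocklist; a real setup would edit /etc/hosts (or the Windows equivalent) and start from a maintained list such as the MVPS one:

```shell
# Minimal sketch of hosts-file ad blocking (placeholder domains, demo file).
HOSTS=./hosts.demo
for domain in ads.example.com tracker.example.net; do
  # Point each ad-serving domain at an unroutable address so requests fail fast.
  echo "0.0.0.0 $domain" >> "$HOSTS"
done
cat "$HOSTS"
```

Because the blocking happens at name resolution, it applies to every browser and application on the machine, which is why it pairs well with a shared proxy like Squid.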
Another alternative is using Peerblock, setting up the ad list (probably the only one you'll want for this) and enabling HTTP blocking.

The best solution is to whitelist makeuseof.com and other trustworthy (awesome) sites in Adblock. :)

I completely understand your point, but I think instead of a ranting-type article, you should have written a tutorial article on how to whitelist sites. For example, I use Adblock Plus - mainly for blocking social-sharing widgets from loading, because I always seem to have 50 million tabs open and use bookmarklets for sharing, so for me the widgets are just annoying clutter - BUT I whitelist sites I frequently read and support, and I'm sure a large number of people would do that if they knew how. Same goes for Ghostery: I have it installed to see what's tracking me and to make informed decisions. I don't block everything, but I have blocked a few marketing companies. (I don't use NoScript, so I have nothing there.) Ranting doesn't accomplish much besides making you feel a bit better - how many people are REALLY going to install the above plugins because of your article? An article explaining how and why to whitelist sites would have been much more useful and would have yielded much better results. Also, maybe I'm missing something, but I'm pretty sure the Adblock Plus guy doesn't have ads on his website, just a donation plugin - how is that hypocritical? Both are ways to make some money for your work; he chooses to ask for donations instead of using ads. Seriously, what's the problem with that?

Hi Leslie, thanks for commenting. A tutorial on whitelisting a site will be forthcoming in the following week, not written by me. You're probably right about it being more productive than ranting, but I wanted to rant. It's good that you whitelist sites and make informed decisions, but you are the minority. Most just leave it blocking everything, informed or not. The creator of Adblock Plus does indeed ask for donations.
I find that hypocritical because he's happy to block our revenue in exchange for his own. It's basically the same thing as me pirating software, then making it available under an alternative model (ad-supported, unlimited downloads, etc... like Megaupload, perhaps?). I wouldn't go so far as to say it's stealing, obviously, but it's in poor taste, I'm sure you'll agree.

I agree with you on Adblock. As for NoJS, that is a user issue more than the site's. I have Flash blocked, and I accept that some sites and ads will not load correctly, but it is the risk I take. I am not saying you should cater to the NoJS crowd if you do not mind losing their revenue; it is probably pretty small and not worth it. If the whole net was like that, then maybe it would be a good idea to make no-JS pages (kind of like how sites make mobile versions of their site). However, I disagree on tracking. More to the point, I disagree with your analogy. If I go to a car site and see car ads, then your analogy holds. But if I go to a site on pets, then see pet ads on my car site, it is not the same at all. Television ads assume that if you are watching a car show, you like cars, and if you are watching Animal Planet, you may like pets.
But to turn the analogy around: if I watched a lot of Animal Planet, then went to a car show, and they showed JUST ME animal ads, then it would be the same. I do agree, for now, that it is not a big deal. However, I won't hesitate to install such extensions if the tracking gets beyond my comfort zone.

A flawed analogy, I see; you're right. I don't see a problem with showing you car ads during Animal Planet if you like cars though... I'd be so bold as to say broad random advertising on TV no longer pays, and that the future for TV will be more personalized too.

I think the whole argument is flawed. I for one use adblock and make no apologies. I use it mainly to hide ads, as I don't want to see them, and secondly to stop any tracking etc. I do not look at ads and have never clicked on an advert, so adblock simply unclutters the screen. The argument about why adblock and other similar plugins are "ruining" the internet is flawed. Why? Because those people who view and click on ads won't use adblock, and those of us who don't click on ads simply don't want clutter on the screen. If the majority of internet users use adblock plugins, then that would indicate that the revenue model needs to change. I for one would rather pay $x to view so many articles on a site such as MUO: I could read a summary, and if I clicked to read the full article, that would count as 1 read article out of the total number I've "paid" to read - and leave out the annoying ads. Alternatively, that subscription could be less if you allow adverts - that way people have a choice. Incidentally, when I'm in the market to purchase an item, I will search for stockists of that item, both in stores and online retailers, and choose the best option at the time - sometimes that's based on price, sometimes on after-sales service and support. But that's just me!

I use Adblock and NoScript. Why? Because of the stupid tricks that the ad makers use to throw their stuff in your face.
Because I got tired of having to click to turn off an ad so I could see the content that it covered up. Then they even took that away, forcing me to sit through the exact same 10-second Flash-animated ad with sound that moved across the screen obscuring content and couldn't be stopped or turned off EVERY TIME the page refreshed - sometimes over a hundred times a day, every day - until NoScript and Adblock saved my internet and my sanity. I didn't mind the ads that were displayed quietly on the side, or the non-moving banner ads. I DO mind the animated, flashing, loud audio ads that ruin web browsing and often DO carry viruses that are triggered when you try to turn them off. The ad companies cut their own throat by running these types of ads. Now I just use an ad-killing addon and rarely see any ads at all. So instead of blaming the user, how about blaming the ad companies for not policing their own industry and allowing the ad pollution that we see if we don't block all ads?

You're upset people block a bunch of totally uninteresting and pointless ads, but at the same time you supply/educate the same people with ways of obtaining free stuff that isn't free? (I'll give you that you're better than most when it comes to these things, though - but you're not innocent; check your post history (talking site-wide).) You can't give with one hand and take it away with the other and not get called on it, sorry.
Also, you can't expect/demand to make a living out of something just because you want to. If your business model doesn't work, change it or quit. It's not rocket science. Anyways... I'll tell you what I get told as a musician every day: do it for the love; fun, isn't it? Or how about, sell merch to generate income, because your writing is worth nothing and neither is your time or knowledge. It's a brave new world; get a second job and deal with it.

Thankfully, I have a few income streams myself, Sten. I appreciate the concern though!

It's a bit rich to be complaining about the evils of these three add-ins when you have profited from publicising their existence. Don't get me wrong, MUO is a top-notch site and I understand the importance of advertising income; however, if I didn't have adblock, I probably wouldn't visit this site anymore. The reason being the ridiculous amount of ads.

Peter, this is an opinion post and represents my opinions, not necessarily those of other writers. I personally have *never* encouraged any users to install these plugins. That would indeed be quite rich of me. And you're right, I think it's pretty disgusting that other writers here would encourage such an unethical plugin - that's on their heads though, not mine. I'm sorry to hear you wouldn't visit without adblock, and that's why we won't be blocking you for your decision. We can but ask you to whitelist us ;)

The add-ons mentioned above are great, useful "prevention tools". I don't want to visit a site with camouflage ads and splash screens, or get tracked by Facebook or Google. If I trust a site I can disable them; for the rest of the internet they stay on. When the sites - and the internet - become "mature" and keep the ads in a specific place (and not, let's say, on the background image of a site), and get rid of the connection addons (Like, +1, etc.), then I will not need them. Until then...

You know, I actually kind of like background ads - they certainly have a greater impact.
I don't want to be tricked into clicking on them, but if they provide a big enough revenue to support the site without traditional ad blocks, then good on them for trying a new model.

Hello James. I disabled Adblock Plus, as requested, when I first started using MakeUseOf, have never used NoScript, and, following our recent exchange of emails, also disabled Ghostery for the reasons you gave then and above. However, I think there are times when it is reasonable to use these apps, for instance when you visit a site for the first time, deliberately or randomly, and want to limit the adware/tracking footprint of the site until you decide if it's a keeper. Also, as I now have to pay a monthly fee to access the full content of my newspaper of choice, I think I'm justified in eliminating content of no use or interest, only allowing those elements (Disqus, Brightcove, Gigya) that are necessary to use the site. I think we will have to agree to differ on this one.

That's a fair point, Gordon, and I appreciate your actions. I guess my concern is that most people don't put as much thought into it as you did - for them, it's just left on default, blocking anything and everything, with no concern for the writers of the site in question and their livelihood.
You can disable the ads using AdBlock Extension for Chrome.Features of TV Chrome Extension for Google ChromeOver 2500 Channels.Gather channels freely Nah, as long as IE is the top browser, it would not be an issue. The advanced users who use Firefox and Chrome are the only ones who will bother installing Ad Block, No Script, and Ghostery. Maybe we will stop using Ad Block, when the heavy Flash Ads stop, or when annoying popups stop. Well, we've never had popups on this site. Flash ads I agree are unjustified in this day and age. Block flash all you like! My wife and i were so peaceful Michael could do his aliynsas through the entire ideas he was given out of your web site. It is now and again perplexing to just happen to be giving for free facts which often many others could have been making money from. We fully understand we've got you to be grateful to for this. The specific illustrations you've made, the easy website menu, the friendships you will give support to engender it's got everything incredible, and it is facilitating our son and our family know that this situation is satisfying, and that's particularly fundamental. Thanks for all the pieces! IE's tracking protection + fanboy's IE list = adblocking! No need for third party extensions. for one would be fine to pay (or not). There was a time that a business either made it (or failed) based on if they could acquire and keep loyal (and possble) paying customers. What happened? You hear from "these adblock critiics" that customers just want "free" stuff--that has always been the case, nothing new there. Stop crying because you have to work "hard" to get customers and monetize that base. I think many customers are tired of being made "the product" and if it means paying...so be it. However, know that the fate of the business that can't compete on both price, value and customer service will fail. Hmm...is that too old school. Secondly, don't blame your customers for an industry problem. 
Privacy and annoyance are real issues for people when it comes to ads. The same people complaining about the few adblock surfers should be advocating for better privacy practices among advertisers. Instead, at least IMO, these critics are generally willing to sell out their customers' privacy and couldn't care less as long as they get the ad money.

Our entire mission statement is based around the concept of being able to use apps and websites for free, so the very concept of us moving to a paying-reader model seems very wrong.

This article is a big FAIL for me. While the article swears by how the add-ons can kill the income stream, why not think about improvising in posting ads? I can't wait to block the annoying ads, and as much as we love your content, the ads in their current form are a big annoyance.

Can you clarify what you mean by "why not think about improvising in posting ads?" I would be happy to respond.

But it depends on how you use them. I've disabled them for frequently visited sites like MUO, LH, Wired, etc., but when surfing the web there is no choice but to use them. Ads have become annoying and heavy, and I know I will never click on them - and as sites are only paid if I open the link, what difference does it make?

I don't agree with the argument that "I never click on ads". You might not think you do, is all.

Well, you are correct on the point that these 3 kill the revenue system... but I don't want a website that is bloated with crap like a big-breasted woman in a bikini... The internet is meant for free thinking... If I don't like ads, I shouldn't have ads - it's as simple as that! And about JavaScript: if I want to download an image from a website and JS is standing in my way, it's obvious what a person would do. Without ads, sites look clean... and if these addons were not there, every website would have said "We will do whatever we want... you have to live with it" - thankfully that's not the case!

It's a fallacy to think the internet would exist without advertising.
Do you think making a website is free?

I support this. Really, ads are the reason we can view and gain knowledge from such awesome sites as this one. At least anyone who has a broadband connection should not use adblockers, as it takes no time to load a webpage on a fast connection.

Not all sites are like this one. Even this one, not too long ago, took minutes for a page to load on a 7Mb connection, as James found out with my bitch-fest some time ago. (Just in case I didn't say it before: thanks for the improvements, James.)

Yawn, not another whiny anti-adblock post. If you can't sleep nights because you can't monetize the 5% of users (a very liberal estimate IMO) who use adblock, well, you need to find another revenue stream. It's pathetic. I work in construction, and more than half of the companies who inquire with us, and of our formal project bids, end up buying from our competitors. Yet I still give 100% to all inquiries and requests for bids.

So your only argument is that it's fine because not everyone does it? That's pretty weak. If you like, we can completely block users with adblock running until they donate a certain amount? Would you prefer that? If you have the courage of your conviction...

Yes. I dare you to block us freeloaders. We're the 1%; what will you guys lose?

Sorry, perhaps you didn't read the line that said "we will never block users of ad-block"? Never mind what we would lose - what would we *gain* by doing so?

James, blocking users of ad-block was your suggestion. Am I missing something? Why are you asking Dan to provide you with a rationale?

It is certainly true. Firstly, I was under the impression that adblock, when used on Chrome, doesn't actually stop the ad from loading; it only hides it from view. If that is no longer the case, then please correct me. Also, ad revenues are gained when a person actually clicks on an ad, correct?
Some of these ads use deceptive wording, such as "get your free iPad 2" or "you just won a whatever", preying on gullible individuals for that ad revenue. Much like the Tribal Fusion ads at the top of your home page claiming a woman made her skin look 20 years younger for less than $5. If you click through, which I did (you're welcome), you immediately find that you are going to be paying more than $5 for this miracle snake oil. So who is the thief here? My point isn't about the product; it's the trickery and deceptive marketing being used to get a few cents from people who don't know any better.

Adblock prevents ads from loading, not just hides them. With regard to deceptive ads, I completely agree. When I find them, I ask for them to be taken down. "Free iPad 2" is a good example. We don't have a system to vet every advertisement, though, and I'm not judge and jury either. The ads are also tailored to users, personalized. If you see something you think is deceptive, then please take a quick screenshot and send it to me, and I can ask for it to be removed. As you say, ads generally only pay per click, so it's in our interest to show ads that are relevant to you and not likely to cause a negative response; we don't like those deliberately nasty ads either. (Thanks for clicking on that silly ad, by the way!)

Wrong about ad block with Chrome. In its current form it can't block much. It hides most. This is something they are working on, but they are limited because of the way Chrome lets add-ons hook into the browser. When I play around with Chrome, I see ads load for a split second and then hide, all the time. These clearly were not blocked but covered up or hidden. Try SRWare Iron, which is a Chromium-based browser with real ad blocking using an adblock ini file like Opera, and you will see the difference.

Yes, the ad blocking on Chrome sucks for the reasons you said.
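For what it's worth, the hide-versus-block distinction being debated here can be sketched in a few lines. All names below are hypothetical; real blockers match requests against filter lists such as EasyList, but the mechanics are the same:

```javascript
// Network-level blocking: the request is cancelled before it leaves
// the browser, so the ad is never downloaded at all.
const blockedHosts = ["ads.example.com", "tracker.example.net"]; // hypothetical list

function shouldBlockRequest(url) {
  const { hostname } = new URL(url);
  return blockedHosts.some((h) => hostname === h || hostname.endsWith("." + h));
}

// Cosmetic hiding: the ad has already been downloaded (and may already
// have counted as an impression); the element is merely styled out of view.
function hideAdElement(el) {
  el.style.display = "none";
}
```

With hiding, the bandwidth and CPU are already spent by the time the element disappears, which is exactly why ads in Chrome could be seen loading for a split second before vanishing.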
It is actually slower with ad blocking on in Chrome than with it off, because of the way it blocks ads - or doesn't block ads, I should say. This is why I will not use Chrome. I use Firefox and Opera, but if you want Chrome with real ad blocking, use the SRWare Iron browser. It's Chrome with real ad blocking, and you will see the speed difference. Iron uses an adblock ini file like Opera and does block ads, not just cover them up.

I quit using Adblock years ago because of the same arguments made here. I have, however, been using NoScript because I accepted that it decreased my vulnerability. I am going to take your word that (a) it decreases your ability to be compensated for the service you provide; and (b) disabling it will not increase my susceptibility to malware. Here's hoping you are correct. Thanks.

I posted a comment that seems to have been removed. I can only assume the author of the article isn't serious when they say they would like to hear your comments. Not surprised.

I hadn't even woken up yet, so it certainly wasn't me. I can tell you that our system will have automatically deleted the message if it contained profanity, though.

Anomaly, your comment was caught by Disqus moderation because you used potential foul language. Your comment has been published now.

Good grief! Another person telling me I am evil and a thief!
I sort of understand the objection to adblock, but now because I choose to turn off cookie tracking and JavaScript? For heaven's sake. I am not breaking the internet if I turn off JavaScript. I am choosing to have a visually less pleasing and less interactive experience by turning off JavaScript. As for cookie tracking, why must I put up with sites tracking my clicks?

Turning off JavaScript breaks every single bit of advertising on this page. So yes, it's the same effect as adblock. Of course, it's your choice to disable JavaScript, but please understand it does the same thing as adblock, if not *more* aggressively. Cookie tracking and click tracking are different. There are methods to track where you click on the page, used by Google's search page and such, but general advertising doesn't do that. Advertisers track sites in their network that you frequent, to learn what interests you and show you more relevant ads. That's all.

"Turning off JavaScript breaks every single bit of advertising on this page. So yes, it's the same effect as adblock." If there are legitimate reasons to disable JavaScript, and the ads on your website critically depend on JavaScript to function, maybe you should think about reworking your ads. At the very least, you should stop condemning visitors to your site as destroyers of the internet.

I remember Adblock used to just hide the ads but still load them in the background, so it still generated revenue for the site owners. I have no idea when they changed it to totally blocking the ad itself. Speeds up page load, I guess.

I agree that it certainly harms the livelihoods of the people who produce original content, which takes a lot of time to produce. If they are not compensated, they will no longer be able to supply the great free content which all of us dearly love. Secondly, in some cases the website owner will not even know who has visited their site.
It is only fair that website owners who give you free content are rewarded and can get statistics on how many people visited, from which country, which pages they visited, etc.

The stats you mention are all available from their web server logs; they just need to know how to mine those logs. Ads don't even figure in there.

I lose my mind with rage whenever I read some perplexed primate whining about ad-blocking extensions. Here is what these people believe they have the right to do to you.
1) Provide a vehicle for malware to infect your machines - ads are a major source of this.
2) Massively slow down your online experience.
3) Hugely distract and annoy you with idiotic BS you don't want to see.
4) Severely reduce the battery life of your laptop due to the excessive CPU usage.
5) Probably decrease the lifespan of your machine due to excessive CPU, fan, and HDD usage from all the shit the ads add to the site.
6) For those on limited data packages: you will use up your limits much faster downloading all the shit from these ads. It's amazing how many MB of your package ads will use up.
7) Annoy the people next to you with the fans spinning into orbit trying to cool your hot machine - hot from all the excessive CPU usage due to the crap ads.
These are just the most obvious evil things ads will do, and the anti-ad-blocking crowd think they have a right to do them to you. As for the stupid argument about paywalls and ruining the internet, it's total BS. There are, and always will be, people who want to provide free content just for the love of doing so, or because it's what they believe in doing. It's the mutts trying to monetize every corner of the internet that are the evil ones destroying it. You want to see evil? Look in the mirror before you start on another anti-ad-blocking BS session.

So, I'm an evil monkey then? Good start. 2-5 and 7 all relate to Flash ads. Yeah, we hate them too. Disable Flash if you like; it does suck.
Sure, there are some people who will provide content for free, just as there are some who work to provide Linux and free apps out there. There's also a reason why most users don't use Linux...

I do use Linux and love the whole way the Linux world works. I would have no problem dumping Windows and Mac, which I currently use as well but am getting more disgusted with every second. Windows 8 will probably be the tipping point for me with Windows. Metro is just disgusting, and MS seems to have developed a bad case of Mac douchebaggery with Windows 8. The reason most users don't use Linux is simply that they are lazy. They use the dominant OSes and can't be bothered to change, or they think they are cool buying overpriced Mac dog crap.

Dear James. Best comment on the page.

That was totally uncalled for!! I have been running Linux (Ubuntu) for many months now. You should give it a try! And maybe I can write some MakeUseOf posts on Linux for you. And as for the "developer" tag, now I know some comments above were so true...

Yes, and DVRs are killing TV... If your business model relies on pissing off your users, you need to rethink the business model, not tell your users that they're wrong for being pissed off. While I have never been an AdBlock user, the kind of personalized tracking that Ghostery prevents is vastly different from the kind of segmented targeting that network execs use to decide what kind of ads should play next to certain shows. For instance, TV networks don't have access to my web search history and browsing history. In fact, unless you're a Nielsen family, they don't even know what other shows you watch. Besides, as I stated above, DVRs are killing off the traditional advertising model. Making an appeal to a dying business model as justification for propping up your own is not winning you any points...

Advertising is a dying business model? Hey, if you have a better alternative, then by all means suggest one. Would you pay if MUO went premium only?
If something like workable micropayments came out, yes. But I'm not paying for a subscription to a bunch of articles I have no interest in. My cable provider already bundles in a bunch of crap channels I have no interest in watching... I don't need the same thing on the web.

Maybe you should choose to ask for PayPal donations too, rather than ads.

Excellent idea. Daily Kos will put up a box at the top of the page alerting the viewer that the site's ads are being blocked, but it also offers a bypass code that you can plug into AdBlock so it will display ads. You should consider this. Once you plug in the filter (or bypass), the annoying block goes away.

I'm not sure what that code is, but thanks, I'll check out that site to investigate.

Thanks for the article. Ironically, I am now going to install all of those plugins, because others are not as scrupulous as you! (Don't worry, you're whitelisted.)

Well, thanks. That wasn't quite what I intended, though!

Agree NoScript is dumb, but I love my Adblock.

It must be a very small proportion of users who purposely click on the ads. I suspect that most of your money comes from tracking the users. Maybe you need to move to an infomercial type of model and not rely so much on the ads.

Tracking users in and of itself provides zero income. The purpose of tracking users is to personalize ads - only that. An infomercial type of model is an interesting idea, but I think the bosses made the decision that it would hurt our integrity. If we were to publish ads-as-articles, a lot of users would stop trusting us, don't you think?

Until the internet actually begs for money to view sites, I'll keep using Adblock.

We could start begging...

In my opinion, the practice of tracking is both useless and potentially dangerous. Useless, because the algorithm is flawed: a few months ago, I bought a car. Now I get many ads for cars; wasted, because I've already made my decision and am no longer in the market.
Virtually every purchase I have made online has resulted in ads for that item - after the purchase. Wasted effort. The "dangerous" part is harder to explain. The internet makes it easy to interact only with those with similar views, similar tastes. Tracking intensifies this tendency. It assumes that I am only interested in what I have already exhibited interest in. It assumes I do not want to be exposed to something new. That is something I find offensive when applied to consumer goods, and dangerous when applied to social media or news. If people are only exposed to views that mirror their own, they unconsciously come to think that anyone with different beliefs is an aberration. I would not be surprised if that is a factor in the current political polarization. I definitely consider that dangerous.

I understand your point, but advertising models say you are wrong. You do click on ads personalized to you; you don't click on random ads. It's that simple, bottom line. Personalized ads make greater revenue. I completely agree that the personalization of things like search results is a dangerous path to go down.

Interesting article. I clamped down on my privacy years ago, not really sure about the risks I faced, but very aware my wife's ID was stolen and used to try to create a PayPal account. I haven't given it much thought since then, but your article has given me cause to think about what I need now, in this different time.

Identity theft is a very real concern, I understand that. I guess my opinion on this matter would be quite different if I had been a victim of ID theft; but I think the risks to identity actually come more from social engineering, human error, and weak passwords - as opposed to click tracking and personalized ads. Of course, you should always beware of unknown and untrusted sites.

Just checked my FF extension AB+ v2.0.3 and found: "Starting with Adblock Plus 2.0, there is an 'option' in 'Filter Preferences' to allow some non-intrusive advertising.
The goal is to support websites using non-intrusive ways to advertise and to encourage more websites to do the same." ...and it was enabled (checkbox) on auto-update in January... I had already disabled AB+ for MUO. Bob C.

The problem I have with that is that it's the creator of Adblock who gets to determine what he feels is non-intrusive. It's a start, though.

So who should choose, then? The advertisers? I'm sure that would work really well.

I fully understand your pain, guys, and I had even disabled ad-blocking software in the past, but I had to reinstall it due to just one site: YouTube. I find their ads intolerable. Due to this one single site, all the rest of the sites I visit are being hit right now. I wish I could enable ad blocking on just YouTube and give you guys what you deserve.

I think you can whitelist MUO. There will be a tutorial posted soon.

Thanks for doing us the favor. I'm sure the article will be a blockbuster.

You're totally welcome, Jamal!

I did a little experiment to see the difference with my own eyes. I run numerous blocking mechanisms, from DNS-based to Adblock, Ghostery, Flashblock, etc. So I disabled all the blocks and cleared my cache, then viewed this page. Here are my results, and what I learned. You had a total of 27+ items that tracked my movement across your site, and may set persistent cookie(s) that would stay with me across many pages I visit in a web session. They might also stay for an indefinite amount of time (a cursory glance showed cookies with a 10-year expiration date). Google Analytics, as an example, is used on an estimated 49.95% of the top 1 million websites (according to Wikipedia), so half of my web visits could be packaged together, and possibly more if signed in to my Google account. Several of the ads you displayed were animated, Flash-based (with LSO cookies), and generally of little interest to my tastes. These ads showed poor targeting and were rather annoying.
I support the right to get paid, but you need to set high standards for the ads/beacons you choose to use. If you were only to display relevant text ads, I would have no problem supporting your stance. I have also been infected by JavaScript exploits via supposedly pre-screened ads from a syndicated revenue network, and that included the reputable Google AdSense. I don't use NoScript because it creates too many headaches today, in the era of rich media. If you were to use a platform like OpenX and screen your own ads, I would put more trust in your statement. To spark a conspiracy fuse here: when you have 10-year cookies, 27+ privacy policies that indicate that they won't do ... except when required to by ..., ever-eroding checks and balances on what constitutes a right to search, and the quid pro quo that exists between big corps and governments seeking user data - why shouldn't I block by default?

Well, first off, of course the ads were poorly targeted. You just said yourself you run ad blocking all the time, so they wouldn't have had a chance to learn anything about you yet. You would be served ads from the bottom of the pile - typically bad Flash-based ads. I'm sorry about that, and we're working to remove those. If text ads alone could pay the bills, we would happily switch. They don't, by the way. Moving to OpenX is a good suggestion, though, so I'll make sure we investigate that option, possibly even do a trial. If we can make the same revenue using that, then we would certainly be open to it by all means. Thanks for the suggestion.

Fantastic article. I have worked in online ads for almost a year and changed my mind completely about security on the web. 100% agree.

I think the whole ad-blocking boom is a "too late" solution to a problem that existed some years ago, the ultimate evil being those who tried to make easy money via ads (pay-per-view) while delivering low-quality or non-original content.
A few years ago, when you hit Google or your preferred search engine, every second result (thanks to SEO) was a site full of advertisements - often more than 30% of the site's content. Today (and too late) the browsers have finally hit a state where it's easy to provide tools like Adblock, etc., that allow us to fight the above problem. What people didn't notice is that search providers took up the fight themselves by "evaluating" sites and providing the user with better-quality results (tracking being one part of it too!). The resulting situation: once a week some new ad-blocking or tracking "guard" hits the web, hurting those who legitimately try to make revenue via quality content in a reasonable manner. Honestly, if people are really afraid of tracking and targeted advertisement, someone please explain to me how Facebook got 800 million users and Google+ manages to co-exist.

Very true.

Interesting points and valid arguments (in the article and comments). However: apart from being annoying, don't Flash ads and trackers slow the browsing experience? (Case in point: here in Spain, my supposedly 10Mbit connection and my previous also-supposedly 6Mbit connection rarely get over half a Mbit...) I had never used Adblock and the like, out of consideration for the ads' revenue generation, but I recently came across recommendations to use blockers mainly to block Google's "+1" button, especially on Google Reader. (I did find Google Reader faster after blocking the +1 button.) Then I installed Flashblock, and suddenly browsing is faster. Go for text ads - they don't eat up bandwidth and don't annoy with animations. Many websites, unlike this one, have too many ads; others have automatically starting video. Those are the real problem.

As you mention in the article, one of the big problems nowadays is social media. Facebook is pushing the idea of what they call "passive sharing", where you can be unknowingly "sharing" whatever you're browsing. That's simply ridiculous.
Why should I see what my Facebook contacts are reading on The Guardian, Yahoo News, etc.? Two examples: how come I get on MakeUseOf and I'm already logged in on Disqus to comment? How come I'm ALWAYS "logged in" on Mashable? I normally run CCleaner a couple of times daily (mainly to keep browsers faster) and delete cookies and everything.

Flash-anything slows the internet down, I agree. Cookies are absolutely minimal, though. But again, a few seconds in exchange for supporting the sites you visit isn't too much to ask, I don't think. Text ads just don't pay as much. Regarding your examples: Disqus is a third-party service. If you're logged in and using Disqus on any Disqus-enabled site, you'll also be logged in here. Mashable... if you're referring to comments, they use Facebook as a third-party comment handler. Again, if you are logged into Facebook, you are automatically logged in to their comment system.

Just because someone is concerned about their privacy doesn't mean their "other tab is open on Asian Hotties or cheatonmywife.com". Sounds like the usual rhetoric when politicians are trying to push through a new surveillance bill... "You're either with us or with the child pornographers..." Cheap shot and rather insulting... I usually expect more from MakeUseOf... Thanks for the tip about Ghostery, though... hadn't heard of that one yet...

Wow. It really wasn't my intention to have that taken so seriously. My point was that whatever tracking services are running only run on sites they are placed on, so they *don't know* your other browsing habits. I absolutely wasn't saying "you only need to be worried if you're a dirty porn watcher". I think maybe you misunderstood me there.

I don't think you were misunderstood. I think you were using vile debating tactics.
I have to agree with mizkitty and Jamal on this one: you took poetic license, and instead of treating the issue as a serious one worth debating with your readers, you took a childish, unprofessional jab that is every bit as misleading as the comments people make about JavaScript and internet security. It is every citizen's right to both understand and have control of the privacy of their actions, protected by federal law in most countries and international law in all (see the recent UN decisions on internet privacy for reference, if you care at all). It is not your place as a content developer to make assumptions and recommendations about how your readers should feel about their privacy; "what's wrong with being tracked?" is not a valid argument, it's an assumption without basis and completely unprofessional. Would you support a new bill that allowed a federal government to walk into your house at any moment with the rationale of "what do you have to hide?"? I suspect not, but that's what you're asking people to do here.

AdBlock and Ghostery make my life better. 'Nuff said.

I've come to a middle point with ads, where I use Flashblock to block all that irritating Flash around the web and only allow it on trusted webpages.

I agree. Flash is annoying, buggy, and slows the entire online world to a crawl. There's just no excuse for it anymore. By all means, block all the Flash you like here.

You're hilarious, Dr. Bruce. A content publisher using Flash ads to generate revenue on her website could use all of your previous arguments to justify her revenue model and condemn users of Adblock Plus, etc. You jump ship at the Flash-ads juncture simply because it's not inconvenient for you to do so; your employer's website doesn't employ that type of ad, and thus you aren't affected.
You've revealed that your reasoning is significantly driven by a narrow personal interest, failing to convince those of us who are not in your situation and neglecting to acknowledge or address alternative revenue models. Furthermore, your blasé attitude with respect to the real security and privacy concerns brought about by some of these trackers doesn't endear you to your readers and makes you come across as callous.

Well, thanks for the Doctor status. Rubbish, though. We use Flash ads here all the time, and I block them. I block them because they do actually drain resources, and Flash has always been buggy as hell on a Mac. They're usually autoplaying video, or some kind of interactive nonsense. They're unsupported on mobile devices. They are utterly unneeded on the web today. As for my "personal interest": I'm salaried. I get paid whether you block them or not. My own websites are unaffected, and I use discreet advertising anyway. My concern is for the web as a whole, and this site in particular, which employs quite a good number of people. If you think that's self-interest, then I feel sad for you. Furthermore, I wrote a complete follow-up to this article explaining the alternatives and how to deal with adblock. As I keep saying, using a secure browser eliminates security concerns from rogue JavaScript. Privacy? Perhaps; that's a personal thing. I don't feel that I lose anything because ad companies show me better ads that are more appropriate to my interests. Maybe you do.

It's OK saying you don't have certain ads on your site, but unfortunately that is not the case with many sites that are just full of adverts. I applaud you for being thoughtful about what adverts you allow.

Agreed!

I do think intrusive ads are a problem, and surfing with AdBlock makes for a much better experience, but the whole privacy panic is getting a bit much. People are advocating for changes that go to the heart of the business model and technology of the internet.
Aside from ads, consider this: on Reddit, you "upvote" certain stories you like. When you upvote, the website keeps track of the fact that you did upvote. To make sure you don't upvote again, it ties your account to the story. If they can't "track" you like this, you lose the whole upvote feature, or at least turn it into something where anybody can vote up a story any number of times, rendering the feature a useless metric of popularity. Now, ads are a little different, to be sure. You don't affirmatively interact with the ads. You don't know anything at all about most of these ad networks. Nevertheless, these "cookies" and this "tracking" are what enable advertisements to be more valuable, not only to the user but to websites and companies. I think the scary part of this is not so much THAT we're being tracked, but that we don't know when we're being tracked. I think these ad networks and tech companies are culpable, but only insofar as they don't explain to their users what they're collecting, how it's being used, and most importantly, how it generates value for everybody. I think we need to strike a balance. Users should know when and where they're being tracked, but we shouldn't ban the practice outright (not that I think this is going to happen). The real danger is imposing too high a transaction cost for users to consent or opt in to being tracked, such that they will not want to consent simply because they have to click through 10 screens and read a giant wall of legalese. We should also be mindful of the value of "tracking", and how laws regarding privacy can affect the usefulness of technology. (I know it's long, but...) EDIT: Tracking has been standard WWW practice since forever. It's a large part of how the internet generates value.

As mentioned, the minimal number of people who use the three are not going to affect very many bottom lines. I heard only about 3% of the internet uses NoScript.
If so, yes, that is a lot of people, but not all of the internet visits your site, so you're even less affected. I use the three for the simple reason of security. You can't trust sites not to harm your computer. If you do ANY sort of online banking, there is no effing way I wouldn't use the big three. The problem is that the advertisers don't always know what ads are being pushed by their customers. In this day and age, it is big business to infect computers, so I take precautions. Sorry for all those hard-pressed people trying to make money on the internet. The business model is broken if you can even make money on the internet in the first place by simply writing an article about something like the above. I don't feel bad for your lack of income; now, reporters have it bad. Their old-school method of circulation is dead or dying. Still more: are you an expert on this? How many qualifications do you have? It becomes a matter of how qualified one person writing an article on the internet is. How many people can they affect if they get wrong info? I've seen articles posting "use this new program, it's great", only to find out the program had a virus or trojan in it. Thus every person who read the article and downloaded it was infected. The only way to be safe is to be safe in your practices, and the big three help the little guy do just that.

I don't follow your argument with regard to online banking. Cookies can't steal your passwords, and neither can JavaScript. It's that kind of misinformation that causes problems. Malware and viruses, sure - but that's a different issue entirely. I'm also not sure why you think our business model is broken. We provide an awful lot of high-quality content at MakeUseOf, for free. If you think our business model is broken, then I'd be happy to hear your alternative ideas. Personally, I'm glad the old way of printing words on paper is dying. It's a waste, frankly. I also subscribe to certain publications, and they also advertise.
Do you think we should charge you money, and advertise to you? No? Well then, suggest an alternative model. We're all ears. As for me being an expert, that's a good question. How do you know that anyone is qualified? Would you like me to write my qualifications in the author info box? Because I only get limited space there. Software actually being infected is a very serious issue indeed, and there are times when something has slipped under the radar here at MUO, but I can assure you they have been dealt with swiftly and apologies made. Sometimes the links will actually change as time goes by, so an article from 5 years ago might now point to malware which wasn't originally there. In that case, we only react when users inform us, and it's necessary to be vigilant on your end. If an article is 5 years old, perhaps be a little cautious. We're not asking you to disable virus checkers, you know.

While I agree with you that ad blockers are taking away content creators' revenue, and that it is impossible to see the full internet with NoScript, I think that users should always be in control of their own privacy. And why can't ads be displayed without trackers?

They can; they just don't make as much money. Hence, we would need more of them, or would be forced to use those kinds of deceptive "click here to download" ads that basically trick you. Personalizing ads is a win-win situation for everyone.

It's totally illogical to say that "adblock is killing the internet". I NEVER clicked on a single ad on a website, even when I didn't have Adblock. Never. Oh, maybe one time, but just because I had been tricked (with the full page background being a giant ad). And over time, ads became more and more intrusive: more ads, bigger ads, a ton of ads on any webpage, popups, more popups, tricks, more tricks. That's the totally illogical point. Then I discovered Adblock, and I can surf quietly. It's a much better experience. So is Adblock killing the internet? No.
Are people like me, who won't click on any ad anyway (even without Adblock)? They're not killing the internet; they're just saying "we don't want an internet with the same crap there has been for decades on TV".

1. Speaking of illogical points, you might want to reconsider your understanding of advertising income streams. You're only talking about clicks. That is NOT the only factor in ad revenue. Some pay for impressions instead.

2. "We don't want an internet with the same crap there has been for decades on TV" - then you'd better be prepared to pay for it. Creating quality content (of any variety) takes time and sometimes even a good deal of money. Hosting that content takes resources. Ads often pay for it so YOU don't have to. If you aren't willing to pay outright for that content when you block ads, you don't respect the site owner and content creators enough to be there in the first place.

Most people pay for TV, but it still has ads. I don't think you are making a valid point.

All three members of your own trifecta are included in the best Chrome extensions and the best Firefox add-ons on your own site... Just sayin'!

That's because the best-of lists are compiled based on suggestions from readers.

I completely agree. Including them was downright irresponsible, and I certainly didn't write that. I humbly ask that you disable them all on MakeUseOf.

I understand that ads create revenue, but they really make pages look cluttered. I tried turning it off for a while, and sites, especially YouTube, loaded slower and were very cluttered with ads that I wouldn't click on anyway. I have disabled it for MakeUseOf (well, at least I tried to; I'm not sure if it actually worked or not), mainly because of the huge banner at the top of some pages about disabling AdBlock (ironically, that banner didn't go away when I whitelisted MUO).
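As an aside, "please disable AdBlock" banners like the one described are usually driven by a simple height check on the ad container. A minimal sketch (names assumed; this is not MakeUseOf's actual code):

```javascript
// If an ad blocker removed or collapsed the ad container, its rendered
// height is 0, so the site shows a "please whitelist us" banner instead.
function adBlockActive(adContainer) {
  return !adContainer || adContainer.offsetHeight === 0;
}
```

One side effect consistent with the complaint above: if a freshly added whitelist rule only takes effect on the next page load, the check still sees a collapsed container, and the banner stays until you reload.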
Every time I read one of these articles, I feel inspired to disable adblock, but then the internet becomes really cluttered and I enable it again. I don't mind unobtrusive ads (especially just text ads like what's in Gmail and such) but huge flashy image ones are really annoying. It should disappear if the ads are showing. It checks the height of the ad container, that's all. Yet Make Use Of has Ghostery and NoScript on its best Firefox addons list. Yep, that's because the best-of lists are compiled based on suggestions from readers) Actually you are quite missing the point. The user is free to choose. And with so many users, the ones with blockers should be minimal. You should also be able to get your own 'please donate' button wherever you write. It's your article after all. If not, consider a different revenue model. As far as NoScript goes, do you know that standards require the HTML content provider to also have a version for people that cannot use JS? For example the visually impaired. Did you ever think of those? Today's sites are too cluttered to find the information you want. We need more good content than visual candy. And we DO need a completely anonymizing system. You shouldn't be able to track me. Only my behavior. Yours to track, not the advertising companies'. All in all I don't see anything negative in these products. Or let me put it another way: what's next? Make jailbreaking unlawful? :( "If not, consider a different revenue model." Which would be? As I said in my own article about adblocking, these "different revenue models" are unicorns. Necessity is the mother of all invention. Soon there will be a workaround for adblock, then someone else will invent another way of blocking ads. What's required is innovation in how internet advertising works rather than complaining about people doing what's in their own best interests. To claim it will never happen understates the internet in my opinion. "...these "different revenue models" are unicorns."
Only when your product has little to no value. "Unicorns"...I suppose your profits are dependent on your expenditures. I have used VLC Media Player for close to a decade and it has always been free. Not a unicorn. I agree, the question of anonymity is important, and I'm glad you'd be happy for us to track your behaviour, not your identity. We could consider adding a donate button to remove ads. That would essentially be starting a premium version of the site though, I wonder how the majority would feel about that. Though I agree somewhat, it's a bit ironic that it comes from a website that writes articles on how to best download from torrents. 'Starting a premium version of the site'? I don't know why you make such elaborate jumps and extrapolations but as a reader it is insulting. A donation button simply provides a way for concerned users who value your content to make a contribution and get nothing in return (i.e., a 'donation'); this does not segregate users or create a premium version of the site in any way, shape, or form. You chose to insert that phrase in order to downplay the value of an alternative additional revenue stream because it is less convenient for you. The bottom line is that users will come for your content; if your content is trash or caters to a niche market then advertising is likely to be your only revenue. It's a content problem, not a user one; as a content publisher and businessman you find a way to a) market to your users, b) provide a service that your users appreciate and c) monetize your work in a way that your users respect. You approach it from the reverse angle where you expect your readers to bend to your will and as such I will no longer be visiting MakeUseOf and I'll be making blog posts in my communities to similar effect. Well if adblock is the bane of this site, why offer ways to subscribe to articles via RSS and email? I have adblock installed BUT because I was asked to whitelist this site I have done so.
Thank you for whitelisting. You raise an interesting point about RSS and email subscription, but generally a visitor who subscribes like that is likely to come back time and time again / enter competitions / participate on the site also. There's value in that, to us. A lot of advertising is aimed at users who don't return again - they search, come to the site, click on an ad, and are gone. So I guess RSS subscriptions are like a service to regular readers, and we choose not to jam those full of ads. Ah right on, I understand what you are saying but there are a LOT of sites which are full of ads. For example the newspaper sites from my country are more ads than news. Plus the fact that they put pop-up ads, and if by mistake I hover over an ad a window appears, which is annoying... These kinds of sites kill the internet, not AdBlock or Ghostery. I use all those addons plus some more and I don't have any problems anymore. But, because MakeUseOf is a great site I will place it on my "whitelist" and maybe you should make a tutorial on how everyone can do this and in this way help MakeUseOf. If those sites killed the Internet, they wouldn't exist. But they still do. Maybe it's because they have good content if you don't consider the ads. An obvious rejoinder is: If AdBlock was killing the Internet, it wouldn't exist. But it still does. Maybe it's because it has good intent if you don't consider the unintended consequences. Yeah, but those unintended consequences hurt those publishers. I don't love all ads; heck, I don't love most of them. But I'd rather take an ad over destroying livelihoods any day.
You lose points when you label something as evil without considering why these plugins are so popular (are your readers evil too?). Many sites use intrusive ads and questionable tracking practices and users are getting tired of those. Adblock is not shoving an ad on my face and at least lets me turn it off in the sites that I want. How about we have a real conversation about the abuses of ads and tracking and discuss possible fair solutions instead of blaming the users? Luis, only a very small subset of AdBlock users are using it because of intrusive ads and questionable tracking purposes. The bottom line is that AdBlock users -- myself included (until I stopped using AdBlock) -- want to hide ads. They don't care what the ads are doing. They also don't consider the fact that ads hurt the livelihoods of the people who are toiling hard to provide readers with interesting articles and content. Consider how AdBlock blocks nearly ALL of the ads on the web. Maybe 5-10% of those ads/sites (and I'm likely being generous) are questionable. Is that a reason to have AdBlock's presence so ubiquitous so as to hurt others because you're upset at a few bad apples on the 'net? Seems pretty offensive to those of us who help provide that content for you guys. So yeah, since I was one of those users once until I was in the shoes of content creator, I think I, too, am justified in blaming myself and others who clearly feel it's appropriate to hurt everyone because of a bit of anger toward some jerks on the Internet who seem to ruin it for everyone else. You know what a better solution would be? Don't visit those websites. Maybe even expose their tactics to the greater WWW. But don't punish everyone else because you're disgruntled with how some bad folks choose to unethically monetize their content. Do you have any evidence to justify what you are saying? Besides, the point is that the argument that these plugins are "evil" is too simplistic. 
Your solution to just not visit the sites are one option, but in some categories we don't have that luxury. Luis, as I've written, I speak from my own experience. The guy I married used AdBlock for the same reasons I did. I recommended AdBlock once upon a time not for questionable tracking purposes but to hide ads -- plain and simple. Most people I know just don't want to see ads -- you're more than welcome to find out if it's because of those more "malicious" tracking purposes. (What's the tracking for, anyway? To serve people more targeted ads? That can't be good at all!) But when that's not the case, these regular ad-block users don't realize that ads help put food on the table for writers of these sites. I'm not saying I agree that it's an issue of just mere "evil." I do think, though, being in ad sales too and working for sites that are monetized through ad content, that it's unfair to apply AdBlock to every site out there just because of displaced anger toward a few sites. That's how AdBlock works, though. Out of the box, it blocks. That unfortunately hurts publishers. Thanks Tamar, you've made a well reasoned argument there and I appreciate your input. With regards to the title of "evil", it's just poetic licence. "(Plugins) Create a Negative Effect on Revenue Models and No End of Support Issues" just isn't as catchy! I agree with your point-I just want to raise my hand as one of those people who uses AdBlock Plus for malaware blocking/social sharing widget blocking, and white-lists sites I support. We do exist! Leslie - interestingly enough, I used to block Digg widgets through AdBlock back in the day when they were banning accounts left and right and even published a tip on how one can do that. I hear ya. But I know that many others block *everything* and there's a negative consequence to that for hardworking writers. 
I would not want or allow any individual I know to track my location or any other parameter of my existence, unless they were my bodyguard or something similar. Why would I allow any of these advertising organizations to do the same? Does anyone remember what happened with the Radiohead album release a few years back? They released their album for free/donation over the web and made more $ than they ever had with the standard distribution model. Let's see every site that runs ads on the web also augment their site with a Flattr Button or some donation method. Give it some time and see how well you make out. I agree with Louis that y'all should police yourselves and then come talk to us consumers about letting down our defenses.I can't help but hopefully wonder if the entire concept of advertising and the industry it envelops are past their point of usefulness to our species. Can I draw an analogy from those profiteers to those in the health-insurance industry? Parasites. I use ad-block because I put a lot of effort into cultivating what I give brain-space to. Ads, particularly internet ads, pretty universally play on the worst stereotypes and prejudices. I'm not willing to give that kind of crud brainspace. It's also a safety/privacy issue, as the ads that are blocked could, were they displayed, load a tracking cookie. I'm absolutely not willing to have my web browsing tracked, period. As for noscript, this single article page wants to run scripts from 15 different domains if I view the article. I'm not about to expose my computer to that kind of risk just to read an article. I want content creators to be paid for their work, but not at the expense of my privacy and security. There need to be new models that don't put those goals (user privacy&security and creator livelihood) at odds. But so long as those goals ARE at odds? I'm going to be coming down on the side of my security and privacy. 
And I'd also like some citation for the numbers you're quoting, cos they seem awfully anecdata-ish. "Luis, as I've written, I speak from my own experience." ...should answer the question regarding the citation you need. But maybe I will make a real survey one day. Just a note, I work in ad sales. Clearly you've been so isolated from advertisements and the improvements over the years that you don't know anything about them anymore. They are not intended to universally play on stereotypes. They are intended to *help* the advertiser AND the reader. That's how you have an experience that ensures that an advertiser renews -- because they're getting the biggest bang for their buck since people are clicking and signing up for the advertised service or buying the product. And while there are exceptions when third-party served, most are served simply by a single graphic (GIF) and do not include a tracking cookie. A lot of sites use Google AdSense and those advertisers have no way to integrate tracking cookies. Therefore, while I wholly respect your argument, you're simply misinformed about the true risks of advertising and make the assumption that it's a privacy/security concern. I think you should be more concerned with Facebook or Twitter privacy than ad privacy. Finally, I reiterate this point when it does come to additional tracking, which really barely happens, at least in my experience: "What's the tracking for, anyway? To serve people more targeted ads? That can't be good at all!" I mean, wouldn't you be interested in services you may not know about but would totally suit your professional/personal needs? And if not, why not? If you want content creators to be paid for their work, then you're going to have to make some concessions. You're making a mountain out of a molehill. Ethical publishers are cognizant of privacy (read privacy policies to see what data may be collected, if anything) and they shouldn't be punished. 
You're being very "my experience is of course universal and applies to everyone and anyone who says their experience is different must be wrong", which isn't productive. I see plenty of ads. I see ads in magazines, on tv, when I use any computer other than mine (which is pretty often as I do computer repair and education). Your lack of ability to recognize the stereotypes that are on display doesn't mean they're not there. And you assume I'm not concerned about Facebook or twitter privacy, wrongly. Plus you're trying to tell me what my concerns should be, which is silencing rather than discussion. If you really experience almost no tracking, I'm surprised. Like I said, I do computer repair, and I've yet to encounter an unprotected computer that didn't have dozens if not hundreds of tracking cookies. And I do know the difference between useful cookies and tracking cookies, so when I say tracking cookies, I mean tracking cookies. And it's funny how you're so sure that folks would want targeted ads, but every single time I've explained tracking cookies to folks I'm working for (generally senior citizens) they find the idea of companies watching what they do online to offer them ads relevant to their interests to be appalling. There's a reason no one likes junk mail either. No, I'm not interested in services I don't know about that are presented to me based on being tracked. I find this intrusive. For others it can be outright dangerous. You did hear about the teenage who was outed to their parents based on facebook tracking, right? The one who got kicked out of the house? Threatened with violence? I think you're foolish if you think that was an isolated incident rather than exactly the kind of thing that happens over and over when one allows tracking, or is ignorant of it happening. You're doing really well at silencing bingo tho! 
"Mountain out of a molehill, I'd think you'd worry more about X than this, in my experience it's not a problem, it's really to help you" One more and you "win"! Not saying it's wrong, but I'm saying from another angle that it isn't right either. I have friends/family who use it for the same reasons I said. Again, there's no scientific fact in anything I've said, and I've acknowledged this as being purely anecdotal thus far (that's why I want a survey), but I'd be inclined to say that many are in the same boat as I was. I know what tracking cookies are. I also am looking at all of them right now. Most of them are tied to the sites I explicitly opt into visit, rather than being tied to ads. I said before and I said again: I do respect your argument, but I think there's another side to the story that people don't recognize and that's the fact that by simply blocking ads, they are preventing the people who work hard to provide them with hours of online entertainment the ability to feed their families. You may think it's totally appalling, but that's only one side of the coin. I've been on both. When people start seeing it from different angles, they typically are more agreeable. What people don't understand is that most ads -- when they track -- don't track truly identifying information. They know about the content you read, sure, and they can target good ads to that content. But these trackers don't really get granular about who you are. After all, Google -- which has a boatload of information on me and could truly do it if it wanted to -- can't even get it right: (Just a note, Google also says I am a 35-44 year old man. They were 0/2 on that.) As a note, I appreciate that you are also concerned about privacy on social networks, and I wasn't trying to imply anything with that correlation. But I think you need to look at privacy on the ad side as a little less evil as you believe it is -- because that's not the intention here. 
Google thinks everyone is a 35-44 year old man, I reckon. I've never been so insulted! So I assume that you never skip ads on TV by using a DVR? I think that you're missing something here. 1. AdBlock has a fairly extensive whitelist now. Sites that agree to not use overbearing ads can be whitelisted automatically. I'm not up on all the details, but it was a controversial move, but one that was needed, IMHO. 2. If we stop visiting sites that have bad practices when it comes to tracking, content creators will stop getting nearly as much traffic. Those sites include major social media outlets. Adblock is also just as useful as something like userstyles and user scripts. It's great for blocking out elements of any type that you don't want to see. I really should use it to block out comment sections on most sites :D It's also a good way to kill Facebook features that you don't like. I think that perhaps, instead of trashing these elements, you might think about what drives people to use them. It's bad mojo from creepy ads that follow you, it's that virus that you got from an ad a few years back, it's the obscene flashing, noisy ad that is right in the middle of the page that makes you want to throw the computer out the window. Think about these things, and instead of calling users of a particular software extension selfish, maybe try to fix what makes them want to use it on your site? John: As I said, out of the box, it doesn't, and most people don't know how to configure it. And that also addresses your later comment about "what makes them want to use it on [my] site." Yup, they don't know how to turn it off, so they'll use it because someone else recommended it to them. Javascript is a necessity for Internet browsers, but it also opens the door for drive-by infections like the fake Antivirus infections of 2009 & 2010. I did stop using NoScript on Firefox because it was also such a headache to maintain.
I will definitely give the points made in this article some serious thought. Thanks for the well-written info. The only thing I use adblock to block is the malware domains list. If you're using an ad-supported website it's rude and verging on theft to block the ads. Saying it's like theft is akin to stating that someone using the "female-only ad" bus terminal example mentioned in the article is verging on theft if she covers her face and body so the ad won't play. Hmm, interesting point. It's my fault for drawing a parallel there I guess, but my question would be why would she not want the ad to play? I do think that it was a good parallel! But then again, I think that because my internet-worldview sees adblock the same way as your example, just something that helps me do what I do in semi-privacy. Good question as well, my return question is 'does it matter?' And I mean that in a non-snotty, genuine question kind of way. Whether it's security, or not wanting to be bothered, or less eye/ear clutter for her, or just a straight distaste for marketing material despite the fact that marketing helps the world go round? Does it matter? To me, it doesn't matter because it's her choice and she doesn't directly interfere with anyone else by making said decision for whatever rationale she desires. If she directly interfered (instead of covering herself to hide from the ad, she were to mess with or break the ad television itself) then that would obviously be wrong; but we aren't discussing the obvious here, just the subtle. :-) Yikes, I don't know what I did but just for the record the above post has no relation to the profile it links to. I do not own a dog. ;-) I.. what? :) I'm not sure that is a serious question but if it is, I offer my answer: Because she doesn't want to. Isn't that enough for you? You shouldn't have to explain why you don't want to be tracked, profiled or collected on; it's a basic human right, protected by law in almost every country.
Do you want people to take photos of you in your house, at work, on the street or in your car? Would you feel violated if you found a folder of all your personal habits in a stranger's hands? The rest of the planet should not have to bend their perception of privacy and civil liberties in order for you to make money, which is what you're asking. Ironically, your post comment button is broken on Chrome for OS X even with AdBlock disabled; I had to switch to Safari just to post this. If you're crazy enough to call yourself a web developer, I am sorely disappointed and would not hire you. Not having a fallback from Javascript, at the very least for a post comment button (which shouldn't even require it in the first place), is incredibly poor programming. I feel your pain, but AdBlock et al. will not "kill the Internet." Maybe it doesn't kill the Internet, Dave, but it certainly harms the livelihoods of the people whose content you are consuming without any consideration for the work put into that content. I LOVED AdBlock for the longest time. I haven't used it in ages. Why? Because I respect and recognize the people who toiled to give me that which I am currently reading. If I don't like a banner ad, you know what I do? I ignore it. Many can mentally turn off the ads if they don't like them. Sometimes, though, that ad may actually be interesting. Tamar, so cool to see you replying on MakeUseOf! And of course, as a writer, I fully agree with what you're saying. AdBlock is tempting... but disabling it seems to be the right thing to do. That's an excellent point. Plus, as a general rule ads have gotten WAY less annoying over the years. Noted the site linked in association with your name; it's some domain-squatting garbage that's annoying. So; thanks for playing, and your family is going to be the future deorbiting site for garbage haulers, so you'd best buy the mineral rights there soon. Please don't be rude.
It's not domain squatting, it just means they haven't set up a website there yet. Relax already. You should read The Paris Review (or other edited works') notes on payment for content; in short, like hell should people who care about what they read defer to ad cluster-something for half (or more, or less) of their attention. Erez should know better himself; as a writer (and not least a voracious reader) he should know time is drawn by the ads on the page, which he competes with. People got studies done on it; the 500th identical TopCow ad still takes time way more than a/500 seconds to read (a is constant, unless impression 'size' up and varied...) Got an exercise for you; get 10 (issues of) magazines. Now open enough tabs to represent those 10 magazines the way you'd read them online, at the same time, monitoring bandwidth for the site in particular.... Compare the electrical and internet bills for each (since some of the time with the magazines, you might need lights on). Now keep hating print and NoScript/ScriptNo/AdBlock/NoBugs/etc. I think a better reaction would be to write an ad firm app that updated the joke '[Creepy voice:] I know where you go to shop, for groceries, for drugs, for things you put on your skin ..... At the grocery store, like everyone else.' You think the electrical and bandwidth cost compares even minutely to the cost of printing a paper substitute? Nonsense...Unless you're browsing from a mobile phone in a foreign country, which is the only case in which your statement has even the slightest bit of truth to it. Of course it does; certainly with client-side costs it's near parity, though it's nice to save on server pipe booking with an iPad edition. Have people at least stopped senseless proxy refresh/page reloading?
I also make a claim on memory footprint to-do, though neither on the trouble of carrying your Zune, phone, portable console (as in game), laptop and iPad simultaneously, nor on e-waste (not least because referring to peoples' MyPrecious as a waste stream to get the carbon footprint is hazardous.) I know I don't blow half my data contracts (as a reader) for a year using Conde Nast print; that's a big help, though online images often look like a pretty good match somehow at 1/2000 the data size. Maybe they proof them out specially (seam carving and all that...) So, you never blow your data limit just watching the same sites (plus ads)? Alright, I agree with ads being necessary and all - but I think the greatest problem lies in pre-roll ads on YouTube and other servers. I don't mind the ones that can be skipped after 5 seconds if you want to. But some are not skippable (has anyone said VEVO?) and that is very unfair - everyone should have a chance to ignore the ad - even on TV you can change the channel. It is my idea anyway, I'm not saying that companies like to abuse us - they just have to do something to get their money back. Awwe, poor you, prevented from spamming people with garbage advertisements that people don't want to see in the first place...obviously, otherwise they wouldn't use a program like "adblock". So, I guess this makes it more difficult to harass and annoy people, right? It pays your salary? How is that possible unless thousands of people are clicking on the ad? Oh, wait a minute, that is when you get even more aggressive, and make it pop-under, or completely take over the entire webpage where one has no choice but to click it in order to see the site they are on.
I am sure you have a million tricks, forgive my lack of terminology, or knowledge because I am a simple consumer, and one of those people that hate being harassed by your advertisements, tracked, spammed, have my location/email stolen to send your trash emails on deals I could care less about, etc. You know what they say..."If you can't beat 'em, join 'em." Point being, instead of crying about it, you may want to consider another occupation because people are sick to death of your junk...Furthermore, if not already, you may have noticed there is a dying market for your profession...One example being this particular one. Oh, also, I have been a professional in Marketing/Advertising/Promotion, and sales for the last 20 years myself...I woke up and realized it was time for a change as a result of all the above. Good luck, stop whining, and do something else... A dying market for my profession ... of web developer? You seem to be a little out of touch with reality, aren't you? "Web developer"? I'm sorry, but everyone and their mother calls themselves a "web developer" nowadays. Writing articles doesn't make you a developer, even knowing HTML and Javascript doesn't make you one, while we're at it. As an actual developer I also presume your technical ability is very limited, because you think websites _need_ Javascript, by explicitly saying for example a hit counter requires Javascript which is simply not true (just parse your access log, which is easier, more reliable and doesn't bug your user with decreased rendering times etc). If you don't want to provide a fallback from Javascript, your choice. But don't complain about the users you lost because you were lazy; providing a fallback is not as difficult as you make it sound. Javascript should be considered an extra, a convenience, not the basis of the design. And yes, there are still security vulnerabilities, as many as ever.
In addition to the old reasons which are still current, marketers have lost the trust of end-users and have themselves to blame for abusing the technology (pop-unders etc). What I personally dislike the most about this article is your blatant disregard of responsibility towards your users. "Conspiracy theories" you say? As a developer I can see more than just that the user likes toys. I can read between the lines and conclude from those big ad networks that the user is gay, because he is a man, bought The Swan and bought tickets for a concert which is mostly frequented by gay people. This kind of calculated information is worth hard cash and may be problematic for you. In other news, you sound like shoemakers flailing their arms and crying about machines replacing their Fine Art (TM) instead of moving with the times and finding a new way to make money. History repeats itself and nobody feels sorry for people who are not willing to adapt.
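One commenter above claims a hit counter doesn't need JavaScript because the server's access log already records every request. As a minimal illustration of that idea, here is a sketch in C that counts requests for a given page in an in-memory copy of a log; the function name count_hits and the Common Log Format-style lines are assumptions made for the example, not anything from the thread:

```c
#include <string.h>

/* Count how many times a request substring (e.g. "GET /article.html ")
   appears in an access log held in memory. In a Common Log Format log,
   each request appears once per line, so counting occurrences of the
   request string counts the hits for that page. */
long count_hits(const char *log, const char *needle)
{
    long hits = 0;
    const char *p = log;
    size_t nlen = strlen(needle);

    if (nlen == 0)          /* empty pattern: nothing meaningful to count */
        return 0;

    /* strstr scans forward for each occurrence; advance past each match */
    while ((p = strstr(p, needle)) != NULL) {
        hits++;
        p += nlen;
    }
    return hits;
}
```

In practice you would read the log file line by line (or just use a tool like grep -c); the point the commenter is making is simply that the request data already exists server-side, with no client-side script required.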
http://www.makeuseof.com/tag/adblock-noscript-ghostery-trifecta-evil-opinion/
Technique Sharing with Project Templates

Visual Studio .NET is a multifaceted gem. You can spend weeks, months, or perhaps a year or more exploring Visual Studio and the .NET Framework. For example, you might be exploring Visual Studio .NET and the Diagnostics namespace for several days before you stumble on BooleanSwitches. (I will get to what a BooleanSwitch is in a few minutes.) However, after you have discovered BooleanSwitches, you will likely want to share this information.

The project and item templates that already exist can be found in the default installation directory C:\Program Files\Microsoft Visual Studio .NET\Vb7\VBWizards. The basic mechanism is that some pre-defined template files exist in this directory. When you select an item in Visual Studio .NET, a copy of these files is made and added to your project. For example, when you add a new class to your project (see figure 1), a templated file named file1.vb is uniquely renamed, a default class name is added to the template, and the file containing the class is added to your project. The easiest way for us to duplicate this kind of behavior is to, well, duplicate it.

Page 1 of 2

This article was originally published on September 13, 2002
https://www.developer.com/net/vb/article.php/1463201/Technique-Sharing-with-Project-Templates.htm
In this tutorial, you will learn about C programming recursion with examples of recursive functions.

C programming recursive functions

Until now, we have used multiple functions that call each other, but in some cases it is useful to have functions that call themselves. In C, a function which calls itself is called a recursive function, and the process is called recursion. For example:

    int main()
    {
        printf("Recursion \n");
        main();
    }

In this example, main is called again from inside main, with no condition to stop it. The function will keep printing "Recursion" until the program runs out of stack space (a stack overflow).

Example of a recursive function: a C program to find the factorials of the first 3 natural numbers using recursion.

    #include <stdio.h>

    long int fact(int n)
    {
        if (n <= 1)
            return 1;
        else
            /* recursive step */
            return n * fact(n - 1);
    }

    int main()
    {
        int i;
        for (i = 1; i <= 3; i++)
            printf("%d! = %ld\n", i, fact(i));
        return 0;
    }

(Note: fact returns a long int, so its value is printed with %ld, not %d.)

Output

    1! = 1
    2! = 2
    3! = 6

Explanation of the output

    when i = 1 | fact(1) : returns 1 (base case)
    when i = 2 | fact(2) : 2 * fact(1) = 2*1 = 2
    when i = 3 | fact(3) : 3 * fact(2) = 3*2 = 6

Explanation of the program

First of all, the recursive factorial function checks whether the if condition is true, i.e. whether n is less than or equal to 1. If the condition is true, fact returns 1 and the recursion stops; otherwise the following statement executes:

    return n * fact(n - 1);

Difference between recursion and iteration

- Iteration uses a repetition statement, whereas recursion does repetition through repeated function calls.
- Iteration terminates when the loop condition fails, whereas recursion terminates when the base case becomes true.
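To make the recursion-versus-iteration comparison concrete, here is a minimal sketch of the same factorial computed with a loop instead of recursive calls. The function name fact_iter is just an illustrative choice for this example:

```c
/* Iterative factorial: repetition via a loop instead of function calls.
   The loop condition (i <= n) plays the role that the base case
   (n <= 1) plays in the recursive version. */
long int fact_iter(int n)
{
    long int result = 1;
    int i;
    for (i = 2; i <= n; i++)
        result *= i;
    return result;
}
```

Calling fact_iter(i) for i = 1, 2, 3 yields the same 1, 2, 6 as the recursive fact above, but no extra stack frames are created, which is why deep factorials are usually written iteratively.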
http://www.trytoprogram.com/c-programming/c-programming-recursion/
#include <tree_node.h>

Detailed Description

template<unsigned int N> class libMesh::TreeNode< N >

This class defines a node on a tree. A tree node contains a pointer to its parent (NULL if the node is the root) and pointers to its children (none if the node is active). Definition at line 46 of file tree_node.h.

Constructor & Destructor Documentation

Constructor. Takes a pointer to this node's parent. The pointer should only be NULL for the top-level (root) node. Definition at line 215 of file tree_node.h. References libMesh::TreeNode< N >::active(), libMesh::TreeNode< N >::children, libMesh::TreeNode< N >::elements, libMesh::libmesh_assert(), libMesh::TreeNode< N >::nodes, and libMesh::TreeNode< N >::tgt_bin_size.

Destructor. Deletes all children, if any. Thus, to delete a tree it is sufficient to explicitly delete the root node. Definition at line 236 of file tree_node.h.

Member Function Documentation

Returns true if this node is active (i.e. has no children), false otherwise. Definition at line 77 of file tree_node.h. References libMesh::TreeNode< N >::children. Referenced by libMesh::TreeNode< N >::TreeNode().

Definition at line 104 of file tree_node.h. References libMesh::TreeNode< N >::bounds_point(), and libMesh::libmesh_assert().

Definition at line 185 of file tree_node.C. References libMesh::MeshTools::bounding_box(), std::max(), and std::min(). Referenced by libMesh::TreeNode< N >::bounds_node().

Constructs the bounding box for child c. Definition at line 211 of file tree_node.C. References libMesh::MeshTools::bounding_box(), libMesh::err, std::max(), std::min(), and libMesh::Real.

Returns an element containing point p. Definition at line 535 of file tree_node.C.

Look for point p in our children. Definition at line 568 of file tree_node.C. References libMesh::invalid_uint, and libMesh::libmesh_assert().

Inserts Node nd into the TreeNode. Definition at line 35 of file tree_node.C. References libMesh::DofObject::id(), libMesh::libmesh_assert(), mesh, and libMesh::MeshBase::n_nodes().

Inserts Elem el into the TreeNode. Definition at line 68 of file tree_node.C. References libMesh::MeshTools::bounding_box(), libMesh::dim, libMesh::Elem::dim(), libMesh::Elem::infinite(), libMesh::libmesh_assert(), libMesh::Elem::n_nodes(), and libMesh::Elem::point().

Returns true if this node is the root node, false otherwise. Definition at line 71 of file tree_node.h. References libMesh::TreeNode< N >::parent.

Returns the level of the node. Definition at line 249 of file tree_node.h.

Returns the number of active bins below (including) this element. Definition at line 516 of file tree_node.C. References libMesh::Parallel::sum().

Prints the contents of the elements set if we are active. Definition at line 435 of file tree_node.C.

Prints the contents of the node_numbers vector if we are active. Definition at line 413 of file tree_node.C.

Refines the tree node into N children if it contains more than tol nodes. Definition at line 138 of file tree_node.C. References libMesh::libmesh_assert(), mesh, libMesh::TreeNode< N >::set_bounding_box(), and swap().

Sets the bounding box. Definition at line 177 of file tree_node.C. References libMesh::MeshTools::bounding_box(). Referenced by libMesh::TreeNode< N >::refine().

Transforms node numbers to element pointers. Definition at line 457 of file tree_node.C. References mesh, libMesh::MeshBase::n_nodes(), and swap().

Member Data Documentation

The Cartesian bounding box for the node. The minimum point is stored as bounding_box.first, the maximum point is stored as bounding_box.second. Definition at line 188 of file tree_node.h.

Pointers to our children. This vector is empty if the node is active. Definition at line 181 of file tree_node.h. Referenced by libMesh::TreeNode< N >::active(), and libMesh::TreeNode< N >::TreeNode().

Does this node contain any infinite elements? Definition at line 203 of file tree_node.h.

Pointers to the elements in this tree node. Definition at line 193 of file tree_node.h. Referenced by libMesh::TreeNode< N >::TreeNode().

Reference to the mesh. Definition at line 164 of file tree_node.h.

The node numbers contained in this portion of the tree. Definition at line 198 of file tree_node.h. Referenced by libMesh::TreeNode< N >::TreeNode().

Pointer to this node's parent. Definition at line 175 of file tree_node.h. Referenced by libMesh::TreeNode< N >::is_root().

The maximum number of things we should store before refining ourself. Definition at line 170 of file tree_node.h. Referenced by libMesh::TreeNode< N >::TreeNode().

The documentation for this class was generated from the files tree_node.h and tree_node.C.
http://libmesh.sourceforge.net/doxygen/classlibMesh_1_1TreeNode.php
Lab 11: Interpreters

Due at 11:59pm on Friday, 08/02/2019.

Starter Files

Download the starter files.

- Questions 1, 2, and 3 must be completed in order to receive credit for this lab.
- Questions 4 and 5 are optional. It is recommended that you complete these problems on your own time.

Topics

Consult this section if you need a refresher on the material for this lab. It's okay to skip directly to the questions and refer back here should you get stuck.

Interpreters

An interpreter is a program that allows you to interact with the computer in a certain language. It understands the expressions that you type in through that language, and performs the corresponding actions in some way, usually using an underlying language. In Project 4, you will use Python to implement an interpreter for Scheme. The Python interpreter that you've been using all semester is written (mostly) in the C programming language. The computer itself uses hardware to interpret machine code (a series of ones and zeros that represent basic operations like adding numbers, loading information from memory, etc.).

When we talk about an interpreter, there are two languages at work:

- The language being interpreted/implemented. In this lab, you will implement the PyCombinator language.
- The underlying implementation language. In this lab, you will use Python to implement the PyCombinator language.

Note that the underlying language need not be different from the implemented language. In fact, in this lab we are going to implement a smaller version of Python (PyCombinator) using Python! This idea is called Metacircular Evaluation.

Many interpreters use a Read-Eval-Print Loop (REPL). This loop waits for user input, and then processes it in three steps:

- Read: The interpreter takes the user input (a string) and passes it through a lexer and parser.
  - The lexer turns the user input string into atomic pieces (tokens) that are like "words" of the implemented language.
  - The parser takes the tokens and organizes them into data structures that the underlying language can understand.
- Eval: Mutual recursion between eval and apply evaluates the expression to obtain a value.
  - Eval takes an expression and evaluates it according to the rules of the language. Evaluating a call expression involves calling apply to apply an evaluated operator to its evaluated operands.
  - Apply takes an evaluated operator, i.e., a function, and applies it to the call expression's arguments. Apply may call eval to do more work in the body of the function, so eval and apply are mutually recursive.
- Print: Display the result of evaluating the user input.

Here's how all the pieces fit together:

          +------------------------------ Loop -----------------------+
          |                                                           |
          |  +-------+     +--------+     +-------+     +-------+     |
 Input ---+->| Lexer |---->| Parser |---->| Eval  |---->| Print |-----+--> Output
          |  +-------+     +--------+     +-------+     +-------+     |
          |                                 ^   |                     |
          |                                 |   v                     |
          |                              +-------+                    |
          |  REPL                        | Apply |                    |
          |                              +-------+                    |
          +-----------------------------------------------------------+

Required Questions

PyCombinator Interpreter

Today we will build PyCombinator, our own basic Python interpreter. By the end of this lab, you will be able to use a bunch of primitives such as add, mul, and sub, and even more excitingly, we will be able to create and call lambda functions -- all through your own homemade interpreter! You will implement some of the key parts that will allow us to evaluate the following commands and more:

> add(3, 4)
7
> mul(4, 5)
20
> sub(2, 3)
-1
> (lambda: 4)()
4
> (lambda x, y: add(y, x))(3, 5)
8
> (lambda x: lambda y: mul(x, y))(3)(4)
12
> (lambda f: f(0))(lambda x: pow(2, x))
1

You can find the Read-Eval-Print Loop code for our interpreter in repl.py. Here is an overview of each of the REPL components:

- Read: The function read in reader.py calls the following two functions to parse user input.
  - The lexer is the function tokenize in reader.py, which splits the user input string into tokens.
  - The parser is the function read_expr in reader.py, which parses the tokens and turns expressions into instances of subclasses of the class Expr in expr.py, e.g. CallExpr.
- Eval: Expressions (represented as Expr objects) are evaluated to obtain values (represented as Value objects, also in expr.py).
  - Eval: Each type of expression has its own eval method which is called to evaluate it.
  - Apply: Call expressions are evaluated by calling the operator's apply method on the arguments. For lambda procedures, apply calls eval to evaluate the body of the function.
- Print: The __str__ representation of the obtained value is printed.

In this lab, you will only be implementing the Eval and Apply steps in expr.py. You can start the PyCombinator interpreter by running the following command:

python3 repl.py

Try entering a literal (e.g. 4) or a lambda expression (e.g. lambda x, y: add(x, y)) to see what they evaluate to. You can also try entering some names. You can see the entire list of names that we can use in PyCombinator at the bottom of expr.py. Note that our set of primitives doesn't include the operators +, -, *, / -- these are replaced by add, sub, etc.

Right now, any names (e.g. add) and call expressions (e.g. add(2, 3)) will output None. It's your job to implement Name.eval and CallExpr.eval so that we can look up names and call functions in our interpreter!

You don't have to understand how the read component of our interpreter is implemented, but if you want a better idea of how user input is read and transformed into Python code, you can use the --read flag when running the interpreter:

$ python3 repl.py --read
> add
Name('add')
> 3
Literal(3)
> lambda x: mul(x, x)
LambdaExpr(['x'], CallExpr(Name('mul'), [Name('x'), Name('x')]))
> add(2, 3)
CallExpr(Name('add'), [Literal(2), Literal(3)])

To exit the interpreter, type Ctrl-C or Ctrl-D.

Q1: Prologue

Before we write any code, let's try to understand the parts of the interpreter that are already written.
Here is the breakdown of our implementation:

- repl.py contains the logic for the REPL loop, which repeatedly reads expressions as user input, evaluates them, and prints out their values (you don't have to completely understand all the code in this file).
- reader.py contains our interpreter's reader. The function read calls the functions tokenize and read_expr to turn an expression string into an Expr object (you don't have to completely understand all the code in this file).
- expr.py contains our interpreter's representation of expressions and values. The subclasses of Expr and Value encapsulate all the types of expressions and values in the PyCombinator language. The global environment, a dictionary containing the bindings for primitive functions, is also defined at the bottom of this file.

Use Ok to test your understanding of the reader. It will be helpful to refer to reader.py to answer these questions.

python3 ok -q prologue_reader -u

Use Ok to test your understanding of the Expr and Value objects. It will be helpful to refer to expr.py to answer these questions.

python3 ok -q prologue_expr -u

Q2: Evaluating Names

The first type of PyCombinator expression that we want to evaluate are names. In our program, a name is an instance of the Name class. Each instance has a string attribute which is the name of the variable -- e.g. "x".

Recall that the value of a name depends on the current environment. In our implementation, an environment is represented by a dictionary that maps variable names (strings) to their values (instances of the Value class).

The method Name.eval takes in the current environment as the parameter env and returns the value bound to the Name's string in this environment. Implement it as follows:

- If the name exists in the current environment, look it up and return the value it is bound to.
- If the name does not exist in the current environment, raise a NameError with an appropriate error message:

raise NameError('your error message here (a string)')

def eval(self, env):
    """
    >>> env = {
    ...     'a': Number(1),
    ...     'b': LambdaFunction([], Literal(0), {})
    ... }
    >>> Name('a').eval(env)
    Number(1)
    >>> Name('b').eval(env)
    LambdaFunction([], Literal(0), {})
    >>> try:
    ...     print(Name('c').eval(env))
    ... except NameError:
    ...     print('Exception raised!')
    Exception raised!
    """
    "*** YOUR CODE HERE ***"
    if self.string not in env:
        raise NameError("name '{}' is not defined".format(self.string))
    return env[self.string]

Use Ok to test your code:

python3 ok -q Name.eval

Now that you have implemented the evaluation of names, you can look up names in the global environment like add and sub (see the full list of primitive math operators in global_env at the bottom of expr.py). You can also try looking up undefined names to see how the NameError is displayed!

$ python3 repl.py
> add
<primitive function add>

Unfortunately, you still cannot call these functions. We'll fix that next!

Q3: Evaluating Call Expressions

Now, let's add logic for evaluating call expressions, such as add(2, 3). Remember that a call expression consists of an operator and 0 or more operands.

In our implementation, a call expression is represented as a CallExpr instance. Each instance of the CallExpr class has the attributes operator and operands. operator is an instance of Expr, and, since a call expression can have multiple operands, operands is a list of Expr instances. For example, in the CallExpr instance representing add(3, 4):

- self.operator would be Name('add')
- self.operands would be the list [Literal(3), Literal(4)]

In CallExpr.eval, implement the three steps to evaluate a call expression:

- Evaluate the operator in the current environment.
- Evaluate the operand(s) in the current environment.
- Apply the value of the operator, a function, to the value(s) of the operand(s).
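The three steps above can be boiled down to a toy sketch of the eval/apply mutual recursion (this uses plain ints, strings, and tuples for illustration, not the lab's Expr and Value classes):

```python
# Toy model: literals are ints, names are strings,
# call expressions are (operator, operand, ...) tuples.
def toy_eval(expr, env):
    if isinstance(expr, int):       # a literal evaluates to itself
        return expr
    if isinstance(expr, str):       # a name is looked up in the environment
        if expr not in env:
            raise NameError("name '{}' is not defined".format(expr))
        return env[expr]
    op, *operands = expr            # otherwise it is a call expression
    fn = toy_eval(op, env)                       # 1. evaluate the operator
    args = [toy_eval(x, env) for x in operands]  # 2. evaluate the operands
    return toy_apply(fn, args)                   # 3. apply operator to operands

def toy_apply(fn, args):
    return fn(*args)

env = {'add': lambda a, b: a + b, 'mul': lambda a, b: a * b}
print(toy_eval(('add', ('mul', 3, 4), 2), env))  # → 14
```

Nested calls fall out of the recursion for free: evaluating the operand ('mul', 3, 4) re-enters toy_eval before toy_apply runs.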
Hint: Since the operator and operands are all instances of Expr, you can evaluate them by calling their eval methods. Also, you can apply a function (an instance of PrimitiveFunction or LambdaFunction) by calling its apply method, which takes in a list of arguments (Value instances).

def eval(self, env):
    """
    >>> from reader import read
    >>> new_env = global_env.copy()
    >>> new_env.update({'a': Number(1), 'b': Number(2)})
    >>> add = CallExpr(Name('add'), [Literal(3), Name('a')])
    >>> add.eval(new_env)
    Number(4)
    >>> new_env['a'] = Number(5)
    >>> add.eval(new_env)
    Number(8)
    >>> read('max(b, a, 4, -1)').eval(new_env)
    Number(5)
    >>> read('add(mul(3, 4), b)').eval(new_env)
    Number(14)
    """
    "*** YOUR CODE HERE ***"
    function = self.operator.eval(env)
    arguments = [operand.eval(env) for operand in self.operands]
    return function.apply(arguments)

Use Ok to test your code:

python3 ok -q CallExpr.eval

Now that you have implemented the evaluation of call expressions, we can use our interpreter for simple expressions like sub(3, 4) and add(mul(4, 5), 4). Open your interpreter to do some cool math:

$ python3 repl.py

Optional Questions

Q4: Applying Lambda Functions

We can do some basic math now, but it would be a bit more fun if we could also call our own user-defined functions. So let's make sure that we can do that!

A lambda function is represented as an instance of the LambdaFunction class. If you look in LambdaFunction.__init__, you will see that each lambda function has three instance attributes: parameters, body and parent. As an example, consider the lambda function lambda f, x: f(x). For the corresponding LambdaFunction instance, we would have the following attributes:

- parameters -- a list of strings, e.g. ['f', 'x']
- body -- an Expr, e.g. CallExpr(Name('f'), [Name('x')])
- parent -- the parent environment in which we want to look up our variables. Notice that this is the environment the lambda function was defined in.
LambdaFunctions are created in the LambdaExpr.eval method, and the current environment then becomes this LambdaFunction's parent environment. If you try entering a lambda expression into your interpreter now, you should see that it outputs a lambda function. However, if you try to call a lambda function, e.g. (lambda x: x)(3), it will output None.

You are now going to implement the LambdaFunction.apply method so that we can call our lambda functions! This method takes a list arguments which contains the argument Values that are passed to the function. When evaluating the lambda function, you will want to make sure that the lambda function's formal parameters are correctly bound to the arguments it is passed. To do this, you will have to modify the environment you evaluate the function body in. There are three steps to applying a LambdaFunction:

- Make a copy of the parent environment. You can make a copy of a dictionary d with d.copy().
- Update the copy with the parameters of the LambdaFunction and the arguments passed into the method.
- Evaluate the body using the newly created environment.

Hint: You may find the built-in zip function useful to pair up the parameter names with the argument values.
def apply(self, arguments):
    """
    >>> from reader import read
    >>> add_lambda = read('lambda x, y: add(x, y)').eval(global_env)
    >>> add_lambda.apply([Number(1), Number(2)])
    Number(3)
    >>> add_lambda.apply([Number(3), Number(4)])
    Number(7)
    >>> sub_lambda = read('lambda add: sub(10, add)').eval(global_env)
    >>> sub_lambda.apply([Number(8)])
    Number(2)
    >>> add_lambda.apply([Number(8), Number(10)])  # Make sure you made a copy of env
    Number(18)
    >>> read('(lambda x: lambda y: add(x, y))(3)(4)').eval(global_env)
    Number(7)
    >>> read('(lambda x: x(x))(lambda y: 4)').eval(global_env)
    Number(4)
    """
    if len(self.parameters) != len(arguments):
        raise TypeError("Cannot match parameters {} to arguments {}".format(
            comma_separated(self.parameters), comma_separated(arguments)))
    "*** YOUR CODE HERE ***"
    env = self.parent.copy()
    for parameter, argument in zip(self.parameters, arguments):
        env[parameter] = argument
    return self.body.eval(env)

Use Ok to test your code:

python3 ok -q LambdaFunction.apply

After you finish, you should try out your new feature! Open your interpreter and try creating and calling your own lambda functions. Since functions are values in our interpreter, you can have some fun with higher order functions, too!

$ python3 repl.py
> (lambda x: add(x, 3))(1)
4
> (lambda f, x: f(f(x)))(lambda y: mul(y, 2), 3)
12

Q5: Handling Exceptions

The interpreter we have so far is pretty cool. It seems to be working, right? Actually, there is one case we haven't covered. Can you think of a very simple calculation that is undefined (maybe involving division)? Try to see what happens if you try to compute it using your interpreter (using floordiv or truediv, since we don't have a standard div operator in PyCombinator). It's pretty ugly, right? We get a long error message and exit our interpreter -- but really, we want to handle this elegantly.

Try opening up the interpreter again and see what happens if you do something ill-defined like add(3, x).
We just get a nice error message saying that x is not defined, and we can then continue using our interpreter. This is because our code handles the NameError exception, preventing it from crashing our program. Let's talk about how to handle exceptions.

In lecture, you learned how to raise exceptions. But it's also important to catch exceptions when necessary. Instead of letting the exception propagate back to the user and crash the program, we can catch it using a try/except block and allow the program to continue.

try:
    <try suite>
except <ExceptionType 0> as e:
    <except suite 0>
except <ExceptionType 1> as e:
    <except suite 1>
...

We put the code that might raise an exception in the <try suite>. If an exception is raised, then the program will look at what type of exception was raised and look for a corresponding <except suite>. You can have as many except suites as you want.

try:
    1 + 'hello'
except NameError as e:
    print('hi')   # NameError except suite
except TypeError as e:
    print('bye')  # TypeError except suite

In the example above, adding 1 and 'hello' will raise a TypeError. Python will look for an except suite that handles TypeErrors -- the second except suite.

Generally, we want to specify exactly which exceptions we want to handle, such as OverflowError or ZeroDivisionError (or both!), rather than handling all exceptions.

Notice that we can define the exception as e. This assigns the exception object to the variable e. This can be helpful when we want to use information about the exception that was raised.

>>> try:
...     x = int("cs61a rocks!")
... except ValueError as e:
...     print('Oops! That was no valid number.')
...     print('Error message:', e)

You can see how we handle exceptions in your interpreter in repl.py. Modify this code to handle ill-defined arithmetic errors, as well as type errors. Go ahead and try it out!
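As a sketch of what that change might look like (safe_eval and the message wording are illustrative, not the actual repl.py code):

```python
# Guard the Eval step: report arithmetic and type errors and keep the
# loop alive instead of crashing out of the interpreter.
def safe_eval(thunk):
    try:
        return thunk()
    except (ZeroDivisionError, OverflowError) as e:
        print('Arithmetic error:', e)
    except TypeError as e:
        print('Type error:', e)

safe_eval(lambda: 1 // 0)        # caught: prints an arithmetic error
safe_eval(lambda: 1 + 'a')       # caught: prints a type error
print(safe_eval(lambda: 2 + 3))  # → 5
```

When no handler matches, the exception still propagates, so genuinely unexpected bugs are not silently swallowed.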
https://inst.eecs.berkeley.edu/~cs61a/su19/lab/lab11/
dup(2) - duplicate an open file descriptor

SYNOPSIS

#include <unistd.h>

int dup(int fildes);

DESCRIPTION

The dup() function returns a new file descriptor having the following in common with the original open file descriptor fildes:

- same open file (or pipe)
- same file pointer (that is, both file descriptors share one file pointer)
- same access mode (read, write or read/write)

The new file descriptor is set to remain open across exec functions (see fcntl(2)). The file descriptor returned is the lowest one available. The dup(fildes) function call is equivalent to:

fcntl(fildes, F_DUPFD, 0)

RETURN VALUES

Upon successful completion, a non-negative integer representing the file descriptor is returned. Otherwise, -1 is returned and errno is set to indicate the error.

ERRORS

The dup() function will fail if:

EBADF - The fildes argument is not a valid open file descriptor.
EINTR - A signal was caught during the execution of the dup() function.
EMFILE - The process has too many open files (see getrlimit(2)).
ENOLINK - The fildes argument is on a remote machine and the link to that machine is no longer active.

ATTRIBUTES

See attributes(5) for descriptions of the following attributes:

SEE ALSO

close(2), creat(2), exec(2), fcntl(2), getrlimit(2), open(2), pipe(2), dup2(3C), lockf(3C), attributes(5), standards(5)
http://docs.oracle.com/cd/E19963-01/html/821-1463/dup-2.html
This document gets you started with Groovy in NetBeans IDE. You will create a Java application, add a JFrame, and retrieve a simple message from a Groovy file.

Contents

To follow this tutorial, you need the following software and resources.

Support for Groovy is disabled by default when you install the Java version of the IDE. To work with Groovy in the IDE, you first need to activate the Groovy plugin in the Plugins manager. Alternatively, you can type groovy in the Search field to filter the list of plugins.

In this section you will create a new Java application. When you click Finish, the IDE creates the project and displays a project node in the Projects window.

In this section you will create a JFrame and a Groovy class. When you click Finish, the IDE creates the JFrame form and opens the file in the editor. When you click Finish, the IDE creates the Groovy file and opens the file in the editor. If you expand the project node in the Projects window, you can see that the two files that you created are under the Source Packages node.

In this section, you will code the interaction between the Groovy file and the Java class.

class GreetingProvider {
    def greeting = "Hello from Groovy"
}

public class DisplayJFrame extends javax.swing.JFrame {

    GreetingProvider provider = new GreetingProvider();

    public DisplayJFrame() {
        initComponents();
        String greeting = provider.getGreeting().toString();
        jTextField1.setText(greeting);
    }
}

You can use code completion in the Java class to find the methods you need in the Groovy class. When you choose Run, the IDE compiles and launches the application. In the window of the application you can see that the text from the Groovy class is displayed in the text field.

You now know how to create a basic Java application that interacts with Groovy. NetBeans IDE also supports the Grails web framework, which uses the Groovy language in Java web development.
To learn how to use the Grails framework with NetBeans IDE, see Introduction to the Grails Framework.
https://netbeans.org/kb/docs/java/groovy-quickstart.html
#include <hallo.h>

* Joerg Schilling [Sun, Mar 25 2007, 11:58:39AM]:
> Martin Zobel-Helas <zobel@ftbfs.de> wrote:
>
> > > Read the Debian mailing list archives and you will find some of the related
> > > personal attacks.
> >
> > I asked for references, but you seem not to be able to give me ANY of
> > them, just telling me "look in the archive". So you seem not to be able
> > to give me any concrete pointer.
> >
> > So whom should I trust now? You? Mr. Bloch?
> >
> > There was a reason why I asked for concrete pointers.
>
> There is a reason why I do not waste my time with people on a Debian list....

This is not about wasting time, this is simply about you proving things that you write.

> In January 2006, some people from Debian started a calumniation campaign
> against me and my projects. They amongst others published unproven claims
> on the license of the original software, and Mr. Bloch was one of their leaders.

Guess what, I just had some time for bug triage and some minor complication (at first glance) with the licensing was among the candidates. The topic was more complicated than I expected, and I had to correct the initial assessment, especially after you set an ultimatum with an option of legal actions, actually trying to make all inconvenient statements appear void. But that story is past; at that time there were simple technical solutions to make almost everyone happy.

>.

I guess you talk about the discussion on cdrtools-devel. But which personal offense do you mean? Which? Do we have your permission to reveal the whole mail thread to the public?

>.

What? Again, which claims? The first story was about the licensing of the build system. The second was about the licensing of the LINKED software components. They were triggered by TWO separate actions done BY YOU. You are not that naive to be unable to distinguish them. And have I led the discussion "against" you then? NO. Stop reinterpreting my role as the scapegoat of the hour.

Eduard.
https://lists.debian.org/debian-project/2007/03/msg00226.html
1. You need to #include <math.h> for pow.

2. I'm still looking forward to better hints on how to make a DLL file (without using MFC) than just "look at the book", as I'm just a beginner.

To create a Win32 DLL:

1. On the File menu, click New and then click the Projects tab.
2. Specify the Project Name, Location, Workspace, Dependency, and Platforms options and then double-click the Dynamic-Link Library icon.
3. On the Project menu, point to Add to Project, then click Files to add your source code files to the project.
4. If needed, add a function called DllMain and add the initialization and termination code for the DLL to this function.
5. Make sure you have exported the entry points to your DLL by using either the __declspec(dllexport) keyword or by listing the functions in the DLL's .DEF file.
6. Prepare a header file to be included by the programs using your DLL. This header file should contain the declarations of the appropriate functions. When the header file is compiled for the DLL, use the __declspec(dllexport) keyword to export from the DLL. When the header file is compiled for a program that uses the DLL, use the __declspec(dllimport) keyword.
7. If your DLL uses __declspec(dllexport) or a .DEF file, an import library will be created automatically. In other situations you will need to prepare an import library for your programs to link with by ensuring that the /IMPLIB linker switch is set when building your DLL.
8. Build the DLL.

Example code in the DLL:

#include <iostream.h>

void show(char var[])
{
    cout << var << endl;
}

Example of calling the function (filename test.cpp):

#include "hello.h"

void main()
{
    char t[] = "Hello";
    show(t);
}

When I rebuild test.cpp to an exe, the linker gives me an "unresolved external symbol" error for show(char *const), and so on. Is there anything that I missed?

1. I am still wondering where, or in which file, I should add this __declspec(dllexport).
2. How do I link it so that when my test.cpp is executed, it can call the function in the DLL?

Your help will be very much appreciated. Thank you.

1. Declare the exported function in the DLL like this:

void __declspec(dllexport) show()
{
}

2. When you build a DLL, an import library (.lib) is generated. Add the .lib into your project so that you can call the exported functions. I would suggest you read the documentation I mentioned. Don't jump into coding.

You are welcome.
https://www.experts-exchange.com/questions/10074970/How-to-compile-a-program-too-Dll-without-using-MFC.html
Agda Template

Remix this to get started with Agda (v2.6.0.1).

module HelloAgda where

import Relation.Binary.PropositionalEquality as Eq
open Eq using (_≡_)
open Eq.≡-Reasoning using (begin_; _≡⟨⟩_; _∎)

data ℕ : Set where
  zero : ℕ
  suc  : ℕ → ℕ

_+_ : ℕ → ℕ → ℕ
0 + n = n
suc n + m = suc (n + m)

_ : 1 + 1 ≡ 2
_ =
  begin
    suc 0 + 1
  ≡⟨⟩
    suc (0 + 1)
  ≡⟨⟩
    suc 1
  ≡⟨⟩
    2
  ∎

Interactive Theorem Proving

With Tab and Shift-Tab (in a running Agda runtime) you can get assistance in providing terms of the correct type, placing the cursor next to a ? or a hole {! !}. In the following example you can hit Tab at the end of the code to obtain a list of suggestions:

data Vec (A : Set) : ℕ → Set where
  []   : Vec A zero
  _::_ : ∀ {n} (x : A) (xs : Vec A n) → Vec A (suc n)

hd : {A : Set} → {n : ℕ} → Vec A (suc n) → A
hd x = ?

Choosing the hole expression {! !}, you can further refine your goal, or ask Agda to split your definition on a variable for pattern matching by typing it inside the hole:

hd : {A : Set} → {n : ℕ} → Vec A (suc n) → A
hd x = {! x !}

Now hitting Tab again with the cursor to the right of the hole should suggest replacing the above with a constructor expression, like so:

hd : {A : Set} → {n : ℕ} → Vec A (suc n) → A
hd (x :: x₁) = ?

Pressing Tab again after the ? suggests replacing it with x, and this already type-checks. We can also go further and ask Agda to split on the variable x₁ instead, by inserting it into the hole, i.e. turn

hd : {A : Set} → {n : ℕ} → Vec A (suc n) → A
hd (x :: x₁) = {! x₁ !}

into

hd : {A : Set} → {n : ℕ} → Vec A (suc n) → A
hd (x :: []) = ?
hd (x :: (x₁ :: x₂)) = ?

and so on. With Shift+Tab while the cursor is next to a ? or a hole, you'll get a summary of the current goals and their types.

Code execution and completion is handled by (our fork of) the Agda Jupyter kernel. Unicode input is triggered by typing a leading \ and follows the emacs agda-mode translations.
https://cdn.nextjournal.com/nextjournal/agda-template
Did not help here, unfortunately.

Worx Landroid S with openHAB

Seems to work now. I haven't changed anything though.

Hi together, I use the bridge to bring the Landroid Worx S into smartVISU. The status information and the landroid/set/start and landroid/set/stop commands work fine, but with the landroid/set/mow command I get an error message. I tried the following command:

mosquitto_pub -h 192.168.81.79 -t landroid/set/mow -m stop

and got the following error:

[2018-11-02T08:21:08.785] [INFO] Mqtt - Incoming MQTT message to topic landroid/set/mow: stop
[2018-11-02T08:21:08.785] [ERROR] LandroidS - Invalid MQTT payload for topic set/mow

Can anybody help? Many thanks in advance.

Regards, Mirko

Hi, I also don't understand how to get a working ON/OFF button in OH to work with the MQTT2 binding. With these settings I have no success. Anyone has this working?

Hi, I have this problem. Any idea?

[2019-04-10T16:25:35.753] [INFO] server.ts - Starting Landroid Bridge…
[2019-04-10T16:25:35.757] [INFO] server.ts - Setting port to 3000…
[2019-04-10T16:25:35.830] [INFO] Mqtt - Connecting to MQTT Broker…
[2019-04-10T16:25:35.832] [INFO] App - Adding static files path /root/landroid-bridge/www
[2019-04-10T16:25:35.832] [INFO] Scheduler - Skipping scheduler initialization (not enabled)
[2019-04-10T16:25:35.836] [INFO] LandroidS - Initializing Landroid Cloud Service…
[2019-04-10T16:25:35.838] [WARN] IoBrokerAdapter - landroid-cloud-2.js will be replaced in next version by mqttCloud.js and worxConfig.js. Reason: better handling of models and mqttCloud supports more brands by a diffent config see also:
[2019-04-10T16:25:35.882] [INFO] Mqtt - Successfully connected to MQTT Broker!
[2019-04-10T16:25:35.931] [INFO] Mqtt - Incoming MQTT message to topic landroid/set/: stop
[2019-04-10T16:25:35.931] [ERROR] LandroidS - Unknown MQTT topic: set/
[2019-04-10T16:25:36.244] [ERROR] App - Unhandled exception: SyntaxError: Unexpected token N in JSON at position 0

Landroid has changed their API.
There's an issue created for this on GitHub:

Thanks for the immediate response! I just didn't understand whether to install the package or what to replace the mqttCloud.js file with.

You should need to do the following:
1: Edit package.json to change the referenced version of ioBroker.landroid-s from 2.5.4 to 2.5.5
2: Edit the src/LandroidS.ts file, changing import * as LandroidCloud from "iobroker.landroid-s/lib/landroid-cloud-2"; to import * as LandroidCloud from "iobroker.landroid-s/lib/mqttCloud"; as noted in the pull request.
3: Execute npm run grunt to rebuild and pull the new stuff.

Sorry, for clarification: I used landroid-bridge to associate it with openHAB… thank you very much, I succeeded!

Hi, I'd like to step in here. (openHAB 2.4, Raspberry 3B, tested with Node 10.15 and 8.12) When I try to start the landroid-bridge, it shows contradictory log messages:

[2019-04-14T11:26:54.011] [INFO] LandroidS - Initializing Landroid Cloud Service...
[2019-04-14T11:26:54.034] [INFO] Mqtt - Successfully connected to MQTT Broker!
Mqtt url: undefined
[2019-04-14T11:26:55.096] [INFO] IoBrokerAdapter - mower 0 selected

I also noticed in the logfile of my mosquitto service:

1555231112: New connection from 127.0.0.1 on port 1883.
1555231112: New client connected from 127.0.0.1 as mqttjs_2512311c (c1, k60, u'openhabian').
1555231138: Socket error on client mqttjs_2512311c, disconnecting.

My config.json:

"mqtt": {
    "enable": true,
    "url": "mqtt://<user>:<pw>@openhabianpi",
    "topic": "landroid"
},

Due to build errors I changed import * as LandroidCloud from "iobroker.landroid-s/lib/landroid-cloud-2"; to import * as LandroidCloud from "iobroker.landroid-s/lib/mqttCloud"; in LandroidS.ts, which helped. I also tried different hosts in the config.json (127.0.0.1, localhost, 192.168.x.x) but no change at all. Other clients, such as my ESPs around the network, are working with the same MQTT credentials flawlessly. Has someone also experienced this error?
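The three numbered steps above boil down to two one-line edits plus a rebuild. A hypothetical shell session against stand-in files illustrates them (the sed patterns and file stubs are mine, not from the thread; run the real edits inside your landroid-bridge checkout):

```shell
# Fabricate minimal stand-in files so the edits can be demonstrated end-to-end.
mkdir -p src
printf '"iobroker.landroid-s": "^2.5.4",\n' > package.json
printf 'import * as LandroidCloud from "iobroker.landroid-s/lib/landroid-cloud-2";\n' > src/LandroidS.ts

# Step 1: bump the referenced ioBroker.landroid-s version
sed -i 's/"\^2\.5\.4"/"^2.5.5"/' package.json
# Step 2: switch the import to the new mqttCloud module
sed -i 's#lib/landroid-cloud-2#lib/mqttCloud#' src/LandroidS.ts
# Step 3 (in the real checkout): npm run grunt

cat package.json src/LandroidS.ts
```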
I also built and started the landroid-bridge on my Windows PC - the same error. Maybe it's some changed method in the new mqttCloud library?

Have you also updated package.json from "iobroker.landroid-s": "^2.5.4", to "iobroker.landroid-s": "^2.5.5",?

Yes, I've tried that, and rebuilt after that. I think the ioBroker part is for the communication between the bridge and the Worx servers. I can see all of my robot's readings in the web UI, so that is working (I think it was also working with 2.5.4). The part that isn't working is the MQTT communication between the landroid-bridge and the openHAB instance (on the same Raspberry Pi). I used the Mosquitto 'installation wizard' from openHAB's GUI and it's working with all of my devices.

Edit: The error message comes from the ioBroker code, at line 74. I opened an issue regarding that. Maybe we'll find a solution.

How did you do that without MQTT, but with the data accessed via the cloud? Do you have the configuration for it? The code from MeisterTR is for ioBroker; can I use it for openHAB? I have this installed.

I'm not using openHAB, but the data is stored in an SQLite database as far as I can see, so you need to read this data somehow with a driver from openHAB.

I've already wondered; this is for openHAB and not for ioBroker.

You misunderstood me. That was not a criticism that you post something from ioBroker in this forum here; I was just amazed that it runs differently for you than openHAB does for me. I find it childish and a pity that you are deleting your posts now.

Hi, I'd like to stop my e-Dolly: once, by accident, the irrigation was started while Shaun was out there. As Shaun can't get home any longer because of valves opening close to the wire, I'd just like to stop Shaun where it is with landroid/set/mow, but this command doesn't work for me :-/ Anyone having this running???
https://community.openhab.org/t/worx-landroid-s-with-openhab/43656/52
On 11.10.2016 13:50, Vladimir Sementsov-Ogievskiy wrote:
> On 01.10.2016 17:34, Max Reitz wrote:
>> On 30.09.2016 12:53, Vladimir Sementsov-Ogievskiy wrote:
>>> Create block/qcow2-bitmap.c
>>> Add data structures and constraints accordingly to docs/specs/qcow2.txt
>>>
>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsement...@virtuozzo.com>
>>> ---
>>>  block/Makefile.objs  |  2 +-
>>>  block/qcow2-bitmap.c | 47 +++++++++++++++++++++++++++++++++++++++++++++++
>>>  block/qcow2.h        | 29 +++++++++++++++++++++++++++++
>>>  3 files changed, 77 insertions(+), 1 deletion(-)
>>>  create mode 100644 block/qcow2-bitmap.c
>>>
>>> diff --git a/block/Makefile.objs b/block/Makefile.objs
>>> index fa4d8b8..0f661bb 100644
>>> --- a/block/Makefile.objs
>>> +++ b/block/Makefile.objs
>>> @@ -1,5 +1,5 @@
>>>  block-obj-y += raw_bsd.o qcow.o vdi.o vmdk.o cloop.o bochs.o vpc.o vvfat.o dmg.o
>>> -block-obj-y += qcow2.o qcow2-refcount.o qcow2-cluster.o qcow2-snapshot.o qcow2-cache.o
>>> +block-obj-y += qcow2.o qcow2-refcount.o qcow2-cluster.o qcow2-snapshot.o qcow2-cache.o qcow2-bitmap.o
>>>  block-obj-y += qed.o qed-gencb.o qed-l2-cache.o qed-table.o qed-cluster.o
>>>  block-obj-y += qed-check.o
>>>  block-obj-$(CONFIG_VHDX) += vhdx.o vhdx-endian.o vhdx-log.o
>>> diff --git a/block/qcow2-bitmap.c b/block/qcow2-bitmap.c
>>> new file mode 100644
>>> index 0000000..cd18b07
>>> --- /dev/null
>>> +++ b/block/qcow2-bitmap.c
>>> @@ -0,0 +1,47 @@
>>> +/*
>>> + * Bitmaps for the QCOW version 2 format
>>> + *
>>> + * Copyright (c) 2014-2016 Vladimir Sementsov-Ogievskiy
>>> + *
>>> + * This file is derived from qcow2-snapshot.c, original copyright:
>>> + * Copyright (c) 2004-2006 Fabrice Bellard
[...]
>>> + * BME here means Bitmaps Extension and used as a namespace for
>>> + * _internal_ constants. Please do not use this _internal_ abbreviation for
>>> + * other needs and/or outside of this file.
>>> + */
>>> +
>>> +/* Bitmap directory entry constraints */
>>> +#define BME_MAX_TABLE_SIZE 0x8000000
>>> +#define BME_MAX_PHYS_SIZE 0x20000000 /* 512 mb */
>> I suppose BME_MAX_TABLE_SIZE (8M) is greater than BME_MAX_PHYS_SIZE (512 MB) divided by the cluster size (>= 512; 512 MB / cluster_size <= 1 MB) because fully zero or one clusters do not require any physical space?
>>
>> Makes some sense, but I can see that this might make give some trouble when trying to serialize overly large bitmaps. But I guess that comes later in this series, so I'll wait for that point.
>>
>> Another thing is that 512 MB is rather big. It gets worse: The bitmap may only require 512 MB on disk, but with a maximum table size of 8 MB, it can require up to 8M * cluster_size in memory (with just 64 MB of disk space!) by using the "read as all zeroes" or "read as all ones" flags. With the default cluster size of 64 kB, this would be 512 GB in RAM. That sounds bad to me.
>>
>> Well, it is probably fine as long as the bitmap is not auto-loaded... But we do have a flag for exactly that. So it seems to me that a manipulated image can easily consume huge amounts of RAM on the host.
>>
>> So I think we also need some sane limitation on the in-RAM size of a bitmap (which is BME_MAX_TABLE_SIZE * cluster_size, as far as I understand). The question of course is, what is sane? For a server system with no image manipulation possible from the outside, 1 GB may be completely fine. But imagine you download some qcow2 image to your laptop. Then, 1 GB may not be fine, actually.
>>
>> Maybe it would make sense to use a runtime-adjustable limit here?
>
> Actualy BME_MAX_PHYS_SIZE is this limit:
> in check_constraints we have
>
>     uint64_t phys_bitmap_bytes =
>         (uint64_t)h->bitmap_table_size * s->cluster_size;
>
> ...
>
>     (phys_bitmap_bytes > BME_MAX_PHYS_SIZE) ||

OK, so BME_MAX_PHYS_SIZE is actually supposed to be the limit of the size of the bitmaps in RAM? And I suppose it is going to be calculated differently in the future once qemu has sparse bitmap support?

My fault, then; I thought BME_MAX_PHYS_SIZE was supposed to be the limit of the size on disk.

OK, makes sense then, but the question whether a runtime-adjustable limit would make sense still remains. OTOH, this is something that can always be added later on.

Max
https://www.mail-archive.com/qemu-devel@nongnu.org/msg405143.html
Rolling with Ruby on Rails, Part 2

The final task is to add the ability to display only those recipes in a particular category. I'll take the category displayed with each recipe on the main page and turn it into a link that will display only the recipes in that category. To do this, I'll change the recipe list view template to accept a URL parameter that specifies what category to display, or all categories if the user has omitted the parameter.

First, I need to change the list action method to retrieve this parameter for use by the view template. Edit c:\rails\cookbook\app\controllers\recipe_controller.rb and modify the list method to look like this:

def list
   @category = @params['category']
   @recipes = Recipe.find_all
end

Then edit c:\rails\cookbook\app\views\recipe\list.rhtml to look like this:

<table border="1">
 <tr>
  <td width="40%"><p align="center"><i><b>Recipe</b></i></td>
  <td width="20%"><p align="center"><i><b>Category</b></i></td>
  <td width="20%"><p align="center"><i><b>Date</b></i></td>
 </tr>

 <% @recipes.each do |recipe| %>
  <% if (@category == nil) || (@category == recipe.category.name) %>
   <tr>
    <td>
     <%= link_to recipe.title, :action => "show", :id => recipe.id %>
     <font size=-1>
      <%= link_to "(delete)", {:action => "delete", :id => recipe.id}, :confirm => "Really delete #{recipe.title}?" %>
     </font>
    </td>
    <td>
     <%= link_to recipe.category.name, :action => "list", :category => "#{recipe.category.name}" %>
    </td>
    <td><%= recipe.date %></td>
   </tr>
  <% end %>
 <% end %>
</table>

There are two changes in here that do all the work. First, this line:

<% if (@category == nil) || (@category == recipe.category.name) %>

decides whether to display the current recipe in the loop. If the category is nil (there was no category parameter on the URL), or if the category from the URL parameter matches the current recipe's category, it displays that recipe.
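The condition just described reads naturally as a tiny predicate; here is a plain-Ruby sketch (the helper name is mine, not from the article):

```ruby
# Show a recipe when no category parameter was given, or when the
# parameter matches the recipe's category (mirrors the template's if).
def show_recipe?(category_param, recipe_category)
  category_param.nil? || category_param == recipe_category
end

puts show_recipe?(nil, "Snacks")       # no ?category= on the URL => true
puts show_recipe?("Snacks", "Snacks")  # matching category => true
puts show_recipe?("Soups", "Snacks")   # different category => false
```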
Second, this line:

<%= link_to recipe.category.name, :action => "list", :category => "#{recipe.category.name}" %>

creates a link back to the list action that includes the proper category parameter.

Browse to and click on one of the Snacks links. It should look like Figure 11.

Figure 11. Showing only snacks

That's it! This is a reasonably functional online cookbook application developed in record time. It's a functional skeleton just begging for polish.

Wading through all of the words and screenshots in this article may have obscured (at least somewhat) exactly what this code can do and in what amount of developer time. Let me present some statistics to try to put it all into perspective. Fortunately, Rails has some built-in facilities to help answer these questions. Open up a command window in the cookbook directory (c:\rails\cookbook) and run the command:

rake stats

Your results should be similar to Figure 12. Note that LOC means "lines of code."

Figure 12. Viewing development statistics

I won't give a detailed description of each number produced, but the last line has the main figure I want to point out:

Code LOC: 47

This says that the actual number of lines of code in this application (not counting comments or test code) is 47. It took me about 30 minutes to create this application! I could not have come even close to this level of productivity in any other web app development framework that I have used.

Maybe you're thinking that this is an isolated experience using an admittedly trivial example. Maybe you're thinking that this might be OK for small stuff, but it could never scale. If you harbor any such doubts, the next section should lay those to rest.

Rails is a relatively young framework. As of this writing, it's been barely six months since the first public release. Yet it debuted with such a stunning feature set and solid stability that a vibrant developer community quickly sprang up around it.
Within this time frame, several production web applications have been deployed that were built with Ruby on Rails.

From the site itself: Basecamp is a web-based tool that lets you manage projects (or simply ideas) and quickly create client/project extranets. It lets you and your clients (or just your own internal team) keep your conversations, ideas, schedules, to-do lists, and more in a password-protected central location. Basecamp was the first commercial web site powered by Ruby on Rails. David Heinemeier Hansson, the author of Rails, developed it. At its deployment, it contained 4,000 lines of code with two months of development by a single developer. In fall 2004, Basecamp stated that it had passed the 10,000-user mark. It considers the actual number of registered users to be proprietary information, but the home page currently states that it has "tens of thousands" of users.

43 Things is a goal-setting social software web application. It currently has 6,000 registered users and hundreds of thousands of unregistered visitors. 43 Things has 4,500 lines of code that were developed in three months by three full-time developers.

Ta-da Lists is a free online service that implements simple, sharable to-do lists. It features a highly responsive user interface that uses XMLHttpRequest to minimize waiting for the server. Ta-da Lists came from one developer using one week of development time producing 579 lines of code.

Snow Devil is an e-commerce site specializing in snowboards and related equipment. It opened for business only recently, so there is no usage information available at this time. However, it comprises 6,000 lines of code created by two developers in four months.

CD Baby is a very successful e-tailer of independent music. In business since 1998, it lists 82,443 artists that together have sold 1.2 million CDs, paying $12 million back into the artists' pockets.
The CD Baby web site previously involved an increasingly unmanageable 90,000 lines of PHP code. Its authors are in the process of rewriting it in Ruby on Rails. It's too early to find any development information, but the owner of CD Baby is publicly blogging about the process and progress of the conversion.

When all is said and done, good design will be more important than the framework in determining how your application performs. Think carefully about your database design and how its tables are indexed. Analyze your data access patterns and consider some strategic denormalization of data. Look for opportunities to cache preprocessed data. Rails has a lot of powerful features to make it easy to prototype and develop applications quickly, which will leave you with more time to think about your application's features and how to tune it for performance.

Rails has many features that I have not used in this two-part article. I'd like to mention a few of them (with links to more information) to give you a more rounded view of the Rails toolkit.

Caching is a cheap way to speed up your application by saving the results of previous processing (calculations, renderings, database calls, and so on) so as to skip the processing entirely next time. Rails provides three types of caching, in varying levels of granularity.

To make sure your data is correct and complete before writing it to the database, you must validate it. Rails has a simple mechanism that allows your web application to validate a data object's data before the object updates or creates the appropriate fields in the database. Read the validation how-to or go straight to the validation API documentation.

ActiveRecord callbacks are hooks into the life cycle of a data object that can trigger logic before or after an operation that alters the state of the data object. ActiveRecord also supports transactions.
Quoted straight from the documentation:

Transactions are protective blocks where SQL statements are only permanent if they can all succeed as one atomic action. The classic example is a transfer between two accounts where you can only have a deposit if the withdrawal succeeded and vice versa. Transaction enforce the integrity of the database and guards the data against program errors or database break-downs. So basically you should use transaction blocks whenever you have a number of statements that must be executed together or not at all.

For example, consider the code:

transaction do
  david.withdrawal(100)
  mary.deposit(100)
end

Rails was built with testing in mind and provides support for testing your web application. An extensive online tutorial shows how to test a Rails web application.

Generators are the helper scripts that you can use to generate code for your application. You have already used generators to create new controllers and models, and at the beginning of this article I showed you how to use a new generator to create scaffolding. Rails also supports user-created add-on generators. For example, Tobias Luetke has written a Login Generator that creates all the code for easily adding authentication, users, and logins to your Rails app.

By now, everyone should know the importance of good security in web applications. The Ruby on Rails web site has an online security manual that describes common security problems in web applications and how to avoid them with Rails.

Rails is not your run-of-the-mill, proof-of-concept web framework. It is the next level in web programming, and the developers who use it will make web applications faster than those who don't; single developers can be as productive as whole teams. Best of all, it's available right now, under an MIT license.
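The all-or-nothing behavior described in the transactions quote above can be mimicked in a few lines of plain Ruby. This is a toy sketch, not ActiveRecord; the method and data are invented for illustration:

```ruby
# Toy transaction: snapshot the balances, run the block, and restore the
# snapshot if anything in the block raises.
def toy_transaction(accounts)
  snapshot = accounts.dup
  yield
rescue
  accounts.replace(snapshot)   # roll back on error
end

accounts = { "david" => 100, "mary" => 0 }
toy_transaction(accounts) do
  accounts["david"] -= 100
  raise "deposit failed"       # simulate a failure mid-transfer
  accounts["mary"] += 100
end
puts accounts.inspect          # balances are back to their initial state
```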
I believe that there hasn't been an improvement in productivity like this in recent programming history.
http://www.onlamp.com/pub/a/ruby/archive/rails2.html?page=3&x-order=date&x-maxdepth=0
Qt 4.7.2 and VS2008 qglobal.h error

I just downloaded and built the 4.7.2 release of Qt and Qt Creator 2.1.0 final. I have MSVS2008 Professional and used these configure settings:

Everything built fine in 4.7.2, just as it did with 4.7.1; I used the same configure parameters for 4.7.1 and had no issues whatsoever. I decided to give Qt Creator 2.1.0 a chance, but after I built a small test project I have, I got these warnings and errors:

Warning 32 warning C4530: C++ exception handler used, but unwind semantics are not enabled. Specify /EHsc c:\Program Files\Microsoft Visual Studio 9.0\VC\include\xlocale 342
../../src/corelib/global/qglobal.h(2388) : error C2825: 'T': must be a class or namespace when followed by '::'

As I got these issues on a project I had used before with 4.7.1 and MSVC2008 Pro, I decided to switch from Creator 2.1.0 to MSVC2008 to see what was going on. Apparently, something happened with the Qt VS add-in that disabled the "C++ Exceptions" option in my project settings, and I had to enable it (weird, really), but the error line is this one:

{ return T::dynamic_cast_will_always_fail_because_rtti_is_disabled; }

Somehow two things happened: the compiler is no longer recognizing C++ templates (maybe some obscure MSVC option got disabled, like the C++ exceptions, which is odd since I didn't make any changes to the project configuration), and the error is on the rtti_is_disabled macro, even though I specifically built Qt with the -rtti flag, just like 4.7.1, which worked fine. I also checked the compiler settings, and RTTI is in fact enabled.

And now, the worst part is, I changed my Qt folder env variable from \4.7.2 to \4.7.1, and now I get the same errors on 4.7.1, which didn't happen before, and now ALL of my Qt projects aren't building!

Has anyone had this issue before? Please help.

Can somebody please help me?
Update: I just found out a couple of issues. I don't know if this happened because I installed Qt Creator 2.1.0, or if it's some problem between the VS add-in and Qt 4.7.2, but in case something like this happens to someone else, please check for these issues. On VS2008 Pro, go to the Project -> Properties menu and check the following:

Configuration Properties -> C/C++ -> General: that your additional includes aren't messed up. The add-in added the ActiveQt libraries to Qt Configuration and the libs in that path, when I never enabled them, or have even used them in my life.

Configuration Properties -> C/C++ -> Preprocessor: that your preprocessor directives are correct. Somehow the directive QT_NO_DYNAMIC_CAST got added and caused my build to fail, because I built Qt with the -rtti flag and enabled RTTI on my project in Configuration Properties -> C/C++ -> Language -> Enable Runtime Type Info = Yes.

Configuration Properties -> C/C++ -> Command Line: that your additional options for your compiler are correct. I really don't add any extra options to the compiler, but somehow there were 3 extra options added there (-Zm200 -w34100 -w34189). I don't know if they were added by the add-in or not.

I was able to get my project going again with these fixes, but still, I'm going to try to find out if there are some issues being introduced to the project when I open it from Qt Creator 2.1.0 as well, because even though I reconfigured VS2008, I still get these errors in Qt Creator.

--- EDIT: ---
I can now confirm that these problems were introduced by the add-in. I added the new Qt folder (4.7.2) in Qt -> Qt Options, and when I change the Qt version on my project from 4.7.1 to 4.7.2, the add-in messes up all of my includes, libraries, preprocessor directives, etc. So if someone uses the plugin, use the steps above, because those are the settings that get messed up.
I only have to see how to add all of the MSVC options mentioned above to Qt Creator 2.1.0 so that I can also build my app in there as well.
https://forum.qt.io/topic/4264/qt-4-7-2-and-vs2008-qglobal-h-error
On 29/12/2014 at 17:34, xxxxxxxx wrote:

User Information:
Cinema 4D Version: 13
Platform: Windows ;
Language(s) : C++ ;

---------

Hi,
I've been trying to understand how to pass command line arguments to C4D in C++, but nothing I'm trying is working. Every time I try to pass a value to args->argv, C4D crashes when it starts. And this code that Maxon put there does not print anything no matter how I edit it:

case C4DPL_COMMANDLINEARGS:
{
    C4DPL_CommandLineArgs *args = (C4DPL_CommandLineArgs* )data;
    for (LONG i=0; i<args->argc; i++)
    {
        if (!args->argv[i]) continue;
        if (!strcmp(args->argv[i],"--help") || !strcmp(args->argv[i],"-help"))
        {
            // do not clear the entry so that other plugins can make their output!!!
            GePrint("\x01-SDK is here :-)");
        }
        else if (!strcmp(args->argv[i],"-SDK"))
        {
            args->argv[i] = NULL;
            GePrint("\x01-SDK executed:-)");
        }
        else if (!strcmp(args->argv[i],"-plugincrash"))
        {
            args->argv[i] = NULL;
            *((LONG* )0) = 1234;
        }
    }
}
break;

I'm reading how to use command line arguments in raw C++ code, but they never seem to work in C4D. It always crashes. Can someone please shed some light on how these things work, and tell me how to pass values into them in C4D so that they print to the console when C4D launches?

Thanks,
-ScottA

On 29/12/2014 at 18:20, xxxxxxxx wrote:

The fundamental thing is that command line arguments need to exist at the call to run C4D so that the command line executer can pass them into C4D. I do not think that one can 'add' or 'change' command line arguments from a plugin since C4D has already started running. Reading the information for C4DPL_COMMANDLINEARGS, it simply allows a plugin to parse the command line arguments. You can't pass some to it or make alterations.
I suspect that your args->argv[i] = NULL is a no-no. You will need to pass the arguments on the command line (or in the Shortcut Target of the icon - in Windows, for instance), unfortunately. Or, you need to write a system-dependent C++ program that runs C4D with the desired arguments and process-detaches so that your program can close while C4D continues to run.

Also remember that argv[0] is always the executable (that is, "Cinema4D.exe --help" is argv[0] = "Cinema4D.exe" and argv[1] = "--help").

On 29/12/2014 at 19:01, xxxxxxxx wrote:

Hi Robert.
The for() loop never returns any data inside of it. And I can't even get the default value in argv[0] to print without crashing:

case C4DPL_COMMANDLINEARGS:
{
    C4DPL_CommandLineArgs *args = (C4DPL_CommandLineArgs* )data;
    GePrint(args->argv[0]); //<---crashes!!
    ...

args->argv[0] is a char array. I wonder if I need to cast it to a String type somehow to stop it from crashing? I don't understand how this example that Maxon wrote is supposed to help; the code never does anything!

In Python we can add data elements into the argv[] array like this:

import c4d
import sys

def PluginMessage(id, data):
    if id == c4d.C4DPL_COMMANDLINEARGS:
        # Create some arguments and assign values to them
        myList = ['first', 33]
        myValue = 199

        # Assign the above arguments to argv
        sys.argv = [myList, myValue]

        # The total number of args
        total = len(sys.argv)

        # Get the arguments list in string format
        # and print them to the console when C4D starts up
        cmdargs = str(sys.argv)
        print "The total numbers of args: %d " % total
        print "Args list: %s " % cmdargs
        return True
    return False

How do I do this same thing with C++?
-ScottA

On 29/12/2014 at 19:16, xxxxxxxx wrote:
It may be as simple as that and then you need to find out why it is NULL if that is the case. If it is non-null, then I would print out the args->orig to see the entire command line string. I do *love* the fact that the example of this in main.cpp for the cinema4dsdk is commented out. Not very reassuring. We might need to wait for an 'official' response to know the facts here. On 29/12/2014 at 19:29, xxxxxxxx wrote: Quick news: If there are *NO* arguments, C4DPL_COMMANDLINEARGS *never* gets called. Arguments must exist and it appears that the executable (argv[0]) is automatically stripped. So, I called C4D with "_<_path_>_/Cinema4D.exe" --hello and "--hello" was printed with GePrint(args->orig). Therefore I would definitely check for data != NULL and make sure to have arguments on the command line. Do not understand your crash though. I did not crash in either case (no argument or an argument with the modification in the example code). Even using "-SDK" worked. As per the example code, I would 'return True;' instead of using break and returning False. On 29/12/2014 at 19:42, xxxxxxxx wrote: As usual. I have no idea what you're saying. You called C4D with <path>/ .... <--- Ummm whut? Oh wait...you mean the path that's in your icon that launches C4D? I tried adding --hello and the used this: if (!strcmp(args->argv[0], "--hello") ) GePrint(args->argv[0] ); It prints ok to the console now. But it also gives an unknown arguments error too On 29/12/2014 at 20:10, xxxxxxxx wrote: That is okay. I think that is just an automatic response to an 'unknown' argument. You will need to 'parse' (and possibly return True) it for it be considered a 'known' argument. Now you are getting there. On 29/12/2014 at 20:14, xxxxxxxx wrote: Thanks for the help Robert. I have added a -hello argument to the "target" field in my C4D desktop icon. 
And this code prints it in the console when C4D launches case C4DPL_COMMANDLINEARGS: { C4DPL_CommandLineArgs *args = (C4DPL_CommandLineArgs* )data; for (LONG i=0;i<args->argc;i++) { if (!args->argv[i]) continue; if (!strcmp(args->argv[i],"-hello")) { args->argv[i] = NULL; //Kills the unknown argument console error GePrint("found -hello argument in the C4D icon taget field"); } } break; But now how do I get the data from some other source like that python file? And how do I use this kind of thing: *((LONG* )0) = 1234; Usually this kind of code is used to cast void *data to a LONG data type. But how do I use this with command line arguments? On 30/12/2014 at 05:32, xxxxxxxx wrote: Hello, when you pass the command line argument " -plugincrash " the line *((Int32* )0) = 1234; will – as the name suggests – cause a crash (as you assign a value to a nullptr). What do you mean with "how do I get the data from some other source like that python file?" best wishes, Sebastian On 30/12/2014 at 08:25, xxxxxxxx wrote: Hi Sebastian. I'm trying to understand what use these command line args can be. It's been said that it can be used to communicate with other programs. But I don't understand how that works? 
I did finally manage to figure out how to insert values into the args array: //Add -a1 and -a2 arguments to the "target" field in your C4D desktop icon //This code assigns values to them and prints it in the console when C4D launches case C4DPL_COMMANDLINEARGS: { char *myString = "Text is fun"; LONG myValue = NULL; C4DPL_CommandLineArgs *args = (C4DPL_CommandLineArgs* )data; for (LONG i=0;i<args->argc;i++) { if (!args->argv[i]) continue; //If this argument is found in the program's shortcut if (!strcmp(args->argv[i],"-a1") || !strcmp(args->argv[i],"-a2")) { args->argv[i] = NULL; //Kills the "unknown argument" console error GePrint("found -a1 and -a2 arguments in the C4D icon's target field"); //This puts the myString char value into the args array at index 0 args->argv[0] = myString; //This puts numbers (as chars) into the args array at index 1 char *myArr = {"5555"}; args->argv[1] = myArr; //This will convert the chars to a LONG if needed String s = String(myArr, STRINGENCODING_8BIT); GeData num = StringToNumber(s, FORMAT_LONG, NULL, NULL); myValue = num.GetLong(); } //Finally...Print all the values we stored in the args array when C4D launches GePrint("args[] index# " + LongToString(i) + " : " + args->argv[i]); } } break; I'm still missing the big picture though. How can doing this allow me to communicate with other programs? On 31/12/2014 at 08:34, xxxxxxxx wrote: Computers can execute computer programs. One way to interact with your computer is the command line (like cmd.exe on windows, or the terminal on mac). Using the command line you can start and use computer programs. If it is a well known program it's enough to call the program by it's name. For other programs you have to call the "exe" file. For example you can call "tracert". This will call the program "tracert". The string "" is the first argument of this call, so it is a command line argument. 
You can also start Cinema from the command line; navigating to the Cinema folder and calling "CINEMA 4D.exe". So when you start a program (using a double click on an icon or starting a program from another program) you can hand over command line arguments that will tell the computer program something. The list of command line arguments can be found in an array that is handed to the program in it's main function (like this). So adding anything to that array inside you program is not really useful. On 31/12/2014 at 11:17, xxxxxxxx wrote: I've been searching around the web for the possible uses for these things. But the only things I've seen them used for are: -To set options when a program launches -To launch another program at the same time I personally don't consider launching another application as "communication" with it. So once again. I think I'm a victim of misleading terms. To me. The word "communication" means that the two programs can share data with each other. Like for example. When using pipes. I've been looking for a way to get two programs to really "communicate" with each other easier than using clumsy things like pipes and ports. And I was hoping this might be an option. I'm still trying to figure out how to get a Qt gui to communicate with C4D. But sadly, this is another dead end. I did learn something new from this though. These commandline args are good for creating launch options for C4D. Like letting you launch to a custom layout. So it wasn't a complete waste of time. 
//This is how to set a custom layout when C4D launches using a commandline argument
//The arg -Lscripting needs to be added to your C4D desktop icon's target field
case C4DPL_COMMANDLINEARGS:
{
    C4DPL_CommandLineArgs *args = (C4DPL_CommandLineArgs* )data;
    for(LONG i=0;i<args->argc;i++)
    {
        if(!args->argv[i]) continue;

        //If there's an argument named -Lscripting in your C4D desktop icon's target field
        if(!strcmp(args->argv[i],"-Lscripting"))
        {
            args->argv[i] = NULL; //Kills the unknown argument console error

            //Look for a custom layout named "scripting.l4d" in the AppData's library\layout folder
            //If it exists then set this layout when C4D launches
            BaseDocument *doc = GetActiveDocument();
            Filename file = GeGetC4DPath(C4D_PATH_LIBRARY_USER) + "/layout/scripting.l4d";
            Bool fileExists = GeFExist(file, FALSE);
            if(fileExists) LoadFile(file);
            else GePrint("Layout file was not found!!");
        }
        else GePrint("-Lscripting Commandline Arg Was Not Found!");
    }
}
break;

Thanks for the help,
-ScottA
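The argument-scanning loop both snippets run over `args->argv` is not C4D-specific; the same idea can be sketched outside the SDK in plain Python against a `sys.argv`-style list. The function name and the way the flag is "consumed" by blanking the slot are just illustrations of the pattern from the thread, not part of any real API:

```python
import sys

def find_layout_flag(argv, flag="-Lscripting"):
    """Scan an argv-style list the way the C4D loop scans args->argv.

    Returns True if the flag is present, and blanks that slot out (None),
    mimicking how the snippet sets args->argv[i] = NULL to suppress the
    "unknown argument" console error.
    """
    found = False
    for i, arg in enumerate(argv):
        if arg is None:
            continue            # already-consumed slot, like the C4D loop
        if arg == flag:
            argv[i] = None      # "consume" the argument
            found = True
    return found

if __name__ == "__main__":
    print(find_layout_flag(sys.argv))
```

Calling it with `["app.exe", "-Lscripting", "file.c4d"]` returns True and leaves the flag slot set to None, just as the C++ loop leaves a NULL behind.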
https://plugincafe.maxon.net/topic/8393/10968_passing-commandline-args
zapi(7) zapi Manual - zapi/1.2.0

Name
zapi - ØMQ C Binding

Synopsis
#include <zapi.h>
cc ['flags'] 'files' -lzmq -lzapi ['libraries']

Description

Scope and goals
zapi is maintained by Pieter Hintjens. Its other authors and contributors are listed in the AUTHORS file. It is held by the ZeroMQ organization at github.com. If you contribute a class to zapi, you must be willing to maintain it as long as there are users of it. Code with no active maintainer will in general be deprecated and/or removed.

Using zapi

Building and installing
zapi uses autotools for packaging. To build from git (all example commands are for Linux): you will need the pkg-config, libtool, and autoreconf packages. Set the LD_LIBRARY_PATH to /usr/local/libs unless you install elsewhere. After building, you can run the zapi selftests:

Linking with an application
Include zapi.h in your application and link with libzapi. Here is a typical gcc link command:

You should read zapi.h. This file includes zmq.h and the system header files that typical ØMQ applications will need. The provided c shell script lets you write simple portable build scripts:

The class model
zapi consists of classes, each class consisting of a .h and a .c. Classes may depend on other classes. zapi.h includes all classes' header files, all the time. For the user, zapi forms one single package. All classes start by including zapi.h. All applications that use zapi start by including zapi.h. zapi methods return 0 on success, -1 on failure. Private/static functions in a class are named s_functionname and are not exported via the header file. All classes (with some exceptions) have a test method called zclass_test.

Authors
Copyright (c) 1991-2010 iMatix Corporation and contributors.

License
LGPLv3+: GNU LGPL 3 or later <>. This is free software: you are free to change it and redistribute it. There is NO WARRANTY, to the extent permitted by law. For details see the files COPYING and COPYING.LESSER included with the zapi distribution.
http://czmq.zeromq.org/manual:zapi
Breaking Out of an Infinite Loop in Your C Language Program

An unintentional endless loop is a pain. But sometimes a C program contains an endless loop on purpose. This type of construct may seem odd, yet the basis of many modern programs is that they sit and spin while they wait for something to happen. The loop may look like this:

for(;;)
{
    check_Keyboard();
    check_Mouse();
    check_Events();
    check_System();
}

Notice that the conditions inside the parentheses after the for keyword are missing, which is okay. The result is an endless loop in which the statements are checked repeatedly, one after the other. The program is looking for activity somewhere. When activity is found, the program goes off and does something interesting. But most of the time, the program just sits in this type of loop, waiting for something to happen. (The typical word processor may perform thousands of these loops as it waits between keystrokes as you're typing.)

Enter this source code and save it to disk. Then compile and run the program:

#include <stdio.h>

int main()
{
    char ch;

    puts("Typing Program");
    puts("Type away:");
    for(;;)
    {
        ch=getchar();
    }
    return(0);
}

Yes, you can type. And you can see your text on the screen. But how do you stop? To stop, you have to break the endless loop, which can be done by pressing Ctrl+C. But that isn't the way you want your programs to work. Instead, an exit condition must be defined for the loop, which is where the break keyword comes into play.

The C language developers knew that, in some instances, a loop must be broken based on conditions that could not be predicted or set up inside the for statement. So, in their wisdom, they introduced the break keyword. What break does is to immediately quit a loop (any C language loop, not just for loops).
When the computer sees break, it just assumes that the loop is done and continues as though the loop's ending condition was met:

#include <stdio.h>

int main()
{
    char ch;

    puts("Typing Program");
    puts("Type away; press '~' to quit:");
    for(;;)
    {
        ch=getchar();
        if(ch=='~')
        {
            break;
        }
    }
    return(0);
}

Now an exit condition is defined. The if comparison inside the loop checks to see whether a ~ (tilde) character is entered. If so, the loop is halted by the break statement.

Change your source code so that it matches what was just shown. Compile and run. Now, you can halt the program by typing the ~ character.

Note that the if statement can also be written without the braces:

if(ch=='~') break;

This line may be a bit more readable than using braces.
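The same pattern, an intentionally endless loop with an explicit `break` on an exit condition, carries over almost verbatim to other languages. Here is a sketch in Python, using the same `~` sentinel as the C listing (reading from a string stands in for `getchar()`):

```python
def read_until_tilde(stream):
    """Collect characters until '~' (or end of input), like the C loop."""
    collected = []
    chars = iter(stream)
    while True:                     # endless loop, the counterpart of for(;;)
        ch = next(chars, None)      # None signals end of input
        if ch is None or ch == '~':
            break                   # the explicit exit condition
        collected.append(ch)
    return ''.join(collected)
```

As in the C version, the loop header promises nothing about termination; all the exit logic lives in the `break` condition inside the body.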
http://www.dummies.com/how-to/content/breaking-out-of-an-infinite-loop-in-your-c-languag.navId-323181.html
The upside of this kind of architecture is that it is very simple and indeed often good enough as a starting point. However, the problem is that over time, when the application gets more complex, this kind of approach does not scale too well. Often you end up with Services that call 6 - 8 other Services. Many of these Services have no clear responsibilities but are built in an ad-hoc manner as wrappers of existing Services, adding tiny bits of logic needed for some specific new feature.

So how to avoid or dig yourself out from this kind of architecture? One approach I have found very useful is looking at the unit tests when writing them. By listening to what my tests are trying to tell me I will be able to build a much better design. This is nothing else but the "Driven" part in TDD, which everybody knows but is still quite hard to understand. Indeed, it is quite easy to write tests before production code but at the same time not let these tests have any significant influence on the design; when you do let them drive it, as a result not only are the tests better but also the production code.

In the following text I use "spec" to refer to a single test class/file.

Rule 1: when a spec is more than 120 lines then split it

When the spec is too long, I look for test names that keep describing the same behavior:

def "when gets payment methods for EUR then returns single card method"()
def "when gets payment methods for non-EUR then returns debit and credit as separate methods"()
def "when gets payment methods then returns only enabled methods"()
def "when gets payment methods for a known user then orders them based on past usage"()
def "when gets payment methods for transfer amount > 2000 GBP then returns bank transfer as the first method"()
...

These tests all repeat "when gets payment methods". So maybe we can create a new spec for getting payment methods, and we can just drop the duplicating prefix from all of the test names.
Result will be:

class GetPaymentMethodsSpec {
    def "returns only enabled methods"()
    def "when user is known then orders methods based on past usage"()
    def "for transfer amount > 2000 GBP bank transfer is the first method"()
    ...
}

Rule 2: when you have split a too-long spec, always think whether you should split/extract something in production code as well

If there are many tests for something then it means that the tested behavior is complex. If something is complex then it should be split apart. Often lines of code are not a good indicator of complexity, as you can easily hide multiple branches/conditions in a single line. From the previous example, if we have multiple tests around the ordering of payment methods, it may be a good sign that the ordering could be extracted into its own class.

Rule 3: when you think that you need to mock/stub some class partially, this is generally a bad idea

What the test is telling you is that you have too much behavior cramped together. You have 2 choices:

- don't mock it and use the production implementation
- if your test becomes too complex or you need too many similar tests then extract that logic out into a separate class and test that part of behavior separately

You can also check out my post from a few years ago for more tips for writing good unit tests.

Used still from Ridley Scott's Blade Runner
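The same split works in any test framework, not just Spock. Here is a rough Python `unittest` equivalent of the spec above; the `get_payment_methods` function is a hypothetical stand-in for the production code, with just enough logic (enabled filter plus usage-based ordering) to exercise the two tests:

```python
import unittest

def get_payment_methods(methods, past_usage=()):
    """Hypothetical production logic: enabled methods, most-used first."""
    enabled = [m for m in methods if m["enabled"]]
    # stable sort: methods with equal usage keep their original order
    return sorted(enabled, key=lambda m: past_usage.count(m["name"]), reverse=True)

class GetPaymentMethodsSpec(unittest.TestCase):
    def test_returns_only_enabled_methods(self):
        methods = [{"name": "card", "enabled": True},
                   {"name": "transfer", "enabled": False}]
        self.assertEqual([m["name"] for m in get_payment_methods(methods)],
                         ["card"])

    def test_known_user_gets_methods_ordered_by_past_usage(self):
        methods = [{"name": "card", "enabled": True},
                   {"name": "transfer", "enabled": True}]
        result = get_payment_methods(methods,
                                     past_usage=("transfer", "transfer", "card"))
        self.assertEqual([m["name"] for m in result], ["transfer", "card"])
```

Run it with `python -m unittest`. Note how dropping the shared "when gets payment methods" prefix leaves each test name describing one behavior of one unit, which is exactly the signal that the behavior deserved its own home.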
https://tech.transferwise.com/5-tips-for-getting-more-out-of-your-tests/
The QSqlTableModel class provides an editable data model for a single database table. More...

#include <QSqlTableModel>

Inherits QSqlQueryModel. Inherited by QSqlRelationalTableModel.

QSqlTableModel model;
model.setTable("employee");
model.select();
QString name = model.record(4).value("name").toString();

See also Model/View Programming.

This enum type describes which strategy to choose when editing values in the database.

This signal is emitted before the row is deleted.

This signal is emitted before a new row is inserted. The values that are about to be inserted are stored in record and can be modified before they will be inserted.

This signal is emitted before the row is updated.

Returns the current edit strategy. See also setEditStrategy().

Returns the index of the field fieldName.

Returns the currently set filter. See also setFilter() and select().

Inserts the record after row. If row is negative, the record will be appended to the end. Calls insertRows() and setRecord() internally. Returns true if the row could be inserted, otherwise false. See also insertRows() and removeRows().

...when an insertion is initiated in the given row.

Returns true if all rows could be removed; otherwise returns false. Detailed error information can be retrieved using lastError(). Reimplemented from QAbstractItemModel. See also removeColumns() and insertRows().

See also setTable(), setFilter(), and selectStatement().

Returns the SQL SELECT statement used internally to populate the model. The statement includes the filter and the ORDER BY clause. See also filter() and orderByClause().

Note that no new records are selected. To select new records, use select(). The filter will apply to any subsequent select() calls. The filter is a SQL WHERE clause without the keyword WHERE (for example, name='Josephine'). See also select() and setFilter().

Does nothing for the other edit strategies. Use submitAll() to submit all pending changes for the OnManualSubmit strategy. Returns true on success; otherwise returns false.
Use lastError() to query detailed error information. Reimplemented from QAbstractItemModel. See also revert(), revertRow(), submitAll(), revertAll(), and lastError().

Submits all pending changes and returns true on success. Returns false on error; detailed error information can be obtained with lastError(). See also revertAll() and lastError().

Returns the name of the currently selected table.

Updates the row row in the currently active database table with the values from values.
http://doc.trolltech.com/4.0/qsqltablemodel.html
or the Android SDK. The following features are supported by the two different build types.

Note: The online build service is free for public GitHub repositories. To build from a private repository, you need a Developer account. To build an app locally you need a Pro account.

config.xml

The minimal build configuration you need is a config.xml file that describes your app.

The .tabrisignore file

Note: The .tabrisignore file is only relevant for the build service. In a local build, you have to manage the packaged files yourself (see below).

Note: Users who are on the Developer plan can also use private GitHub repositories. Apps need to be signed with a certificate only if you want to deploy them to Google Play. You can find a very good tutorial in the Phonegap Build documentation as well.

Adding Plugins

To add a set of Apache Cordova plugins you only need to add them to the config.xml. The online build supports the <gap:plugin /> tag that you might already know from Phonegap Build. This tag allows you to add plugins using an ID, an HTTP or a git URL. A sample config.xml including two Cordova plugins could look like this:

<?xml version='1.0' encoding='utf-8'?>
<widget id="my.first.app" version="1.0.0" xmlns: ...
    <gap:plugin ... />
    <gap:plugin ... />
</widget>

Please note: You need to include the gap XML namespace in the root element of your config.xml file, as seen in the example above.

Local Build

You can build Tabris.js apps on your local machine using the Cordova command line interface.

Please note: Local builds are a Pro feature. If you don't see the download you are probably not on a Pro plan.
http://docs.tabris.com/1.0/build
The requirement to write this code was: after export/import, workflow modification for approvers was failing. Approvers were not able to perform the Approval workflow operations. We then tested and found that if we create a new Tasks list and modify the existing Approval workflow association, everything starts working, so we needed code which can do this operation for the whole site collection.

The following code snippet will create a Tasks list with name "MS Tasks" and associate it with a Parallel Approval workflow in a "Pages" library in the root site as well as in all the sub sites.

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Publishing;
using Microsoft.SharePoint.Workflow;

namespace WorkflowAssoc
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        private void button1_Click(object sender, EventArgs e)
        {
            using (SPSite site = new SPSite(textBox1.Text.Trim()))
            {
                using (SPWeb web = site.OpenWeb())
                {
                    //executing for root site
                    WorkflowModification(web);

                    //executing for all subsites
                    foreach (SPWeb subweb in web.Webs)
                    {
                        WorkflowModification(subweb);
                    }
                }
            }

            MessageBox.Show("Done!");
        }

        private void WorkflowModification(SPWeb web)
        {
            bool checkTasks = true;
            bool checkPages = true;
            SPList pages;
            SPWorkflowAssociation WFCol;
            SPList tasks;

            //making sure Pages list is available.
            foreach (SPList list in web.Lists)
            {
                if (list.Title == "Pages")
                {
                    checkPages = false;
                }
            }

            //making sure Tasks list is not available.
            foreach (SPList list in web.Lists)
            {
                if (list.Title == "MS Tasks")
                {
                    checkTasks = false;
                }
            }

            //execute only if Tasks list not available and Pages list is available
            if (checkTasks && !checkPages)
            {
                //creating a Tasks list
                web.Lists.Add("MS Tasks", "MS Tasks List", SPListTemplateType.Tasks);

                //getting an object of Pages list
                pages = web.Lists["Pages"];

                //getting an object of Tasks list and setting it in workflow association
                tasks = web.Lists["MS Tasks"];

                //getting an object of "Parallel Approval" workflow
                //this is the only workflow available so first index
                for (int i = 0; i < pages.WorkflowAssociations.Count; i++)
                {
                    WFCol = pages.WorkflowAssociations[i];

                    //setting up Tasks list
                    WFCol.SetTaskList(tasks);
                    WFCol.TaskListTitle = "MS Tasks";

                    //updating the workflow association
                    pages.UpdateWorkflowAssociation(WFCol);

                    //clearing the objects
                    WFCol = null;
                }
                tasks = null;
                pages = null;
            }
        }
    }
}

Please note that this is my initial code as uploaded; you might want to consider proper optimization of the code and disposing of objects properly. I might not have updated the latest code here.
https://blogs.msdn.microsoft.com/tejasr/2010/03/05/code-snippet-to-create-a-tasks-list-and-associate-it-with-a-parallel-approval-workflow-in-a-pages-library/
I made a very simple function that takes a list of numbers and returns a list of numbers rounded to some number of digits:

def rounded(lista, digits = 3):
    neulist = []
    for i in lista:
        neulist.append(round(i, digits))
    return neulist

However, I mistakenly put the function itself in the code instead of the built-in round() (as in the example below):

def rounded(lista, digits = 3):
    neulist = []
    for i in lista:
        neulist.append(rounded(i, digits))
    return neulist

and got this output:

Traceback (most recent call last):
  File "<pyshell#286>", line 1, in <module>
    rounded(a)
  File "<pyshell#284>", line 4, in rounded
    neulist.append(rounded(i, digits))
  File "<pyshell#284>", line 3, in rounded
    for i in lista:
TypeError: 'float' object is not iterable

The question is: how does the interpreter know that it has to apply the function rounded() while evaluating the function rounded() itself? How can it know, anyway, that rounded() is a function taking floats if it is attempting to interpret that very function? Is there a sort of two-cycle procedure to evaluate & interpret functions? Or am I getting something wrong here?
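The traceback itself hints at the answer: there is no paradox, because Python compiles the function body without resolving names. The name `rounded` inside the body is only looked up in the enclosing scope at call time, so by then the function object already exists. A small sketch of that late binding:

```python
def outer():
    return helper()   # "helper" is looked up when outer() runs, not at def time

# Calling outer() here would raise NameError; defining helper afterwards is fine.
def helper():
    return "resolved at call time"

# The same mechanism makes the accidental recursion possible: the name inside
# rounded() simply refers to whatever "rounded" means at the moment of the call.
def rounded(lista, digits=3):
    return [rounded(i, digits) for i in lista]   # recursion via name lookup

try:
    rounded([1.234])          # inner call receives a float, which is not iterable
    message = ""
except TypeError as e:
    message = str(e)
```

So there is no two-cycle interpretation: defining the function only binds its name; each call then resolves every name in the body afresh, which is why the inner `rounded(i, digits)` happily calls the function on a float and blows up only when `for i in lista` tries to iterate it.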
http://www.howtobuildsoftware.com/index.php/how-do/cnXW/python-function-interpreter-evaluation-function-recursive-function-paradox-in-python-how-can-it-be-explained
The sources have been updated. (Hopefully) This will make them compilable on VC 7+. I couldn't test it with VC 7, but I expect it will be OK now. For more info, take a look at what the problem was.

This class has been modified to support HKEY_LOCAL_MACHINE and HKEY_USERS.

In short, in order to store a structure in the registry, you only need to do like this:

SomeStruct X;
//set struct values...
reg["SomeStructData"].set_struct(X);

In order to read back the stored structure:

SomeStruct X;
//set struct values...
if(reg["SomeStructData"].get_struct(X)){
    //Ok, X contains the read data...
}else {
    //error reading struct.
}

In order to use HKEY_LOCAL_MACHINE or other, you can pass a second parameter of type registry::Key to the registry constructor:

registry reg("Software\\Whatever",registry::hkey_local_machine);

The second parameter is optional; it defaults to registry::hkey_current_user.

In order to see internal execution flow, you may #define WANT_TRACE. And in console mode, you'll see when and what objects are created, and when values are read from/written to the registry. A new project, with_trace, was added to show that.

#define WANT_TRACE //define this to be able to see internal details...
#include "registry.h"
#include "string.h"
#include <STRING>

struct Something{
    int area;
    float height;
};

int main(int, char*)
{
    registry reg("Software\\Company");
    {
        Something X;
        X.area=100;
        X.height=4.5;
        reg["Something"].set_struct(X);
    }
    Something Z;
    if(reg["Something"].get_struct(Z)){
        printf("area is: %i\nheight is: %f\n", Z.area,Z.height);
    }
    return 0;
}

It produces the following output:

First of all, you need only one file, registry.h, that's included in the source distribution. Read on...
//Dirty and fast howto
#include "registry.h"
#include "string.h"

int main(int, char*){
    registry reg("Software\\Company");
    reg["Language"]="English"; //set value
    reg["Width"]=50;
    printf("The address is: %s\n",(char*)reg["Address"]); //prints nothing unless you put something in
    return 0;
}

A more descriptive howto:

//Long"];
//Personally I don't use it this way - too much to type...
//and then you can retrieve the value of the "Language" key
//as follows:
printf("The selected language is: \"%s\"\n",

=string((const char *)(&MyCoolServer),sizeof(MyCoolServer));
//now let's test by retrieving the structure back:
ServerAddress serAdr;
const string &FromRegistry=server;
//copy data only if it's of the size of the struct ServerAddress
if(FromRegistry.length()==sizeof(ServerAddress)){
    memcpy(&serAdr,FromRegistry.c_str(), sizeof(ServerAddress));
    //now let's see what we have:
}

I coded these few lines with simplicity in mind. I like the style, when I can assign a value to a registry key as to a usual variable. I will describe shortly how I use this code in my projects. I put a member variable reg (an instance of registry) in my app's class (there is a reason, read below) that needs access to the registry. Then, in the class' constructor, I initialize the reg variable with the registry home key. Since there is no default constructor in the registry class, you may do it like this:

class my_app{
protected:
    registry reg;
    string UserName;
public:
    my_app() : reg("Software\\my_app"){ // <--
        UserName = reg["UserName"]; //read value
    }
    ~my_app(){
        //store value back to the registry
        reg["UserName"] = UserName;
    }
};

In the preceding example, you actually don't need to create a separate string for UserName - you may have a member variable of registry::iterator type, that will be automatically stored to the registry when your class is destructed (see the final example at the bottom of the page).
And then, whenever you need, you can do like this:

SetLanguage(reg["Lang"]);
string UserName=reg["User"];

that's really simple...

Now, a little background on how it works internally. This is actually a very lightweight class. It makes fewer API calls than you would probably do. For example, the line:

reg["Something"]="nothing";

will only once physically access the registry, on destruction of the returned key-value, to assign value "nothing" to key "something". Same happens with the next lines (only one API call when the group goes out of scope):

registry::iterator name=settings["UserName"];
name="John";
name="Alex";
name="Someone Else";

Basically, if you intend to use this class, you should be aware that the statement reg["something"]="nothing"; doesn't modify anything in the registry by itself. The value will be set only when the statement goes out of scope, like this:

{
    reg["UserName"]="Bill";
    printf("The username is: %s\n", (char*)reg["UserName"]);
    //!!WILL NOT PRINT BILL,
    //it will print the value that was there before the previous line
}//at this point the value Bill is in the registry
printf("The username is: %s\n", (char*)reg["UserName"]); //now it prints Bill;

In order to force it to commit to the registry at some point, you can use the flush() method like this:

registry::iterator name=settings["UserName"];
name="John";
name.flush(); //at this point it will write value "John" to registry.

The same applies to reading values - they are retrieved only when you request them, and only once! So, if you know (or think) that something else (perhaps Earth's magnetic field :)) has modified a key's value, you may use the refresh() method. But, be careful!! Anything you have assigned before will not be committed (it's overwritten by the returned value from registry).

And a very important point.
When the instance of registry is destructed, or goes out of scope, then all the values you created from it will not have access to the registry. That is, all iterators on a given registry instance should be destructed before their parent registry instance. That's why I use this class as a member of another class. When the parent class is destroyed, it doesn't need access to the registry, neither is it possible at that point.

That's it. Hope it will be useful to someone.

#include "registry.h"
#include <iostream>

using namespace std;

class my_app{
protected:
    registry reg;
    registry::iterator UserName;
public:
    my_app():reg("Software\\Company"),UserName(reg["UserName"]){}
    ~my_app(){}
    void SetUserName(const char * name){
        UserName = name;
    }
    const char * GetUserName(){
        return (const char *) UserName;
    }
};

int main (int, char *){
    my_app App;
    App.SetUserName("Alex");
    cout<<"The username is: "<<App.GetUserName()
        <<endl<<"Thanks for playing..."<<endl;
    return 0;
}

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below. A list of licenses authors might use can be found here

CrystalPaul wrote: If you want to support UNICODE users (which I am), you need to replace the standard registry functions with ANSI-specific versions:
RegCreateKeyEx with RegCreateKeyExA
RegQueryValueEx with RegQueryValueExA
RegSetValueEx with RegSetValueExA
This way it will compile without errors even if UNICODE support is turned on. Should also work with UNICODE off.

CrystalPaul wrote: Note that if the registry contains wide strings, you will lose that data when retrieving it using this code, so use with caution if you might be accessing a registry value that contains UTF-16 characters (you'll lose 'em).
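The commit-on-destruction/flush behaviour described above is independent of the Win32 API; stripped to its essence, it is just deferred writes to a backing store. A plain-Python sketch of the same idea, with a dict standing in for the registry (class and method names here are illustrative, not part of the article's C++ code):

```python
class LazyValue:
    """Caches assignments; writes to the backing store only on flush()."""
    def __init__(self, store, key):
        self._store, self._key = store, key
        self._pending = None

    def set(self, value):
        self._pending = value       # nothing hits the store yet

    def flush(self):
        if self._pending is not None:
            self._store[self._key] = self._pending   # one "API call"
            self._pending = None

store = {}                          # stands in for the registry
v = LazyValue(store, "UserName")
v.set("John")
v.set("Alex")                       # repeated assignments cost nothing extra
assert "UserName" not in store      # not committed yet
v.flush()
assert store["UserName"] == "Alex"  # only the last assignment is written
```

This is why the C++ class can honestly claim "fewer API calls than you would probably do": however many times a value is assigned, only the final state is written, and only once.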
https://www.codeproject.com/Articles/7391/A-handy-class-to-make-use-of-Windows-Registry?fid=62469&df=90&mpp=10&sort=Position&spc=None&tid=1300841&PageFlow=FixedWidth
api.onchange v8 doesn't work

I can't determine why api.onchange won't work on my custom module. It does nothing when I change the value of the field phone.

My code:

from openerp import models, fields, api, _

class crm_lead(models.Model):
    _inherit = 'crm.lead'

    ************ my other inherited or created methods ********

    api.onchange('phone')
    def onchange_make_change(self):
        self.email_from = 'email@email.email'

Andre, thanks for the answer. My bad, was missing @ before api.
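The resolution ("missing @ before api") is worth spelling out: without the @, the line `api.onchange('phone')` merely builds a decorator object and throws it away, so the method below it is never registered as an onchange handler. A plain-Python sketch of why the @ matters; the tiny `onchange` registry here is hypothetical, not Odoo's actual implementation:

```python
REGISTERED = {}

def onchange(*fields):
    """Toy stand-in for api.onchange: records which method watches which fields."""
    def decorate(fn):
        REGISTERED[fn.__name__] = fields
        return fn
    return decorate

# Without '@', the decorator is built and immediately discarded --
# broken_handler is a perfectly ordinary, unregistered method:
onchange('phone')
def broken_handler(self):
    pass

# With '@', decorate() actually runs on the function and registers it:
@onchange('phone')
def working_handler(self):
    pass
```

In real Odoo the registration is what makes the client fire the method when the field changes, which is why the forgotten @ produced no error at all, just silence.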
https://www.odoo.com/forum/help-1/question/api-onchange-v8-doesnt-work-90324
Here is my class:

public class Easter {
    //Instance Variables
    private int n;
    private int p;

    //Constructor
    public Easter(int y) {
        int a = y % 19;
        int b = y / 100;
        int c = y % 100;
        int d = b / 4;
        int e = b % 4;
        int g = (8 * b + 13) / 25;
        int h = (19 * a + b - d - g + 15) % 30;
        int j = c / 4;
        int k = c % 4;
        int m = (a + 11 * h) / 319;
        int r = (2 * e + 2 * j - k - h + m + 32) % 7;
        n = (h - m + r + 90) / 25;
        p = (h - m + r + 19) % 32;
    }//end constructor Easter

    //toString method
    public String toString() {
        return "The month is: " + n + " and the day is: " + p;
    }//end toString method
}//end class Easter

I know the variable names are not descriptive, but that's how the instructions state to use them. Here is my driver:

import java.util.Scanner;

public class FindEaster {
    public static void main(String[] args) {
        System.out.println("*This program calculates the month and day Easter falls on for the year input*");
        Scanner year = new Scanner(System.in);
        int y = 0;
        Easter newEaster = new Easter(y);
        System.out.print("Please enter year: ");
        y = year.nextInt();
        System.out.println(newEaster);
    }//end main
}//end class FindEaster.java

When I run the program, I keep getting this output:

*This program calculates the month and day Easter falls on for the year input*
Please enter year: 2011
The month is: 4 and the day is: 5

It doesn't seem to matter what year I input, the output is always "The month is: 4 and the day is: 5", which leads me to believe that my "y" variable is being read as "0" and not the given user input. Thanks for any help you guys could provide.
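The asker's suspicion is right, and the stuck output is the giveaway: the driver calls `new Easter(y)` while `y` is still 0, and reading input afterwards cannot change the already-constructed object, so every run computes Easter for year 0. Moving the construction after `y = year.nextInt()` fixes it. A Python port of the constructor's arithmetic (a hypothetical mirror of the Java class, using `//` for Java's integer division) makes the diagnosis testable:

```python
def easter(y):
    """Port of the Easter constructor's integer arithmetic."""
    a = y % 19
    b, c = y // 100, y % 100
    d, e = b // 4, b % 4
    g = (8 * b + 13) // 25
    h = (19 * a + b - d - g + 15) % 30
    j, k = c // 4, c % 4
    m = (a + 11 * h) // 319
    r = (2 * e + 2 * j - k - h + m + 32) % 7
    n = (h - m + r + 90) // 25
    p = (h - m + r + 19) % 32
    return n, p  # (month, day)

# Constructed "too early", like the Java driver: the year is still 0.
stuck = easter(0)
# Constructed after the input is known, the result actually depends on the year.
real = easter(2011)
```

Here `easter(0)` evaluates to `(4, 5)`, exactly the output the asker keeps seeing, while `easter(2011)` gives something different, confirming that the bug is construction order, not input reading.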
http://www.dreamincode.net/forums/topic/218603-java-hw-help-user-input/
Hello Friends, I want a click event, and I am using this code for the click event:

public class MyClickLabel : Label
{
    public event EventHandler Clicked;

    public virtual void OnClicked()
    {
        Clicked?.Invoke(this, EventArgs.Empty);
    }
}

//////////
////
private void MyClickLabel_Clicked(object sender, EventArgs e)
{
    Myclick.Text = "it's Working";
    Myclick.ScaleTo(2, 1000);
}

But it's not working, and I am losing lots of time. What can I do? Please help me.

Hi @Singhsumit, I think this post will solve your problem: instead of using EventHandler, just use TapGestureRecognizer like it is shown in the solution of the post.

Many thanks @DavidS67 and @seanyda for the fast reply.

public class MyClickLabel : Label
{
    public event EventHandler ImageClicked;

    public MyClickLabel()
    {
        var tgr = new TapGestureRecognizer { NumberOfTapsRequired = 1 };
        tgr.Tapped += MyClickLabel_Clicked;
        this.GestureRecognizers.Add(tgr);
    }

    public virtual void MyClickLabel_Clicked(object sender, EventArgs e)
    {
        ImageClicked?.Invoke(sender, e);
    }
}

It's working.

Make sure to mark the reply as answer; that helps others find the solution.

Yes @Charwaka, it's working for me, and it's helping in creating a custom image entry with left/right click.
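Under the hood, both snippets are the observer pattern: `Clicked?.Invoke(this, EventArgs.Empty)` just calls every handler that subscribed with `+=`, and the gesture recognizer is what actually raises the event when a tap arrives. The mechanics, sketched in Python (this is the pattern only, not the Xamarin API):

```python
class ClickableLabel:
    """Minimal stand-in for a control exposing a Clicked event."""
    def __init__(self, text=""):
        self.text = text
        self._clicked_handlers = []

    def subscribe_clicked(self, handler):
        """The counterpart of `Clicked += handler` in C#."""
        self._clicked_handlers.append(handler)

    def on_clicked(self):
        """The counterpart of `Clicked?.Invoke(this, ...)`."""
        for handler in self._clicked_handlers:
            handler(self)

label = ClickableLabel()

def handle_click(sender):
    sender.text = "it's Working"

label.subscribe_clicked(handle_click)
label.on_clicked()   # simulates the tap the gesture recognizer would report
```

The original code "did nothing" for the same reason this sketch would if `on_clicked()` were never called: declaring the event is not enough, something (here, the TapGestureRecognizer) has to fire it.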
https://forums.xamarin.com/discussion/comment/308039
Returns the signed angle in degrees between from and to. The smaller of the two possible angles between the two vectors is returned, therefore the result will never be greater than 180 degrees or smaller than -180 degrees.

If you imagine the from and to vectors as lines on a piece of paper, both originating from the same point, then the axis vector would point up out of the paper. The measured angle between the two vectors would be positive in a clockwise direction and negative in an anti-clockwise direction.

using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour
{
    public Transform target;

    void Update()
    {
        Vector3 targetDir = target.position - transform.position;
        Vector3 forward = transform.forward;
        float angle = Vector3.SignedAngle(targetDir, forward, Vector3.up);
        if (angle < -5.0F)
            print("turn left");
        else if (angle > 5.0F)
            print("turn right");
        else
            print("forward");
    }
}
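The value described above can be reproduced from the unsigned angle plus the sign of `Dot(axis, Cross(from, to))`; this is the conventional way to compute a signed angle, though Unity's exact internals are not shown in the page, so treat this as a sketch:

```python
import math

def signed_angle(v_from, v_to, axis):
    """Signed angle in degrees between v_from and v_to, signed around axis."""
    # cross product of the two vectors
    cx = v_from[1]*v_to[2] - v_from[2]*v_to[1]
    cy = v_from[2]*v_to[0] - v_from[0]*v_to[2]
    cz = v_from[0]*v_to[1] - v_from[1]*v_to[0]
    dot = sum(a*b for a, b in zip(v_from, v_to))
    cross_mag = math.sqrt(cx*cx + cy*cy + cz*cz)
    unsigned = math.degrees(math.atan2(cross_mag, dot))        # always 0..180
    # the sign comes from which side of the axis the rotation falls on
    sign = 1.0 if (cx*axis[0] + cy*axis[1] + cz*axis[2]) >= 0 else -1.0
    return sign * unsigned
```

For example, rotating the x axis onto the y axis around the z axis gives +90, and the reverse rotation gives -90, which matches the "never greater than 180 or smaller than -180" guarantee in the description.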
https://docs.unity3d.com/ScriptReference/Vector3.SignedAngle.html
Neo4j server stub setup

Project description

This package provides a neo4j server stub setup based on a real neo4j server.

README setup

This test is using a neo4j server. The test setUp method used for this test calls our startNeo4jServer method, which starts a neo4j server. The first time this test gets called, a new neo4j server will get downloaded. The test setup looks like:

def test_suite():
    return unittest.TestSuite((
        doctest.DocFileSuite('README.txt',
            setUp=testing.doctestSetUp,
            tearDown=testing.doctestTearDown,
            optionflags=doctest.NORMALIZE_WHITESPACE|doctest.ELLIPSIS,
            encoding='utf-8'),
        ))

Your setup with a custom conf folder could look like:

def mySetUp(test):
    # setup neo4j server
    here = os.path.dirname(__file__)
    sandbox = os.path.join(here, 'sandbox')
    confSource = os.path.join(here, 'conf')
    startNeo4jServer(sandbox, confSource=confSource)

def myTearDown(test):
    # tear down neo4j server
    here = os.path.dirname(__file__)
    sandbox = os.path.join(here, 'sandbox')
    stopNeo4jServer(sandbox)
    # do some custom teardown stuff here

Also see our test.py for a sample setup.

windows

On windows a service with the name p01_neo4jstub_testing gets installed and removed during the test run. This is not nice, but that's how neo4j can get stopped after starting. If something fails and the service doesn't get removed, you can simply use the following command to remove the service:

sc delete p01_neo4jstub_testing
https://pypi.org/project/p01.neo4jstub/
For a professional videographer, or for any working professional who gives presentations, attends seminars, and uploads videos to social media, shooting video plays a very important role. The time it takes to deliver a speech in person and in a prerecorded video differs a lot. In the many presentations we give in our day-to-day life, we may not remember everything, and mistakes do happen. But preparing the same presentation in advance and presenting it to the audience, while explaining it, helps us avoid the mistakes that would have occurred when delivering it verbally. Now, if for some reason a person recorded multiple videos, say two, which collectively contain the information to be shown to the public, then what should be done? The answer is to merge these videos using certain applications, both .exe and apk. But these applications have drawbacks: their features are limited, they leave behind a watermark, and so on. What if these problems could be dealt with through code? It is possible with the help of Python. Python is a very powerful object-oriented programming language that contains numerous built-in as well as third-party libraries that ease the user's task. The library we are referring to here is moviepy. This is an amazing third-party library for Python that can be installed through pip, either via normal Python or through Anaconda. This library contains various features, of which merging videos as well as audio is a major part. Let's take a look at how to merge two videos using this library in Python:

Steps to Merge Videos using Python

Step 1: Before starting the tutorial, I am assuming that you already have Python installed on your system. If not, then see our tutorial: How to install Python on Windows or Linux.
Step 2: Once you have Python, open Command Prompt in Windows or a terminal in a Linux operating system to run the pip command for MoviePy.

Step 3: Use the below command to install the moviepy library of Python on your system.

    pip install moviepy

Step 4: Now, we need to open a text editor. You can use any of your choice; however, here we are using Jupyter. Therefore, to install that, again in your command prompt or terminal type:

    pip install jupyter

Step 5: The next step is to locate your video files and place them in the same folder, creating a new one with any desired name of your choice. Once done, place both your video files under that folder, open your command prompt within this path, and type cd "your pathname". Once done, type jupyter notebook in the command prompt and your Jupyter notebook will open up. A pictorial demonstration of this is shown below:

Step 6: Click the "New" button on the right side and select Python3 within the Jupyter console.

Step 7: Simply paste the below commands in the Jupyter text editor area to import the following Python classes that will help in merging videos, and to load the clips.

    from moviepy.editor import VideoFileClip, concatenate_videoclips, CompositeVideoClip

    clip1 = VideoFileClip("1.mp4")
    clip2 = VideoFileClip("2.mp4")

Note: Replace 1.mp4 and 2.mp4 with the names of your videos.

Step 8: After loading these video files, the last thing is to concatenate them with each other and save the result:

    final_clip = concatenate_videoclips([clip1, clip2])
    final_clip.write_videofile("new.mp4")

Step 9: Once the videos are merged you will get two files, one in .mp3 format and the other in .mp4 format. Choose the requisite format and demonstrate the same in front of your clients. There is a provision to merge .mp3 files as well, and people can access that feature as and when desired. To know more about Python MoviePy, check out this documentation.
Conclusion

This is how merging two videos is possible using Python, and that too without any watermark. The saved file that comes out as the final result is placed in the same folder where the two videos are located. So go and try this out with Python.
https://www.how2shout.com/how-to/merge-two-videos-using-python-library-called-moviepy.html
The System.Web.Caching namespace provides classes for caching frequently used data on the server. This namespace includes the System.Web.Caching.Cache class, a dictionary that enables you to store data objects such as hash tables and data sets. It also provides expiration functionality for those objects, and methods that enable you to add and remove the objects. You can add the objects to the cache with a dependency on other files or cache entries. In that case, the System.Web.Caching.Cache object can invoke a callback method to notify your application when an object is removed from the cache.
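The caching concepts this namespace provides, a keyed store with expiration and a per-entry removal callback, can be illustrated with a small Python analogue (a toy sketch for intuition only, not the ASP.NET API):

```python
import time

class SimpleCache:
    """Toy analogue of System.Web.Caching.Cache: keyed storage with
    absolute expiration and an optional removal callback per entry."""

    def __init__(self):
        self._entries = {}  # key -> (value, expires_at, on_removed)

    def insert(self, key, value, ttl_seconds, on_removed=None):
        self._entries[key] = (value, time.monotonic() + ttl_seconds, on_removed)

    def get(self, key):
        entry = self._entries.get(key)
        if entry is None:
            return None
        value, expires_at, on_removed = entry
        if time.monotonic() >= expires_at:
            # Expired: evict and notify, like Cache's removal callback.
            del self._entries[key]
            if on_removed:
                on_removed(key, value, "expired")
            return None
        return value

removed = []
cache = SimpleCache()
cache.insert("user:1", {"name": "Ada"}, ttl_seconds=0.05,
             on_removed=lambda k, v, why: removed.append((k, why)))
print(cache.get("user:1"))        # {'name': 'Ada'}
time.sleep(0.1)
print(cache.get("user:1"))        # None
print(removed)                    # [('user:1', 'expired')]
```

The real Cache class additionally supports dependencies on files and on other cache entries, which this sketch omits.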
http://docs.go-mono.com/monodoc.ashx?link=N%3ASystem.Web.Caching
Hello;

I am evaluating Aspose.Cells for Java. The output I will be producing is very large, and for this reason I do not want to hold the entire spreadsheet in memory at once. I understand that Aspose allows streaming read/write. Can you explain what API I should use to stream my data out so that it would not need to be held in memory?

Regards;

Hi,

Thank you for considering Aspose. Well, if you open / save the workbook using streams, the stream will still be held in the memory. Please try our latest version of Aspose.Cells by downloading it from the following location:

In the latest version we have enhanced the memory management. Also, please make sure that you have provided sufficient memory to the JVM.

For opening a workbook using a stream, please see the following documentation link:

For saving a workbook using a stream, please see the following documentation link:

Thank You & Best Regards,
https://forum.aspose.com/t/streaming/140579
How to use a web service with Mirae

This is a guide showing how to use a simple web service from the J2ME platform with the Mirae API. This year in the Google Summer of Code, my project was to add extra protocol handlers to Mirae, so I'll also be demonstrating how to use them. At the moment, if you get a Mirae source code checkout, it won't entirely build successfully. So here, as I was initially instructed by my mentor Mr. Chanshin Lee, we'll be using the Mirae source directly in our project: we'll import the needed source files from Mirae and compile the whole thing into one jar file, instead of having the Mirae API in a separate jar.

Here I'll be building a Mirae client project that accesses a simple web service called "MiraeAdd" that just adds 2 given numbers and returns the result. The Eclipse/EclipseME project's zip file is attached here " ", but here we'll be using the command line to do all the building and running of the application. I'll show step by step how to do that.

*** Step 1: Get the Mirae source code. ***

The needed Mirae source code is included in my "MiraeAdd" project; the "java, javax, org.apache" packages are the ones from Mirae. So if you are making another new project you have to import those packages into the project. Also, if you're doing this from the beginning, get the full Mirae source code from "svn checkout".

*** Step 2: Find the WSDL file that you are going to make the stubs from. ***

Ah, I was told Mirae only works with doc-lit WSDLs, so you would need to find one like that. Here the web service I picked is " ". Now let's get down to creating the stubs. I once pointed out a way to build the stubs because the tool currently shipped is outdated. What you have to do is go to the root folder of Mirae and type "ant jar-wsdl2javame"; after that copy "...\build\lib\wsdl2javame.jar" to "...\lib\wsdl2javame.jar". Then copy this " " file to the "...\trunk\bin\windows\" folder.
Then cd to that folder and type "wsdl2javame <url-to-wsdl-file>", in our case "wsdl2javame ". After that it'll generate the source files in that folder, and then what you have to do is copy those source files into your project.

*** Step 3: Compiling. ***

So we got all the source code together, and now you can use the stubs to call the operations of the web service. After you write that code, we are ready to compile it. First you have to choose a J2ME library to use when building; here I'm using the Sun WTK 2.2 library files "midpapi20.jar;cldcapi11.jar;jsr082.jar;wma20.jar". You have to give those with the -bootclasspath switch when compiling the files. Create 2 folders, "classes" and "tmpclasses", and run this line:

    javac -g:none -d tmpclasses -bootclasspath C:\WTK22\lib\midpapi20.jar;C:\WTK22\lib\cldcapi11.jar;C:\WTK22\lib\jsr082.jar;C:\WTK22\lib\wma20.jar -classpath tmpclasses src\javax\xml\*.java src\javax\xml\namespace\*.java src\javax\xml\parsers\*.java src\javax\microedition\xml\rpc\*.java src\javax\xml\rpc\*.java src\javax\xml\stream\*.java src\dk\iter\www\webservices\calculator_asmx\*.java src\java\rmi\*.java src\org\apache\mirae\io\*.java src\org\apache\mirae\j2me\xml\*.java src\org\apache\mirae\j2me\xml\sax\*.java src\org\apache\mirae\stax\*.java src\org\apache\mirae\stax\util\*.java src\org\apache\mirae\transport\*.java src\org\apache\mirae\util\*.java src\org\apache\mirae\ws\*.java src\org\apache\mirae\ws\util\*.java src\org\xml\sax\*.java src\org\xml\sax\helpers\*.java src\org\laf\addservice\*.java

At the end of the line give the rest of the files that need to be compiled. With this command the Mirae source and your files will be compiled. The next step is to obfuscate the classes.

*** Step 4: Obfuscating.
***

Here my greatest need to obfuscate the classes is because the Mirae source code has classes that are in the Java system packages like "java and javax", and this is not allowed on most J2ME devices and emulators. So to work around that problem we have to obfuscate the classes so that the full package and class names are changed; the obfuscation also reduces the size of the classes by a considerable amount. The obfuscator I've chosen is RetroGuard. The Sun WTK and EclipseME ship ProGuard for this, but it doesn't change the package names, only the class names (this is to be changed in ProGuard 4, still not released), so RetroGuard is my only option. This is the reason I'm doing all this on the command line.

To obfuscate with RetroGuard you need a jar file, so we have to jar up the class files we've built. You can do that with this command:

    jar cmf Manifest.mf temp.jar -C tmpclasses .

After that we obfuscate temp.jar:

    set classpath=C:\WTK22\lib\midpapi20.jar;C:\WTK22\lib\cldcapi11.jar;C:\WTK22\lib\jsr082.jar;C:\WTK22\lib\wma11.jar;C:\retroguard-v2.2.0\retroguard.jar;
    java RetroGuard temp.jar temp2.jar script.rgs

Change the classpath according to your settings. Here script.rgs lists the classes that are to be excluded from obfuscation; that's because we have to give the proper name of the main MIDlet class in the "MANIFEST.MF", so we leave only that class alone. After obfuscating we extract the class files to get them ready for preverifying:

    del tmpclasses\** /S /Q /F
    cd tmpclasses
    jar xf ..\temp2.jar
    cd ..

*** Step 5: Preverifying. ***

Preverifying is done by this command (set the suitable classpath):

    C:\WTK22\bin\preverify -classpath C:\WTK22\lib\midpapi20.jar;C:\WTK22\lib\cldcapi11.jar;C:\WTK22\lib\jsr082.jar;C:\WTK22\lib\wma11.jar; -d classes tmpclasses

Now the final step is to make the final jar file.
*** Step 6: Final jar file ***

To make the jar, run:

    jar cmf Manifest.mf MiraeAdd.jar -C classes . -C res .

This will create "MiraeAdd.jar".

*** Step 7: Run it. ***

After that get a suitable emulator and run it; I used the Nokia 6230i emulator:

    C:\Nokia\Devices\Nokia_S40_DP20_SDK_6230i\bin\S40_DP20_SDK_6230i_em.exe MiraeAdd.jar

That's all there is to making a simple web service client with Mirae. It's easy to put all those commands into a batch file and run it when needed. There's a "make.bat" in the "MiraeAdd" project that I used to make mine, so you can modify it and use it.

How to use protocols other than HTTP

This is the part that I made: I developed an MMS and a Bluetooth transport handler. With those 2 you can send web service requests by MMS messages or through Bluetooth. In Mirae the protocol handlers are chosen by looking at the URL pattern. For example, if the URL starts with "http://" then the HTTP handler will be used; if it starts with "mms://" then the MMS handler; and lastly, if "bt://" is used, the Bluetooth handler will be used.

MMS Transport

When using the MMS transport handler the endpoint is resolved using the url and soap_action attributes. For example, with url = "mms://+9477234124" and soap_action = "webservice_name", Mirae will create an MMS message with the destination "+9477234124" and the message subject "webservice_name". The default application id used is "WS_MMS"; this is like the port in the HTTP protocol. The recipient of the MMS message will get this, look at the subject field, invoke the correct web service, and return the result as an MMS message. So to send via the MMS protocol we must edit the client stubs and change the endpoint url and the soap_action to suitable values.

Bluetooth Transport

In Bluetooth there's a concept called a service, and there's a specific service id for every Bluetooth service.
In the protocol handler this service is directly mapped to a service in Mirae. The service id is a 128-bit number represented in hexadecimal. There can also be several services active on one device. Mirae activates the Bluetooth protocol handler when the URL starts with "bt://". An example URL would be "bt://ab54f92ce871", where "ab54f92ce871" is the service id used to recognize the service. So when someone wants to make a service available via Bluetooth, he'll create a Bluetooth service with an id, let's say "ab54f92ce871", and keep it running. The client, which is the Mirae client, will invoke a service through the stub, which has the endpoint URL "bt://ab54f92ce871". When that happens, Mirae searches nearby Bluetooth devices for services with that service id; if it finds one, it'll send the SOAP request through the established data stream. This works the same as the streams used in HTTP.

I've created a proxy server called "KProxyServer", which works on a J2ME device and can be used to try out the above 2 protocol handlers; it maps MMS and Bluetooth services to HTTP web services. You can get the project from here: " ". For more info about my work look at " ".
https://wiki.apache.org/general/AnjanaFernando/MiraeGuide?highlight=RetroGuard
If there's one trend that has been nice to notice, it's the rise of reactive programming. You can see this in technologies like RxJS and cycle.js. To learn more about the topic, I'm interviewing Brian Cavalier, one of the authors of Most.js.

In 2007, I was working for a Pittsburgh startup as a Java server-side engineer. They wanted to create an ambitious web UI, and I ended up diving into the role of front-end JavaScript developer. A few years later, John Hann (unscriptable) and I created cujojs, and I became hooked on doing open source work.

Most.js is a library for reactive programming. It helps you combine streams of events, like DOM Events, to create highly interactive applications. Asynchronous programming is hard because trying to reason about when things happen and in what order is hard. Most.js makes this easier by giving you a declarative DSL for explicitly describing how asynchronous events relate to each other. For example, if your goal is to log all the mousemove events until the user clicks the mouse, you can declare that's what you want:

    import { mousemove, click } from `@most/dom-event`

    mousemove(document)
        .until(click(document))
        .observe(e => console.log(e))

The ability to describe what the result should be, rather than having to try to detail all the steps of how to achieve it, is a central idea of Most.js's declarative functional API.

The primary architectural concept in Most.js is the Stream, which represents an asynchronous sequence of discrete events, like mouse clicks or WebSocket messages. Under the hood, a Most.js Stream is a composition of two other important concepts: Source and Sink. A Source produces events, and a Sink consumes them. For example, a particular kind of Source may represent DOM events, like mousemove() and click() above, which produce DOM mousemove and click events on the document. In contrast, observe() is an example of a particular kind of Sink that consumes events and passes them to a function you provide.
The vast majority of operations involve both a Source and a Sink. For example, map(), which transforms all the events in a stream, acts as a Sink by consuming events, and as a Source by then producing new event values after applying a function to them.

    mousemove(document)
        .until(click(document))
        .map(event => `${event.clientX}, ${event.clientY}`)
        .observe(e => console.log(e))

So, when you create and transform a Most.js Stream, you're building up a chain of Sources and Sinks that represents the behavior of the Stream. However, Most.js Streams are not active until you consume them by using one of the "terminal" combinators: observe, drain, or reduce. When you call one of those, the Stream sends a signal up the Source-Sink chain to the Source at the very beginning of the chain. That producer Source will then begin producing events. Events are then propagated synchronously from the Source through the Source-Sink chain by simple method calls. In the example above:

1. The mousemove producer Source propagates a mousemove DOM event by calling the until Sink's event method.
2. Until a click occurs on document, the until Sink propagates each event to the map Sink by calling its event method.
3. The map Sink then applies the mapping function to the event value and calls the observe Sink's event method.

This direct synchronous method-call event propagation model is one of the keys to Most.js's simple and performant internal architecture. Check out the Architecture wiki to read more about the details of the Source-Sink chain, including how error handling works and avoids having to try/catch in every combinator.
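The Source-Sink call chain described here can be mimicked in a few lines of Python (a toy model for intuition only; Most.js itself is JavaScript and its real implementation differs):

```python
class MapSink:
    """Acts as a Sink (receives events) and as a Source (forwards them)."""
    def __init__(self, fn, next_sink):
        self.fn = fn
        self.next_sink = next_sink

    def event(self, value):
        # Propagation is a plain synchronous method call down the chain.
        self.next_sink.event(self.fn(value))

class ObserveSink:
    """Terminal consumer at the end of the chain."""
    def __init__(self, fn):
        self.fn = fn
        self.seen = []

    def event(self, value):
        self.seen.append(value)
        self.fn(value)

observer = ObserveSink(print)
chain = MapSink(lambda e: e * 10, observer)

# A producer Source would call event() on the head of the chain:
for e in [1, 2, 3]:
    chain.event(e)

print(observer.seen)  # [10, 20, 30]
```

Each event flows through the whole chain as one stack of method calls, with no intermediate queues, which is the property the interview attributes to the architecture.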
Most.js performs several other optimizations automatically, based on algebraic equivalences. A relatively well-known example is combining multiple map operations, e.g. map(g, map(f, stream)), into a single map by doing function composition on f and g. It also combines multiple filter operations, multiple merge operations, multiple take and skip, among others. These optimizations reduce the number of method calls needed to propagate an event from producer to consumer.

To me, though, Most.js's stricter adherence to a smaller declarative API is even more important, and maybe even a bigger differentiator. Asynchronous programming is complicated in general. JavaScript programs often deal with many interleaving asynchronous events, and as programmers, we have to try to coordinate all of them. Using imperative approaches, especially those that rely on the developer to manage shared mutable state, to try to coordinate highly asynchronous systems is difficult because we have to think carefully about the operational semantics of the system. We have to look at our static code and execute it in our heads to figure out the order(s) in which things might happen. Then, we have to convince ourselves that our code is correct for each possible ordering. As one example, Most.js event streams' core API doesn't provide an imperative "unsubscribe" function. Instead, you use combinators such as until, take, takeWhile, and skipAfter to declare, up front, the slice of an event stream you want. You declare what your intentions are, and Most.js takes care of the how and when.

Two big personal reasons are learning, and that reactive programming is the way I want to be building front-end JS apps. I believe in learning by doing. I wanted to find out more about reactive programming and Functional Reactive Programming (FRP) because they just seemed like such a great fit for front-end JS development.
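The map-fusion equivalence mentioned above, where map(g, map(f, stream)) behaves like a single map of the composed function, can be sketched outside Most.js in a few lines of Python (illustrative only):

```python
def compose(g, f):
    # g after f: the fused mapping function.
    return lambda x: g(f(x))

f = lambda x: x + 1
g = lambda x: x * 2
events = [1, 2, 3, 4]

# Unfused: each event passes through two map stages (two calls per event).
two_stage = [g(v) for v in (f(v) for v in events)]

# Fused: one combined stage, halving the per-event call overhead.
fused = [compose(g, f)(v) for v in events]

assert two_stage == fused
print(fused)  # [4, 6, 8, 10]
```

The observable results are identical; only the number of per-event calls changes, which is exactly why the rewrite is safe to apply automatically.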
After I had discovered reactive programming concepts, I started reading all the papers and source code I could find. Finally, I decided that the best way to learn even more was to try to implement something. That's basically how the project started.

As for technical motivations, there were several. Performance, architectural and API simplicity, and modularity have been driving factors from the beginning. A while back, there was a GitHub issue asking why someone might pick Most.js over other reactive libs. I wrote a longer answer there with more detail about the technical reasons and differences with other libs. It's still a good read and sums up my motivation pretty well.

There are a few exciting things on the horizon. The Most.js team is working on @most/core, where we've extracted a minimal core of the Most.js architecture and combinators. It's a base reactive events package that has a strict focus on a lean, declarative API, and incorporates more functional programming concepts. For example, it has a functions-only API, where every function is curried, so you get partial application and function composition. It's also even more modular and exposes more pieces that other developers can use in building new event sources and combinators. For example, Most.js's high-performance scheduler is available in the @most/scheduler package. And we're planning to expose many of Most.js's internal testing tools as a part of @most/core. You can npm install --save @most/core to try it today. It's not yet 1.0, and we have some work to do on documentation and examples, but they're very usable.

These new @most/core packages will form the basis of Most.js 2.0. They're a separate project at the moment, but once they hit 1.0, we'll start the work of building Most.js 2.0 on top of them.

We're also experimenting with a package of continuous values, aka "Behaviors" or "Properties", values that vary over time, as a companion to Most.js's discrete event streams.
The notion of continuous values is quite common in FRP in other functional languages, like Haskell and PureScript, and a few other JS reactive libraries, such as Bacon.js and kefir, provide them. Some things can be modeled more simply as values that vary over time rather than as discrete occurrences (events). For example, a mouse click is fairly clearly a thing that occurs, an event. However, the position of a spaceship in a game is a value. It varies over time as the ship moves but doesn't occur per se. We're very use case driven, and we love feedback, so we encourage folks to try it out and give us feedback in gitter.

I see a trend toward functional programming techniques in the JavaScript community. I think it's fascinating how JavaScript, being such a flexible language, can support both OO and functional techniques fairly effectively. Declarative (vs. imperative) programming seems to be on the rise and fits in well with the similar swell in reactive programming techniques. Typescript and Flow have also raised the awareness of the benefits of strong static type systems. I think we'll continue to see more tooling around type checking: better IDE support, better type systems, code generators, tools for dealing with foreign data (like PureScript's foreign package). These technologies make everything safer by reducing the kinds of mistakes that can make it through to deployment. We plan to continue embracing these things in Most.js. For example, Most.js has a full set of TypeScript type definitions, and @most/core has a complete set of both TypeScript and Flow type definitions. We use type checking in the development of Most.js and @most/core, and even type check our unit tests.

There are a few things that have become very important to me in every bit of programming I do now - that transcend any project, library, framework, or programming paradigm du jour. The first is learning by doing, or perhaps more accurately in my case, learning by trying and failing!
One key has been learning that it's ok to fail. It's ok to read about a concept, or algorithm, or data structure in a blog or paper, and then write code solely to try to learn more about how the thing works. Make lots of mistakes trying to get the thing to work. Not everything has to become a long-lived project. If you learn something (even if it's the best way not to do something!), you can take that with you no matter what happens to the code.

Simplicity has become the most important guiding principle in everything programming-related I do. Simplicity in code, API design, directory structure, project management, communicating with other team members ... everything. Simple is hard. It requires think time and sometimes trying and failing. Simple helps others. Sometimes it takes a while to reap the benefits of simple. On the other hand, "easy" may feel like it helps right now, but often lays a complexity land mine you (or someone else) will step on later. Often, you have to find a balance between the two. I always try to err on the side of simple when I can.

I've gotten way more from the open source web community than I've given to it. In many cases, that's been due to interacting with and learning from other developers who have treated me with respect and kindness. I'm very thankful for the excellent people in the web community who help others. At some point, you'll be the one who knows more than someone else. When it happens, be one of those kind, awesome people.

I think an interview with Tylor Steinberger, creator of Motorcycle.js and a Most.js contributor, would be great. It's amazing that he's completely self-taught. I've become a huge fan of Rollup, and I think it'd be cool to interview Rich Harris about it, and about modern JavaScript build tooling in general.

Editor's note: Brian suggested interviewing Phil Freeman, the author of PureScript. As it happens, I interviewed him earlier. So go check out the interview.
I really want to thank the Most.js core team: Tylor Steinberger, David Chase, and Frederik Krautwald. They've contributed a ton of ideas and code, and they proposed the idea of @most/core. Given that Most.js started as a project to help me learn about reactive programming, I never expected it to become as popular as it has. Thanks to everyone who has supported it, who has sent a PR, and who is using it to build cool things!

Thanks for the interview, Brian! It's refreshing to see reactive approaches make their way to JavaScript. I feel a lot of these ideas are slowly but surely beginning to enter the mainstream as people discover their value. By changing your thinking you can forget about older problems while gaining more powerful constructs to use. To learn more about Most.js, head to the Most.js GitHub page and study especially the examples.
https://survivejs.com/blog/most-interview/index.html
:find and gf know how to find files.
:make builds and populates the QuickFix window.

Update your login script

Steps #2 and #3 depend on configuration set by the fx set command. Add these lines to your startup script (typically ~/.bashrc).

    export FUCHSIA_DIR=/path/to/fuchsia-dir
    fx set core.x64

Update your vim startup file

If this line exists in your ~/.vimrc file, remove it:

    filetype plugin indent on

Then add these lines to your ~/.vimrc.

    if $FUCHSIA_DIR != ""
      source $FUCHSIA_DIR/scripts/vim/fuchsia.vim
    endif
    filetype plugin indent on

Install YouCompleteMe (ycm)

Optionally install YouCompleteMe for fancy completion, source navigation and inline errors. If it's installed, fuchsia.vim will configure it properly. If everything is working properly, you can place the cursor on an identifier in a .cc or .h file, hit Ctrl-], and YCM will take you to the definition of the identifier.

If you build a compilation database, YCM will use it, which may be more reliable and efficient than the default ycm_extra_config.py configuration. Use fx compdb to build a compilation database.

Zircon editor integration

In the future it would be nice to support:
https://fuchsia.googlesource.com/fuchsia/+/refs/heads/master/scripts/vim/README.md
ZTF Stamps Access

The ALeRCE Stamps API Wrapper gives easy access to our stamps API, which can be used to retrieve stamps and the full avro information of a specific alert.

Quickstart

    from alerce.core import Alerce  # Import ALeRCE Client

    client = Alerce()
    stamps = client.get_stamps("ZTF18abkifng")

Making Queries

There are two operations you can perform with stamps: getting the stamps of an object, and, if you are in a Jupyter notebook, plotting the stamps.

The get_stamps() method will allow you to get the stamps of the first detection of an object id. You can also specify a candid to retrieve stamps of a different detection.

plot_stamps() works the same as get_stamps() but will plot the stamps using IPython HTML if you are in a notebook environment.
https://alerce.readthedocs.io/en/latest/tutorials/stamps_api.html
2011

This article was contributed by Ian Ward.

The biggest change is to how strings are handled in Python 3. Python 2 has 8-bit strings and Unicode text, whereas Python 3 has Unicode text and binary data. In Python 2 you can play fast and loose with strings and Unicode text, using either type for parameters; conversion is automatic when necessary. That's great until you get some 8-bit data in a string and some function (anywhere — in your code or deep in some library you're using) needs Unicode text. Then it all falls apart. Python 2 tries to decode strings as 7-bit ASCII to get Unicode text, leaving the developer, or worse yet the end user, with one of these:

    Traceback (most recent call last):
      ...
    UnicodeDecodeError: 'ascii' codec can't decode byte 0xf4 in position 3: \
        ordinal not in range(128)

In Python 3 there are no more automatic conversions, and the default is Unicode text almost everywhere. While Python 2 treats 'all\xf4' as an 8-bit string with four bytes, Python 3 treats the same literal as Unicode text with U+00F4 as the fourth character. Files opened in text mode (the default, including for sys.stdin, sys.stdout, and sys.stderr) in Python 3 return Unicode text from read() and expect Unicode text to be passed to write(). Files opened in binary mode operate on binary data only. This change affects Python users in Linux and other Unix-like operating systems more than Windows and Mac users — files in Python 2 on Linux that are opened in binary mode are almost indistinguishable from files opened in text mode, while Windows and Mac users have been used to Python at least munging their line breaks when in text mode. This means that much code that used to "work" (where work is defined for uses with ASCII text only) is now broken. But once that code is updated to properly account for which inputs and outputs are encoded text and which are binary, it can then be used comfortably by people whose native languages or names don't fit in ASCII.
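The explicit text/bytes discipline that updated code follows can be sketched in a few lines of Python 3 (an illustrative sketch, not from the article):

```python
# Bytes come in from the outside world (files, sockets) and must be
# decoded explicitly; Python 3 never silently guesses ASCII as Python 2 did.
data = b'all\xc3\xb4'            # 5 bytes of UTF-8
text = data.decode('utf-8')      # 4 characters, the last one is U+00F4
assert text == 'all\u00f4'

# Work with Unicode text internally; encode again only at the boundary.
out = text.encode('utf-8')
assert out == data
print(len(data), len(text))  # 5 4
```

Decoding at input and encoding at output keeps the text/binary distinction in exactly one place in a program.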
That's a pretty nice result.

Python 3's bytes type for binary data is quite different from Python 2's 8-bit strings. Python 2.6 and later have defined bytes to be the same as the str type, which is a little strange because the interface has changed significantly:

    >>> bytes([2,3,4])  # Python 2
    '[2, 3, 4]'
    >>> [x for x in 'abc']
    ['a', 'b', 'c']

In Python 3 b'' is used for byte literals:

    >>> bytes([2,3,4])  # Python 3
    b'\x02\x03\x04'
    >>> [x for x in b'abc']
    [97, 98, 99]

Python 3's bytes type can be treated like an unchanging list with values between 0 and 255. That's convenient for doing bit arithmetic and other numeric operations common to dealing with binary data, but it's quite different from the string-of-length-1 Python 2 programmers expect.

Integers have changed as well. There is no distinction between long integers and normal integers, and sys.maxint is gone. Integer division has changed too. Anyone with a background in Python (or C) will tell you that:

    >>> 1/2
    0
    >>> 1.0/2
    0.5

But no longer. Python 3 returns 0.5 for both expressions. Fortunately Python 2.2 and later have an operator for floor division (//). Use it and you can be certain of an integer result.

The last big change I'll point out is to comparisons. In Python 2 comparisons (<, <=, >=, >) are always defined between all objects. When no explicit ordering is defined then all the objects of one type will either be arbitrarily considered greater or less than all the objects of another type. So you could take a list with a mix of types, sort it, and all the different types will be grouped together. Most of the time, though, you really don't want to order different types of objects and this feature just hides some nasty bugs. Python 3 now raises a TypeError any time you compare objects with incompatible types, as it should. Note that equality (==, !=) is still defined for all types.

Module importing has changed.
In Python 2 the directory containing the source file is searched first when importing (called a "relative import"), then the directories in the system path are tried in order. In Python 3 relative imports must be made explicit:

    from . import my_utils

The print statement has become a function in Python 3. This Python 2 code that prints a string to sys.stderr with a space instead of a newline at the end:

    import sys
    print >>sys.stderr, 'something bad happened:',

becomes:

    import sys
    print('something bad happened:', end=' ', file=sys.stderr)

These are just some of the biggest changes. The complete list is here. Fortunately a large number of the little incompatibilities are taken care of by the 2to3 tool that ships with Python. 2to3 takes Python 2 source code and performs some automated replacements to prepare the code to run in Python 3. Print statements become functions, Unicode text literals drop their "u" prefix, relative imports are made explicit, and so on. Unfortunately the rest of the changes need to be made by hand.

It is reasonable to maintain a single code base that works across Python 2 and Python 3 with the help of 2to3. In the case of my library "Urwid" I am targeting Python 2.4 and up, and this is part of the compatibility code I use. When you really have to write code that takes different paths for Python 2 and Python 3 it's nice to be clear with an "if PYTHON3:" statement:

    import sys
    PYTHON3 = sys.version_info >= (3, 0)

    try:  # define bytes for Python 2.4, 2.5
        bytes = bytes
    except NameError:
        bytes = str

    if PYTHON3:
        # for creating byte strings
        B = lambda x: x.encode('latin1')
    else:
        B = lambda x: x

String handling and literal strings are the most common areas that need to be updated. Some guidelines:

Use Unicode literals (u'') for all literal text in your source. That way your intention is clear and behaviour will be the same in Python 3 (2to3 will turn these into normal text strings).
Use byte literals (b'') for all literal byte strings, or the B() function above if you are supporting versions of Python earlier than 2.6. B() uses the fact that the first 256 code points in Unicode map to Latin-1 to create a binary string from Unicode text.

Use normal strings ('') only in cases where 8-bit strings are expected in Python 2 but Unicode text is expected in Python 3. These cases include attribute names, identifiers, docstrings, and __repr__ return values.

Document whether your functions accept bytes or Unicode text and guard against the wrong type being passed in (e.g. assert isinstance(var, unicode)), or convert to Unicode text immediately if you must accept both types.

Clearly labeling text as text and binary as binary in your source serves as documentation and may prevent you from writing code that will fail when run under Python 3.

Handling binary data across Python versions can be done a few ways. If you replace all individual byte accesses such as data[i] with data[i:i+1] then you will get a byte-string-of-length-1 in both Python 2 and Python 3. However, I prefer to follow the Python 3 convention of treating byte strings as lists of integers, with some more compatibility code:

    if PYTHON3:
        # for operating on bytes
        ord2 = lambda x: x
        chr2 = lambda x: bytes([x])
    else:
        ord2 = ord
        chr2 = chr

ord2 returns the ordinal value of a byte in Python 2 or Python 3 (where it's a no-op) and chr2 converts back to a byte string. Depending on how you are processing your binary data, it might be noticeably faster to operate on the integer ordinal values instead of byte-strings-of-length-1.

Python "doctests" are snippets of test code that appear in function, class and module documentation text. The test code resembles an interactive Python session and includes the code run and its output. For simple functions this sort of testing is often enough, and it's good documentation.
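As a tiny illustration of the doctest idea (the function and its examples are mine, not from the article), the tests live in the docstring and double as documentation:

```python
def double(x):
    """Return x multiplied (or repeated) by two.

    >>> double(2)
    4
    >>> double('ab')
    'abab'
    """
    return x * 2

if __name__ == '__main__':
    import doctest
    doctest.testmod()   # runs the docstring examples; silent on success
```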
Doctests create a challenge for supporting Python 2 and Python 3 from the same code base, however. 2to3 can convert doctest code in the same way as the rest of the source, but it doesn't touch the expected output. Python 2 will put an "L" at the end of a long integer's output and a "u" in front of Unicode strings, neither of which will be present in Python 3, but print-ing the value will always work the same. Make sure that other code run from doctests outputs the same text all the time; if you can't, you might be able to use the ELLIPSIS flag and ... in your output to paper over small differences.

There are a number of easy changes you need to make as well, including:

Use // everywhere you want floor division (mentioned above).
Derive exception classes from BaseException.
Use k in my_dict instead of my_dict.has_key(k).
Use my_list.sort(key=custom_key_fn) instead of my_list.sort(custom_sort).
Use distribute instead of Setuptools.

There are two additional resources that may be helpful: Porting Python Code to 3.0 and Writing Forwards Compatible Python Code.

Python 3 is unarguably a better language than Python 2. Many people new to the language are starting with Python 3, particularly users of proprietary operating systems. Many more current Python 2 users are interested in Python 3 but are held back by the code or a library they are using. By adding Python 3 support to an application or library you help:

make it available to the new users just starting with Python 3
encourage existing users to adopt it, knowing it won't stop them from switching to Python 3 later
clean up ambiguous use of text and binary data and find related bugs

And as a little bonus, that software can then be listed among the packages with Python 3 support in the Python Package Index, one click from the front page. Many popular Python packages haven't yet made the switch, but it's certainly on everyone's radar.

In my case I was lucky.
Members of the community already did most of the hard work porting my library to Python 3; I only had to update my tests and find ways to make the changes work with old versions of Python as well.

There is currently a divide in the Python community because of the significant differences between Python 2 and Python 3. But with some work, that divide can be bridged. It's worth the effort.

Moving to Python 3
Posted Feb 9, 2011 20:26 UTC (Wed) by midg3t (guest, #30998) [Link]
Posted Feb 9, 2011 23:22 UTC (Wed) by euske (subscriber, #9300) [Link]
Posted Feb 9, 2011 23:32 UTC (Wed) by bronson (subscriber, #4806) [Link]
Posted Feb 10, 2011 6:52 UTC (Thu) by flewellyn (subscriber, #5047) [Link]

They did. It's called "Python3". The old one is called "Python2". It's analogous in some ways to ALGOL 60 vs ALGOL 68. Same heritage, but different languages.

catch 22: it's not just too hard
Posted Feb 9, 2011 20:41 UTC (Wed) by amtota (guest, #4012) [Link]

And I suspect that is the main reason why so few have made the jump: there is no real incentive. There are no killer features (for most) and there is a huge cost associated with supporting python3.x. If there were a way to support both versions in the same codebase cleanly, I would have done it by now, but there isn't. (Exception handling is one sticky point; there are others.) To be honest, I don't see any easy way out of this one. Complaining that authors don't make the porting effort is barking up the wrong tree.

Posted Feb 9, 2011 21:17 UTC (Wed) by iabervon (subscriber, #722) [Link]
Posted Feb 9, 2011 22:53 UTC (Wed) by mmcgrath (subscriber, #44906) [Link]

in fairness time is invisible.

Posted Feb 9, 2011 23:24 UTC (Wed) by Webexcess (guest, #197) [Link]

It does take a lot of work to do, and you have to restrict your programming style to maintain compatibility. If I didn't have help I'm sure I wouldn't have made much progress at it. More and more projects do support both major versions from the same code base.
It certainly can be done, it will just be ugly for a while.

upgrade to ugly code?
Posted Feb 11, 2011 13:16 UTC (Fri) by amtota (guest, #4012) [Link]

the list on PyPI is far from complete
Posted Feb 9, 2011 22:30 UTC (Wed) by zuki (subscriber, #41808) [Link]

The major packages which kept a lot of software back were numpy and scipy. But numpy officially supports python 3, and scipy, currently in -rc, will soon. There's no need to have a majority of packages supporting python 3; just a few percent of the most important ones are enough. It seems that we are quite close to this amount.

Posted Feb 10, 2011 3:36 UTC (Thu) by maney (subscriber, #12630) [Link]

All statistics in this post were made up or borrowed from half-remembered sources who almost certainly made them up. Nevertheless, MS Word is still the 800 pound gorilla...

pygtk+ would be nice
Posted Feb 10, 2011 1:24 UTC (Thu) by tstover (subscriber, #56283) [Link]

Maybe a gtk3 python3 wrapper will clean out some legacy baggage.

Big problem: Confusing implementation with spec. One implementation should support BOTH
Posted Feb 10, 2011 2:46 UTC (Thu) by dwheeler (guest, #1216) [Link]

The real problem here is that the *spec* is being confused with its *implementation*. If there were a single implementation that accepted both Python2 and Python3, and let Python3 programs call Python2 programs, this would be a non-event. In fact, it'd be easy to do the transition, and we'd be mostly done. Instead, there's one program that implements Python3, and a completely separate and incompatible engine that runs Python2. Of course, this suggests a way out: Expand the Python2 system so it can run arbitrary Python3, and make it possible for Python3 programs to seamlessly call Python2. Then the problem would disappear.

Posted Feb 10, 2011 3:47 UTC (Thu) by maney (subscriber, #12630) [Link]

Oh, and you gloss over the issue of how the omnivorous compiler knows which language version to treat a given module as.
Sure, tagging could be added, but then you're not really doing seamless interoperability after all. It would be close enough to make me smile! BTW, I just saw a brief mention of what sounds like it might be real, workable versioning for compiled (.pyc and .so) modules in a new or forthcoming release of 3.x. That might not be a killer feature, but it's the first thing in 3 that's more than "that's nice, wish Guido really did have a time machine so the original mistake hadn't been made" to me.

Posted Feb 10, 2011 17:44 UTC (Thu) by zlynx (subscriber, #2285) [Link]

Make a "python" interpreter that reads the script file and either looks for a Python2 marker of some kind or analyzes the file for Python3 syntax. It then calls the correct real interpreter. Considering the speed (slow) of Python applications, this step would not add too much time.

Posted Feb 10, 2011 18:53 UTC (Thu) by foom (subscriber, #14868) [Link]
Posted Feb 10, 2011 19:56 UTC (Thu) by zlynx (subscriber, #2285) [Link]

The solution might be to convert cPython into IronPython everywhere and compile Python to .NET / Mono IL. Then Python3 could call Python2 code in the same way that C#, F#, C++ and Visual Basic can all call each other. Or, to be more acceptable to anti-Microsoft people, use the Parrot virtual machine, if that ever becomes usable. Another option might be LLVM. A problem with LLVM is that it doesn't specify a reflection, object, function and data sharing scheme in the same way that .NET does. Java might be another virtual machine that could be made to work.

Posted Feb 10, 2011 5:19 UTC (Thu) by nevyn (subscriber, #33129) [Link]

Which is like a one-line fix to make: default the system locale to utf-8 in py-2 instead of "ascii" ... almost instantly removing the need for checking every $%#%$# string operation in your app. ... but hey, let's pretend it's 1985 instead and write an incompatible language.

> Python 3 is unarguably a better language than Python 2.

Really? Unarguably?
The fact that os.listdir() is utterly broken on Linux isn't any kind of hint that maybe, just maybe, there might be some problems? Or maybe people might find _some_ argument in the fact that in the two years since py-3 (3.0 was released Dec. 2008), _no_ Linux distribution has announced a timeline to move to py3k as the default python implementation. How many apps. on rawhide or unstable run against the py-3 stack, again? About as many as the perl apps. are running on perl6? First perl kills itself, and now this ... it's enough to make you go back to C ... or even look at Java again.

Posted Feb 10, 2011 6:10 UTC (Thu) by mrjoel (subscriber, #60922) [Link]
Posted Feb 10, 2011 14:39 UTC (Thu) by Webexcess (guest, #197) [Link]

You didn't provide a link for the problem, but is this what you're looking for?

    os.listdir(b'.')  # no decoding for me, thanks

Posted Feb 10, 2011 15:38 UTC (Thu) by nevyn (subscriber, #33129) [Link]

1. When calling listdir() directly, the default is broken (and in a non-obvious way) ... so everybody has to remember "Oh, yeh, you have to call os.listdir() in this special way or it's broken".

2. It assumes people are calling os.listdir() directly ... which is _far_ from the normal case. So now, to do the same hack, every API that eventually calls listdir() will have to implement/debug the bytes vs. unicode input vs. output thing ... and every caller of those APIs will have to remember "Oh, yeh, you have to call foo_API() in this special way or it's broken".

3. It's still not obvious what you _do_ with those bytes, because the reason listdir() doesn't work "normally" is that its model of the Universe doesn't match reality. Basically you can't load a POSIX filename and print "Error: open(%s): %s" ... and this problem is much bigger than POSIX filenames; it's just that's the most glaringly broken problem that people see.
So the whole thing is a huge clue that "Unicode" is not any better in py-3 than it is in py-2 (which is to say, it's completely broken).

Posted Feb 12, 2011 0:13 UTC (Sat) by cmccabe (guest, #60281) [Link]

Python has a pretty long history of "forcing" what it believes to be the correct behavior on its users. It even tells you how to use whitespace. I am not surprised at all that they ignore non-UTF filenames. Frankly, it's a good decision.

Posted Feb 12, 2011 1:50 UTC (Sat) by foom (subscriber, #14868) [Link]
Posted Feb 15, 2011 1:24 UTC (Tue) by yuhong (guest, #57183) [Link]
Posted Feb 15, 2011 14:32 UTC (Tue) by nevyn (subscriber, #33129) [Link]

It is exactly python's fault that it pretends unix is like windows, when it isn't.

Posted Feb 15, 2011 14:52 UTC (Tue) by foom (subscriber, #14868) [Link]

Except that python doesn't actually do that, see comment above...

Posted Feb 10, 2011 22:20 UTC (Thu) by rahulsundaram (subscriber, #21946) [Link]

    $ yum search python3
    Loaded plugins: presto, refresh-packagekit
    ========== N/S Matched: python3 ===========
    dreampie-python3.noarch : Support for running the python3 interpreter from dreampie
    python3-cairo-devel.i686 : Libraries and headers for python3-cairo
    python3-cairo-devel.x86_64 : Libraries and headers for python3-cairo
    python3-decorator.noarch : Module to simplify usage of decorators in python3
    python3-smbc.x86_64 : Python3 bindings for libsmbclient API from Samba
    python3-stomppy.noarch : Python stomp client for messaging for python3
    dpm-python3.x86_64 : Disk Pool Manager (DPM) python bindings
    lfc-python3.x86_64 : LCG File Catalog (LFC) python bindings
    libselinux-python3.x86_64 : SELinux python 3 bindings for libselinux
    libsemanage-python3.x86_64 : semanage python 3 bindings for libsemanage
    python3.i686 : Version 3 of the Python programming language aka Python 3000
    python3.x86_64 : Version 3 of the Python programming language aka Python 3000
    python3-PyQt4.i686 : Python 3 bindings for Qt4
    python3-PyQt4.x86_64 : Python 3 bindings for Qt4
    python3-PyQt4-devel.i686 : Python 3 bindings for Qt4
    python3-PyQt4-devel.x86_64 : Python 3 bindings for Qt4
    python3-PyYAML.x86_64 : YAML parser and emitter for Python
    python3-babel.noarch : Library for internationalizing Python applications
    python3-beaker.noarch : WSGI middleware layer to provide sessions
    python3-bpython.noarch : Fancy curses interface to the Python 3 interactive interpreter
    python3-cairo.x86_64 : Python 3 bindings for the cairo library
    python3-chardet.noarch : Character encoding auto-detection in Python
    python3-cherrypy.noarch : Pythonic, object-oriented web development framework
    python3-coverage.x86_64 : Code coverage testing module for Python 3
    python3-debug.i686 : Debug version of the Python 3 runtime
    python3-debug.x86_64 : Debug version of the Python 3 runtime
    python3-deltarpm.x86_64 : Python bindings for deltarpm
    python3-devel.i686 : Libraries and header files needed for Python 3 development
    python3-devel.x86_64 : Libraries and header files needed for Python 3 development
    python3-gobject.i686 : Python 3 bindings for GObject and GObject Introspection
    python3-gobject.x86_64 : Python 3 bindings for GObject and GObject Introspection
    python3-httplib2.noarch : A comprehensive HTTP client library
    python3-inotify.noarch : Monitor filesystem events with Python under Linux
    python3-jinja2.noarch : General purpose template engine
    python3-libs.i686 : Python 3 runtime libraries
    python3-libs.x86_64 : Python 3 runtime libraries
    python3-lxml.x86_64 : ElementTree-like Python 3 bindings for libxml2 and libxslt
    python3-mako.noarch : Mako template library for Python 3
    python3-markupsafe.x86_64 : Implements a XML/HTML/XHTML Markup safe string for Python
    python3-minimock.noarch : The simplest possible mock library
    python3-mpi4py-mpich2.x86_64 : Python bindings of MPI, MPICH2 version
    python3-mpi4py-openmpi.x86_64 : Python bindings of MPI, Open MPI version
    python3-numpy.x86_64 : A fast multidimensional array facility for Python
    python3-numpy-f2py.x86_64 : f2py for numpy
    python3-paste.noarch : Tools for using a Web Server Gateway Interface stack
    python3-ply.noarch : Python Lex-Yacc
    python3-postgresql.x86_64 : Connect to PostgreSQL with Python 3
    python3-psutil.noarch : A process utilities module for Python 3
    python3-pygments.noarch : A syntax highlighting engine written in Python 3
    python3-pyke.noarch : Knowledge-based inference engine
    python3-pyparsing.noarch : An object-oriented approach to text processing (Python 3 version)
    python3-setuptools.noarch : Easily build and distribute Python 3 packages
    python3-sip.i686 : SIP - Python 3/C++ Bindings Generator
    python3-sip.x86_64 : SIP - Python 3/C++ Bindings Generator
    python3-sip-devel.i686 : Files needed to generate Python 3 bindings for any C++ class library
    python3-sip-devel.x86_64 : Files needed to generate Python 3 bindings for any C++ class library
    python3-sleekxmpp.noarch : Flexible XMPP client/component/server library for Python
    python3-smbpasswd.x86_64 : Python SMB Password Hash Generator Module for Python 3
    python3-sqlalchemy.x86_64 : Modular and flexible ORM library for python
    python3-tempita.noarch : A very small text templating language
    python3-test.i686 : The test modules from the main python 3 package
    python3-test.x86_64 : The test modules from the main python 3 package
    python3-tkinter.i686 : A GUI toolkit for Python 3
    python3-tkinter.x86_64 : A GUI toolkit for Python 3
    python3-tools.i686 : A collection of tools included with Python 3
    python3-tools.x86_64 : A collection of tools included with Python 3
    python3-zmq.x86_64 : Software library for fast, message-based applications

Posted Feb 10, 2011 23:50 UTC (Thu) by dave_malcolm (subscriber, #15013) [Link]
Posted Feb 10, 2011 6:39 UTC (Thu) by ras (subscriber, #33059) [Link]

I think the mistake arose from a common misconception. It seems popular to equate UCS2 with unicode support. This is an abuse of terminology. The old strings represented unicode perfectly well as UTF-8.
The new-style strings use UCS2 instead. That may have been a good idea when Java introduced it, because back then unicode only occupied one code plane in UCS2. Now UCS2, just like UTF-8, must use multibyte sequences for some unicode code points. So the one good point is gone. The major downside remains, however: UCS2 is almost never found in the real world. So you spend half your time converting between whatever the outside world is using and UCS2, and then back again. The lines of code increase, memory requirements almost double, the execution time increases and, in my experience, the bugs skyrocket.

Posted Feb 10, 2011 7:26 UTC (Thu) by peregrin (subscriber, #56601) [Link]
Posted Feb 10, 2011 11:52 UTC (Thu) by tialaramex (subscriber, #21167) [Link]

So, no, Windows isn't an example of UCS2, and hasn't been for many years.

Posted Feb 10, 2011 17:10 UTC (Thu) by marcH (guest, #57642) [Link]

In this sense, UCS-2 is extremely often found in the real world.

UTF family
Posted Feb 11, 2011 4:01 UTC (Fri) by tialaramex (subscriber, #21167) [Link]

What were you imagining they should be using java.lang.String.codePointCount() for? Text is hard, like I said, and a count of Unicode code points is rarely what you need. Examples of things which are assigned one or more Unicode code points: a harmless, invisible and ignorable marker; an indication that subsequent neutral text is intended to be displayed right-to-left; the cedilla accent on a character; a lowercase x; a vertical tab; an indication that a non-fatal error occurred in some previous processing.

Posted Feb 10, 2011 11:57 UTC (Thu) by tialaramex (subscriber, #21167) [Link]

As the original poster said (even if their terminology is wrong in a bunch of places), UCS-2 looked like it might be clever in the mid-1990s.
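The overflow the commenters are describing is easy to verify from Python 3 (a quick sketch; U+1F600 is just an arbitrary code point above U+FFFF):

```python
# A code point beyond the Basic Multilingual Plane no longer fits in one
# 16-bit unit, so UTF-16 (what "UCS2" usually means in practice today)
# needs a surrogate pair for it.
ch = '\U0001F600'
assert len(ch) == 1                       # one code point of text
assert len(ch.encode('utf-16-le')) == 4   # two 16-bit units in UTF-16
assert len(ch.encode('utf-8')) == 4       # four bytes in UTF-8 as well
assert len(ch.encode('utf-32-le')) == 4   # only UTF-32 stays fixed-width
```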
Once it became clear that Unicode's hyperspace would be populated, and UCS2 wasn't capable of handling that, the choice was no longer between UCS2 and UTF8 (where UCS2 delivers some intuitive-seeming properties, although not as many as sometimes claimed) but between UTF8 and UTF16, where UTF16 is completely horrible.

Posted Feb 10, 2011 9:33 UTC (Thu) by rweir (subscriber, #24833) [Link]

??? all python3 did was switch what the 'str' type refers to, from 'bytes' to 'abstract sequence of unicode codepoints'. as far as I know, python 2 and python 3 both support ucs-2 or ucs-4 as the concrete-you-almost-never-have-to-care representation for unicode strings.

Posted Feb 10, 2011 11:03 UTC (Thu) by cortana (subscriber, #24596) [Link]

Leaky abstractions
Posted Feb 10, 2011 12:05 UTC (Thu) by tialaramex (subscriber, #21167) [Link]

But in my experience it's surprisingly hard to prevent this abstraction from leaking. Text is really tricky; in fact, one of the main lessons from the Unicode project is that text is way trickier than anyone had really thought before. For example, what happens with canonicalisation in Python? (You will not be surprised to know that the answer in C is generally "C does not care about canonicalisation, it's all byte strings to us".)

Posted Feb 10, 2011 17:11 UTC (Thu) by marcH (guest, #57642) [Link]
Posted Feb 11, 2011 4:06 UTC (Fri) by tialaramex (subscriber, #21167) [Link]
Posted Feb 17, 2011 4:14 UTC (Thu) by spitzak (guest, #4593) [Link]

The real result, in Python 3 and 2 and on Windows and virtually everywhere else where the "wchar" madness infects designers, is that programmers working with text where the UTF-8 might contain an error resort to destroying the UTF-8 support by saying the text is actually ASCII or ISO-8859-1 or whatever (sometimes they double-UTF-8 encode it, which is the same as ISO-8859-1).
Basically the question is whether to eliminate the ability to see even the ASCII letters in the filenames versus the ability to see some rarely-used foreign letters in the cases where they happen to be encoded correctly. If you don't believe me then you have not looked at any recent applications that read text filenames, even on Windows. Or just look at the idiotic behavior of Python 2, described right here in this article!

Congratulations, your belief in new encodings has set I18N back 20 years. We will never see filenames that work across systems and support Unicode. Never ever ever, because of your stubborn belief that you are "right".

The real answer: Text is a stream of 8-bit bytes. In about 1% of the cases you will care about any characters other than a tiny number of ASCII ones such as NUL and CR. You will then have to decode it, using an ITERATOR that steps through the string and is capable of returning Unicode code points, Unicode composed characters, and clear lossless indications of encoding errors.

Strings in source files should assume UTF-8 encoding. If the source file itself is UTF-8 this is trivial. But "\u1234" should produce the 3-byte UTF-8 encoding of U+1234. "\xNN" should produce a byte with that value, despite the fact that this can produce an invalid UTF-8 encoding. Printing UTF-8 should never throw an error; it should produce error boxes for encoding errors, one for each byte. On backwards systems where some idiot thought "wchar" was a hot idea, you may need to convert to it, in which case encoding errors should translate to U+DCxx where xx is the byte's value (these are errors in UTF-16 as well), but conversion back from UTF-16 will be lossy as these will turn back into 3 UTF-8 bytes.

Posted Feb 17, 2011 14:27 UTC (Thu) by foom (subscriber, #14868) [Link]

However, as I said, python3 *does* do non-lossy decoding/encoding for filenames with random bytes in them.
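The non-lossy scheme being referred to is Python 3's "surrogateescape" error handler, which implements essentially the U+DCxx mapping described just above; a short check:

```python
# Each undecodable byte 0xNN becomes the lone surrogate U+DCNN on decode,
# and the original byte is restored on encode, so arbitrary POSIX
# filenames survive a round trip through str and back.
raw = b'caf\xe9'                  # Latin-1 bytes; 0xe9 is invalid as UTF-8 here
text = raw.decode('utf-8', 'surrogateescape')
assert text == 'caf\udce9'        # the bad byte shows up as U+DCE9
assert text.encode('utf-8', 'surrogateescape') == raw   # lossless round trip
```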
Posted Feb 11, 2011 0:00 UTC (Fri) by dave_malcolm (subscriber, #15013) [Link]
Posted Feb 10, 2011 7:10 UTC (Thu) by ssmith32 (subscriber, #72404) [Link]

Yes, I could (guest, right away gives you 4 threads. Add a thread to monitor everything (since thread death does not get signalled anywhere) and you're at 5. There's not so much shared state as that I/O on any port can execute callbacks which could access anything the initiator of the request wanted (go closures!). There's barely any locking; python's atomic instructions are sufficient (though I imagine Queue does it under the hood). One effect of I/O falling outside the GIL is that the process running at full speed can take 110% CPU. (There's a lot of I/O.)

Back to the issue at hand: Python2's unicode handling bites me daily. Whoever decided that using str() on a unicode string should *except* when you have a unicode character, should be shot. Just error *every* time, then I won't get called at 3 in the morning to fix the bloody thing (usually buried in some library; even some standard python libs have had bugs in the past).

Posted Feb 17, 2011 8:26 UTC (Thu) by rqosa (subscriber, #24136) [Link]

That way can scale poorly, because there must be at least one thread per FD. Using an epoll-driven main loop and a pool of worker threads (with one work queue per worker thread) makes the number of threads independent from the number of FDs, so you can adjust the number of threads to whatever gives the best performance. It also has the benefit of avoiding the overhead of thread-start-on-FD-open and thread-quit-on-FD-close, since you can reuse the existing threads. (Make it so that any idle thread will wait on a semaphore until its work queue becomes non-empty. Also, rather than using epoll directly, use libevent, so that it's portable to non-Linux systems.)
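In outline, the pool design described here might look like the following sketch. The doubling "job" is just a stand-in for servicing one ready descriptor, and queue.Queue's blocking get() stands in for the semaphore-guarded work queues; a real server would feed the queues from an epoll/libevent readiness loop:

```python
import queue
import threading

# Fixed pool of workers, each blocking on its own queue, so the thread
# count is independent of the number of open file descriptors.
NUM_WORKERS = 4
queues = [queue.Queue() for _ in range(NUM_WORKERS)]
results = queue.Queue()

def worker(q):
    while True:
        job = q.get()          # idle workers block here, like the semaphore wait
        if job is None:        # sentinel: shut down
            return
        results.put(job * 2)   # stand-in for handling one unit of ready work

threads = [threading.Thread(target=worker, args=(q,)) for q in queues]
for t in threads:
    t.start()

# The "main loop" dispatches ready work, here round-robin across queues:
for i in range(8):
    queues[i % NUM_WORKERS].put(i)
for q in queues:
    q.put(None)                # one sentinel per worker
for t in threads:
    t.join()

assert sorted(results.get() for _ in range(8)) == [0, 2, 4, 6, 8, 10, 12, 14]
```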
epoll
Posted Feb 17, 2011 8:41 UTC (Thu) by rqosa (subscriber, #24136) [Link]

> at least one thread per FD

Forgot to mention this in my previous post: the "one thread/process per FD" pattern is the main design issue that made possible the Slowloris DoS attack, which LWN covered 2 years ago.

Posted Feb 17, 2011 20:50 UTC (Thu) by kleptog (subscriber, #1183) [Link]

Of course in the general case you are right: a service like a webserver should try to reduce the number of threads. But in the special case of CPython it's also pointless to use more threads, since the GIL prevents more than one thread running at a time anyway.

Posted Feb 10, 2011 10:01 UTC (Thu) by talex (subscriber, #19139) [Link]

    #!/usr/bin/env python
    import sys
    if sys.version_info[0] > 2:
        import os
        os.execvp("python2", ["python2"] + sys.argv)
    # ... chain-load Python 2 code without using syntax that Python 3 will choke on

Posted Feb 10, 2011 10:37 UTC (Thu) by tetromino (subscriber, #33846) [Link]
Posted Feb 18, 2011 21:03 UTC (Fri) by valhalla (subscriber, #56634) [Link]

Most PKGBUILDs for said distribution just run python2 setup.py and everything works.

Missing details
Posted Feb 10, 2011 16:50 UTC (Thu) by southey (subscriber, #9466) [Link]

However, I think there are two bigger issues involved. One, for developers, has been the API changes, because many Python 2.x projects often used deprecated functions that were removed for Python 3. The second, far more important one, is: which Linux distributions are providing Python 3.x as the default Python? While you can build Python 3 yourself or have it as a secondary package in recent distros, you can not use it as the default without disrupting any Python 2.x code, especially system-related code.

Posted Feb 10, 2011 17:57 UTC (Thu) by ssam (subscriber, #46587) [Link]

personally i need the numpy, scipy and matplotlib family, which i think are mostly ported.
Posted Feb 11, 2011 11:45 UTC (Fri) by shane (subscriber, #3335) [Link]
Posted Feb 12, 2011 18:43 UTC (Sat) by jensend (guest, #1385) [Link]

With NumPy and SciPy finally Python 3-compatible, two of the biggest reasons why people have stuck with Python 2 are finally gone. With the release of GTK+ 3 with PyGObject, distros that have depended on PyGTK+ will start to move as well. Chances seem good that we may see most of the momentum shift to Python 3 this year.

Dealing with external scripts
Posted Feb 11, 2011 22:19 UTC (Fri) by schwitrs (subscriber, #3822) [Link]

I maintain a scientific software package which is partially written in Python, but more relevantly, has Python user scripts which I do not and cannot maintain. It is very important that old results are reproducible. A requirement of very minor, obvious changes to the scripts might fly, but the Python 2 to Python 3 transition doesn't seem to be this. Has anyone tried to deal with this sort of problem?

Posted Feb 11, 2011 22:48 UTC (Fri) by foom (subscriber, #14868) [Link]
Posted Feb 12, 2011 17:23 UTC (Sat) by ssam (subscriber, #46587) [Link]

Avoid Version Numbers
Posted Feb 15, 2011 1:48 UTC (Tue) by ldo (subscriber, #40946) [Link]

I hate version numbers. Don't check for version numbers; instead, check for something as close as possible to the actual functionality you need. Python even helps you, by providing ways to query it for things. E.g. to check for the bytes function, you don't have to go through the rigmarole of watching for a NameError exception; just write

    hasattr(__builtins__, "bytes")

What could be simpler? And similarly the PYTHON3 version check could be replaced with something like

    not hasattr(__builtins__, "unichr")

(Negated so it's True for Python 3.x and False for earlier versions.)
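One caveat worth noting about this probe: __builtins__ is a CPython implementation detail (it is the builtins module in the main script but a plain dict inside imported modules), so on Python 3 the builtins module is the more reliable thing to query; a sketch of the same check:

```python
import builtins   # this module is spelled __builtin__ on Python 2

# Feature detection instead of version-number comparison:
# unichr() exists only in Python 2, so its absence identifies Python 3.
PYTHON3 = not hasattr(builtins, 'unichr')
assert PYTHON3 is True   # holds when this runs under Python 3
```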
Posted Feb 15, 2011 14:20 UTC (Tue) by Webexcess (guest, #197) [Link]

I was actually using:

    not str is bytes

But when you really mean "is this python3" then that's not clear to someone reading the code, so I changed it.

Posted Mar 8, 2011 22:55 UTC (Tue) by engla (guest, #47454) [Link]

Quite frankly
Posted Feb 17, 2011 19:51 UTC (Thu) by NikLi (guest, #66938) [Link]

I mean, "why?". And then there is a certain pressure to make the switch and show to the world that Python3 was a huge success, unlike the "perl6 fiasco", and then show the success of the python development model, PEPs, BDFL, etc. This seems like burning bridges, as in "if we go back to python2, we will be like those perl6 people we've been making fun of". The technical differences are minimal and many of them are in the domain of "nits". I find arguments that say that "in the future there will be only Python3" misleading. There are many firms that use python extensively for their services and THEY WON'T SWITCH ANY TIME SOON. Count google as one of them, and i bet you they haven't got the least interest to rewrite their infrastructure to python3 because "dict.keys is an iterator", etc. And, last but not least, YOU DON'T BREAK "Hello World". Quite frankly in education i'd rather teach my students Python2, so they'll go work for google or something...

Posted Feb 17, 2011 21:17 UTC (Thu) by foom (subscriber, #14868) [Link]

But, at this point, there have been 3 releases of 3.x made now, and 2.x isn't being developed anymore. So, if you want any new features ever, you either have to pick up maintenance of 2.x, or switch to 3.x. I wouldn't say it's impossible to conceive of the first alternative happening, but I'm certainly not interested in doing that, even though I'd really prefer if 3.x just magically ceased to exist. In another 2-3 years when 3.2 is installed ubiquitously alongside 2.x, maybe I'll even start writing python3 code. Stranger things could happen.
:) 2.x features and improvements Posted Feb 21, 2011 23:26 UTC (Mon) by pboddie (subscriber, #50784) [Link] But, at this point, there have been 3 releases of 3.x made now, and 2.x isn't being developed anymore. So, if you want any new features ever, you either have to pick up maintenance of 2.x, or switch to 3.x. However, most of the implementations apart from CPython work with 2.x features, and although there have been noises amongst some of them about 3.x support being a possibility, the priorities of their developers would appear to be the development of other kinds of features than language features. So, for example, PyPy would seem to be sticking with 2.x support and concentrating on performance - it's already faster than CPython for some things and getting faster - so if you value those kinds of features over the language tidying that 3.x represents, then you're not going to switch to 3.x. But then again, neither are the implementation developers, so 2.x is still a very safe bet. Moving to Python 3 (I Feel Sorry) Posted Mar 13, 2011 17:22 UTC (Sun) by litosteel (guest, #33304) [Link] Posted Mar 13, 2011 23:45 UTC (Sun) by foom (subscriber, #14868) [Link] Linux is a registered trademark of Linus Torvalds
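The feature checks discussed in the thread above can be demonstrated in a few lines. This is a minimal sketch (the helper name is made up for illustration); on Python 3 both detection styles agree:

```python
import builtins

def has_builtin(name):
    # Probe for a capability directly instead of comparing version numbers.
    return hasattr(builtins, name)

# unichr only exists on Python 2, so its absence signals Python 3.
IS_PYTHON3 = not has_builtin("unichr")

# The alternative check from the thread: on Python 2, str and bytes
# are the same type, so this is another way to spell "is this Python 3".
IS_PYTHON3_ALT = str is not bytes
```

Either check avoids parsing sys.version, which is exactly ldo's point: query for the functionality, not the version number.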
http://lwn.net/Articles/426906/
Hi everyone. I'm new(ish) to C# and I've decided to hone my skills in Windows Forms applications. I've got stuck on this question and would really appreciate some expertise from you guys: "The prime factors of 13195 are 5, 7, 13 and 29. What is the largest prime factor of the number 600851475143?" The question is taken from projecteuler.net, which has some challenging problems on there (not an advertisement, just saying in case anyone didn't know about the site). I've created this method, which doesn't work around the while loop. The Math.Round(i) doesn't work because I don't know how to use it; I read somewhere that it collects only whole numbers, but I am unsure what I should be referencing.

private static long question3()
{
    long bigNum = 600851475143;
    int x = 1;
    while (bigNum == Math.Round(bigNum))
    {
        bigNum /= x;
        x++;
    }
    return (bigNum);
}

What I'm trying to do is to get this method to break down the 600851475143 then extract the largest whole number. I then wanted to put the largest whole variable into another method which would then further break down the number into its highest prime factor. I'm sure there's an easier way to do this, but please understand that I'm just practising and making mistakes as I go whilst trying to figure them out myself, but this has made me very confused. I would greatly appreciate help from you guys and maybe you could tell me the way you would tackle this problem. Thanks.
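The standard way to tackle this is trial division: repeatedly divide out each candidate factor starting from 2, so the number shrinks as you go, and whatever remains at the end (or the last factor removed) is the largest prime factor. Here is a sketch of that approach, written in Python rather than C# since the logic carries over directly:

```python
def largest_prime_factor(n):
    # Trial division: strip out each factor as soon as it is found,
    # so only primes ever divide n when we reach them.
    factor = 2
    largest = 1
    while factor * factor <= n:
        while n % factor == 0:
            largest = factor
            n //= factor
        factor += 1
    # If anything is left, it is a prime larger than every factor removed.
    return n if n > 1 else largest

print(largest_prime_factor(13195))  # → 29
```

Because n shrinks after every division, this finishes almost instantly even for 600851475143; the loop never needs to run anywhere near the square root of the original number.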
https://www.daniweb.com/programming/software-development/threads/371434/find-the-largest-prime-factor-of-a-composite-number
I have installed Laravel 5.2 with the folder name "MyProject" and I have created controllers, views and models completely. Now I access this project on localhost.

Use php artisan app:name.

After installing Laravel, you may wish to "name" your application. By default, the app directory is namespaced under App, and autoloaded by Composer using the PSR-4 autoloading standard. However, you may change the namespace to match the name of your application, which you can easily do via the app:name Artisan command. For example, if your application is named "Horsefly", you could run the following command from the root of your installation:

php artisan app:name Horsefly

Renaming your application is entirely optional, and you are free to keep the App namespace if you wish.
https://codedump.io/share/OZfjYrRI139s/1/changing-the-root-directory-name-not-public-of-laravel-installation
Custom model with Estimators in TensorFlow

In the last tutorial we saw how easy it is to do a simple linear regression using TensorFlow. We used a LinearRegressor, which did most of the work for us. This approach is great to get started and perform simple tasks. However, if you want more freedom over how your model is built and how it is trained, this approach will be insufficient. Fortunately, TensorFlow also provides us with a more flexible way of doing things. Using the example of linear regression, we want to show you how to create a custom model and train it. The techniques you will see here are basically the same ones used later on to train more complex models like neural networks and to do more complex tasks like image classification.

Preparation

Similar to our previous example, we want to train the model y = m * x + b to find the best fitting values for m and b with a given set of x and y values. We start by generating the values for x and y.

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
# if you are using Jupyter and want to see the plot inlined
%matplotlib inline

input_fn = tf.estimator.inputs.numpy_input_fn({"x": x}, y, shuffle=True)

This code should be familiar to you if you read the previous tutorial on doing simple linear regression with TensorFlow.

Creating a model function

With the newly created values for x and y we can start creating our custom model function. Instead of using the specific LinearRegressor, we will use the more general Estimator class. The Estimator requires a model function, describing the model as well as the training process being used. To create such a function, we can start with

def model_fn(features, labels, mode, params):

The function needs to be defined with four parameters:

- features is equal to the values provided by the input function, on which the model should be trained. In our case this is the dictionary containing our "x" values.
- labels are the expected values, also provided by the input function. Given our example, y will be passed to the model function as labels.
- mode represents what the model is used for at the moment, meaning whether it is currently used for training, evaluation or predictions. This is determined by calling, for example, train, evaluate or predict on the Estimator.
- params are optional parameters that you can provide to change values either in your model or the training process. In our example we don't use any additional parameters.

The first thing we want to do in the model function is to get x from the given features. Remember we are providing a dictionary with the key "x" in the input function.

x = features["x"]

As a next step we create the variables for m and b needed in our model. To get these variables we simply create them using the TensorFlow API.

m = tf.Variable(tf.random_uniform([1]))
b = tf.Variable(tf.zeros([1]))

We initialize m with a random value, because we want to multiply m with x. If we used 0 as the initial value, the calculation would always stay 0 and our model would not be able to learn. This is avoided by the random initialization. For b we can use 0, because we will simply do an addition and it is no problem to start from 0. Now that we have x, m and b, we can create our model by simply writing the line equation.

y_pred = m * x + b

In order to do the actual training, we have to compare the result of our model y_pred with the known correct values of y and tell the model to adjust its values. This step should only be performed during training. We can use the parameter mode to check if a training is being done. If so, we calculate the error our model produces on the given data.

train_op = None
error = None
if mode == tf.estimator.ModeKeys.TRAIN:
    error = tf.reduce_mean((y_pred - labels) ** 2)

Given the labels parameter, which is equal to y, we subtract it from the calculated result of our model y_pred.
We then square the result so that the error is always a positive number and positive and negative errors don't cancel each other out. The last step in calculating the error is to take the reduced mean: y_pred and labels are both vectors or arrays, but we want a single value representing the overall error of our model. This is why we are doing the reduction here.

With the calculated error we need to establish a training method that tries to minimize the error and adjust the values of m and b. TensorFlow provides us with a lot of different so-called optimizers. In this case we will use the FtrlOptimizer to perform the training. This optimizer runs a "Follow the regularized leader" algorithm to adjust m and b. If you want to learn more about this, see FTRL (Wiki).

optimizer = tf.train.FtrlOptimizer(0.1)
train_op = optimizer.minimize(error, global_step=tf.train.get_global_step())

First, we create the FtrlOptimizer and give it a learning rate of 0.1. The learning rate determines how fast an optimizer tries to converge to the goal of the training. You can play around with these values and see the difference in the change of the calculated error between multiple epochs. On the optimizer we call minimize to tell it that our training goal is to get the error as low as possible. Therefore our first parameter is the error we calculated above. The second parameter is the current training step we are in. This has to be passed in so TensorFlow can keep track of the steps (training iterations) it has already taken. The last thing that has to be done is to wrap everything together in an EstimatorSpec and return it from our model function. Just as the model function has to have the four parameters, it always has to return an EstimatorSpec.
return tf.estimator.EstimatorSpec(mode, y_pred, error, train_op)

The parameters for the EstimatorSpec are

- the mode passed into the model function
- the calculated result of our model y_pred
- the current error our model produces
- and the operation used to train the model, train_op

Altogether the model function should look like this

def model_fn(features, labels, mode, params):
    x = features["x"]
    m = tf.Variable(tf.random_uniform([1]))
    b = tf.Variable(tf.zeros([1]))
    y_pred = m * x + b
    train_op = None
    error = None
    if mode == tf.estimator.ModeKeys.TRAIN:
        error = tf.reduce_mean((y_pred - labels) ** 2)
        optimizer = tf.train.FtrlOptimizer(0.1)
        train_op = optimizer.minimize(error, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, y_pred, error, train_op)

Perform the training

Now that we have defined our model function, we can do the training similar to the previous tutorial. The key difference is that we are now using the Estimator class instead of the LinearRegressor. The first parameter of the Estimator is our freshly created model_fn. Note that you have to pass in the method reference (don't add parentheses to it).

estimator = tf.estimator.Estimator(model_fn, model_dir="/tmp/tutorial/custom_model")
for i in range(20):
    print("Running epoch ", i+1)
    estimator.train(input_fn)
    print()
x_test = np.linspace(0, 200, 2, dtype=np.float32) + np.random.uniform(-100, 100, size=2).astype(np.float32)
test_input_fn = tf.estimator.inputs.numpy_input_fn({"x": x_test}, shuffle=False)
y_pred = [predictions for predictions in estimator.predict(test_input_fn)]
plt.plot(x, y, '*')
plt.plot(x_test, y_pred, 'r')

As a result of the training you can see the calculated line, using our model y = m * x + b, in red. If you want to see the code in its entirety you can download it here.

Conclusion

That's it! We completed a training using our own custom model function.
It took a little bit more code than using the LinearRegressor, but we gained more freedom over what we want our model to look like and how it is trained. This tutorial is just a starting point to show you the basics of creating your own custom model function. Going forward you can use the same approach you saw here to train more complex models like neural networks, with more complex data.
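To make the mechanics visible, the loop the Estimator runs on our behalf can be imitated in plain Python: generate noisy line data, compute the mean-squared-error gradients with respect to m and b by hand, and step against them. This sketch uses plain gradient descent instead of FTRL, and made-up data, purely for illustration:

```python
import random

random.seed(0)

# Synthetic data around a known line, mirroring the tutorial's setup.
true_m, true_b = 2.5, -1.0
xs = [i * 0.05 for i in range(200)]                      # x values in [0, 10)
ys = [true_m * x + true_b + random.gauss(0, 0.3) for x in xs]

m, b = random.random(), 0.0      # random m, zero b, as in model_fn
lr = 0.01                        # learning rate

for _ in range(2000):
    # Gradients of mean((m*x + b - y)**2) with respect to m and b.
    grad_m = sum(2 * (m * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (m * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    m -= lr * grad_m
    b -= lr * grad_b

error = sum((m * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
```

After training, m and b end up close to the true values 2.5 and -1.0, and the remaining error is roughly the variance of the injected noise. The Estimator version does exactly this kind of loop, with the optimizer and bookkeeping handled for you.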
https://arconsis.de/unternehmen/blog/custom-model-with-estimators-in-tensorflow
App Engine customers can send 12,000 emails every month for free. Create a SendGrid account to claim the free emails and to select higher volume plans. Note that Google will be compensated for customers who sign up for a paid account.

SendGrid libraries

You can send email with SendGrid through an SMTP relay or using a Web API. To integrate SendGrid with your App Engine project, use the SendGrid client libraries.

Setup

To use SendGrid to send an email:

Select or create a new Cloud Platform project in the Google Cloud Platform Console and then ensure an App Engine application exists and billing is enabled: the Dashboard opens if an App Engine application already exists in your project and billing is enabled. Otherwise, follow the prompts for choosing a region and enabling billing.

Create a SendGrid account.

Add your SendGrid settings to the environment variables section in app.yaml. For example, for the sample code below you would add:

env_variables:
  SENDGRID_API_KEY: your-sendgrid-api-key

Download the SendGrid package to your local machine in the command line:

go get gopkg.in/sendgrid/sendgrid-go.v2

In your application, import the SendGrid package:

import "gopkg.in/sendgrid/sendgrid-go.v2"

Example

You can create a SendGrid instance and use it to send mail. In the following sample code, sendgrid.go shows how to send an email and specifies some error handling:

func sendMailHandler(w http.ResponseWriter, r *http.Request) {
    m := sendgrid.NewMail()
    m.AddTo("example@email.com")
    m.SetSubject("Email From SendGrid")
    m.SetHTML("Through AppEngine")
    m.SetFrom("sendgrid@appengine.com")
    if err := sendgridClient.Send(m); err != nil {
        http.Error(w, fmt.Sprintf("could not send mail: %v", err), http.StatusInternalServerError)
        return
    }
    fmt.Fprintf(w, "email sent successfully.")
}

Add your own account details, and then edit the email address and other message content. For more email settings and examples, see the SendGrid-Go library.

Testing and Deploying

To run your app locally:

go run sendgrid.go

After you test your application, deploy your project to App Engine:

aedeploy gcloud app deploy

Getting real-time information

In addition to sending email, SendGrid can help you receive email or make sense of the email you've already sent. The two real-time webhook solutions can greatly enhance the role email plays in your application.

Event API

Once you start sending email from your application, you can view statistics collected by SendGrid to assess your email program. You can use the Event API to see this data. For example, whenever a recipient opens or clicks an email, SendGrid can send a small bit of descriptive JSON to your Google App Engine app. You can react to the event or store the data for future use. You can use this event data in many different ways, such as integrating email stats into internal dashboards or responding to events as they happen.

Inbound Parse API

SendGrid excels at sending email, but it can also parse email sent to your domain.
https://cloud.google.com/appengine/docs/flexible/go/sending-emails-with-sendgrid
Code goes first:

#include <stdio.h>

void foo() {
    static int bar;
}

int main() {
    bar++;
    return 0;
}

The compiler (Clang) complains:

static.c:10:2: error: use of undeclared identifier 'bar'

Shouldn't the statement static int bar; in foo() give bar static storage duration, which makes it declared and initialized prior to the main function?

Marking something as static within a function relocates its storage off of the stack and allows its value to persist across multiple calls. Marking something static, however, does nothing to change the scope of the variable. While you certainly could create a pointer that is aimed at bar and manipulate it from main, the compiler will view bar as undefined within main because of scoping.

You are confusing the scope of a variable with the storage duration. As mentioned in the C11 standard, chapter §6.2.1, Scopes of identifiers,

[...] If the declarator or type specifier that declares the identifier appears outside of any block or list of parameters, the identifier has file scope, which terminates at the end of the translation unit. [...]

and for function (or block) scope

[...] If the declarator or type specifier that declares the identifier appears inside a block or within the list of parameter declarations in a function definition, the identifier has block scope, which terminates at the end of the associated block. [...]

In your case, bar has block scope in foo(), so it is not visible in main(). OTOH, for the storage duration part, C11 §6.2.4, Storage durations of objects, applies. So, to summarize, bar has static storage duration, but its scope is limited to the foo() function. So, it is declared and initialized prior to main() (before main() starts, to be exact) but not visible or accessible in main().
http://www.devsplanet.com/question/35273855
04 August 2011 18:47 [Source: ICIS news]

HOUSTON (ICIS)--August polypropylene (PP) prices are going up in Argentina. Buyers expecting an August price rollover were surprised by the increase. The increase is in the range of $35-50/tonne (€25-35/tonne) for all grades of PP, but implemented only by Petroquimica Cuyo, one of two producers in the country. Prior to the August increase, PP prices for homopolymers were in the range of $1,932-2,053/tonne, and slightly higher for copolymers and random material. Petroken, the remaining PP producer, manufactures only homopolymers. Buyers continued to experience delays in the delivery of copolymers because of shortages of ethylene in the country. Imports from Brazil are an alternative, but this source is constrained by the Department of Commerce. Buyers must prove that domestic producers are not delivering orders in order to get import licences. The result is that buyers are forced to slow down or stop their production plants for lack of raw materials while the cumbersome licensing process takes place, a transformer said. However, the procedure becomes a tool that allows buyers to pressure suppliers into compliance. Resellers of Brazilian PP said the number of granted import licences had increased of late as a result of the domestic production shortages, but not fast enough.
http://www.icis.com/Articles/2011/08/04/9482701/august-pp-prices-rise-35-50tonne-in-argentina-on-tight-supply.html
Note The theoretical parts of this article require some knowledge of optics and electromagnetism, however the conclusion and final result (a practical implementation of single-layer thin film interference in the context of a BRDF) do not. You may therefore wish to skip the theoretical sections. Introduction Wave interference of light has been neglected for a long time in computer graphics, for multiple reasons. Firstly, it is often insignificant and can be cheaply approximated, or even ignored completely. Secondly, it is harder to understand as it requires interpreting light as waves instead of particles (photons). However, interference crops up almost everywhere in daily life, and has recently gained popularity in rendering applications. Examples of wave interference of light are soap bubbles, gasoline rainbow patterns, lens flares, basically everything that looks cool and/or involves multicolor patterns. For instance, in computer graphics, soap bubbles were in the past approximated with more or less realistic multicolor textures slightly panned with view angle. But it turns out that they are not that computationally difficult to accurately render. We will learn how. This article will focus on one particular form of interference, namely thin film interference. This occurs when one or more very thin transparent coatings ("films") are placed on top of a material. The films are so thin that when a light wave comes into contact with these film layers, it reflects and refracts multiple times inside the layer system, and interferes with itself in the process. The goal is to calculate the amount of light reflected off the layer system, and the amount of light transmitted into the internal medium. 
We will make the assumption that no light is absorbed, which is not required but makes the calculations more approachable as considering absorption of light involves delving deep into Maxwell's equations (the behaviour of electromagnetic waves at interfaces of lossy media is nontrivial). Though in general, each layer is so thin that absorption effects can be neglected most of the time. We will derive a physical solution for the case where only one film is present (single-layer) and conclude on how to solve the general case with arbitrarily many layers. The single-layer case is sufficient to render most real life occurrences of thin-film interference, however using more layers enables many more advanced effects. The cost of calculating reflection and transmission coefficients is linear in the number of layers. Derivation Consider a light wave incident to a thin layer of depth \(\delta\) and real refractive index \(n_1\). The external medium has refractive index \(n_0\) and the internal medium has refractive index \(n_2\). The incident angle made by the incident light wave and the film's surface normal is \(\theta_0\), the angle inside the layer is \(\theta_1\) and the refracted angle (inside the internal medium) is denoted \(\theta_2\). We will also give numbers to each of the three media: medium 0 is the external medium, medium 1 is the layer, and medium 2 is the internal medium. Also, naturally, medium 0 has to have a different refractive index than medium 1, and the same goes for medium 1 and medium 2. Media 0 and 2 can be the same, of course. First, we know from Snell's Law that the following holds: \[n_0 \sin{\theta_0} = n_1 \sin{\theta_1} = n_2 \sin{\theta_2}\] Therefore the angles \(\theta_1\) and \(\theta_2\) can be derived from \(\theta_0\). 
Now, we see from the diagram that the only path the light wave can follow is a zigzag pattern as it bounces back and forth between the layer, until it gets transmitted either back into the external medium or into the internal medium. We also note that all the reflected waves (denoted \(R_0\), \(R_1\), ...) and all the transmitted waves are parallel. This is necessary for interference and is a natural consequence of the reciprocal nature of Snell's Law. And we have assumed that the media involved are non-absorbing, therefore by conservation of energy the reflection and transmission coefficients must sum up to exactly one. It turns out that it is slightly easier to derive the transmission coefficient, so we will do that, but we would get the same thing either way. The reason for this is because the very first reflected wave \(R_0\) does not actually penetrate the layer, which means it needs to be handled separately. This does not occur for transmitted waves. Now let's take a look at what happens to the amplitude of the light wave as it travels through this layer system. First, we need to introduce the Fresnel equations, which let us calculate how much of a light wave's amplitude is reflected and how much of it is transmitted whenever it comes into contact with an interface. These equations should be familiar, although perhaps not in the following form: \[r_s = \frac{n_i \cos{\theta_i} - n_j \cos{\theta_j}}{n_i \cos{\theta_i} + n_j \cos{\theta_j}}\] \[t_s = \frac{2 n_i \cos{\theta_i}}{n_i \cos{\theta_i} + n_j \cos{\theta_j}}\] \[r_p = \frac{n_j \cos{\theta_i} - n_i \cos{\theta_j}}{n_i \cos{\theta_j} + n_j \cos{\theta_i}}\] \[t_p = \frac{2 n_i \cos{\theta_i}}{n_i \cos{\theta_j} + n_j \cos{\theta_i}}\] These are amplitude reflection/transmission coefficients, for s-polarized and p-polarized light. Indeed, light polarization is important, and in the derivation we will assume the light wave has a given known polarization. We will now introduce some notation. 
The following denotes the amplitude reflection coefficient for a light wave going from medium \(i\) to medium \(j\): \[\rho_{i | j} = r_{s/p}\] Where the correct reflection coefficient is chosen based on the light wave's polarization. Similarly, the amplitude transmission coefficient is: \[\tau_{i | j} = t_{s/p}\] Because the refractive indices and incident angles for each medium are known and constant, we do not need to specify them. We are now ready to tackle the problem. Consider the transmitted wave \(T_0\). It's easy to see that since it crosses the layer at two locations, and never reflects anywhere, its amplitude will be: \[\tau_{0 | 1} \tau_{1 | 2}\] What about the second transmitted wave \(T_1\)? This one is transmitted once from medium 0 to medium 1, reflects off the medium 1 to medium 2 interface, is reflected again from the medium 1 to medium 0 interface, and is finally transmitted across the medium 1 to medium 2 interface. So its amplitude will be: \[\tau_{0 | 1} \rho_{1 | 2} \rho_{1 | 0} \tau_{1 | 2} = \tau_{0 | 1} \tau_{1 | 2} \rho_{1 | 0} \rho_{1 | 2}\] We can see there's a pattern here. Every successive transmitted wave will simply reflect two additional times off the top and bottom interface. So, if we denote the amplitude of the \(k\)th transmitted wave \(A_k\), we have: \[A_k = \tau_{0 | 1} \tau_{1 | 2} \rho_{1 | 0}^k \rho_{1 | 2}^k\] We note that even though there are (in theory) infinitely many transmitted waves, their amplitude decreases exponentially, since the Fresnel amplitude reflection coefficient is never quite 1 (except in the case of total internal reflection, where all light is reflected and none is transmitted, of course, if this is the case then the incident wave fully reflects off the layer first chance it gets and so this analysis doesn't apply). We now have the amplitudes of each transmitted wave. Can we calculate the total amount of transmitted light now? Not quite. 
These are waves, and you can't just add waves using their amplitudes. We need to consider the phase of each transmitted wave, as these waves might cancel each other out depending on their phase (out of phase waves cancel out, in phase waves amplify each other). The waves also have a frequency, but the frequency depends only on the incident wave's wavelength, which is known and constant, so it can be taken out of the equation. How do we calculate the phase of each transmitted wave? This is in fact a simple textbook thin film interference problem, and if we denote the phase of the \(k\)th transmitted wave \(\varphi_k\), the following holds: \[\varphi_k = k \left [ \frac{2 \pi}{\lambda} \left ( 2 n_1 \delta \cos{\theta_1} \right ) + \Delta \right ]\] Where \(\lambda\) is the light wave's wavelength and \(\Delta\) is a constant meant to account for phase changes upon reflection (we will expand on this soon). The important thing is that the phase of every transmitted wave is a multiple of a constant (with respect to the wave index \(k\))! That is: \[\varphi_k = k \left [ \frac{2 \pi}{\lambda} \left ( 2 n_1 \delta \cos{\theta_1} \right ) + \Delta \right ] = k \varphi\] The explanation for this lies in the rather trivial observation that the distance travelled by the light wave inside the layer increases by a constant factor for every consecutive transmitted wave. This is fortunate, as it makes the upcoming calculations very simple. Had the phase depended on \(k\) in a more complicated way, the problem could have very well been analytically intractable. We will now explain the meaning of the \(\Delta\) term. When a wave (any wave, not just electromagnetic light waves) reflects off a medium denser than the one it is in, it will undergo a 180-degree phase change. Because the refractive index is a measure of how dense a medium is, we can use that to calculate this constant. There are two possible reflections here: one at the top interface and one at the bottom interface. 
We denote: \[\Delta_{i | j} = \begin{cases} 0 ~ & \text{if} ~ n_i > n_j \\ \pi ~ & \text{if} ~ n_i < n_j \end{cases}\] For the reflection phase change when reflecting off the interface from medium \(i\) to medium \(j\). Therefore, we see that: \[\Delta = \Delta_{1 | 0} + \Delta_{1 | 2}\] Which is constant, as it depends only on the refractive indices of each medium. At this point we have the amplitude and phase of each transmitted wave. All we have to do is sum them up (as waves), and take the squared magnitude of the resulting complex amplitude to obtain the transmitted intensity. However, because the transmitted waves are in a different medium than the incident wave, we need to take into account the ratio of beam surface area to make sure energy is conserved. That is, we need to multiply by: \[\frac{n_2 \cos{\theta_2}}{n_0 \cos{\theta_0}}\] This is actually two factors in one. The first, ratio of refractive indices, is there because the transmitted wave won't, in general, have the same speed as the incident wave (for instance, light travels slower in water than in air). So the perceived intensity will not be the same. Remember, intensity is energy per second per squared area, so if the wave is faster the intensity will be higher, so we need to scale the intensity down by a corresponding amount to make sure energy is conserved. The second factor, ratio of cosines, exists because of the change in area of a beam of light as it is refracted. The following diagram illustrates all of this nicely: It is worth noting that reflected light is treated the same, however because reflected waves remain in the same medium and the reflected angle is the same as the incident angle, both ratios just cancel out. Now, we have the following expression for the transmitted intensity: \[I_T = \frac{n_2 \cos{\theta_2}}{n_0 \cos{\theta_0}} \left | \sum_{k = 0}^\infty A_k e^{i \varphi_k} \right |^2\] This looks complicated, but it actually isn't. 
This is because both the phase and the amplitude are dependent on \(k\) in such a way that: \[I_T = \frac{n_2 \cos{\theta_2}}{n_0 \cos{\theta_0}} \left | \sum_{k = 0}^\infty \tau_{0 | 1} \tau_{1 | 2} \rho_{1 | 0}^k \rho_{1 | 2}^k e^{i k \varphi} \right |^2 = \frac{n_2 \cos{\theta_2}}{n_0 \cos{\theta_0}} \left | \tau_{0 | 1} \tau_{1 | 2} \sum_{k = 0}^\infty \left ( \rho_{1 | 0} \rho_{1 | 2} e^{i \varphi} \right )^k \right |^2\] And we will now use the following two substitutions, just to make the expressions a bit more readable: \[\alpha = \rho_{1 | 0} \rho_{1 | 2}\] \[\beta = \tau_{0 | 1} \tau_{1 | 2}\] We now have a geometric series sum, which we can evaluate as follows: \[I_T = \frac{n_2 \cos{\theta_2}}{n_0 \cos{\theta_0}} \left | \beta \sum_{k = 0}^\infty \left ( \alpha e^{i \varphi} \right )^k \right |^2 = \frac{n_2 \cos{\theta_2}}{n_0 \cos{\theta_0}} \left | \frac{\beta}{1 - \alpha e^{i \varphi}} \right |^2\] Simplifying rather elegantly to the following (assuming \(\alpha\) is real): \[I_T = \left ( \frac{n_2 \cos{\theta_2}}{n_0 \cos{\theta_0}} \right ) \frac{|\beta|^2}{| \alpha |^2 - 2 \alpha \cos{\varphi} + 1}\] And by conservation of energy, we have: \[I_T + I_R = 1\] Which concludes the derivation. As a final note, we can calculate the average over all possible phases of this result. If we are correct, then we should get the same result as a geometric optics derivation. The reader can verify that, indeed, we have: \[\overline{I_T} = \frac{1}{2 \pi} \int_{-\pi}^{+\pi} I_T ~ \text{d} \varphi = \left ( \frac{n_2 \cos{\theta_2}}{n_0 \cos{\theta_0}} \right ) \frac{|\beta|^2}{1 - |\alpha|^2}\] We conclude that the transmission coefficient (intensity of light transmitted across the layer) is \(I_T\) and the reflection coefficient (intensity of light reflected off the layer) is \(I_R\). 
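As a sanity check on these closed forms, the phase average can be verified numerically. The sketch below (plain Python; the refractive indices are assumed values for an air / film / water stack at normal incidence, where all the cosines are 1) averages I_T over a full period of the phase and compares the result against the geometric-optics expression derived above:

```python
import math

# Assumed refractive indices: air / soap film / water (illustrative only).
n0, n1, n2 = 1.0, 1.5, 1.33

# Fresnel amplitude coefficients at normal incidence (all cosines are 1).
rho_10 = (n1 - n0) / (n1 + n0)   # reflection inside the film, off the top interface
rho_12 = (n1 - n2) / (n1 + n2)   # reflection inside the film, off the bottom interface
tau_01 = 2 * n0 / (n0 + n1)      # transmission into the film
tau_12 = 2 * n1 / (n1 + n2)      # transmission out of the film

alpha = rho_10 * rho_12
beta = tau_01 * tau_12
beam_ratio = n2 / n0             # the cosine factors cancel at normal incidence

def transmitted_intensity(phi):
    # I_T for a given phase difference phi between successive transmitted waves.
    return beam_ratio * beta**2 / (alpha**2 - 2 * alpha * math.cos(phi) + 1)

# Numerical phase average over one full period...
N = 4096
numeric_avg = sum(transmitted_intensity(2 * math.pi * k / N) for k in range(N)) / N

# ...versus the closed-form average from the derivation.
closed_form = beam_ratio * beta**2 / (1 - alpha**2)
```

For this stack the two averages agree to machine precision, and I_T stays between 0 and 1 for every phase, as conservation of energy demands.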
General Case

The derivation shown above is quite naive, and does not generalize well at all to multiple layers, though it is the simplest method to see what is happening at a low level. If you wish to implement n-layer thin film interference, the method of choice is the Transfer-matrix method, which simplifies the problem down to a series of matrix multiplications and can be derived using powerful electromagnetism techniques.

Implementation

So we now know just how much light is reflected from the layer. How can we implement this in the context of a BRDF? It's quite simple: this reflected term simply replaces the ordinary Fresnel term, accounting for thin film interference effects. This means you can trivially include thin-film interference effects in any BRDF as long as it has a Fresnel term. The function below computes the reflection coefficient for a given wavelength and incident angle.

// cos0 is the cosine of the incident angle, that is, cos0 = dot(view angle, normal)
// lambda is the wavelength of the incident light (e.g. lambda = 510 for green)
float ThinFilmReflectance(float cos0, float lambda)
{
    const float thickness;   // the thin film thickness
    const float n0, n1, n2;  // the refractive indices

    // compute the phase change term (constant)
    const float d10 = (n1 > n0) ? 0 : PI;
    const float d12 = (n1 > n2) ? 0 : PI;
    const float delta = d10 + d12;

    // now, compute cos1, the cosine of the angle inside the layer
    float sin1 = pow(n0 / n1, 2) * (1 - pow(cos0, 2)); // this is sin(theta_1)^2
    if (sin1 > 1) return 1.0f; // total internal reflection
    float cos1 = sqrt(1 - sin1);

    // compute cos2, the cosine of the final transmitted angle, i.e.
cos(theta_2)
    // we need this angle for the Fresnel terms at the bottom interface
    float sin2 = pow(n0 / n2, 2) * (1 - pow(cos0, 2));
    if (sin2 > 1) return 1.0f; // total internal reflection
    float cos2 = sqrt(1 - sin2);

    // get the reflection and transmission amplitude Fresnel coefficients
    float alpha_s = rs(n1, n0, cos1, cos0) * rs(n1, n2, cos1, cos2); // rho_10 * rho_12 (s-polarized)
    float alpha_p = rp(n1, n0, cos1, cos0) * rp(n1, n2, cos1, cos2); // rho_10 * rho_12 (p-polarized)
    float beta_s = ts(n0, n1, cos0, cos1) * ts(n1, n2, cos1, cos2);  // tau_01 * tau_12 (s-polarized)
    float beta_p = tp(n0, n1, cos0, cos1) * tp(n1, n2, cos1, cos2);  // tau_01 * tau_12 (p-polarized)

    // compute the phase term (phi)
    float phi = (2 * PI / lambda) * (2 * n1 * thickness * cos1) + delta;

    // finally, evaluate the transmitted intensity for the two possible polarizations
    float ts = pow(beta_s, 2) / (pow(alpha_s, 2) - 2 * alpha_s * cos(phi) + 1);
    float tp = pow(beta_p, 2) / (pow(alpha_p, 2) - 2 * alpha_p * cos(phi) + 1);

    // we need to take into account conservation of energy for transmission
    float beamRatio = (n2 * cos2) / (n0 * cos0);

    // calculate the average transmitted intensity (if you know the polarization distribution of your
    // light source, you should specify it here. if you don't, a 50%/50% average is generally used)
    float t = beamRatio * (ts + tp) / 2;

    // and finally, derive the reflected intensity
    return 1 - t;
}

We can now sample this function at red, green, and blue wavelengths (650, 510, and 475 nanometers, respectively) and substitute the resulting RGB reflectance into the Fresnel term of the BRDF. Or, if you are rendering spectrally, just supply the wavelength directly. That's it.

One word on polarization: in general, in computer graphics, we assume light contains an equal amount of s-polarized and p-polarized light waves. The Fresnel reflection coefficient is then simply an average between the s-polarized and p-polarized light reflection coefficients, as the comment indicates.
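To make the ThinFilmReflectance pseudocode above concrete, here is a direct Python port (a sketch: the film parameters become function arguments instead of unset constants, the rs/rp/ts/tp helpers are written out using the standard Fresnel amplitude formulas, and the index values in the checks are illustrative, not from the article). The assertions verify two properties the article points out later: zero thickness degenerates to plain Fresnel reflection at the outer/inner interface, and the reflectance stays within [0, 1]:

```python
import math

def rs(n1, n2, cosI, cosT): return (n1*cosI - n2*cosT) / (n1*cosI + n2*cosT)
def rp(n1, n2, cosI, cosT): return (n2*cosI - n1*cosT) / (n2*cosI + n1*cosT)
def ts(n1, n2, cosI, cosT): return 2*n1*cosI / (n1*cosI + n2*cosT)
def tp(n1, n2, cosI, cosT): return 2*n1*cosI / (n1*cosT + n2*cosI)

def thin_film_reflectance(cos0, lam, thickness, n0, n1, n2):
    # phase shifts on reflection at each interface
    delta = (0.0 if n1 > n0 else math.pi) + (0.0 if n1 > n2 else math.pi)
    # squared sines of the in-film and transmitted angles (Snell's law)
    sin1_sq = (n0 / n1)**2 * (1 - cos0**2)
    sin2_sq = (n0 / n2)**2 * (1 - cos0**2)
    if sin1_sq > 1 or sin2_sq > 1:
        return 1.0  # total internal reflection
    cos1, cos2 = math.sqrt(1 - sin1_sq), math.sqrt(1 - sin2_sq)
    alpha_s = rs(n1, n0, cos1, cos0) * rs(n1, n2, cos1, cos2)
    alpha_p = rp(n1, n0, cos1, cos0) * rp(n1, n2, cos1, cos2)
    beta_s = ts(n0, n1, cos0, cos1) * ts(n1, n2, cos1, cos2)
    beta_p = tp(n0, n1, cos0, cos1) * tp(n1, n2, cos1, cos2)
    phi = (2 * math.pi / lam) * (2 * n1 * thickness * cos1) + delta
    t_s = beta_s**2 / (alpha_s**2 - 2 * alpha_s * math.cos(phi) + 1)
    t_p = beta_p**2 / (alpha_p**2 - 2 * alpha_p * math.cos(phi) + 1)
    beam_ratio = (n2 * cos2) / (n0 * cos0)
    t = beam_ratio * (t_s + t_p) / 2  # unpolarized 50/50 average
    return 1 - t

# zero thickness reduces to the plain Fresnel reflectance of the n0|n2 interface
r0 = thin_film_reflectance(1.0, 510, 0.0, 1.0, 1.5, 1.25)
assert abs(r0 - ((1.0 - 1.25) / (1.0 + 1.25))**2) < 1e-9

# reflectance stays in [0, 1] over a sweep of angles and wavelengths
for lam in (650, 510, 475):
    for deg in range(1, 90):
        r = thin_film_reflectance(math.cos(math.radians(deg)), lam, 250.0, 1.0, 1.5, 1.25)
        assert -1e-9 <= r <= 1 + 1e-9
```

Sampling this at 650, 510, and 475 nm gives an RGB reflectance exactly as described in the text.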
If you have more information on how much s-polarized light is emitted by your light source, then the average should reflect that.

BRDF Explorer Sample

The following shader script implements the BRDF in the Disney BRDF Explorer tool, using the stock Blinn-Phong shader with the default microfacet distribution. Note how we just implemented the code separately and multiplied the BRDF by the modified "thin film" Fresnel term.

analytic

# Blinn Phong based on halfway-vector with single-layer thin
# film wave interference effects via a Fresnel film coating.

::begin parameters
float thickness 0 3000 250    # Thin film thickness (in nm)
float externalIOR 0.2 3 1     # External (air) refractive index
float thinfilmIOR 0.2 3 1.5   # Layer (thin film) refractive index
float internalIOR 0.2 3 1.25  # Internal (object) refractive index
float n 1 1000 100            # Blinn-Phong microfacet exponent
::end parameters

::begin shader

const float PI = 3.14159265f;

/* Amplitude reflection coefficient (s-polarized) */
float rs(float n1, float n2, float cosI, float cosT)
{
    return (n1 * cosI - n2 * cosT) / (n1 * cosI + n2 * cosT);
}

/* Amplitude reflection coefficient (p-polarized) */
float rp(float n1, float n2, float cosI, float cosT)
{
    return (n2 * cosI - n1 * cosT) / (n1 * cosT + n2 * cosI);
}

/* Amplitude transmission coefficient (s-polarized) */
float ts(float n1, float n2, float cosI, float cosT)
{
    return 2 * n1 * cosI / (n1 * cosI + n2 * cosT);
}

/* Amplitude transmission coefficient (p-polarized) */
float tp(float n1, float n2, float cosI, float cosT)
{
    return 2 * n1 * cosI / (n1 * cosT + n2 * cosI);
}

/* Pass the incident cosine. */
vec3 FresnelCoating(float cos0)
{
    /* Precompute the reflection phase changes (depends on IOR) */
    float delta10 = (thinfilmIOR < externalIOR) ? PI : 0.0f;
    float delta12 = (thinfilmIOR < internalIOR) ? PI : 0.0f;
    float delta = delta10 + delta12;

    /* Calculate the thin film layer (and transmitted) angle cosines.
*/
    float sin1 = pow(externalIOR / thinfilmIOR, 2) * (1 - pow(cos0, 2));
    float sin2 = pow(externalIOR / internalIOR, 2) * (1 - pow(cos0, 2));
    if ((sin1 > 1) || (sin2 > 1)) return vec3(1); /* Account for TIR. */
    float cos1 = sqrt(1 - sin1), cos2 = sqrt(1 - sin2);

    /* Calculate the interference phase change. */
    vec3 phi = vec3(2 * thinfilmIOR * thickness * cos1);
    phi *= 2 * PI / vec3(650, 510, 475);
    phi += delta;

    /* Obtain the various Fresnel amplitude coefficients. */
    float alpha_s = rs(thinfilmIOR, externalIOR, cos1, cos0) * rs(thinfilmIOR, internalIOR, cos1, cos2);
    float alpha_p = rp(thinfilmIOR, externalIOR, cos1, cos0) * rp(thinfilmIOR, internalIOR, cos1, cos2);
    float beta_s = ts(externalIOR, thinfilmIOR, cos0, cos1) * ts(thinfilmIOR, internalIOR, cos1, cos2);
    float beta_p = tp(externalIOR, thinfilmIOR, cos0, cos1) * tp(thinfilmIOR, internalIOR, cos1, cos2);

    /* Calculate the s- and p-polarized intensity transmission coefficient. */
    vec3 ts = pow(beta_s, 2) / (pow(alpha_s, 2) - 2 * alpha_s * cos(phi) + 1);
    vec3 tp = pow(beta_p, 2) / (pow(alpha_p, 2) - 2 * alpha_p * cos(phi) + 1);

    /* Calculate the transmitted power ratio for medium change. */
    float beamRatio = (internalIOR * cos2) / (externalIOR * cos0);

    /* Calculate the average reflectance. */
    return 1 - beamRatio * (ts + tp) * 0.5f;
}

vec3 BRDF(vec3 L, vec3 V, vec3 N, vec3 X, vec3 Y)
{
    vec3 H = normalize(L + V);
    float val = pow(max(0, dot(N, H)), n);
    return vec3(val) * FresnelCoating(dot(V, H));
}

::end shader

It is worth noting that this is a reference implementation meant to be readable, and it can be thoroughly optimized. In particular, the Fresnel calculations are the most expensive part, but there are numerous ways of reducing the amount of computation. For instance, we can use the reciprocity properties of s-polarized light, and also recycle many intermediate calculations.
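Those reciprocity properties are easy to verify with the same amplitude formulas the shader uses. A small sketch (the helper definitions mirror rs/ts above; the refractive indices are illustrative) checking two standard identities for the s-polarized coefficients:

```python
import math

def rs(n1, n2, cosI, cosT):
    # s-polarized amplitude reflection coefficient
    return (n1 * cosI - n2 * cosT) / (n1 * cosI + n2 * cosT)

def ts(n1, n2, cosI, cosT):
    # s-polarized amplitude transmission coefficient
    return 2 * n1 * cosI / (n1 * cosI + n2 * cosT)

n1, n2 = 1.0, 1.5  # illustrative indices
for deg in range(0, 90, 5):
    cosI = math.cos(math.radians(deg))
    cosT = math.sqrt(1 - (n1 / n2)**2 * (1 - cosI**2))  # Snell's law
    # reversing the direction of travel only flips the sign of the amplitude
    assert abs(rs(n1, n2, cosI, cosT) + rs(n2, n1, cosT, cosI)) < 1e-12
    # Stokes relation: t12 * t21 + r12^2 = 1
    assert abs(ts(n1, n2, cosI, cosT) * ts(n2, n1, cosT, cosI)
               + rs(n1, n2, cosI, cosT)**2 - 1) < 1e-12
```

In practice these identities let you compute one direction's coefficients and derive the other for free.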
If you are not interested in perfect physical accuracy, you can also skip the polarization calculations and directly use intensity Fresnel coefficients, though because amplitudes are signed and intensities are not, you will need to calculate the proper sign to use for the cosine term somehow (or just ignore it altogether and have incorrect but plausible thin film interference). If you are really desperate about runtime performance, you can still retain the nice colorful patterns while trading physical accuracy by approximating the final formula however you see fit, the only fundamental requirement is that the \(\cos{\varphi}\) term be in there somewhere. This is a screenshot of the above BRDF's polar plot at incidence 45 degrees and illustrates its wavelength-dependent nature: Note how the BRDF differs for the three channels (in fact, every wavelength produces a different response, but we're working in RGB mode here). And here are a few renders (still from BRDF Explorer) with some sensible parameters. Here we assume the internal medium is fully opaque: What happens when we set the thin film thickness to zero? In this case, the layer physically disappears and the formula degenerates to ordinary Fresnel reflection (more specifically, the Fresnel reflection coefficients for the layer become zero while the transmission coefficients become one). What about making the thin film extremely thick? In that case, we see that the rate of change of the phase \(\varphi\) with respect to view angle becomes arbitrarily large, causing the interference effects to average out to white light, as expected. Also, because we are using a BRDF, we are assuming that light exits the surface at the same point it enters it, which is a very good approximation when the thin film is very small (on the order of light's wavelength). However, as the film becomes thicker, the approximation breaks down, so the film should probably be no larger than a few thousand nanometers, at most. 
The same holds true for microfacet distributions. Thin films coated over surfaces with very high microfacet roughness coefficients are somewhat unphysical, since a coating naturally tends to be smoother than the surface it is applied on. This should be kept in mind, as the two layer interfaces are assumed to be coplanar. You might also wonder what happens if refractive index depends on wavelength. Well, not much, the correct refractive index and incident/transmitted angles are simply used, and everything else remains the same, since waves of different frequencies do not interfere in any meaningful way. The BRDF above chooses to assume a constant IOR, though, to simplify matters. Also, if the refractive indices are wavelength-dependent, you will also observe dispersion effects in the transmitted light. Transparency? You may also want to handle transparency if the internal medium is not opaque. You can use whatever method you already have in place to render refractive surfaces, using the final transmitted angle (cos2 in the pseudocode). This is necessary for soap bubbles. Of course, if the internal medium is opaque, this is not necessary as the transmitted light is simply absorbed. It is also possible to use this with subsurface scattering (thus representing a subsurface scattering material with a thin film coating) by using the transmitted light (suitably refracted, as mentioned above). Here is a render of a model with a soap-bubble-like BRDF, rendered with ray tracing. In this case, there is no visible refraction because soap bubbles are simply an air/water/air interface, so the final transmitted angle is the same as the incident angle: Final notes A good selection of parameters is essential to obtain realistic results. For instance, the film thickness should be on the order of light's wavelength (a few hundred nanometers). 
As you increase the thickness, interference effects disappear and as the thickness tends to zero, you just get ordinary Fresnel reflection, as mentioned previously. Make sure to use correct refractive indices for your materials. The range of values which can produce interference effects is quite narrow, so the parameters have to be accurate. For metals or materials where the refractive index varies considerably over the visible spectrum, such as copper, three refractive indices (one per RGB channel) should be used for physical accuracy if possible. This requires only minor changes to the BRDF, as everything can be vectorized. It suffices to make the IOR parameters 3-component vectors and vectorize the Fresnel coefficient functions. The computational cost is therefore exactly the same. One last point is that for non-solid thin films, such as oil or water coatings, the thickness of the layer is probably not constant at every point of a given object. As an example, soap bubbles are thicker at the bottom than at the top, due to gravity. For a convincing render, this should be taken into account. As a result, thin film thickness should probably be a vertex attribute rather than a material attribute, or, alternatively, a more general reflectance model should be considered (such as a spatially varying BRDF). Adding some noise to the film thickness can also go very far in improving the appearance of some materials, and it is convenient to implement. Attached is the zipped BRDF Explorer script so that you may play around with it at your leisure. Article Update Log 26 Aug 2013: Converted all externally linked formulas to actual LaTeX code, removed now redundant images. 2 May 2013: Fixed a couple of bugs in the shader, corrected a few typos and improved formatting. 30 Apr 2013: Added some notes about interesting variations to apply to film thickness and on optimizations. 29 Apr 2013: Added notes on the motivation of neglecting absorption effects. 
28 Apr 2013: Added notes on IOR and physical accuracy of solution. 27 Apr 2013: Added extra render, improved formatting. 26 Apr 2013: Added BRDF and some renders. 25 Apr 2013: Began writing article. About the Author(s) I'm a second year university student fascinated in all things related to light and to a greater extent computer graphics. I like to explore in depth various curious aspects of rendering, improve my understanding of physics and nature as a whole, and help propagate this knowledge among the community. License GDOL (Gamedev.net Open License) Great article! Obviously there's lots of math to cover in such a short article. I would love to dig deeper into these equations. I thought you presented the high level details very well. As only a second year University student, you have an impressive understanding of this material. Did you teach yourself most of this stuff? I started young myself (I'm just graduating University), but I only recently began digging into the more theoretical and challenging math behind graphics. At any rate, nice job! ZBethel - thanks! Yes, it's quite a lot to digest, so I skipped over most of the details and just gave a rough outline of the reasoning, which I'm not too happy about but I didn't want to end up with 90% math and 10% practical application. I taught myself most of it (with some help from a first year electromagnetism course) but I have a rather shallow understanding of the subject, to be fair. I understand what is happening but I would easily get lost when "going off the beaten path", for instance I am still not quite sure what actually happens with complex refractive indices. I intend to further study electromagnetism in my own time since all my time at uni is taken up with math and CS and I'm not really interested in anything else in physics. Computer graphics is a wonderful field, there is just so much to learn and there is a healthy balance between theory and application. 
Myself I prefer the theoretical aspects more, and it is indeed challenging at times. Not so much doing the math itself but grasping the physics behind it, in my experience. Great read! I really like physically based rendering articles. Is there an error in the attached file? In my PC the BRDF explorer just displays a white sphere... TiagoCosta - thanks! No there shouldn't be any error, it works fine on my computer. Does BRDF explorer churn out any error messages (in the console)? It's possible I made some assumptions about GLSL, I am not too familiar with it. It does. (The first 3 lines are not caused by your brdf) Thanks Tiago, it seems your version of GLSL isn't happy with implicit casting, and doesn't like the const precalculations (I was actually unsure about that, but it worked over here so I left it there). I will fix the shader ASAP. Can you check if moving the phase change calculations (just below the definition of pi) into the function and removing the const attribute makes the relevant errors go away? And for the other one, replace the "vec3(1)" by "vec3(1, 1, 1)"? I think that should fix it. You have done a great job in describing a complex effect like this in such a short space. I think you have found a good balance between math and applications. Trying the sample I got the same error as TiagoCosta. The solution I have found is to compute delta10, delta12 and delta in the shader (thus not precomputing them and changing their types to float) and to write a cast in the line After these changes it compiles, but I haven't checked if it correctly works. Ah, thanks apatriarca. Yes, it should be equivalent, as long as vec3(float) produces a vec3 with all three components set to the same thing (which is what I assumed). I just checked and it is indeed equivalent. I wish BRDF explorer would set the BRDF parameters to be constant, though. It's kind of stupid to not be able to precompute them outside the shader. 
But I guess in real applications this step can be done on the CPU anyhow (or even assumed based on the possible range of each refractive index, for instance I am pretty sure the external IOR will almost always be 1 for realtime graphics, and often the internal IOR will be greater than the thinfilm IOR for non-transparent materials) I have modified the shader, all should be good now. And cheers I think that the issues some people may be having is the floats being written as integers . Not all drivers will automatically cast (2) into (2.0). All nVidia drivers will and newer ATI drivers will as well, but most others will not. Anyways, this is an awesome article, thank you. @marc: thanks! I will look into the float casting issue you raise, I'd like the shader to work out of the box so I'll make sure to implement your suggestions I have updated the article to use the new LaTeX tags instead of linking external images from a third party LaTeX rendering service. I have triple-checked everything to make sure I didn't introduce any error or broken any link, but let me know if you spot one so I can fix it.
http://www.gamedev.net/page/resources/_/technical/graphics-programming-and-theory/thin-film-interference-for-computer-graphics-r2962
On Tuesday 24 February 2009, Shane Hathaway wrote:
> I've noticed that nearly all packages that depend on zope.publisher
> depend only on a few pieces of it:
>
> - zope.publisher.interfaces

Can you give examples?

> - zope.publisher.browser.Browser{View|Page}
>
> - zope.publisher.browser.TestRequest

Packages that depend on those classes usually more or less implicitly depend on zope.publisher. So the split might be arbitrary for this example.

> One simple, low-risk refactoring I would like to do is move
> zope.publisher.interfaces into its own package, make zope.publisher a
> namespace package, and make zope.publisher depend on
> zope.publisher.interfaces. The __init__.py in zope.publisher is already
> empty, so I expect the namespace conversion to be safe. Then I'd like
> to refine the dependency list of various packages that only require
> zope.publisher.interfaces. Any objections?

I want to see some motivation, because I fail to see how this helps.

> It is less clear what we should do with BrowserView and BrowserPage.
> They depend on zope.location, unlike the rest of zope.publisher, so they
> don't really fit there. Perhaps those two belong in a new package,
> "zope.publisher.browserbase".

I do agree with moving BrowserView and BrowserPage out of the publisher, because they introduce the zope.location dependency.

> There is also the tiny new "zope.browser"
> package. Would it make sense to move them there? (It's hard to tell
> what the intent of the new package is.) I'd love to hear other
> suggestions.

I think the purpose of the package is still defining itself. I think it will be defined by the things that we move into it. I am very tempted to say that it is a good home for BrowserView and BrowserPage.

> As for TestRequest, I could update the setup.py of various packages that
> currently depend on zope.publisher just for TestRequest. I would make
> zope.publisher a test-only requirement.
TestRequest does not add any additional dependencies to the system, so what's the point? It will depend on zope.publisher.browser anyways.

Regards,
Stephan
--
Stephan Richter
Web Software Design, Development and Training
Google me. "Zope Stephan Richter"
_______________________________________________
Zope-Dev maillist - Zope-Dev@zope.org
** No cross posts or HTML encoding! **
https://www.mail-archive.com/zope-dev@zope.org/msg27948.html
On 2/04/2010 5:18 AM, Fernando Lopes Giovanini wrote:
>

Does this happen when you run it as a normal Python script? If so, have you tried fiddling your setup.py to force it to find "gio"?

> As my program is very simple, this is the source code:
>
> #!/usr/bin/python
> # -*- coding: latin-1 -*-

Nothing to do with your problem, but the above declaration (and that of the mail header (Content-Type: text/plain; charset="iso-8859-1")) conflicts with the ACTUAL encoding, which is UTF-8.

As my program is very simple, this is the source code:

#!/usr/bin/python
# -*- coding: latin-1 -*-

import pygtk
pygtk.require("2.0");
import gtk

class Parecer:
    def __init__(self):
        GLADE_FILE = 'viewConsulta.glade'
        # build our builder
        builder = gtk.Builder();
        # load the .glade file into the builder
        builder.add_from_file(GLADE_FILE);
        # connect the signals defined while building the interface
        # to methods of the same name on the current object
        builder.connect_signals(self);
        # load the widget created in glade3, by name
        self.main_window = builder.get_object("main_window");

    def sair(self, window):
        window.hide();  # minimizes the window
        gtk.main_quit();  # ends the gtk loop

    def show(self):
        self.main_window.show_all();

app = Parecer();
if __name__ == "__main__":
    app.show();
    gtk.main();

Thanks to all.

Fernando

On 31/03/2010 20:44, Michael Taylor wrote:
> Hi,
>
> I'm trying to sign a py2exe produced executable with signtool.exe, the
> standard signing tool. Executables are signed like this:
> signtool.exe sign /a /t
> "myexecutable.exe"
>
> With other programs (not built with py2exe), this works fine.
> However, when I try this with my py2exe generated executable, my application
> fails to launch and produces this error message:
>
> Traceback (most recent call last):
>   File "C:\Python25\lib\site-packages\py2exe\boot_common.py", line 92, in <module>
>     import linecache
> ImportError: No module named linecache
>
> My application works perfectly without the signing. I'm using py2exe
> 0.6.9 and python 2.5. Any ideas? Maybe someone could point me to
> something in the build_exe.py script that I could look at to try to
> resolve this on my own?

Just a guess, are you using bundle_file = 1?

Werner
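For context on Werner's question: bundle_files is a py2exe option set in setup.py. The sketch below shows where it lives (the script name is a placeholder; with py2exe, 3 is the default "bundle nothing" mode, 2 bundles everything except the Python DLL, and 1 bundles everything into the executable):

```python
# A sketch of a py2exe setup.py; "myexecutable.py" is a placeholder name.
py2exe_options = {
    "py2exe": {
        "bundle_files": 3,  # 3 = don't bundle (default); 2 = all but the Python DLL; 1 = everything
        "compressed": True,
    }
}

if __name__ == "__main__":
    import sys
    if "py2exe" in sys.argv:  # only meaningful with py2exe installed (Python 2.x era)
        from distutils.core import setup
        import py2exe  # registers the "py2exe" distutils command
        setup(console=["myexecutable.py"], options=py2exe_options, zipfile=None)
```

Which bundle mode is in use matters here because it changes what ends up appended to the .exe that the signing tool then rewrites.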
http://sourceforge.net/p/py2exe/mailman/py2exe-users/?viewmonth=201004&viewday=1&style=flat
Integrating Custom Claims-Based Authentication with Kentico

Bryan Soltis — Nov 1, 2017

Tags: authentication, claims-based authentication, global event handler

So, you have a requirement to use single sign-on, but you're not sure how to handle the user within Kentico? This scenario is an extremely common one, as many companies leverage 3rd party systems to authenticate their users. When developers are faced with this task in Kentico, they may not know what code they need to add to the site to get it all working. In this blog, I'll walk you through the process and get you authenticating in no time. Let's go!

Planning the process

Every good solution starts with a plan, so making sure you understand exactly how your integration is going to work is key. You need to figure out how and when your users will be authenticated, and more importantly, how that handoff will happen between your systems. It may help to draw out this process, so everyone is clear on the integration points. Here is my process for my demo environment:

Deciding how to authenticate

With a process nailed down, you can start selecting the components. You are probably using claims-based authentication for single sign-on. If so, you need to decide who is the authority you're going to use. You will want to pick a system that is flexible enough to meet your needs, while ensuring availability and performance. For my demo, I chose to authenticate Google users.

Choose a provider

The next step of your journey is choosing a provider. There are so many options when it comes to authentication, it's really up to you and your requirements. As long as the provider can properly authenticate the user against your chosen authority and create the claims token, you should be good to go. In my case, I chose to use Auth0 to validate my Google credentials. While you'll see some Auth0 code in this article, I'll cover that integration in more detail in weeks to come.
For now, just know that I'm going to use a few of the Auth0 API calls in this blog. Every provider will need some information about the application, so you will want to configure your allowed origins, callback URLs, and other settings that you may need in your process.

Learn more about Auth0

Handle the authentication event

With the provider and authority figured out, you are ready to change some Kentico code. As our documentation shows, it's very easy to add a global event handler for authentication events. By handling the SecurityEvents.AuthenticationRequested.Execute event, you can call your custom code to process the authentication. This usually involves creating a client to redirect the user to the provider for processing. This event is called anytime the user tries to access an area of the site that requires authentication (/admin or another secure page). This means you can add one handler for the entire site to ensure all users are authenticated properly. Here's my global event handler for capturing the AuthenticationRequested.Execute event.

// Contains initialization code that is executed when the application starts
protected override void OnInit()
{
    base.OnInit();

    // Assigns a handler to the SecurityEvents.AuthenticationRequested.Execute event
    // This event occurs when users attempt to access a restricted section of Kentico
    SecurityEvents.AuthenticationRequested.Execute += SignIn_Execute;
}

The SignIn_Execute method may contain all sorts of code your provider may need to process the request properly. In my case, this is where I added some Auth0 API code to transfer the user to the Auth0 site, while setting some parameters. After formatting the request, I redirected the user. In my next blog, I'll break down that code.

Add a callback

Sending the user to the provider is only half the battle. The real code comes into play when they come back to your site.
While providers work differently, they all depend on some endpoint within your application to send the user to, along with the authentication token. You can create a web service, API, or other component for this task. In the case of Auth0, adding the NuGet packages generates an ASHX file within your application to handle these requests. This handler also contains Auth0 API code to parse the token and validate the request.

public class LoginCallback : HttpTaskAsyncHandler
{
    public override async Task ProcessRequestAsync(HttpContext context)
    {
        AuthenticationApiClient client = new AuthenticationApiClient(
            new Uri(string.Format("https://{0}", ConfigurationManager.AppSettings["auth0:Domain"])));

        var token = await client.GetTokenAsync(new AuthorizationCodeTokenRequest
        {
            ClientId = ConfigurationManager.AppSettings["auth0:ClientId"],
            ClientSecret = ConfigurationManager.AppSettings["auth0:ClientSecret"],
            Code = context.Request.QueryString["code"],
            RedirectUri = context.Request.Url.ToString()
        });
        ...
    }
}

Create the user's account

With the callback handler in place, you are ready to add your Kentico code. When the user comes back from your provider and is authenticated, you will want to set their user information within your site. This means you will need to add code to check for an existing account, and create one if necessary. The API Examples in the Kentico documentation can give you some straightforward code to do this. In my demo, I checked for an existing account and created a new one, if needed. I populated the user information with profile details Auth0 passed in the callback.
// Check if the user exists in Kentico
CMS.Membership.UserInfo ui = UserInfoProvider.GetUserInfo(profile.NickName + "@gmail.com");
if (ui == null)
{
    // Creates a new user object
    CMS.Membership.UserInfo uiNew = new CMS.Membership.UserInfo();

    // Sets the user properties
    uiNew.UserName = profile.NickName + "@gmail.com";
    uiNew.Email = profile.NickName + "@gmail.com";
    uiNew.FirstName = profile.FirstName;
    uiNew.LastName = profile.LastName;
    uiNew.FullName = profile.FullName;
    uiNew.IsExternal = true;
    uiNew.Enabled = true;

    // Saves the user to the database
    UserInfoProvider.SetUserInfo(uiNew);
    ui = uiNew;
}

In my demo, I'm setting some basic user details. You may want to extend this functionality to assign roles, permissions, and other user settings. Also note that I marked the user as IsExternal and Enabled.

Note: The IsExternal flag tells Kentico to not send the user to the normal login page, but that they will be authenticated somewhere else. If you don't set this, anyone could log in with a blank password.

Log the user in

Now that you know who the user is and have an account, you will want to log them into your site. You may even want to redirect them to a particular area or page based on some characteristic or property. For my demo, I wanted the user to continue to the requested secure page. To log them in, I added the following code in the ASHX file.

if (ui != null)
{
    // Log the user in
    AuthenticationHelper.AuthenticateUser(ui.UserName, true);

    var returnTo = "/";
    var state = context.Request.QueryString["state"];
    if (state != null)
    {
        var stateValues = HttpUtility.ParseQueryString(context.Request.QueryString["state"]);
        var redirectUrl = stateValues["ru"];
        returnTo = redirectUrl;
    }

    // Redirect to the requested page
    context.Response.Redirect(returnTo);
}

The above code sets the user context to the selected user, and redirects to their original page. This value was stored as part of the initial call to Auth0 in the state parameter.
Testing

With the integration in place, I was ready to test. First, I created a secure area of my site to have a simple test environment.

NOTE: In order to continue using the administrator (or other Kentico-based accounts), you will want to have a page with a sign-in form. This will allow you to specify these credentials and not go through the new authentication process. Alternatively, you could create the account using the new process, and then update the new Kentico account with the appropriate roles and permissions.

Next, I accessed the site as a non-authenticated user and attempted to go to the new /Secure page. I confirmed the site properly redirected me to my provider (Auth0 in my case). I clicked the Log in with Google link and entered my Google account information. After entering my password, I completed the process and verified I was redirected back to the /Secure page of my site. This confirmed I completed the authentication process and was redirected back to the requested page. As a final validation step, I checked to make sure that the new user was created within Kentico, with the correct properties set.

Moving forward

As this blog shows, it's very easy to configure your Kentico sites to use claims-based authentication. This integration lets you quickly validate your user accounts against a 3rd party, and create their account within your site. By leveraging a solid provider, you can even get sample code for your callback handlers and token validation, to simplify the process even more. In a couple weeks, I'll follow up this blog with one that covers the entire Auth0 integration process, so stay tuned! In the meantime, I hope you get your authentication working just the way you want it.
https://devnet.kentico.com/articles/integrating-custom-claims-based-authentication-with-kentico?feed=ccaebdb2-fa45-4245-8590-3d04b730592e
OK, I have been working on this for a few days and cannot get my main right. I have tried so many different things and couldn't get it working; any help is appreciated.

Design, write, and test a program that simulates some aspects of a simple savings account in a bank. As you read the problem description, sketch a UML diagram of the ACCT class in the handy UML form below. The box will fit the text you type. Refer to page 654 of the Malik book to remind yourself of the UML format.

Your dotNet project should contain three files:

Acct.h – The Acct class header file
Acct.cpp – The Acct class implementation file
AcctTest.cpp – A file containing a main( ) to test your Acct class.

First you need to describe an account class which contains the following:

1. the private data member includes:
   a. the balance in dollars and cents (double data type).
2. the public member functions include:
   a. the default constructor – should initialize the balance to 0,
   b. an explicit value constructor – which sets the balance to whatever value is passed in,
   c. a function called getbalance that returns the balance of the account,
   d. a function called deposit that deposits money into the account, and
   e. a function called withdraw that withdraws from the account if the withdrawal amount does NOT exceed the account balance.

The getbalance member function has no arguments. This function should return the current balance in the account. The single argument of the deposit function is the deposit amount. The deposit function should add the deposit amount to the current balance. The deposit function should return the new balance. The single argument of the withdraw member function is the withdrawal amount. The withdraw function should attempt to withdraw the withdrawal amount from the current balance. If the withdrawal amount is less than or equal to the current balance, then reduce the current balance by the withdrawal amount. If the withdrawal amount is greater than the current balance, do not change the balance.
The withdraw function should return a boolean indicating whether the withdrawal was successful or not. Return true if successful. The main program should thoroughly test the account class. Make sure to test both the default and the explicit value constructors. For each transaction, output what transaction you are about to perform on which account. After each transaction, be sure to display the balance in the account. When you have your test program and account class working, get a sign off from the instructor. Turn in your UML diagram, a screen shot of your program's output, and the source code for all three files you wrote.

Sorry for the long problem, I just didn't want to leave anything out. Here is what I have so far.

Header file:

class accountFunction
{
public:
    void withdraw(double userWithdraw);
    void deposit(double userDeposit);
    accountFunction(double);
    double getBalance() const;
    accountFunction(); // default constructor
private:
    double bal;
};

Implementation file:

#include <iostream>
#include "Acct.h"
using namespace std;

void accountFunction::deposit(double userDeposit)
{
    bal = bal + userDeposit;
}

void accountFunction::withdraw(double userWithdraw)
{
    if (userWithdraw < bal)
    {
        cout << " Account balance too low ";
    }
    else (userWithdraw > bal);
    {
        bal = bal - userWithdraw;
    }
}

double accountFunction::getBalance() const
{
    return bal;
}

Test file:

#include <iostream>
#include "Acct.h"
using namespace std;

int main
{
https://www.daniweb.com/programming/software-development/threads/173351/accounting-code
> module Shady.Language.Share (cse) where

Imports
=======

> import Prelude hiding (foldr)
>
> import Data.Function (on)
> import Data.Ord (comparing)
> import Data.List (sortBy)
> import Control.Applicative (Applicative(..),liftA2,(<$>))
> import Control.Arrow (first,second,(&&&))
> import Data.Foldable (foldr)
>
> import qualified Control.Monad.State as S
>
> import Data.Map (Map)
> import qualified Data.Map as Map
> import Data.Set (Set)
> import qualified Data.Set as Set
>
> import Data.Proof.EQ
> import Shady.Language.Exp

< import Debug.Trace

Common subexpression elimination
================================

To eliminate common subexpressions, convert from expression to graph (dag) and back to expression. The final expression will use `let` (as beta redexes) to abstract out expressions that appear more than once.

> cse :: HasType a => E a -> E a
> cse = undagify . dagify

Graphs
======

A graph is a map from expressions to variable names, plus a root expression (typically a variable).

> type Graph a = (E a, Map TExp Id)

A `TExp` wraps an expression, encapsulating the type. I'll also include the result of `show`, since I use it in comparisons, which I expect to cause it to be accessed repeatedly.

> data TExp = forall a. HasType a => TExp (E a) String

> tExp :: HasType a => E a -> TExp
> tExp e = TExp e (show e)

> instance Show TExp where show (TExp _ s) = s

The reason for mapping from an expression to index instead of vice versa is just that it's more efficient to build in this direction. We'll invert the map later when we convert back from `Graph` to `E`.

> invertMap :: Ord v => Map k v -> Map v k
> invertMap = Map.fromList . map (\ (n,i) -> (i,n)) . Map.toList

To use `TExp` as a map key, it'll have to be ordered. For simplicity, I'll just use the printed form of the `E`.

> instance Eq TExp where
>   TExp _ s == TExp _ t = s == t
>
> instance Ord TExp where
>   TExp _ s `compare` TExp _ t = s `compare` t

Conversion from E to Graph (dag)
================================

I'll structure conversion from `E` to Graph (dag) around a monad for computations that accumulate an exp map and a list of unused names. The names are guaranteed to be in ascending order so that we can trivially top-sort the graph later.

> type ExpMap = Map TExp Id
> type GraphM = S.State (ExpMap, [Id])

> dagify :: HasType a => E a -> Graph a
> dagify e = second fst $ S.runState (dagifyExp e) (Map.empty, ids)
>  where
>    allIds, ids :: [Id]
>    allIds = "" : [c:name | name <- allIds, c <- ['a'..'z']]
>    ids = filter (not . (`Set.member` eVars)) (map reverse (tail allIds))
>    eVars :: Set Id
>    eVars = vars e

The name list is not alphabetized, and moreover could not be alphabetized. Define a comparison function, which compares length/string pairs.

> compareIds :: String -> String -> Ordering
> compareIds = comparing (length &&& id)

Graph construction works by recursively constructing and inserting expression/name pairs:

> dagifyExp :: HasType a => E a -> GraphM (E a)
> dagifyExp e = dagN e >>= insertG
>
> dagN :: HasType a => E a -> GraphM (E a)
> dagN (Var v)   = pure $ Var v
> dagN (Op o)    = pure $ Op o
> dagN (f :^ a)  = liftA2 (:^) (dagifyExp f) (dagifyExp a)
> -- dagN (Lam v b) = Lam v <$> dagifyExp b
> dagN (Lam _ _) = error "dagN: Can't yet perform CSE on Lam"

If the given expression is already in the graph, reuse the existing identifier. Otherwise, insert it, giving it a new identifier.

> insertG :: HasType a => E a -> GraphM (E a)
> insertG e | not (abstractable e) = return e
>           | otherwise = maybe (addExp e) return
>                           =<< findExp e <$> S.gets fst
>
> addExp :: HasType a => E a -> GraphM (E a)
> addExp e = do name <- genId
>               S.modify (first (Map.insert (tExp e) name))
>               return (Var (var name))

Needing `HasType` in `insertG` forced me to add it several other places, including in the `E` constructor types.

An expression is abstractable if it has base type and is non-trivial.

< abstractable :: HasType a => E a -> Bool
< abstractable e = nonTrivial e && isBaseType (typeOf1 e)

< nonTrivial :: Exp a -> Bool
< nonTrivial (_ :^ _) = True
< nonTrivial _        = False

> isBaseType :: Type a -> Bool
> isBaseType (VecT _) = True
> isBaseType _        = False

On second thought, omit the `nonTrivial` condition. With GLSL, it's worthwhile even abstracting literals.

> abstractable :: HasType a => E a -> Bool
> abstractable e = isBaseType (typeOf1 e)

Identifier generation is as usual, accessing and incrementing the counter state:

> genId :: GraphM Id
> genId = do (m,name:names) <- S.get
>            S.put (m,names)
>            return name

To search for an exp in the accumulated map,

> findExp :: HasType a => E a -> ExpMap -> Maybe (E a)
> findExp e = fmap (Var . var) . Map.lookup (tExp e)

Free variables
==============

Count all variable occurrences in an expression:

> countOccs :: E a -> Map Id Int
> countOccs (Var (V n _))   = Map.singleton n 1
> countOccs (Op _)          = Map.empty
> countOccs (f :^ a)        = Map.unionWith (+) (countOccs f) (countOccs a)
> countOccs (Lam (V n _) b) = Map.delete n (countOccs b)

> tCountOccs :: TExp -> Map Id Int
> tCountOccs (TExp e _) = countOccs e

Also handy will be extracting all variables free & bound:

> vars :: E a -> Set Id
> vars (Var (V n _))   = Set.singleton n
> vars (Op _)          = Set.empty
> vars (f :^ a)        = vars f `Set.union` vars a
> vars (Lam (V n _) b) = Set.insert n (vars b)

Conversion from Graph (dag) to E
================================

Given a `Graph`, let's now build an `E`, with sharing. Recall the `Graph` type and map inversion, defined above:

< type Graph a = (E a, ExpMap)

To rebuild an `E`, walk through the inverted map in order, generating a `let` for each binding.

< undagify :: forall a. HasType a => Graph a -> E a
< undagify (root,expToId) =
<   foldr bind root (sortedBinds (invertMap expToId))
<  where
<    bind :: (Id,TExp) -> E a -> E a
<    bind (name, TExp rhs) = lett name rhs

> sortedBinds :: Map Id TExp -> [(Id,TExp)]
> sortedBinds = sortBy (compareIds `on` fst) . Map.toList

Inlining
--------

To minimize the `let` bindings, let's re-inline all bindings that are used only once. To know which bindings are used only once, count them.

> inlinables :: HasType a => Graph a -> Set Id

< inlinables = const Set.empty -- temp

> inlinables g = asSet $ (== 1) <$> countUses g

> countUses :: HasType a => Graph a -> Map Id Int
> countUses (e,m) = Map.unionsWith (+) (map tCountOccs (tExp e : Map.keys m))

Turn a boolean map (characteristic function) into a set:

> asSet :: Ord k => Map k Bool -> Set k
> asSet = Set.fromList . Map.keys . Map.filter id

Now revisit `undagify`, performing some inlining along the way.

> undagify :: forall a. HasType a => Graph a -> E a
> undagify g@(root,expToId) = foldr bind (inline root) (sortedBinds texps)
>  where
>    texps :: Map Id TExp
>    texps = invertMap expToId
>    ins :: Set Id
>    ins = inlinables g
>    bind :: (Id,TExp) -> E a -> E a
>    bind (name, TExp rhs _) = lett' name (inline rhs)
>    -- Inline texps in an expression
>    inline :: E b -> E b
>    inline (Var v@(V name _)) | Set.member name ins, Just e' <- tLookup v texps = inline e'
>    inline (f :^ a)  = inline f :^ inline a
>    inline (Lam v b) = Lam v (inline b)  -- assumes no shadowing
>    inline e = e
>    -- Make a let binding unless an inlined variable.
>    lett' :: (HasType b, HasType c) =>
>             Id -> E b -> E c -> E c
>    lett' n rhs | Set.member n ins = id
>                | otherwise        = letE (var n) rhs

For the inlining step, we'll have to look up a variable in the map, and check that it has the required type.

> tLookup :: V a -> Map Id TExp -> Maybe (E a)
> tLookup (V name tya) m = fromTExp tya <$> Map.lookup name m

> fromTExp :: Type a -> TExp -> E a
> fromTExp tya (TExp e _) | Just Refl <- typeOf1 e `tyEq` tya = e
>                         | otherwise = error "fromTExp type fail"

I'm not satisfied having to deal with the type check explicitly here. Maybe a different abstraction would help; perhaps a type-safe homogeneous map instead of `Map Id TExp`.
http://hackage.haskell.org/package/shady-gen-0.5.1/docs/src/Shady-Language-Share.html
In this article, we will learn about C++ strings. In C++, the string is an object of the std::string class that represents a sequence of characters. We can perform many operations on strings such as concatenation, comparison, conversion, etc.

C++ String Example

Let's see a simple example of a C++ string.

#include <iostream>
#include <string>
using namespace std;

int main()
{
    string s1 = "Hello";
    char ch[] = { 'C', '+', '+', '\0' };  // note: the terminating '\0' is
                                          // needed by the string(const char*)
                                          // constructor
    string s2 = string(ch);
    cout << s1 << endl;
    cout << s2 << endl;
}

Output:

Hello
C++

C++ String Compare Example

Let's see a simple example of string comparison using the strcmp() function.

#include <iostream>
#include <cstring>
using namespace std;

int main()
{
    char key[] = "mango";
    char buffer[50];
    do {
        cout << "What is my favourite fruit? ";
        cin >> buffer;
    } while (strcmp(key, buffer) != 0);
    cout << "Answer is correct!!" << endl;
    return 0;
}

Output:

What is my favourite fruit? apple
What is my favourite fruit? banana
What is my favourite fruit? mango
Answer is correct!!
https://blog.codehunger.in/c-plus-plus-strings/
/*
 * telnet.h
 *
 * TELNET Socket
 *
 * $Log: telnet.h,v $
 * Revision 1.24  2005/11/30 12:47:37  csoutheren
 * Removed tabs, reformatted some code, and changed tags for Doxygen
 *
 * Revision 1.23  2002/11/06 22:47:24  robertj
 * Fixed header comment (copyright etc)
 *
 * Revision 1.22  2002/09/16 01:08:59  robertj
 * Added #define so can select if #pragma interface/implementation is used on
 * platform basis (eg MacOS) rather than compiler, thanks Robert Monaghan.
 *
 * Revision 1.21  1999/03/09 08:01:47  robertj
 * Changed comments for doc++ support (more to come).
 *
 * Revision 1.20  1999/02/16 08:07:10  robertj
 * MSVC 6.0 compatibility changes.
 *
 * Revision 1.19  1998/11/30 02:50:56  robertj
 * New directory structure
 *
 * Revision 1.18  1998/09/23 06:20:04  robertj
 * Added open source copyright license.
 *
 * Revision 1.17  1996/08/08 10:08:54  robertj
 * Directory structure changes for common files.
 *
 * Revision 1.16  1995/06/17 11:13:32  robertj
 * Documentation update.
 *
 * Revision 1.15  1995/06/17 00:47:38  robertj
 * Changed overloaded Open() calls to 3 separate function names.
 * More logical design of port numbers and service names.
 *
 * Revision 1.14  1995/06/04 12:46:26  robertj
 * Slight redesign of port numbers on sockets.
 *
 * Revision 1.13  1995/04/25 11:12:30  robertj
 * Fixed functions hiding ancestor virtuals.
 *
 * Revision 1.12  1995/04/01 08:32:10  robertj
 * Finally got a working TELNET.
 *
 * Revision 1.11  1995/03/18 06:27:50  robertj
 * Rewrite of telnet socket protocol according to RFC1143.
 *
 * Revision 1.10  1995/03/14 12:42:47  robertj
 * Updated documentation to use HTML codes.
 *
 * Revision 1.9  1995/02/21 11:25:33  robertj
 * Further implementation of telnet socket, feature complete now.
 *
 * Revision 1.8  1995/01/03 09:36:23  robertj
 * Documentation.
 *
 * Revision 1.7  1995/01/01 01:07:33  robertj
 * More implementation.
 *
 * Revision 1.6  1994/11/28 12:38:59  robertj
 * Added DONT and WONT states.
 *
 * Revision 1.5  1994/08/23 11:32:52  robertj
 * Oops
 *
 * Revision 1.4  1994/08/22 00:46:48  robertj
 * Added pragma fro GNU C++ compiler.
 *
 * Revision 1.3  1994/08/21 23:43:02  robertj
 * Changed type of socket port number for better portability.
 *
 * Revision 1.2  1994/07/25 03:36:03  robertj
 * Added sockets to common, normalising to same comment standard.
 *
 */

#ifndef _PTELNETSOCKET
#define _PTELNETSOCKET

#ifdef P_USE_PRAGMA
#pragma interface
#endif

#include <ptlib/sockets.h>


/** A TCP/IP socket for the TELNET high level protocol.
 */
class PTelnetSocket : public PTCPSocket
{
  PCLASSINFO(PTelnetSocket, PTCPSocket)

  public:
    PTelnetSocket();
    // Create an unopened TELNET socket.

    PTelnetSocket(
      const PString & address   ///< Address of remote machine to connect to.
    );
    // Create an opened TELNET socket.


  // Overrides from class PChannel
    /** The TELNET channel intercepts and escapes commands in the data stream
       to implement the TELNET protocol.

       @return
       TRUE indicates that at least one character was read from the channel.
       FALSE means no bytes were read due to timeout or some other I/O error.
     */
    BOOL Read(
      void * buf,       ///< Pointer to a block of memory to read into.
      PINDEX len        ///< Maximum number of bytes to read.
    );

    /** The TELNET channel intercepts and escapes commands in the data stream
       to implement the TELNET protocol.

       Returns TRUE if at least len bytes were written to the channel.
     */
    BOOL Write(
      const void * buf, ///< Pointer to a block of memory to write.
      PINDEX len        ///< Number of bytes to write.
    );

    /** Connect a socket to a remote host on the specified port number. This is
       typically used by the client or initiator of a communications channel.
       This connects to a "listening" socket at the other end of the
       communications channel.

       The port number as defined by the object instance construction or the
       <A>PIPSocket::SetPort()</A> function.

       @return
       TRUE if the channel was successfully connected to the remote host.
     */
    virtual BOOL Connect(
      const PString & address   ///< Address of remote machine to connect to.
    );

    /** Accept a connection made by the <A>Listen()</A> command of the
       <CODE>socket</CODE> parameter.

       Note that this function will block until a remote system connects to the
       port number specified in the "listening" socket.

       @return
       TRUE if the channel was successfully opened.
     */
    virtual BOOL Accept(
      PSocket & socket   ///< Listening socket making the connection.
    );

    /** This is a callback function called by the system whenever out of band
       data from the TCP/IP stream is received. A descendent class may
       interpret this data according to the semantics of the high level
       protocol. The TELNET socket uses this for synchronisation.
     */
    virtual void OnOutOfBand(
      const void * buf,   ///< Data to be received as URGENT TCP data.
      PINDEX len          ///< Number of bytes pointed to by <CODE>buf</CODE>.
    );


  // New functions
    enum Command {
      IAC              = 255,   ///< Interpret As Command - escape character.
      DONT             = 254,   ///< You are not to use option.
      DO               = 253,   ///< Request to use option.
      WONT             = 252,   ///< Refuse use of option.
      WILL             = 251,   ///< Accept the use of option.
      SB               = 250,   ///< Subnegotiation begin.
      GoAhead          = 249,   ///< Function GA, you may reverse the line.
      EraseLine        = 248,   ///< Function EL, erase the current line.
      EraseChar        = 247,   ///< Function EC, erase the current character.
      AreYouThere      = 246,   ///< Function AYT, are you there?
      AbortOutput      = 245,   ///< Function AO, abort output stream.
      InterruptProcess = 244,   ///< Function IP, interrupt process, permanently.
      Break            = 243,   ///< NVT character break.
      DataMark         = 242,   ///< Marker for connection cleaning.
      NOP              = 241,   ///< No operation.
      SE               = 240,   ///< Subnegotiation end.
      EndOfReccord     = 239,   ///< End of record for transparent mode.
      AbortProcess     = 238,   ///< Abort the entire process.
      SuspendProcess   = 237,   ///< Suspend the process.
      EndOfFile        = 236    ///< End of file marker.
    };
    // Defined telnet command codes.

    /** Send an escaped IAC command. The <CODE>opt</CODE> parameter's meaning
       depends on the command being sent:

       <DL>
       <DT>DO, DONT, WILL, WONT
            <DD><CODE>opt</CODE> is Options code.
       <DT>AbortOutput
            <DD>TRUE is flush buffer.
       <DT>InterruptProcess, Break, AbortProcess, SuspendProcess
            <DD>TRUE is synchronise.
       </DL>

       Synchronises the TELNET streams, inserts the data mark into outgoing
       data stream and sends an out of band data to the remote to flush all
       data in the stream up until the synchronisation command.

       @return
       TRUE if the command was successfully sent.
     */
    BOOL SendCommand(
      Command cmd,    ///< Command code to send.
      int opt = 0     ///< Option for command code.
    );

    enum Options {
      TransmitBinary      = 0,    ///< Assume binary 8 bit data is transferred.
      EchoOption          = 1,    ///< Automatically echo characters sent.
      ReconnectOption     = 2,    ///< Prepare to reconnect.
      SuppressGoAhead     = 3,    ///< Do not use the GA protocol.
      MessageSizeOption   = 4,    ///< Negotiate approximate message size.
      StatusOption        = 5,    ///< Status packets are understood.
      TimingMark          = 6,    ///< Marker for synchronisation.
      RCTEOption          = 7,    ///< Remote controlled transmission and echo.
      OutputLineWidth     = 8,    ///< Negotiate about output line width.
      OutputPageSize      = 9,    ///< Negotiate about output page size.
      CRDisposition       = 10,   ///< Negotiate about CR disposition.
      HorizontalTabsStops = 11,   ///< Negotiate about horizontal tabstops.
      HorizTabDisposition = 12,   ///< Negotiate about horizontal tab disposition.
      FormFeedDisposition = 13,   ///< Negotiate about formfeed disposition.
      VerticalTabStops    = 14,   ///< Negotiate about vertical tab stops.
      VertTabDisposition  = 15,   ///< Negotiate about vertical tab disposition.
      LineFeedDisposition = 16,   ///< Negotiate about output LF disposition.
      ExtendedASCII       = 17,   ///< Extended ASCII character set.
      ForceLogout         = 18,   ///< Force logout.
      ByteMacroOption     = 19,   ///< Byte macro.
      DataEntryTerminal   = 20,   ///< Data entry terminal.
      SupDupProtocol      = 21,   ///< SUPDUP protocol.
      SupDupOutput        = 22,   ///< SUPDUP output.
      SendLocation        = 23,   ///< Send location.
      TerminalType        = 24,   ///< Provide terminal type information.
      EndOfRecordOption   = 25,   ///< Record boundary marker.
      TACACSUID           = 26,   ///< TACACS user identification.
      OutputMark          = 27,   ///< Output marker or banner text.
      TerminalLocation    = 28,   ///< Terminal's physical location information.
      Use3270RegimeOption = 29,   ///< 3270 regime.
      UseX3PADOption      = 30,   ///< X.3 PAD.
      WindowSize          = 31,   ///< NAWS - Negotiate About Window Size.
      TerminalSpeed       = 32,   ///< Provide terminal speed information.
      FlowControl         = 33,   ///< Remote flow control.
      LineModeOption      = 34,   ///< Terminal in line mode option.
      XDisplayLocation    = 35,   ///< X Display location.
      EnvironmentOption   = 36,   ///< Provide environment information.
      AuthenticateOption  = 37,   ///< Authenticate option.
      EncriptionOption    = 38,   ///< Encryption option.
      EncryptionOption    = 38,   ///< Duplicate to fix spelling mistake and remain backwards compatible.
      ExtendedOptionsList = 255,  ///< Code for extended options.
      MaxOptions
    };
    // Defined TELNET options.

    /** Send DO request.

       @return
       TRUE if the command was successfully sent.
     */
    virtual BOOL SendDo(
      BYTE option   ///< Option to DO
    );

    /** Send DONT command.

       @return
       TRUE if the command was successfully sent.
     */
    virtual BOOL SendDont(
      BYTE option   ///< Option to DONT
    );

    /** Send WILL request.

       @return
       TRUE if the command was successfully sent.
     */
    virtual BOOL SendWill(
      BYTE option   ///< Option to WILL
    );

    /** Send WONT command.

       @return
       TRUE if the command was successfully sent.
     */
    virtual BOOL SendWont(
      BYTE option   ///< Option to WONT
    );

    enum SubOptionCodes {
      SubOptionIs   = 0,   ///< Sub-option is...
      SubOptionSend = 1,   ///< Request to send option.
    };
    // Codes for sub option negotiation.

    /** Send a sub-option with the information given.

       @return
       TRUE if the command was successfully sent.
     */
    BOOL SendSubOption(
      BYTE code,          ///< Suboptions option code.
      const BYTE * info,  ///< Information to send.
      PINDEX len,         ///< Length of information.
      int subCode = -1    ///< Suboptions sub-code, -1 indicates no sub-code.
    );

    /** Set if the option on our side is possible. This does not mean it is
       set, it only means that in response to a DO we WILL rather than WONT.
     */
    void SetOurOption(
      BYTE code,          ///< Option to check.
      BOOL state = TRUE   ///< New state for option.
    ) { option[code].weCan = state; }

    /** Set if the option on their side is desired. This does not mean it is
       set, it only means that in response to a WILL we DO rather than DONT.
     */
    void SetTheirOption(
      BYTE code,          ///< Option to check.
      BOOL state = TRUE   ///< New state for option.
    ) { option[code].theyShould = state; }

    /** Determine if the option on our side is enabled.

       @return
       TRUE if option is enabled.
     */
    BOOL IsOurOption(
      BYTE code   ///< Option to check.
    ) const { return option[code].ourState == OptionInfo::IsYes; }

    /** Determine if the option on their side is enabled.

       @return
       TRUE if option is enabled.
     */
    BOOL IsTheirOption(
      BYTE code   ///< Option to check.
    ) const { return option[code].theirState == OptionInfo::IsYes; }

    void SetTerminalType(
      const PString & newType   ///< New terminal type description string.
    );
    // Set the terminal type description string for TELNET protocol.

    const PString & GetTerminalType() const { return terminalType; }
    // Get the terminal type description string for TELNET protocol.

    void SetWindowSize(
      WORD width,   ///< New window width.
      WORD height   ///< New window height.
    );
    // Set the width and height of the Network Virtual Terminal window.

    void GetWindowSize(
      WORD & width,   ///< Old window width.
      WORD & height   ///< Old window height.
    ) const;
    // Get the width and height of the Network Virtual Terminal window.


  protected:
    void Construct();
    // Common construct code for TELNET socket channel.

    /** This callback function is called by the system when it receives a DO
       request from the remote system.

       The default action is to send a WILL for options that are understood by
       the standard TELNET class and a WONT for all others.

       @return
       TRUE if option is accepted.
     */
    virtual void OnDo(
      BYTE option   ///< Option to DO
    );

    /** This callback function is called by the system when it receives a DONT
       request from the remote system.

       The default action is to disable options that are understood by the
       standard TELNET class. All others are ignored.
     */
    virtual void OnDont(
      BYTE option   ///< Option to DONT
    );

    /** This callback function is called by the system when it receives a WILL
       request from the remote system.

       The default action is to send a DO for options that are understood by
       the standard TELNET class and a DONT for all others.
     */
    virtual void OnWill(
      BYTE option   ///< Option to WILL
    );

    /** This callback function is called by the system when it receives a WONT
       request from the remote system.

       The default action is to disable options that are understood by the
       standard TELNET class. All others are ignored.
     */
    virtual void OnWont(
      BYTE option   ///< Option to WONT
    );

    /** This callback function is called by the system when it receives a
       sub-option command from the remote system.
     */
    virtual void OnSubOption(
      BYTE code,          ///< Option code for sub-option data.
      const BYTE * info,  ///< Extra information being sent in the sub-option.
      PINDEX len          ///< Number of extra bytes.
    );

    /** This callback function is called by the system when it receives a
       telnet command that it does not do anything with. The default action
       displays a message to the <A>PError</A> stream (when <CODE>debug</CODE>
       is TRUE) and returns TRUE.

       @return
       TRUE if next byte is not part of the command.
     */
    virtual BOOL OnCommand(
      BYTE code   ///< Code received that could not be processed.
    );


  // Member variables.
    struct OptionInfo {
      enum {
        IsNo, IsYes, WantNo, WantNoQueued, WantYes, WantYesQueued
      };
      unsigned weCan:1;      // We can do the option if they want us to do.
      unsigned ourState:3;
      unsigned theyShould:1; // They should if they will.
      unsigned theirState:3;
    };

    OptionInfo option[MaxOptions];
    // Information on protocol options.

    PString terminalType;
    // Type of terminal connected to telnet socket, defaults to "UNKNOWN".

    WORD windowWidth, windowHeight;
    // Size of the "window" used by the NVT.

    BOOL debug;
    // Debug socket, output messages to PError stream.


  private:
    enum State {
      StateNormal,
      StateCarriageReturn,
      StateIAC,
      StateDo,
      StateDont,
      StateWill,
      StateWont,
      StateSubNegotiations,
      StateEndNegotiations
    };
    // Internal states for the TELNET decoder

    State state;
    // Current state of incoming characters.

    PBYTEArray subOption;
    // Storage for sub-negotiated options

    unsigned synchronising;

    BOOL StartSend(const char * which, BYTE code);
};


#endif

// End Of File ///////////////////////////////////////////////////////////////
http://pwlib.sourcearchive.com/documentation/1.10.10-3.1/telnet_8h_source.html
Jonathan Whelchel (2,096 Points)

Reverse Numbers game help

I need help with the reverse numbers game. Below is the code I have so far:

import random

secret_num = random.randint(1, 10)

def second_choice():
    second_choice = random.randint(1, 10)

def game():
    guesses = []
    while len(guesses) < 5:
        print(secret_num)
        guesses.append(secret_num)
        guess_again = input("Did I get your number right? Y/n. ")
        if guess_again == 'y':
            print("You got it")
            break
        elif guess_again == 'too high':
            print('I will choose a higher number')
            second_choice()

I cannot figure out how to get my computer to guess again and guess a different number and not repeat one he has already guessed.

4 Answers

Chris Freeman (Treehouse Moderator, 59,572 Points)

A very simple way would be to add an input question "do you want to continue" after the while loop. If yes, then simply call game() to restart. This has the minor drawback that this is a recursive call (game() calling game()) which does have a compiler limit of ~1000, so after 1000 games it might crash with a RecursionError: maximum recursion depth exceeded. This is acceptable for this level of coding.

Jonathan Whelchel (2,096 Points)

So I've gotten to that point, but the computer keeps guessing the same number. How can I tell it to guess higher or lower?

Jonathan Whelchel (2,096 Points)

import random

secret_num = random.randint(1, 10)

def second_choice():
    second_choice = random.randint(1, 10)

def game():
    guesses = []
    while len(guesses) < 5:
        print(random.randint(1, 10))
        guess_again = input("Did I get your number right? Y/n. ")
        if guess_again.lower() == 'y':
            print('You got it!')
            play_again = input("Do you want to play again? Y/n ")
        elif guess_again.lower() == 'n':
            game()

game()

Chris Freeman (Treehouse Moderator, 59,572 Points)

It looks like you're not saving the guesses.
- Save the `randint` result in a variable `guess`
- add `guesses.append(guess)`
- adjust the guess by replacing the arguments of the `randint` with `(last_too_low, last_too_high)`
- add code to track `last_too_low` and `last_too_high`
- you will need to adjust the feedback from the user to know if the guess was too high or low

Jonathan Whelchel (2,096 Points)

Ok, that sounds great! But I don't think I learned how to do any of that so far. Mind walking me through?

Chris Freeman (Treehouse Moderator, 59,572 Points)

It might be better to revisit this after getting farther along in the coursework.

Jonathan Whelchel (2,096 Points)

So now I am at a point where I can't get the computer to guess a different number each try. But I am having trouble limiting the computer to 5 tries. Also getting it to guess higher or lower than the previous guess.
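The steps in the list above could be sketched like this — a hedged illustration of the range-narrowing idea with invented names, not code from the course:

```python
# Hedged sketch (not from the thread): after each wrong guess, narrow the
# randint range so the computer guesses higher or lower and never repeats.
import random

def computer_guesses(secret, low=1, high=10, max_tries=5):
    """Return the list of guesses the computer makes (names are illustrative)."""
    guesses = []
    while len(guesses) < max_tries:
        guess = random.randint(low, high)
        if guess in guesses:
            continue                # safeguard: never count a repeat
        guesses.append(guess)
        if guess == secret:
            break
        elif guess < secret:
            low = guess + 1         # too low: next guess must be higher
        else:
            high = guess - 1        # too high: next guess must be lower
    return guesses

print(computer_guesses(7))
```

Because every wrong guess falls outside the narrowed range afterwards, repeats cannot occur, and the `len(guesses) < max_tries` condition caps the computer at 5 tries.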
https://teamtreehouse.com/community/reverse-numbers-game-help
Hello everybody,

I committed a bunch of changes to Psyco. Here are a few words about them...

Primitive traceback support. There is still no general frame support, but this is enough to let the familiar tracebacks be printed again.

Memory allocation bug fixed. Might be the source of the SF bug reported by Codepage. [Can you please check this? Thanks!]

IMPORT_NAME and IMPORT_FROM opcodes supported (import xx / from xx import yy). The "from xx import *" form is not supported but is supposed to be illegal anyway in a function.

Fix for global variables: a function like

    def f():
        global N
        N = 5
        return N

failed with a NameError on the last line if N did not exist in the globals before the function call. Note however that a contrived example can still be built: if f() calls g() which creates a new global N, f() will immediately see N. I have taken care of this in the common case of N not also being the name of an existing built-in.

Class creation: when creating a class inside a function, __metaclass__ was looked up in the globals of the last Python-executed function instead of the current function.

Minor other fixes.

Thanks,
Armin.

Hello,

> removing the realloc() did make the symptoms of the bug disappear.

I discovered some code I added in a recent CVS checkin that was buggy, in a way not unrelated to your bug report, so this might not be FreeBSD's fault after all. I will checkin the fix (together with the traceback support I'm working on).

Armin

I updated the bug, d=41036&atid=429622

Removing the realloc() did make the symptoms of the bug disappear. This bug corrupts the stack so the backtrace isn't always the same.
On Sun, 17 Mar 2002, Armin Rigo wrote:

> Hello,
>
> ----- Original Message -----
> > On FreeBSD 4.5, GCC 2.95.3 I got a
> >
> > python in free(): warning: page is already free
> > Bus error (core dumped)
>
> This might be related to the reported problem of 'realloc()' moving memory
> when shrinking a block, although I am not sure, as you mention that it works
> when Psyco is compiled in debug mode. As there is no FreeBSD machine I can
> get access to, I cannot do more than wild guesses. Perhaps you can first try
> to see if the problem really comes from 'realloc()' by removing the
> 'realloc()' call at all in codemanager.c, in psyco_shrink_buffer(). This
> will make Psyco use an unreasonably high amount of memory, but would tell us
> if the bug comes from there.
>
> Also, could someone else listening on the psyco-devel list please see if the
> bug is reproducible (and gives the same backtraces)?
>
> A bientôt,
>
> Armin.
http://sourceforge.net/p/psyco/mailman/psyco-devel/?viewmonth=200203&viewday=18
Isn’t writing synchronous code nice?

function do_stuff () {
    do_thing_one ();
    do_thing_two ();
    do_thing_three ();
}

But synchronous is bad! Bad, bad, bad. So then came async. The simple pattern is to pass a callback to the function you are calling. It’s not as bad in JS because we can just do this:

function do_stuff () {
    do_thing_one (function (result) {
        do_thing_two (function (result) {
            do_thing_three (function (result) {
            });
        });
    });
}

I’ve left out error handling. That really depends on the library… but I imagine it’s messy. Now let’s see GIO style async.

function do_stuff () {
    do_thing_one_async (function (ar) {
        var result = do_thing_one_finish (ar);
        do_thing_two_async (function (ar) {
            var result = do_thing_two_finish (ar);
            do_thing_three_async (function (ar) {
                var result = do_thing_three_finish (ar);
            });
        });
    });
}

I like this a lot better than how I’d do it in Python. But wouldn’t it be nice if you could write async code something like this?

var do_stuff = async (function () {
    var result = yield do_thing_one ();
    yield do_thing_two ();
    yield do_thing_three ();
});

Or even:

var do_stuff = async (function () {
    var result = yield do_thing_one ();
    yield do_thing_two ();
    try {
        yield do_thing_three ();
    } catch (e) {
        print ("Exception handled");
    }
});

You can in Python. You can in Vala. And for JS? Well, I was going to say “now you can“. But while I was looking for a good Vala link, I noticed Alex already did something like this over a year ago. D’oh.

What would be really nice is if the async wrappers could be generated automatically by GI. I had a first stab at this by simply parsing the GIR XML with E4X and providing an alternative import mechanism (thanks for the suggestion, jdahlin). However, to get full coverage I’d have to consider interfaces and inspect every object that implements an interface as it lands in JavaScript land to ensure it is wrapped. Ew.
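For what it's worth, the `async` wrapper being wished for above can be sketched in a few lines once the engine has generators. This is a hedged illustration using modern `function*` syntax (not the engine or GI bindings discussed in the post), treating each yielded value as a "thunk" that takes a node-style callback:

```javascript
// Hedged sketch (invented for this post, not a real binding): drive a
// generator by feeding each yielded "thunk" a callback that resumes the
// generator with the async result, routing errors through gen.throw() so
// the familiar try/catch style works inside the generator.
function async(genFn) {
    return function (onDone) {
        var gen = genFn();
        function step(err, value) {
            var next;
            try {
                next = err ? gen.throw(err) : gen.next(value);
            } catch (e) {
                return onDone(e);     // uncaught exception in the generator
            }
            if (next.done)
                return onDone(null, next.value);
            next.value(step);         // yielded thunk: function (callback) {...}
        }
        step(null);
    };
}

// A fake async operation; a real one would call back later via an event loop.
function doubleAsync(x) {
    return function (cb) { cb(null, x * 2); };
}

var doStuff = async(function* () {
    var a = yield doubleAsync(3);     // a === 6
    var b = yield doubleAsync(a);     // b === 12
    return a + b;
});

doStuff(function (err, value) {
    console.log(value);               // 18 (these thunks call back synchronously)
});
```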
Tags: Async, Introspection, JS

Comments:

I wrote libiris in C so that I can have a consistent asynchronous framework in any language. It also has a work-stealing scheduler that is 8x faster than GThreadPool. Futures, message-passing, etc. too. I haven’t done any meaningful work on it in a few months because I’ve been writing a profiler. But once I have that, I’ll get started again. – Christian

The async utility libraries we are using at litl are included in Bugzilla. Hope they’ll help you; we’ve been using them successfully in large code bases.

The Python example you linked to doesn’t look all that different from what you have to do in C. However, your Vala examples here really do make it *easy*.

Woohoo! The Jc2k is back!

zeenix, in an ideal world the Python way is almost identical to the Vala way; the blog post I linked to is about the sort of machinery we’d need to add to the bindings to make it like that “out of the box”.

The “yield” keyword feels messy to me. The ideal syntax imho would be something like: map do [function1, function2, function3]

@Jon, I’m unconvinced. How does your code work if you want to assess the return value of function 1 and then call 2 OR 3 depending on the outcome? How do you deal with exceptions? How do you do a for loop over the return values of 1, call 2 for each one, and then finally call function 3 just once?

The Twisted way using generators is way nicer than anything else I’ve seen:

```python
@defer.inlineCallbacks
def this_function_is_magic():
    foo = yield do_something_async1()
    bar = yield do_something_async2(foo)
    baz = yield do_something_async3(bar)
    defer.returnValue(baz)
```

When you yield, it goes off and asynchronously runs the function, collects the return value when it’s available, then your function is resumed at the yield point with the return value passed in.

@Doug: +1. In whorl, Defer.async == defer.inlineCallbacks; other than that it’s pretty much the same thing.
And the code for it almost fits on my screen all at once. In Vala it’s the same again, only it’s built into the language, so async is a keyword just like static or virtual, and that means the returnValue hack isn’t needed.
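The "resumed at the yield point" machinery that inlineCallbacks provides can be sketched in a dozen lines of plain Python. This is an illustrative toy, not Twisted's implementation: `run_async` and the `do_thing_*` stand-ins are invented names, and the "asynchronous" operations here complete synchronously for brevity.

```python
# Minimal sketch of generator-driven async, in the spirit of Twisted's
# inlineCallbacks. All names here are invented for the example.

def run_async(gen_func):
    """Drive a generator: send each 'async' result back in at the yield."""
    def runner(*args):
        gen = gen_func(*args)
        result = None
        try:
            while True:
                # In a real event loop this send would happen in a callback,
                # once the asynchronous operation completes.
                result = gen.send(result)
        except StopIteration as stop:
            return stop.value  # the generator's return value
    return runner

# Stand-ins for asynchronous operations (synchronous here for brevity).
def do_thing_one():
    return 1

def do_thing_two(x):
    return x + 1

@run_async
def do_stuff():
    foo = yield do_thing_one()
    bar = yield do_thing_two(foo)
    return bar  # delivered via StopIteration, like defer.returnValue

print(do_stuff())  # 2
```

Each `yield` hands a value to the driver, which (in a real framework) would wait for the operation to finish and then resume the generator with the result.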
http://blogs.gnome.org/johncarr/2010/03/29/great-minds-think-alike-asynchronous-patterns/comment-page-1/#comment-256
Can you @Filter a Seam @Entity? — Rob Jellinghaus, Jun 29, 2006 12:51 AM

Working with my Seam example, just ported to 1.0.1. Originally derived from the noejb example, which has now become the Hibernate example. I'm trying to declare a @Filter on one of my @Entities:

```java
@Entity
@Name("blogPost")
@Filter(name="headOnly", condition="replicatedChangeset is null")
public class BlogPostImpl implements BlogPost, Serializable {
    private Long id;
    ...
    @Id @GeneratedValue
    public Long getId() { return id; }
    ...
    private Changeset replicatedChangeset;

    /**
     * The changeset in which this object was created. Null for head objects;
     * set for all versioned copies.
     */
    @ManyToOne
    public Changeset getReplicatedChangeset() { return replicatedChangeset; }
    ...
}
```

And I'm trying to use it in a query like this (yes, this is basically trying to make a version-tracking blog-posting system):

```java
new Script() {
    ...
    private Session database;

    @Override
    protected void updateModelValues() {
        database = (Session) Component.getInstance("database", true);
        assert database != null;
    }

    @Override
    protected void invokeApplication() {
        ...
        database.enableFilter("headOnly");
        List<BlogPostImpl> headBlogPosts =
            database.createQuery("from BlogPostImpl").list();
        ...
    }
}.run();
```

The exception I get is that the "headOnly" filter is not found:

```
[testng] FAILED: com.robjsoftware.replog.test.ChangesetTest.testChangeset()
[testng] org.hibernate.HibernateException: No such filter configured [headOnly]
[testng]   at org.hibernate.impl.SessionFactoryImpl.getFilterDefinition(SessionFactoryImpl.java:962)
[testng]   at org.hibernate.impl.SessionImpl.enableFilter(SessionImpl.java:1025)
[testng]   at com.robjsoftware.replog.test.ChangesetTest$4.invokeApplication(ChangesetTest.java:127)
```

What am I missing? Is there any other magic I need to do to make @Filter work for a Seam @Entity? Are there any known examples of trying this? Should I even be expecting it to work? Should I try moving this to hibernate.cfg.xml (since I do *have* a hibernate.cfg.xml)?
The Hibernate startup debug spam mentions "Binding entity from annotated class: com.robjsoftware.replog.domain.BlogPostImpl" but doesn't mention any filter annotations. Thanks very much -- I'm planning a bunch of aggressive weirdness with @Filters in this application, so it'll be a big bummer if Seam doesn't grok @Filter yet.... Cheers, Rob

1. Re: Can you @Filter a Seam @Entity? — Gavin King, Jun 29, 2006 3:41 AM (in response to Rob Jellinghaus)
Ask in the Hibernate forums; I don't think Emmanuel tracks this list. I have never used @Filter.

2. Re: Can you @Filter a Seam @Entity? — Rob Jellinghaus, Jun 29, 2006 1:30 PM (in response to Rob Jellinghaus)
OK, I asked. Here's hoping Emmanuel's response rate is anything like as good as yours :-\ If I don't hear back I will just start digging down into the source until I find the problem, then I'll JIRA something up. (Seems like the url tag is broken on this forum; can't actually put in any anchor text. (url=)My Anchor Text(url) doesn't display My Anchor Text at all. Oh well!)

3. Re: Can you @Filter a Seam @Entity? — Gavin King, Jun 29, 2006 8:59 PM (in response to Rob Jellinghaus)
Note that Emmanuel and I have both been travelling in the EU lately, so our response times are down.

4. Re: Can you @Filter a Seam @Entity? — Rob Jellinghaus, Jun 30, 2006 5:30 PM (in response to Rob Jellinghaus)
Yours doesn't seem down by much! I figured out the base problem (you need a @FilterDef *and* a @Filter), but am encountering some other issues, mentioned in the other thread. Nothing to do with Seam, though. I guess the only subquestion worth asking here is: what's the roadmap for Hibernate 3 now? Is there going to be a Hibernate 3.3? Or 4.0? Basically, I can see myself wanting to do some JIRA patches to filters in Hibernate3 soon, and I'm wondering what the chances are that they'll get accepted....
https://developer.jboss.org/thread/132067
Change product type with an onchange method

Hi, I'm trying to make product packs, so I made a boolean field on the product to check whether the product is a pack. Now I want to make an onchange that changes the type of the product to "service" if the pack checkbox is true. I'm trying with this, but it is not working:

```python
class product_template(models.Model):
    _inherit = 'product.template'

    pack = fields.Boolean('Pack?')

    api.onchange('pack')
    def _onchange_pack(self):
        if self.pack:
            self.type = 'service'
```
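One likely culprit in the snippet above: `api.onchange('pack')` is written as a bare statement rather than as a decorator, so the method is never registered as an onchange handler. In Odoo's new API it would need to be `@api.onchange('pack')`. The behaviour being asked for can be sketched in plain Python; the class below is a stand-in for illustration, not Odoo's actual implementation:

```python
# Plain-Python sketch of the intended onchange behaviour: flipping the
# "pack" flag forces the product type to 'service'. This only mimics the
# effect of an Odoo @api.onchange('pack') handler, without Odoo itself.

class ProductTemplate:
    def __init__(self, name, type='consu'):
        self.name = name
        self.type = type
        self._pack = False

    @property
    def pack(self):
        return self._pack

    @pack.setter
    def pack(self, value):
        self._pack = value
        self._onchange_pack()  # the UI would trigger this on field change

    def _onchange_pack(self):
        if self._pack:
            self.type = 'service'

p = ProductTemplate('Gift box')
p.pack = True
print(p.type)  # 'service'
```

The key point is simply that the handler has to actually run when the field changes; in Odoo that wiring is done by the `@api.onchange` decorator.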
https://www.odoo.com/forum/help-1/question/change-product-type-with-an-onchange-method-106818
In this article we will explore how ASP.NET interacts with databases by managing data in various scripts. We will manage the data by adding, modifying and deleting records. First, we must create our database. Simply copy and paste the SQL below and execute it.

Overview: ADO vs. ADO.NET

So what is ADO? ActiveX Data Objects (ADO) is basically a data access API used by Microsoft in the days of homogeneous operating environments. Most of today's Internet applications are deployed in heterogeneous environments that consist of loosely coupled platforms. These loosely coupled platforms brought new challenges, particularly with regard to sharing common services and system scalability. Microsoft responded to these challenges by developing ADO.NET.

The core element of the ADO model, the RecordSet object, was the universal data access object for COM-oriented environments. The key words here are "COM-oriented environments", as opposed to the heterogeneous environments we operate in today. With ADO.NET you use the disconnected dataset model to manipulate data without the need to stay connected to the RDBMS. Unlike classic ADO, where you programmatically had to handle opening and closing database connections, ADO.NET does it automatically. As mentioned earlier, ADO.NET natively supports XML. This support gives developers the ability to create what are referred to as "briefcase" applications. Briefcase applications enable you, for example, to save a dataset in XML format and then work on it at home. You can then bring it back to work and update your RDBMS. These are just a few of the benefits offered by ADO.NET.

Using ADO.NET

ADO.NET provides several methods for accessing a database. The first thing you must do is create a connection string. A connection string contains text that includes database access information such as the database name, username and password. You must explicitly open your connection using one of its constructors.
This has two advantages for developers:

- Easy utilization of the connection
- Code maintenance

To use SQL Server-specific objects, you need to add the System.Data.SqlClient namespace. To declare a namespace, you simply add the following line at the top of your .aspx page:

To use any other database server, you need to add the System.Data.OleDb namespace. This will give you access to the OLEDB managed provider objects, such as OleDbConnection, OleDbCommand, etc. In the example below, we are connecting to a database called CDdb that is contained in SQL Server 2008.

So what did we do here? We defined a connection string, and then we instantiated the connection object in the following line:

The above line sets up the connection object. We then need to open the connection. In my code I use the try..catch construct to catch any errors that may occur in the connection attempt, but that should not confuse you. It is simply a matter of best practice. We could just as easily have written the code like this:
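The open-the-connection-inside-try/catch discipline described here is not ADO.NET-specific. The same shape in Python's DB-API, using the stdlib sqlite3 module as a stand-in for SQL Server, looks like the sketch below; it is an illustration of the pattern, not a translation of the article's C# code, and the CD table and titles are invented for the example.

```python
import sqlite3

# A "connection string" equivalent: here just a database path/name.
DB_NAME = ":memory:"  # stand-in for the article's CDdb database

def fetch_titles():
    conn = None
    try:
        conn = sqlite3.connect(DB_NAME)   # explicitly open the connection
        cur = conn.cursor()
        cur.execute("CREATE TABLE cd (title TEXT)")
        cur.execute("INSERT INTO cd VALUES ('Kind of Blue')")
        cur.execute("SELECT title FROM cd")
        return [row[0] for row in cur.fetchall()]
    except sqlite3.Error as exc:
        # Equivalent of the article's catch block around the open/query.
        print("Database error:", exc)
        return []
    finally:
        if conn is not None:
            conn.close()                  # always release the connection

print(fetch_titles())  # ['Kind of Blue']
```

The finally clause guarantees the connection is released whether or not the query succeeds, which is the same guarantee ADO.NET's using/try patterns provide.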
http://www.webreference.com/programming/asp_net/DB/index.html
Are you sure doubles are so bad in finance? Really? Each time I do alpha + beta I get a value sigma plus an error term epsilon. Provided epsilon is sufficiently less than the precision, you're OK, because you can just round the value to the appropriate accuracy (less than half epsilon, I guess). What's the performance difference between BigDecimal and the Scala version?

As you say, provided epsilon is small enough it's not a problem, and in many cases we can live with it. The trouble arises when you have billions of calculations that sum up to something significant, or when you are taking leveraged positions that can massively exacerbate any imprecision. As for the performance of the Java and Scala versions, they are identical, since they both compile down to bytecode. The difference is purely a syntactical one.

In the paper you mentioned, they explain how to cope with the precision, and this really funny fact that you can't have the exact number, but that you can compute and represent the error anyway, and do stuff with it. So I still don't see your point.

My point was that despite all the focus given to this topic, there are still developers who do not know about the imprecision of floating-point values. Like most rules, knowing this gives you the freedom to break it when you are able to, such as when the error is acceptably small and the performance benefit is a great enough advantage.

Hey Ed, the problem with doubles, as he shows, is you can't represent the numbers exactly. Imagine adding a whole bunch of numbers that should have added up to an exact value. It's hard to verify your books if cents are gone here and there. When working with money, you need your numbers to be exact.
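The "can't represent the numbers exactly" point is a one-liner to demonstrate in Python, and the same holds for Java and Scala doubles, since all of them use IEEE 754 binary64:

```python
# Classic binary floating-point surprise: 0.1 has no exact binary64
# representation, so the sum is not exactly 0.3.
print(0.1 + 0.2)          # 0.30000000000000004
print(0.1 + 0.2 == 0.3)   # False

# Converting the stored double to an exact Decimal shows the
# representation error hiding behind the literal "0.1".
from decimal import Decimal
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```

That per-value error is the epsilon being discussed: harmless in isolation, but it can accumulate across billions of additions.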
“That’s great, but wouldn’t it be nice to use the normal operators rather than the overly-verbose method calls for the mathematical operations.” As the author noted, Scala's BigDecimal has its operators overloaded so you can use the normal + and - operators, while in Java, because BigDecimal isn't a primitive, you need to use “.subtract()” and such.

Doubles are bad because they do not mean what naive programmers expect them to mean. Though in this case I think a workable solution is to pick a suitable precision and just use longs. Fixed-point integers are good, and they are faster too.

In the 238k range a float will (may) be off by a penny. This may not be an issue for day-to-day banking. Depending on the value of the whole part, the precision of the fractional part goes down. It turns out that at about 238,000 the precision of the fractional part goes down to ~0.005, which will impact financial transactions. Besides, this is *your* money; is "shouldn't be a problem" OK?

See for an alternative to working with Double directly, using a fluent API around it that is very fast, being designed for financial strategy backtesting. It handles rounding for you *when actually needed*.

Floating point in finance becomes a real problem when it comes to rounding. In general the finance sector uses "half even" rounding. If your calculation is supposed to end up as 36.295 and is then rounded to 2DP, it should go to 36.30. If it actually ends up being 36.294999999999995, that rounds down to 36.29.

Are … you retarded? The *correct* answer is to do integer math on cents. Unambiguous, correct, primitive, fast, and no need to switch languages. Why are Java people unable to cope with basic software topics?

Because they're busy applying their design patterns.

What if you need sub-cent accuracy? You could just move the imaginary decimal point, sure, but what if you don't know the location of said point?

Wow, such ignorance…. Were you writing a calculator to use at the till in your toy shop?
Cent accuracy is far too coarse in finance. What are you going to do, just round to the nearest cent every time you need to find a fraction of a value? Also, who are "Java people"? Java is just a language; stop with all the high-school fan-boy crap.

Insulting, smug and wrong in one post – surely this is the ultimate internet post trifecta! Integers are performant, but all well and good until your precision changes and you realise you've coded yourself into a corner. And as Gavin mentioned, who says that cents are good enough? Or better yet, if you're dealing with massive numbers, your chosen solution causes an overflow. The reality is that you choose the right solution to the problem. If you can live with the imprecision, use a double. If you need specified precision (and don't want to roll your own rounding), use something like a BigDecimal, or create your own data object that can handle both sides of the decimal point independently. What you don't do is present a solution as simplistic as yours and then claim that everyone else is an idiot.

Sure, that works great if you are adding and subtracting monetary amounts. But that's accounting. We're talking finance here. What happens when interest rates come into the picture? Consider: Which outputs: There's a lot of disparity there, no matter how you choose to convert Decimals to "Dollars and Cents".

Even in blockquote, that code looks hard to read. Try here:

Why not just impose a furthest place value — say, thousandths — and use ints (or longs if you're worried about overflow)?

```java
int t1 = 10266;
int t2 = 10000;
// Outputs 266
System.out.println(t1 - t2);

int h1 = 100266;
int h2 = 100000;
// Outputs 266
System.out.println(h1 - h2);
```

Just make sure to divide by 1000 before displaying the results to the user.

Eeek… nice comment until the division bit. The dollars-and-cents thing is a presentational issue, not an arithmetic one. Scala is pretty cool, but this is not the best advertisement for it.
After all, the following has worked in C# since version 1.0 (released in 2002):

```csharp
using System;

class MainClass {
    public static void Main (string[] args) {
        var t1 = 10.266m;
        var t2 = 10.0m;
        // Outputs 0.266
        Console.WriteLine(t1 - t2);

        var h1 = 100.266m;
        var h2 = 100.0m;
        // Outputs 0.266
        Console.WriteLine(h1 - h2);
    }
}
```

Actually, I guess you would need to replace var with an explicit decimal back in the 1.0 days. I am not trashing on Scala, though. As I said, Scala is pretty cool.

As I understand it, the m suffix is just an alias for System.Decimal, so it actually amounts to the same thing 🙂 All I was trying to do was to have a little bit of fun with Scala to highlight that the implicit typing is a far better solution than Java's, nothing more.

This is not a big deal. Any developer unaware of the problems with floating-point arithmetic is not worth feeding. Fixed-point integers are good, and they are faster too.

@martin: this discussion raises the question: who interviewed you??

Does it? Why do you say that?

Integers are your friends. They'll never leave you for more exciting values. They'll stay the same forever (unless you spend too much time adding to them, that is). <3 Also, dear God, floats belong in the graphics department; keep them out of anything serious. They're floaty like a viscous liquid; they can't be trusted.

You can also use decimal in C#, so money variables would be declared like this: 2.50m. Or you can just create a fixed-point number class that allows one to change the precision.

If you are doing simple retail-like calcs it makes sense. But try using BigDecimals in an iterative calculation thousands of times (say, a Monte Carlo simulation) in an HFT business. You'll be killed on latency and resources as you churn a truckload more objects, causing high levels of GC, while the market moves against you. Using doubles has its place; primitive longs are better if you keep hold of the decimal and calculate that at the end.
If that was your criteria for hiring people then I despair……

You're right about each approach, doubles/longs/BigDecimals, having its place. What I meant to highlight, at that particular point of the hiring process, was that each approach has its shortfall. As long as you are aware of the failings of each approach, then that is all that matters, but candidates routinely don't have this awareness. Like many rules, it is lies-to-children: when you can appreciate the complexity of reality, you are then able to break the rule for the right reason.

This site and IEEE 754 pretty much cover all the financial number needs you may have.

Martin, this is a good post; you're taking a lot of unjustified heat here IMO. It's a simple, general rule, which like so many others says "unless you know for a fact that you have a better answer, you should always use [BigDecimal] for [currency arithmetic] rather than double/long". A simple rule that keeps you out of trouble 99% of the time. It is intellectually honest, unlike some of the (expected) comments. Yes, there are other ways to do it, if you really need to (e.g. raw performance) and if you absolutely know what you're doing, but you never claimed otherwise. Your post made the programming masses a bit better, and it's well worth a thank-you. Keep 'em coming.

Thanks for the kind words, Patrice 🙂 Quite a few people seem to have missed the fact that I never said that doubles should never be used. I say that they are "not a great thing to do", but each data type has its own strengths and weaknesses and should be used when appropriate. The trick is, as you mention, having sufficient understanding of the complexities to know when to break the rule. My fault for a contentious title, I guess!

The problem with using BigDecimal is the overhead in performance. An arithmetic operation on a double (in most CPUs) is typically a one- or two-instruction process.
An arithmetic operation on a BigDecimal involves many, many more instructions (check out the source for subtract()). By carefully controlling the rounding of doubles, it is possible to get accurate financial results without incurring the performance overhead. I know this for a fact because, 20 years ago, I used to write FORTRAN programs for scientific calculations that used FLOAT and DOUBLE PRECISION data types and produced accurate (within epsilon) results. BigDecimal may still be necessary, but only when adding really large numbers.

If all you're doing is adding and subtracting amounts of money, scaled integers (where 1 represents $0.01 and 100 represents $1.00) are good enough. But as Ed points out, as soon as you start dealing with interest calculations, you have to deal with fractional cents, and you have to get the answers right. As I understand it, there are regulations that specify exactly how these calculations must be done, with exact rules for rounding vs. truncating. You shouldn't even begin to write code that deals with interest calculations until you understand these regulations. Don't assume that money can be expressed in whole numbers of cents, or even in real numbers of dollars. Don't assume that the regulations match what your intuition tells you about how money *should* work. If your calculation yields a result that's mathematically perfect but doesn't match what the regulations require, your calculation is wrong. (Disclaimer: I have little or no idea what these regulations actually say, or how to find them. It's entirely possible that I've misunderstood the situation myself.)

I'd be curious to know what Quickbooks / Quicken uses.

Partial answer: go to a clearing house and ask them WHY they don't want you to use float/double to perform their calculations. They don't care about speed or efficiency; what they care about is accuracy.
So fractional operations are performed on integer values, and results are later scaled to obtain a floating representation. When you're shuffling millions from one account to another a few hundred times every night, a tiny rounding error is not so tiny.
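The scaled-integer approach mentioned above (1 unit = $0.01) is easy to demonstrate: summing a tenth of a dollar ten times in binary floating point drifts, while the same sum in integer cents is exact under addition and subtraction.

```python
# Floating-point dollars drift: ten dimes don't sum to exactly a dollar.
total_float = sum(0.10 for _ in range(10))
print(total_float == 1.0)   # False
print(total_float)          # 0.9999999999999999

# Scaled integers (1 unit == one cent) stay exact under + and -.
total_cents = sum(10 for _ in range(10))      # ten dimes, in cents
print(total_cents == 100)   # True
print("$%d.%02d" % divmod(total_cents, 100))  # $1.00
```

As the thread notes, this only covers addition and subtraction; interest and other fractional calculations reintroduce the rounding questions that scaled integers alone do not answer.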
http://bloodredsun.com/2011/06/22/doubles-financial-calculations/
Both Python and Java have gained on Perl since January 2000. I might consider switching to another language if this trend persists.

         Jan 2000      Jan 2001      Jan 2002      Jan 2003      Jan 2004
Ruby       1720 (100)    2670 (155)    5440 (316)   16500 (959)   20900 (1215)
PHP       15400 (100)   20100 (130)   37400 (242)   66200 (429)  130000 (844)
Python     3850 (100)    6870 (178)    8760 (227)   11400 (296)   25400 (659)
.NET      64600 (100)   86200 (133)  107000 (165)  142000 (219)  230000 (356)
Java      33300 (100)   37500 (112)   45100 (135)   56700 (170)  103000 (309)
Perl      10700 (100)   13700 (128)   14900 (139)   18500 (172)   31700 (296)
C++       17500 (100)   19200 (109)   22200 (126)   30700 (175)   35700 (204)
UNIX      20500 (100)   23500 (114)   24500 (119)   29000 (141)   36800 (179)
C        120000 (100)  137000 (114)  153000 (127)  172000 (143)  187000 (155)
Lisp       2330 (100)    2450 (105)    2420 (103)    2980 (127)    3600 (154)
shell      8510 (100)    8580 (100)    8870 (104)   10500 (123)   12300 (144)
BASIC     78200 (100)   84900 (108)   93600 (119)  101000 (129)  111000 (141)
Forth     13500 (100)   14700 (108)   14900 (110)   16400 (121)   18100 (134)
Haskell     713 (100)     634 ( 88)     787 (110)     937 (131)     917 (128)
Eiffel      529 (100)    1700 (321)    1680 (317)    1740 (328)     656 (124)
Ada        4680 (100)    4660 ( 99)    4640 ( 99)    5430 (116)    5000 (106)
Scheme    19900 (100)   22500 (113)   22400 (112)   22200 (111)   20200 (101)
Cobol      2530 (100)    2270 ( 89)    2340 ( 92)    2310 ( 91)    2420 ( 95)
Pascal     5480 (100)    4260 ( 77)    4590 ( 83)    5220 ( 95)    5140 ( 93)
FORTRAN    5080 (100)    3770 ( 74)    4130 ( 81)    3830 ( 75)    3740 ( 73)

Abigail

From the department of useless statistics, I bring you the same data normalized by year. Mostly it shows C taking a nose dive, .NET in ascendancy, BASIC going south as well, and PHP and Java getting more hits. Then (after removing that skewing data from the table) there's the rest of the bunch, with C++ and Unix mostly stable, Scheme going south, Perl and Python going up, and a big jump in Ruby. Removing that further skewing data from the table, I see Forth followed by shell.
Removing those, I now see Ada, Pascal, Fortran, Lisp, Cobol, Eiffel, and Haskell.

         Jan-00       Jan-01       Jan-02       Jan-03       Jan-04
Ruby     0.009968946  0.014930408  0.030569005  0.090978178  0.08826915
PHP      0.124473722  0.142748192  0.24053793   0.381514413  0.563973769
Python   0.027797541  0.045729874  0.052380546  0.0611646    0.107890331
.NET     0.536289141  0.627473124  0.69779191   0.824626015  1
Java     0.274300876  0.270345981  0.291124937  0.325979318  0.446246686
Perl     0.085133631  0.095815673  0.092718756  0.102669777  0.135359983
C++      0.142051209  0.136148307  0.140677866  0.173988531  0.152801033
UNIX     0.167161905  0.167681094  0.15578827   0.164050671  0.157597321
C        1            1            1            1            0.812508721
Lisp     0.015074788  0.013317103  0.010728387  0.011942968  0.012836612
shell    0.066802822  0.058269657  0.053103217  0.055903381  0.050770894
BASIC    0.650124298  0.617939956  0.609757379  0.584948235  0.481128785
Forth    0.108570281  0.103148879  0.092718756  0.090393598  0.076060416
Haskell  0.001540123  0            0            0            0.001138028
Eiffel   0            0.007817198  0.005866779  0.004694177  0
Ada      0.034744833  0.029523488  0.025313212  0.026265177  0.018940979
Scheme   0.162139766  0.160347887  0.141991814  0.124299235  0.085216967
Cobol    0.016748834  0.011997125  0.010202808  0.008026283  0.007691503
Pascal   0.041441019  0.026590206  0.024984725  0.025037559  0.019551416
FORTRAN  0.038092926  0.022996935  0.021962644  0.016911898  0.013447049

Why go to all the work of querying Google when

    my $use_factor = int rand 20000;

would get you just as valid an answer?

----
send money to your kernel via the boot loader.. This and more wisdom available from Markov Hardburn.

Here is some well-reasoned opinion on the subject of programming languages. The links at the end are rather nice.

All you seem to have proven with your stats (if anything, given their dubious validity) is that Perl and Java have remained static in popularity whilst Python has made inroads. You might reflect that Python is the core language at Google. Does this give Google a Python bias?
Do would-be Googlers learn Python just in case they get that call? Do I care?

    def language(perl, python):
        try:
            return perl / python
        except ZeroDivisionError:
            return perl

cheers, tachyon

Popularity of a language is a useful measure of a number of things: in essence, you are researching the living-ness of the language. I suspect that if you look at languages like Ada, Forth, and PL/I, you'll find their living-ness to be at a very low ebb. New languages, like Ruby, Python, PHP, and Ponie, are going to be on the rise in terms of living-ness. This is very different from the useful-ness of the language. Brainf**k isn't a very useful language, but it was certainly talked about for a while.

bassplayer

"New languages, like Ruby, Python, PHP, and Ponie are going to be on the rise in terms of living-ness."

That's like saying that if more and more people are eating fecal matter, it's a good idea to give up on steak and eat fecal matter along with the rest of the crowd. Sorry, my dogs and I herd sheep. We ain't sheep.

I have always been a proponent of using the right tool to do a job. When I find a better tool to do a particular job, I adopt that tool into my rather complete tool bag. I don't throw the old tools away just because I have a new tool. For instance, if I buy a router I don't throw my power screwdriver away, since the router makes a very poor screwdriver. OTOH, putting a coving bit in my power screwdriver won't help me much rounding off the edges of boards.

In my practice as a Unix professional I use a variety of tools. I use Open Source tools as much as I can to hold down the line on costs, since at my level being cost conscious is part of the profession. I use off-the-shelf (in other words, commercial) products where the situation warrants it. That doesn't mean that if a client I support buys Sun's web server, I'm going back to the rest of my clients and telling them to ditch Apache.
The requirements and political sensitivity of the client buying Sun's product may be such that they cannot go Open Source (yeah.. it happens..). Tools that I use personally range from running Linux on my company-owned laptop, to Lotus Notes running under Wine, to emacs as my preferred editor, to running Visio® under cxoffice (a version of Wine), and on it goes. I program in a plethora of languages ranging from Perl and Bash to C, C++ and, yes, even Java. I've even been known to dabble in PHP and Python on occasion.

A well-rounded programmer IMHO should be able to program in any language a job requirement asks them to. It is only a matter of learning a new syntax. Programming is programming; the algorithms stay the same regardless of the language you are programming in. Why are there so many languages? Every language brings to the programming world its own strengths and weaknesses, advantages and disadvantages. I would be hard pressed to figure out a way to write an operating system in pure Perl for an embedded system, but OTOH associative arrays, while not impossible, are a bitch in assembler.

"But you're trying to bash a nail into a piece of wood. A hammer is perfectly suited to do that. Using a pair of scissors will just make the job longer and more difficult." "Yeah, but LOOK AT THESE NUMBERS! Scissors, here I come!"

"I might consider switching to another language if this trend persists." The Perl community, and this forum in particular, would greatly benefit from such a course of action. I feel sorry for those poor Java lads who will have to bear you, though.

Python I would expect to be sharper, as it is still a bit younger than Perl. As for Java, there are a lot of various backers of Java: IBM, Sun, Apache (Jakarta), Apple and so on. It's considered more of a biz language. Also, the upgrades are faster, so you may have found links for Java 1.1, 1.2, 1.3.1, 1.4 and 1.4.1. I'm sure I missed a version # in there somewhere.
You also have academia, a good enough portion of which is switching to Java to teach algorithms in it. Perl is simply a solution that's been around for a long time, and as Perl 6 comes out, we'll prolly see a sharper jump then.

Quite frankly, I think Wassercrats isn't going to be an uber-Perl hacker or team leader in the next few months. He still has a lot to learn in his attitudes on programming.. -s

For those of you who consider Mr. Wasser an automatic target for flames, think of the time when you were "young and foolish". I know the "young" part is a compliment, Wassercrats. But he brought up a valid point using some statistics. His analysis is shallow, to say the least, and I'd consider it not read into enough. Too simplistic. But that gives NONE of you the right to treat him as less than a human being..

Abigail
First you critize us from writing replies to the "young and foolish" Wassercrats - yet you expect us to teach him his errors. Do you expect us to visit him in person? What do you expect people to say? No, please, don't do that? Didn't you say about him if he doesn't want to do certain things, that's his will? If he doesn't want to program in Perl, isn't it your own idea that it's his will he shouldn't? Have you stopped beating your wife? Djee, what a good last sentence of a post that's mostly a flame. When I was "young and foolish", and boy did I have my moments, I got my "young and foolish" butt smacked around when I said stupid things. You are not going to learn how foolish you are unless "older and wiser" (or at least more experienced) folks sort you out. That's not to say anybody is "smacking" Wassercrats around. Although.. I think he likes it when he is. Youth or inexperience is not an excuse for stupidity. Just one of many explainations. And for that I give him a half point of credit. But in this thread he takes the data and draws a conclusion from left field. More and more people drive down the Garden State Parkway at 100MPH on bald tires so it's an upward trend so I should do it too.... Nahh... I don't think so. I guess what really gets me going on this topic is I've worked with senior management places where that was how IT decisions were made. They looked at someone's wonderful charts and graphs and bought technology to fit what the "trends" were without bothering to check the source or do any critical analysis of the technology needs of the company. I watched on helplessly as a complany I worked for spent easily 7 figures rolling out Exchange to replace a Sendmail based infrastructure. They had to retrain all of the nonIT staff, migrate the email data, the accounts and discovered that the load the original 4 mail servers running sendmail handled now had to be done with 18 machines. 
Maintenance costs rose, new staff was hired to deal with the Exchange servers (we need MCSEs to do this... right?), and on it went. In the end there was a massive layoff of IT staff (mostly programmers), and later on, after I departed for a new job (I wasn't hanging around), they outsourced the entire IT/IS department. That all started with a Senior Vice President looking at a Microsoft-sponsored presentation and saying: "HEY! Exchange is gaining market share! We better get on the bandwagon!" You can't make stuff like that up... life is stranger than fiction.

Smacked.. well.. I hope my profs and mentors never would do that when I was wrong. I've had people tell me I'm wrong, which is fine. I don't recall anyone calling me an idiot IRL just because I said something that was incorrect. Probably the lack of being able to smack back online, beyond crafting a reply.

Wassercrats has stated at various times in the past that he aims to get the worst XP on this site. Refuting arguments has proven useless on him (see VarStructor 1.0). One can only conclude that he is either a troll or has an excessively inflated ego. In either case, the best solution is to ignore him.

                 Jan-00   Jan-01   Jan-02   Jan-03   Jan-04
    Total Hits
    Perl         10,800   13,800   15,000   18,500   32,000
    Python        3,900    6,920    8,820   10,700   25,500
    Java         33,600   37,800   45,200   55,300  102,000

    Hits with the word "Suck"
    Perl             90      320      396      466      665
    Python           49      146      313      331      604
    Java            198      339      702      847    2,250

    Suck %
    Perl           0.83     2.32     2.64     2.52     2.08
    Python         1.26     2.11     3.55     3.09     2.37
    Java           0.59     0.90     1.55     1.53     2.21

Perl will never die, even though there are a lot of ignorant corporate executives who are trying to kill it. They try to kill and discredit what makes them look ignorant. If they passed some "law against Perl", there would be a whole underground which would keep it going. They say the same thing about C. The C++ people say "C is obsolete, don't bother with it", but you can see that C++ is more likely to die before C will.
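The "Suck %" rows above are simple arithmetic: "suck" hits divided by total hits, times 100. A quick sketch that recomputes the Jan-00 column from the numbers quoted in the table:

```python
# Recompute the Jan-00 "Suck %" column from the hit counts in the table above.
total_hits = {"Perl": 10_800, "Python": 3_900, "Java": 33_600}
suck_hits = {"Perl": 90, "Python": 49, "Java": 198}

suck_pct = {lang: round(100 * suck_hits[lang] / total_hits[lang], 2)
            for lang in total_hits}
print(suck_pct)  # matches the table: Perl 0.83, Python 1.26, Java 0.59
```

Whether a "suck percentage" tells you anything about a language's health is, as the thread points out, a different question entirely.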
I doubt that they will ever write a good OS in C++. Sure, C++ can be used as a better C. But the OO is the big benefit, and that is a big change. Objective-C is a much better example of small extensions to C.

    sub buttonQuit_Click()
        frmForm.Close()
    end sub

And do you have to follow a trend? See what the "trend" was in Germany in the 1930's and where it led. Finally, your figures show a tripling of "Perl" references over the time period investigated. Not much of a "demise", I'd say.

CountZero
"If you have four groups working on a compiler, you'll get a 4-pass compiler." - Conway's Law

----
Fn 1. Most likely due to Larry mentioning it in the State of the Onion, methinks. Ah, Larry is the best PowerPoint slide creator in the world!
https://www.perlmonks.org/?node_id=355190
The QRegExp class provides pattern matching using regular expressions. More...

#include <QRegExp>

Note: All the functions in this class are reentrant.

The QRegExp class provides pattern matching using regular expressions. It also supports a simpler wildcard mode that works in a similar way to command shells, and it can even be fed fixed strings (see setPatternSyntax()). A good text on regexps is Mastering Regular Expressions (Third Edition) by Jeffrey E. F. Friedl, ISBN 0-596-52812-4.

The C++ compiler transforms backslashes in strings, so to include a \ in a regexp, you will need to enter it twice, i.e. \\. To match the backslash character itself, you will need four: \\\\.

See also operator=() and isValid(), and numCaptures() for the number of captures contained in the regular expression.
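That last point about doubling backslashes applies in any language whose string literals also process backslash escapes. Since QRegExp itself requires a Qt build, here is the same rule sketched with Python's re module, purely as an illustration of the escaping, not of the Qt API:

```python
import re

# Four backslashes in source become two characters in the string,
# which the regex engine then reads as one literal backslash.
pattern = "\\\\"   # the two-character string \\
subject = "a\\b"   # the three-character string a\b
print(bool(re.search(pattern, subject)))  # True: the literal backslash matched
```

Raw string literals (r"\\" in Python, and Qt's later QRegularExpression used with C++11 raw strings) exist precisely to avoid this double translation.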
http://doc.trolltech.com/4.3/qregexp.html
Have you developed an EJB? Have you been frustrated at having to create and manipulate the XML deployment descriptors, as well as the interfaces? I certainly have. I was recently working on an EJB for the Xbeans open source project and I decided to use another open source tool -- XDoclet -- to generate the XML descriptors and interfaces for me. Using XDoclet will enable you to work more efficiently within the J2EE framework. You will have a simpler view of your beans and the relationships between them, and many of the annoyances will be taken out of your development. This article will discuss the XDoclet tool, how to use it, and how to extend it. In this article, we will create a session bean that uses the Javadoc tags, and run XDoclet on the bean.

XDoclet is a tool that has evolved from the original EJBDoclet tool that Rickard Oberg created. The idea was simple: Instead of managing multiple files for each EJB, view the entire component through the Bean class itself. How can this be done? Java doesn't have the "attributes" that .NET is touting, but it does have Javadoc tags. We can place special @ tags into Javadoc comments, and have a Doclet tool that looks for these tags. The tool then generates the appropriate XML descriptors and interfaces for the given set of beans. XDoclet built on the EJBDoclet idea by extending the framework beyond the realm of EJBs. Now you can generate Web Services, Web Application descriptors, and even extend the system to fulfill your individual needs.

The @ tags have a standard format, containing a "namespace" and a "tagname" that belongs to that namespace. Then properties of the tag are passed in via name="value" arguments to the tag. Here is a generic example:

/**
 * @namespace:tag name="value" name2="value2" ...
 */

The currently supported namespaces include, among others:

- struts: generates struts-config.xml from Form and Action classes.
- web: generates the web.xml configuration for Web Applications.
As you can see, there is substantial support for many frameworks beyond the EJB world (hence the change of name from EJBDoclet to XDoclet). Now that we have talked about the tool, let's get into the real example. We will start with a Session EJB. This EJB is part of the Xbeans framework, but for this example, it doesn't matter what the bean does. All we care about is how we take a bean class, "mark it up" with Javadoc tags, and then use XDoclet to generate the meta files for us.

The file ReceiverBean.java will hold the following method: documentReady(Document doc). This method takes a DOM document and passes it to the next Xbean in the chain.

At the class level, we need to define the @ejb:bean tag. The only required attribute for this tag is to tell XDoclet the name of the bean. We will also define the type of bean, the JNDI name to bind the home stub, and the display name:

/**
 * This is the EJB Receiver Xbean
 *
 * @ejb:bean type="Stateless"
 *           name="ejbReceiver"
 *           jndi-name="org.xbeans.ejb.receiver.Receiver"
 *           display-name="EJB Receiver Xbean"
 *
 * ... other javadoc tags ...
 */
public class ReceiverBean implements SessionBean, DOMSource {

The most common attributes for the ejb:bean tag are:

- type: for Session beans, Stateful or Stateless. For Entities, it is CMP or BMP.
- local-jndi-name: like jndi-name, but used for the Local interface.
- view-type: remote or local, or both.

As for all of the tags, check out the documentation to see the full list of options.

The @ejb:env-entry tag defines an environment entry that will be configured in JNDI via the special java:comp/env context. We will define an environment entry that the bean will use to look up the next Xbean in the chain:

/**
 * This is the EJB Receiver Xbean
 *
 * ... other javadoc tags ...
 *
 * @ejb:env-entry name="channelBean" type="java.lang.String"
 *                value="com.your.ChannelBean"
 *
 * ... other javadoc tags ...
 */
public class ReceiverBean implements SessionBean, DOMSource {

Now we will configure the vendor-specific pooling characteristics, using WebLogic for the sake of argument.
To denote that we are in a vendor-specific world, we have the weblogic namespace:

/**
 * This is the EJB Receiver Xbean
 *
 * ... other javadoc tags ...
 *
 * @weblogic:pool max-beans-in-free-pool="1000"
 *                initial-beans-in-free-pool="10"
 *
 * ... other javadoc tags ...
 */
public class ReceiverBean implements SessionBean, DOMSource {

This tag will configure the pooling parameters in the WebLogic-specific deployment descriptor (weblogic-ejb-jar.xml). There are many other class-level tags allowing you to tweak anything that you can in the deployment descriptors. Here is a high-level glimpse at some of the "standard" tags that you may want to use in your development:

- a home tag: configures generation of the home interface (none, remote, local, or both), the package the interfaces should be placed into, and more.
- an interface tag: like the home tag, but configures information related to the component interface (remote and/or local).
- a permission tag: defines the role-name allowed to call all methods in the remote and home interfaces of this bean.

We want to tag at the method level. If we want a given method to be part of the remote interface, we simply tell XDoclet via a method-level tag:

/**
 * The method that the sender uses to pass the Document
 *
 * @ejb:interface-method view-type="remote"
 */
public void documentReady(Document incomingDocument) {

You will always use this tag. You will go through the methods in your bean class, and if you wish a client to access it, you place this tag above the method signature. If you wanted the access to be via a local interface, you simply change the view-type value to local. Here are some of the other EJB method-level tags:

- a home-method tag: generates the corresponding ejbHome* method.
- a persistent-field tag: generates getX/setX methods. For BMP, it will generate getX/setX methods that keep track of a dirty flag (so that ejbStore is only called when necessary).
- a transaction tag: sets the method's transaction attribute to one of NotSupported | Supports | Required | RequiresNew | Mandatory | Never.

In the Ant build file, we tell XDoclet which bean sources to process:

<fileset dir="${java.dir}">
    <include name="**/ReceiverBean.java" />
</fileset>

The next set of tags will make sure that the remote interface, home interface, and standard XML deployment descriptor (ejb-jar.xml) will be generated.
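For context, XDoclet 1.x is normally driven from Ant. The target below is a minimal sketch: the task and nested element names follow the XDoclet 1.x documentation, while the properties (${java.dir}, ${build.dir}) and the classpath reference are placeholders you would adapt to your own build:

```xml
<target name="generate-ejb">
  <!-- The XDoclet jars must be on this classpath for the task definition. -->
  <taskdef name="ejbdoclet"
           classname="xdoclet.modules.ejb.EjbDocletTask"
           classpathref="xdoclet.classpath"/>

  <ejbdoclet destdir="${build.dir}/generated" ejbspec="2.0">
    <fileset dir="${java.dir}">
      <include name="**/ReceiverBean.java"/>
    </fileset>
    <!-- Generate the remote and home interfaces... -->
    <remoteinterface/>
    <homeinterface/>
    <!-- ...and the standard deployment descriptor (ejb-jar.xml). -->
    <deploymentdescriptor destdir="${build.dir}/META-INF"/>
  </ejbdoclet>
</target>
```

Running this target regenerates the interfaces and descriptors every build, which is the whole point: the bean class plus its tags become the single source of truth.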
http://archive.oreilly.com/lpt/a/1468
Analysis of AUR and Official Arch Repository data

Last Updated on December 2, 2017

Arch Linux provides packages through the official Arch Linux repositories and the Arch User Repository (AUR). I recently gathered data on ~50,000 packages from these repositories on archlinux.org to better understand the makeup of the packages. In this article I will share some visualizations I made as well as some key takeaways about the data set I gathered. The repo with all of the data I collected, as well as the code I used to do so, is available in this repository on my Github account.

Growth of the AUR

The first questions I had about the dataset were about visualizing the growth of the AUR over time. Each package in the AUR has a First Submitted date, so I was able to put this together easily:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

sns.set()
df = df[df['First Submitted'].notnull()]
df["First Submitted"] = pd.to_datetime(df['First Submitted'])
list_of_dates = df["First Submitted"].sort_values()
counts = np.arange(0, len(list_of_dates))
plt.figure(figsize=(10, 5))
_ = plt.plot(list_of_dates, counts)
_ = plt.title('AUR Packages over time')
_ = plt.xlabel('Date')
_ = plt.ylabel('Packages')

It looks like there was a big boost in the number of packages submitted in mid-2015, and that number has been growing consistently since then.

Official Repositories

The official repositories contain just under 10,000 packages. Here is a force-directed graph (undirected graph) in D3.js that shows these packages as nodes and package dependencies as edges. This image shows a sample of about 1,000 packages and it is somewhat representative of the whole graph shown further below. There are four main repos in the official repositories:

- core contains packages for:
  - booting Arch Linux
  - connecting to the Internet
  - building packages
  - management and repair of supported file systems
  - the system setup process (e.g. openssh)
- extra contains all packages that do not fit in core.
  Example: Xorg, window managers, web browsers, media players, tools for working with languages such as Python and Ruby, and a lot more.
- community contains packages that have been adopted by Trusted Users from the Arch User Repository. Some of these packages may eventually make the transition to the core or extra repositories as the developers consider them crucial to the distribution.
- multilib contains 32 bit software and libraries that can be used to run and build 32 bit applications on 64 bit installs (e.g. wine, steam, etc).

Here is an SVG showing all packages in the official repository. Click on the image to explore the SVG in more detail, and you can hover over nodes to see which packages they represent. Turn off the lights and you can see a ring of packages orbiting in a circle! If you look at this file in detail you can find some interesting clusters of packages. The "island" in the top left includes mostly Haskell packages and pandoc. python, python2, and git are three of the main central hubs in the middle cluster. perl and other related packages make up most of the bottom right cluster, and you can see "flowers" of package dependencies, mostly for internationalization, for popular programs like firefox and thunderbird.

To make this interactive graph and SVG images I did the following:

- Create a dictionary from my database with package names as keys and lists of dependencies as values:

  graph_dict = {}
  for _, i in df.iterrows():
      graph_dict[i["package_name"]] = i["pkgdeps"]

- Create a NetworkX graph from the dictionary created in the previous step:

  import networkx as nx
  G = nx.Graph(graph_dict)

- Export the NetworkX graph to JSON using a built-in NetworkX function:

  from networkx.readwrite import json_graph
  data = json_graph.node_link_data(G)

- Add a group number to each node element corresponding to the repository it belongs to:
  for n in data['nodes']:
      n['group'] = int(df.loc[(df.package_name == n["id"]), "repo_number"].iloc[0])

- Save the JSON to a file:

  import json
  with open('/home/brian/Documents/github/briancaffey.github.io/aur/data.json', 'w') as outfile:
      json.dump(data, outfile)

- Feed the JSON file into this template, which renders a D3.js force-directed graph.

To save the graph as an SVG file, I ran the NYT crowbar script in the browser console:

var e = document.createElement('script')
e.setAttribute('src', '')
e.setAttribute('class', 'svg-crowbar')
document.body.appendChild(e)

Let's take one more look at how tightly connected these packages are before moving on.

Official Repository Package Sizes

Official Packages include both a Package Size and Installed Size. Here is a Bokeh plot showing Package Size vs. Installed Size:

Warning: Don't hover directly over the cluster of plotted points near the origin of the graph. DOING SO WILL CRASH YOUR BROWSER. This is because the hover tool will attempt to display all packages that you are hovered over, and it may be far too many for the browser to handle. Carefully zoom in using the scroll tool and you can find some interesting trends in the types of packages and how much they are able to be compressed.

{% include package_sizes.html %}

Here's the setup for this bokeh graph:

from bokeh.plotting import figure, output_file, show, ColumnDataSource
from bokeh.models import HoverTool
from bokeh.io import output_notebook

output_notebook()
output_file("/home/brian/Documents/github/briancaffey.github.io/_includes/package_sizes.html")

source = ColumnDataSource(
    data=dict(
        x=df.package_size,
        y=df.installed_size,
        desc=df.Description,
        name=df.package_name
    )
)

hover = HoverTool(
    tooltips=[
        ("Name", "@name"),
        ("Package Size", "@x MB"),
        ("Installed Size", "@y MB"),
        ("Description", "@desc"),
    ]
)

TOOLS = 'box_zoom,box_select,reset,pan,wheel_zoom'

p = figure(plot_width=400, plot_height=400, tools=[TOOLS, hover],
           title="Package Size vs. Installed Size", sizing_mode='scale_width')
p.circle('x', 'y', size=5, source=source, alpha=0.2)
p.toolbar.logo = None
show(p)

AUR Word Cloud

Let's make a word cloud out of the text descriptions for packages in the AUR. We can use a popular python package for making word clouds. Here's the code:

import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
import random

from wordcloud import WordCloud, STOPWORDS

def grey_color_func(word, font_size, position, orientation, random_state=None, **kwargs):
    return "hsl(0, 0%%, %d%%)" % random.randint(60, 100)

mask = np.array(Image.open("/home/brian/Documents/aur/images/arch_logo.png"))
text = open("/home/brian/Documents/aur/ipynb/package_descriptions.txt").read()

stopwords = set(STOPWORDS)
wc = WordCloud(max_words=1000, mask=mask, stopwords=stopwords, margin=10, random_state=1).generate(text)
default_colors = wc.to_array()

plt.figure(figsize=(20, 20))
plt.imshow(wc.recolor(color_func=grey_color_func, random_state=3), interpolation="bilinear")
wc.to_file("arch_word_cloud.png")
plt.axis("off")
plt.show()

Arch Wiki Members

The Arch wiki is the first place I go for troubleshooting any issue with Arch. Users of other Linux distributions have also said how useful it can be even if you don't use Arch Linux. Here's a look at the number of registered users on the Arch Wiki over time:

sns.set()
df = df[df['registered'].notnull()]
df["registered"] = pd.to_datetime(df['registered'])
list_of_dates = df["registered"].sort_values()
counts = np.arange(0, len(list_of_dates))
plt.figure(figsize=(10, 5))
_ = plt.plot(list_of_dates, counts)
_ = plt.title('Registered Arch Wiki members over time')
_ = plt.xlabel('Date')
_ = plt.ylabel('Members')
plt.show()
plt.savefig('/home/brian/Documents/github/briancaffey.github.io/aur/wiki_users.png')

There is a massive amount of data in the Wiki that I haven't obtained for this article. You can also find some interesting statistics on the Arch Wiki site here.
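Both growth plots above use the same trick: sort the timestamps, then plot each one against its running index. That cumulative count can be sketched without pandas at all (the dates below are made-up stand-ins for the "First Submitted" column, not real AUR data):

```python
from datetime import date

# Hypothetical submission dates standing in for the "First Submitted" column.
submitted = [date(2015, 6, 1), date(2014, 1, 15), date(2015, 6, 2), date(2016, 3, 9)]

# Sort chronologically and pair each date with the running package count.
timeline = [(d, i + 1) for i, d in enumerate(sorted(submitted))]
print(timeline[-1])  # the newest date carries the total package count
```

Plotting the second element of each pair against the first gives exactly the monotone growth curves shown for AUR packages and wiki members.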
https://briancaffey.github.io/2017/12/02/arch-linux-package-data-analysis.html/
todo:
  l18n version update
  l18n compare tool
  Config::Tree join & split keys
  ! update configs when update cpan installation
  ! EOL problem in UTF files

0.4.3.33
' typos as reported by fabreg@fabreg.it
0.4.3.32
] optimise CPAN appearance
0.4.3.31
] installer, no .svn files
0.4.3.30
] fixed module type :)
0.4.3.29
] grmpfl, repair Meta.yml
0.4.3.28
] with META.yml again
0.4.3.27
] included Module::Install again
0.4.3.26
? deleted xt which only can fail
0.4.3.25
^ just released as stable on CPAN
0.4.3.24
~ see all files in file dialogs even under linux and mac
[ fixing layout of search dialog
0.4.3.23
~ suspend interface caching due to coding problems in cache files
" ignore backup localisation files while creating l18n menus
0.4.3.22
+ added spanish localisation (thanks to enriquee nell)
[ less clutter in menu labels
/ moved dev to linux
0.4.3.21
+ select content - just quotings
[ add toggle script comment to selected text context menu
! repaired some POD's
0.4.3.20
~ move selection moves now wordwise when part of line selected
' 2 doc typos in Kephra.pm
0.4.3.19
~ move selection moves now only complete lines when selected none or several
! old l18n key in romana.conf
0.4.3.18
+ insert rectangular selections
+ move line tabwise left and right with <ctrl>+<alt>+<left|right>
~ brace nav is caret pos aware (jumps from inside to inside or outside to outside)
~ toggle # comment with <ctrl>+<alt>+<k>
] automatically hides incomplete translations in enduser releases
! closed unnamed docs appeared in file history
! proper titlebar and tabbar visual updates while renaming file
! folding and marker setting with left click on margin works again
0.4.3.17
+ selecting right to left and right click deletes (clears) selection (left to right just cuts)
] hardening the feature of previous version
] config reload when changing current l18n works again
" improved english l18n (mostly by enrico)++
0.4.3.16
! lang selection menu works again
0.4.3.15
! wrong literals in default commandlist
0.4.3.14
~ correcting toggle select
~ changing command ID select-toggle > select-toggle-simple
~ changing command ID select-content > select-toggle-content
0.4.3.13
] introduce command node type sub
] remove old unused command ID
] splashscreen closes on right time
! forgot last_edit pos after restart
! correcting prev_related_brace when jumping to previous closing brace of same depth
! crash while editing after closing doc
0.4.3.12
~ jump from doc end to doc end (auto wrap) when navigating in related braces
[ move edit_line submenu 2 up
' update and fix keymap
! left click on selection does copy in output panel too
! missing comma and bad arrows in default command list
! while brace navigation caret can't hide in folded lines
0.4.3.11
+ note selection works also in output panel
+ copy selection in output panel with <enter>
+ left click on selection does copy in output panel too
~ leaving notepad with F12
~ moving zoom keybinding to ctrl+shift to allow typing ~ again
~ # is written sharp in keymap to prevent quoting a la /#
] init commandline module
' keymap shows marker keybinding back to f2
! note selection really appends
! fixed statusbar context menu
! fixed eol switch
! template menu updates again after saving active template file
0.4.3.10
+ toggle select with ctrl+y
+ zoom in, out and back with <ctrl>+<alt>+<+>, <ctrl>+<alt>+<-> and <ctrl>+<alt>+<0>
~ marker keybinding back to f2
[ adding zoom and select menu
! fixed goto_prev_marker_in_doc
0.4.3.9
~ reorder tool menu
~ moved keybinding of marker to f4 and notepad to f12
' updated keymap
' typos in en l18n
! insert on cursor
! copy quoted string works again
0.4.3.8
+ new menu item tool-brace-completion
+ new tool: insert time/date
~ doc attr config_file now saved by session files
~ esc in searchbar closes bar, use ctrl+f to just leave
' improved docs: this version, all functions, TestReleases, README
[ moved document-auto-indention and document-brace-indention into tools menu
] outsource all visual stuff inside the editpanel to Kephra::App::EditPanel::Indicator
! only recent l18n file open with *
! Kephra::Config::path_matches works as always intended, matches subpathes not just eq
0.4.3.7
~ refined editpanel mouse control
[ start Kephra::Macro.pm
! open toolbar file works again
! templates work again
0.4.3.6
+ left click on selection does copy
+ middle click inserts selection
~ middle click on searchbar finds next / goto last edit
! acme unselect selected half document
0.4.3.5
+ acme like mouse control: cut (left+middle) and paste (left+right)
+ acme like mouse control: search (middle) or goto last edit (if no text selected)
~ color picker now understands comma separated decimal values too
" switch position of last 2 tool menu items
" new command ID: tool insert date time
' improved docs: navigation and this version
0.4.3.4
+ bug and request tracker links in help menu
~ search in all current block for last_perl_var
? implemented 7 tests for t/04_config_data_tree.t
? uncommented 17 tests for t/02_config.t
' pod fixed by kristian++
0.4.3.3
+ insert_last_perl_var
~ replace to nothing again
? added t/04_config_data_tree.t
" advanced czech localisation
" added keyword default to perl lexer def
' updated keymap
' updated navigation
[ added tool-perl-copy-string and tool-perl-insert-last-var to tool menu
! doc lang selection
0.4.3.2
^ rerelease as CPAN stable version
0.4.3.1
] xp mode for boot sequence
" tool-output-selection-hex now on key alt+shift+f5
0.4.3
' update credits
0.4.2.39
+ copy surrounding quoted string with <strg>+<alt>+<C>
" getting default settings of new files when opening empty files
[ moving insert templates into tools menu
! file history menu works again
! crash when opening empty files
0.4.2.38
+ marker functions also work from context menu pos
0.4.2.37
+ folding per context menu on mouse position (idea by murphy++)
" item view-editpanel-fold-toggle-level in markermargin_contextmenu was missing
" searchbar autofocus no longer default
! cripple statusbar coding field to make it safe for now
0.4.2.36
! update of module::install to 1.0
0.4.2.35
! bug in makefile
0.4.2.34
! utf labels now work on windows too (linux did it automatically right)
0.4.2.33
! release tool now does tgz with the right rights
0.4.2.32
! fold all nodes works now even if caret is in area without folding
! fixed function and cursor visibility of toggle recursive code folding
! remerged forgotten win default settings
0.4.2.31
+ refresh autosaved configs after cpan update
~ changed folding keybinding
~ fold here: ctrl+plus, level: alt+plus, recursively: alt+shift+plus
0.4.2.30
~ xml formatter keybinding is now Ctrl[+Shift]+I
! unified logic behind both transparency settings - 0 = no transparency
0.4.2.29
+ safety belt for window modes on esc key
[ + key code for hide tab bar at ctrl+alt+t
! new docs show correct codepage
0.4.2.28
+ fullscreen mode
+ max editpanel mode (disables all bars)
+ transparent mode
+ you can combine stay on top and transparent mode with Ctrl+Alt+F11
[ new menu: view > main window
[ move cmd view-window-stay-on-top there
[ created commands view-window-fullscreen, view-window-transparent, view-editpanel-maximize
" changed default font of output panel to arial, 10 px
0.4.2.27
] finer event freeze and thaw
! delete unnecessary escaping of vars in title content definition
! save copy as ... works again
! fixed eventhandling in Kephra::Document::Data::evaluate_attributes
! fixed test suite
0.4.2.26
+ writes files also in missing dirs
! new Kephra::Dialog API fix
! redo comment function calls in commands.conf to older, less flexible but working way
0.4.2.25
+ romanian localisation
" bad config: invisible main menu
! old API uses
0.4.2.24
+ add support for latin 1 coding (codepage property)
! reload and insert file works again
0.4.2.23
+ autorecognition via encode::guess
+ read and write utf
0.4.2.22
+ save and restore textfold state in sessions and file history
~ change sibling to level fold
0.4.2.21
+ file history is now a session, restore file with all saved properties
+ switch menubar visibility
] localize some of last temp vars
] simplify Dialog API
0.4.2.20
[ new icons for search abr ops, marker, script run, color picker
] starting dater due to freezing text change event
] event group doc_change_went smaller: doc changes now faster due to fewer events fired
! right linenumber margin size when starting with autosaved.conf as current doc
0.4.2.19
[ update menu view > contextmenus and make 2 new calls for it (markermargin, searchbar)
~ statusbar contextmenus have own switch, interactive means switch leftclick
~ changed markermargin contextmenu according to the order of margins
" search date has own config file
" new dir for config data files: search.yml and notepad_content.txt
0.4.2.18
~ expanded markermargin contextmenu
" repaired some issues in l18n and default global configs (due to tools/confer.pl)
! folding works again
! global marker prev|next now can jump back to current doc if there are more markers
0.4.2.17
+ goto next and previous marker in all docs
~ global marker goto is now in search bar
~ not appending empty selection on notepad text
! fix marker save in session
0.4.2.16
+ set bookmarks directly by mouse middleclick
+ save marked lines in the session file
[ switched move left and right in document > change menu
[ changed marker key binding to standard compliance
] refined add/restore in Kephra::File::Session
0.4.2.15
" move search data file into global/sub dir
[ contextmenu on markermargin
[ + marker menu
0.4.2.14
+ output selection in dec and hex
[ output menu
[ file close submenu has icon
! file history works again
0.4.2.13
+ set search dialog transparency under dialog > search > transparency
+ notepad has line move
! fixed cursor pos on line move
0.4.2.12
+ new tool: output the %ENV
[ marker nav in searchbar
0.4.2.11
* marker functions
+ set marker by mouse
[ bookmarks now have individual icons (jenne++)
] missing helpstring for menu entry causes no more warning
0.4.2.10
~ autoconverts of EOL are now visible
" minor bit in german tab status field help
[ config access only through API
[ cleaned Kephra.pm
! bookmarks work again
! fixed property handling for the different file defaults
0.4.2.9
" different default settings for new and open files
" new icons from jenne++
0.4.2.8
+ color picker
+ statusbar has context help while mouse hover
] reformat to one API module
! tab info char in wrong tab
! encodings menu visible, also in the statusbar
0.4.2.7
~ current dir is now always synched with current file
! wrong titlebar content after some close doc events
0.4.2.6
] rest of renaming plugin -> panel for notepad and output
] rename bookmark.pm -> marker
! splashscreen works again
0.4.2.5
+ can assign icons to menus
[ rearranged file menu
! crash while folding
0.4.2.4
[ added encoding menu
[ splitters are now without live update, a lot less flicker
' updated german help-forum link
0.4.2.3
~ linenr margin width autosize works now per doc
] separated fold functions in own module
! change font works again on all edit tabs
0.4.2.2
! most visual settings work globally again
! don't lose caret while find and defold
0.4.2.1
+ loading statusbar from config
+ status field codepage
" added keys app > statusbar > file and app > statusbar > node
] refactor Kephra::API::EventTable -> Kephra::EventTable
] refactor Kephra::API::CommandList -> Kephra::CommandList
] refactor Notepad, Output: Kephra::Plugin::* -> Kephra::App::Panel::*
] refactor Menu, ToolBar: Kephra::App::* -> Kephra::*
! all events react on selection correctly again
0.4.2 testing release: new tabbar
~ event driven stuff works again
~ smaller tab width in notepad
! open autoconfig file works again
! reload global configs works again
! file changed notify works again
0.4.1.19
[ reduce clutter while boot and shut down
] changed all constants to &Wx::... syntax
0.4.1.18
~ session saves docs now in visual order
~ search findings even visible when hidden in folded line
0.4.1.17
! session save does work again
! tabs and other doc settings work again
! file save works again
! reload file works without flaws
0.4.1.16
! sessions do load again
0.4.1.15
] 2 level tab number translation
~ moved motto from info box into localisation
~ visibility of tabbar can be set again
! move tabs by mouse has no side effects
! move tabs by keyboard works again
0.4.1.14
' added doc/Roadmap file
~ doubleclick on panel splitter now reacts properly
! Bookmarks work again
0.4.1.13
+ tabs are moveable by mouse now and still change in right order
" english l18n synced with defaults
] extended/reformed Kephra::Document::Data a bit
! new tabbar works suddenly, ahm finally
! file save works again
! toolbar icons respond properly
0.4.1.12
" default l18n in sync again
! folding works again
0.4.1.11
~ note-selection gets key binding: shift+f4
" new key: app > tabbar > movable_tabs
" new key: app > tabbar > close_button
" new key: app > tabbar > tablist_button
" removed key: app > tabbar > visible
] switched Wx::NoteBook > Wx::AuiNoteBook
] win distro updates to Wx 0.93
] displays Scintilla version correctly for newest Wx
0.4.1.10
! bug with output panel
0.4.1.9
] new stc ref handling
] rewrite of whole Kephra::Document namespace
! new docs aren't readonly
0.4.1.8
+ display @INC paths
+ open online doc and native lang forum URL in default browser
+ recognizes now tab mode of opened file
+ conf_key tab_use_new and tab_use_open (default auto)
~ renamed fold margin label
~ doc property bracelight is now view option
~ tool-run-doc => tool-interpreter-run-doc (stop also)
0.4.1.7
+ file > defaultsettings > EOL_new can now be 'OS' => new files have line endings according to the OS you currently run
- no menu icons under mac
~ rearrange main app layout (right panel goes from toolbar to statusbar)
] define event groups
] new module Kephra::Document::Property for user-changeable doc properties
! fixed autowrap on text search: finding next
0.4.1.6
+ rearranged context menu of tab status bar cell, can change tab width there now
] patches from Andreas Kaschner++:
! fixes in 02_config.t, Notepad.pm, StatusBar.pm, Default/CommandList.pm
0.4.1.5
+ convert spaces to their HTML entity
[ moving folding menu to fold margin in view menu
] convert all ISO 8859-1 entities except whitespace
0.4.1.4
+ fold all (alt+shift+plus)
+ fold recursively (alt+shift+minus)
+ these calls now on right click on fold margin
~ changed key binding for fold: fold => alt+minus, fold siblings => alt+plus
~ fold mouse binding: left => here, midd => recursive, right => siblings, l+f => all
~ editpanel autofocus is now off per default
[ unfolding when goto a hidden line
] convert more entities
" some locale strings were translated to wrong language
0.4.1.3
+ 2 new converters HTML entities to char and back
+ second fold marker style, set editpanel > margin > fold > style to arrows or boxes
+ can set text margin to width of 3 px
+ note selection
~ config key app > panel > notepad > content renamed to content_file
[ folding functions now visible in menu
! when text margin set 0 menu shows it now
0.4.1.2
+ optional flagline shows where text is folded
+ option for keeping caret visible while folding
! fold siblings works now also in the first line
0.4.1.1
+ key binding for fold ops: fold => alt+plus, fold siblings => alt+shift+plus
! fixed commandlist cache logic, refreshes now if needed
0.4.1 testing release: folding
0.4.0.12
[ unfolds all if hiding folding margin
[ less intrusive default color for folding markers
[ less intrusive default color for line number
0.4.0.11
* code folding support
+ fold sibling nodes
[ editpanel context menus don't open over margin anymore
0.4.0.10
+ commandlist cache
[ moved menu view_bars one position down
] commandlist and localisation modules now hold their data internally (was global)
" editpanel > margin > fold is now a node that holds forecolor, back and visible
! typo in Config::Default::GlobalSettings
! crash: typo in event resigning of Dialog::Search
! crash: old api call in file::session
! crash if hiding main toolbar by next event
0.4.0.9
[ uncomment foldmargin view option
[ darker (yellow) caret line (was nearly invisible)
] new commandlist leaf: keycode
] rehashed parts of Kephra::API::CommandList
0.4.0.8
+ autoplugging localisation system
+ added key editpanel > auto > focus
~ changed name convention for icons, no more underscore => new-names.xpm
' updated roadmap in kephra.pm POD
" panel icons were missing in default commandlist
0.4.0.7
+ initial norsk localisation
+ new submodule Kephra::Config::Localisation, refactoring
- removed stupid restrictions not to open empty files
~ open binary files (only_text = 0) is now default, problems with utf files
~ activate UTF8 for Config::General when locale file requires
] open localisation files function moved into Config::Localisation
! typo in localisation column, not cloumn
0.4.0.6
] moving codepage setting from hard wired to config set
~ moving File::_age into File::IO
! output panel didn't work
0.4.0.5
[ cleaner search dialog
! updating in localisation default config
0.4.0.4
+ select interpreter with config key app > panel > output > interpreter_path
+ file missing dialog
- remove dangerous <Ctrl>+<Alt>+<Q> keybinding (triggers when you want to create @)
[ find item history realtime update now works in search dialog
[ cleaner search dialog
] rebuild notify dialog
] renewed parts of search module
! comboboxes in search dialog process enter
! crash on moving tabs
0.4.0.3
! keep calm when just an unnamed doc is unsaved
0.4.0.2
' updating key layout docs
~ close all key binding changed <Alt>+<Q> => <Ctrl>+<Alt>+<Q>
~ close app key binding changed <Alt>+<F4> => <Ctrl>+<Q>
! UTF problems on files with syntaxmode none
0.4.0.1
' updating key layout docs
! quoting doc name in output panel eval
! crash while deleting all bookmarks
0.4 stable release
' finished end user docs
0.3.10.24
~ background color for brace highlighting
' more docs translation
! a Config::Global sub required a gone module
! internal doc property reset didn't delete file name
! file > open > each_once works again
0.3.10.23
~ hide config dialog
' update docs
! change doc
0.3.10.22
! move tab right crashed
0.3.10.21
~ in searchbar input now works ctrl+
! fixed Config::load_defaults
0.3.10.20
[ forward calls from doc to doc::Internal via glob
! invalid Doc::name call in file.pm
0.3.10.19
+ define output colors by config
~ new config option: interpreter for notepad eval
~ added '&' as default wordchar
! output used removed Doc API
! output restores cwd
0.3.10.18
! 0 div in indent guide highlighting
! restore doc attribute from doc nr 0 correctly
! umlaut problem no longer
! clean up all lexer words
0.3.10.17
] massive refactoring of Kephra::Document namespace
! restart timer when canceled app shutdown
0.3.10.16
] sane doc nr validation
! false trigger of file changed notify dialog
! wrong win title when opening file
! fix default global settings hash
0.3.10.15
! output panel can run scripts again
0.3.10.14
+ perl eval of notepad
+ define find and replace items in notepad
! enable right tool button after doc change
0.3.10.13
! cleanup doc internal mess
! fix setting of current syntaxmode
0.3.10.12
[ search finding better visible
! safer conf file parser init
! didn't remember changed syntaxstyle of opened file
! proper file dialog filter for config files
0.3.10.11
+ exit without save
] rewritten parts of Edit::Goto.pm
] made localisation configs saner
! goto line dialog showed 1 line less than current
! fixing default localisation
0.3.10.10
] founding Kephra::Log
] adapt makefile because no longer need Log::Dispatch
! won't crash when splashscreen image file not found
! reload settings when saving active config file works again
0.3.10.9
~ rename file > "Instert ..." into "insert from ..."
! searchbar input works again (they changed standard behavior when pressing enter)
0.3.10.8
! fix event table data structure
repair (beginnings of) czech localisation 0.3.10.7 - append option in output panel ] changed session.conf to yaml due sorting problems with current (2.4) C::G ] switch to Module::install 0.77 ! reload configs works again 0.3.10.6 + 3 close unsaved functions ' extend Kephra.pm POD [ save notepad with when resize [ rearrange file menu 0.3.10.5 ] copy configs unto userconfig when configs are old ] current filepath is known to interpreter when run script ! copied config files where readonly ! could not write autosaved file sessions when no file was there ! changed_notify_check could not handle deleted files 0.3.10.4 ' rewritten parts of Kephra.pm POD ! fix install problem hopefully 0.3.10.3 ] Keymap Dialog had no Version number ! caret visible on app start ! tried to load non existing notepad cache 0.3.10.2 ! fixed codepage setting ! fixed wordchar settings ! fixed and moved dev.pl outside the distro 0.3.10.1 [ changed panel layout: column division first ' POD fix for Panel API ! workaround for a Wx bug 0.3.10 ! add missing panel localisation ! resized output panel when started closed 0.3.9.18 + Output is resizable ans remembers its size + Notepad is resizable ans remembers its size ! Notepad knows its split staus after closed per mouse 0.3.9.17 + output panel has now a stop function to kill hanging prozesses [ enhance Notepad as STC that shares some settings with main STC ! fix perl lexer color definition 0.3.9.16 * notepad panel works completely + output panel recoveres its visibility state after restart ~ output panel open if run script ! output panel handles multiline output ! can run files with whitespace in name (patch by reneeb) 0.3.9.15 * output panel works now 0.3.9.14 ] internal file namespace cleanup 0.3.9.13 + added View Panel Menu + 2 entry for Notepad and Output ~ renamed Module API to Panel [ basic visuals for output panel 0.3.9.12 [ + goto line icon ~ switched to Config::General 2.4 in windistro ! fixed about dialog 0.3.9.11 ! 
restore curser pos in current file after restart ~ added __WARN__ and __DIE__ to perl lexer ! wordchar settings where missing 0.3.9.10 ] Gabor: added first logging functions ] renamed Config::Default::Global_Settings to Config::Default::GlobalSettings ! Gabor: optimized config file type recogition ! mismatch in 2 other Default modules names 0.3.9.9 ' documented API modules ' repaired some POD in main module [ fixed tabbar icon tooltip texts ] switched from YAML to YAML::Tiny (less code, less memory, all we need) ] new starter.exe that starts kre\wperl.exe 0.3.9.8 ' extending Kephra.pm's POD ] making DragAcceptFiles optional because not supported by GTK 0.3.9.7 ! seperator in Config::Embedded ] renamed Config::Embedded -> Config::Defaults ] rename pre dir into kre (kephra runtime environment) ] externalized into single files 0.3.9.6 ] complete new starter ] new Kephra::Config::init() method 0.3.9.5 ' added POD in Kephra.pm 0.3.9.4 + avennue highlighter + BAAN highlighter + .bat highlighter + diff highlighter + errorlist highlighter + makefile highlighter + matlab highlighter + property file highlighter ] configs and localisation for 8 new styles ' removed "all rightes reserved" 0.3.9.3 ! crash on file open ' this_version.txt update ] finished last Kephra::Config::Tree functions ] deleted not used Dialog::Search subs 0.3.9.2 + reintroduce fast splashscreen ~ splashscreen img and app icon now as xpm too ] shrinking strawberry distro from 28 to 23MB 0.3.9.1 ' POD fix ! toolbar appeared in wrong status bar cell 0.3.9 0.4 RC 2 0.3.8.14 ' formated and updated documentation ~ updated credits.txt (PCE -> Kephra) ~ rewritten Config::Tree functions so i could - remove Hash::Merge and Clone as dependencies 0.3.8.13 + added pbp.conf according to damians perl best practices ] cleaned Kephra::Edit::Search.pm ! crash when open global autoconfigs 0.3.8.12 ] cleaned bit the sub load_from() in Config::Global ] cleaned Config::Global a bit ] WxKeycodes for search dialog ! 
crash when searching with dialog 0.3.8.11 + added replace selection into context menu over selected text in editpanel ~ cleared replace selection labels ~ ignore in filechange notify dialog now works until next file change happens 0.3.8.10 - removed global default conf, replaced by embedded conf ] search dialog is now real dialog ! debugged embedded global conf 0.3.8.9 + embedded main menu settings ! can handle missing file session file ! hang while searching after closing search dialog 0.3.8.8 + <ctrl>+<A> works in searchbar + embedded toolbar settings + embedded context menu settings ] rewritten parts of Kephra::Config::File ! debugged embedded localisation 0.3.8.7 + embedded commandlist ! prevent thread hangup due late fired timer event on shutdown ! added unicore utfdb to winball ! blockformat on width 0.3.8.6 + context menu over statusbar info field - select app language menu ! search dialog carret keeps position ! crash when ask for not existing menus 0.3.8.5 ] fixed and updated embedded configs ? more compile tests for modules that arent loaded on start ! crash on replace 0.3.8.4 ] thrown wx icons out of the distro ! crash on select all 0.3.8.3 [ put file sessions menu out of file menu on top level [ put goto-line into searchbar ] added Kephra::API::CommandList::run_cmd_by_keycode ~ document switch backs works out of search bar ! pos1 and end works in searchbar again 0.3.8.2 ! crash on search 0.3.8.1 + menu item switch view of tabbar contextmenu ~ folding view menu ! fix php lexer, new constant ! fix xml lexer ! crash while open help files in winball distro 0.3.8 bugfix release, 0.4 RC 1 0.3.7.12 + position searchbar in the middle between tabbar and edit panel ] hack to disabling acceleratorTable ~ new wx constants ! runs with latest Wx (0.83) again ! doc attr getter was flawed 0.3.7.11 ! 
empty statusbar syntaxmode field if kephra sarted with last doc with mode: "none" 0.3.7.10 [ nicer menus due fake transparent icons for menu items with no bitmaps ~ take config dialog again in the toolbar ! old Searchbar on config reload won't go 0.3.7.9 ! could't open first file ~ hiding print, expanding file menu again 0.3.7.8 [ new splashscreen (finally one with the right name in it) ! doc history can handle if non doc is open ! didnt start with no file open 0.3.7.7 ! fixed color of syntax mpode "none" after switiching from php ! rot tabs correctly while moving 0.3.7.6 ~ change Switch doc back to <ctrl>+<shift>+<back> ~ redesigned file menu ! added missing doclist event on doc switch ! open all files of dir now appears in right dir 0.3.7.5 + move tabs with <ctrl>+<shift>+<pgup|pgdn> ] cleaned up App::TabBar API 0.3.7.4 + Ctrl+Shift+G works from the searchbar input 0.3.7.3 ! more internal doc add fixes 0.3.7.2 ~ move xml comment from <ctrl>+<b> to <ctrl>+<h> ! lot of internal bugs introduced due 0.3.7.1 0.3.7.1 ! EOL status works on empty docs again ! doc attributes and data is now deleted correctly while closing doc 0.3.7 testing release 0.3.6.12 ! fix naming convention in CPAN distro 0.3.6.11 ] updated tests to ned app name 0.3.6.10 + autonotify Dialog ] saner file open syntax ~ no autosave on unnamed files ~ autonify just once ! false autonotify on config files 0.3.6.9 ! 
small fixes 0.3.6.8 + autosave, define it under config key file > save > auto_save 0.3.6.7 ~ StatusBar Info with dotted numers ~ renamed API::Module.pm to Extention.pm 0.3.6.6 + tools to make CPAN distro (make_cpan_pl) 0.3.6.5 ' added POD documentation 0.3.6.4 ' translated this_version.txt from de to en ' translated special_feature.txt from de to en ~ changed item order in tab contextmenu 0.3.6.3 + new perl 5.10 keywords in Perl.pm + started Module for Plugin API : Kephra::API::Module + wrote Documentation of Perl main module ~ join lines now leaves a space between joined lines ~ join lines has key binding : <ctrl>+<shift>+<J> ! search dialog holds correct replace item when non text selected as new item ! fixing doc change in search bar input 0.3.6.2 + jump to doc begin and end from searchbar input whith <ctrl>+(<home>|<end>) + change doc from searchbar input whith <ctrl>+(<pgup>|<pgdown>) ~ changed tablabel configfile marker from circumfix | | to prefix $ ~ empty config dialog visible again 0.3.6.1 + marking config files in tabs optionally with straight lines + added replace line Command to main menu ' wrote 'besondere_funktionen.txt' ] update embedded config 0.3.6 round up, bug fix and minor feature release 0.3.5.8 + load and store a backup file session + close searchbar now also with <ctrl>+<Q> while in the searchbar input + more menuoptions to set width of textmargin ' updated diese_version.txt ! search dialog crash (uncompled API module refactor) 0.3.5.7 + take autoautoindtion in statusbar tab-cell contextmenu ~ rename linebreak to line wrap ~ changing name in the docs ] upgrading windistro to PPI 1.2 ! fixing and fine tuning blockformat and line wrap 0.3.5.6 + blockformat <ctrl>+<shift>+<b>, menu edit > format + line break, menu edit > format [ removed ambiguity in german mainmenu navigation with keyboard ! 
optimizing speed and undef value handling of menu data generation 0.3.5.5 + tabbar tabs numbering optional config key: app > tabbar > number_tabs + tabbar filename optional without ending: app > tabbar > file_info = [first]name ! seach dialog icon shows again ! file firs names now correct when it has no file ending ' some more documenting in default.conf 0.3.5.4 + firstname: new document property holds filname without ending + can use [$$firstname] as template variable too ] clean up ::Config::General.pm ] less subconfugs ~ renamed 'global\sub\localisation' to 'global\sub\documentation' 0.3.5.3 ! move document line and page wise from searchbar input [ use wx keycodes, less error prone 0.3.5.2 + reload templates on save + move document line and page wise from searchbar input [ internal cleanups ! reload all docs switched to last doc 0.3.5.1 ' advanced feature tour ' wrote german 'this version text' for release 0.4 ' reneame 'feature.txt' to 'all_feature.txt' ' some preparations for czech localisation 0.3.5 bug fix and feature release 0.3.4.15 ] moved icon path from \config\icon to \config\interface\icon\ 0.3.4.14 ~ changed File Menu order ! current pathes now contain volume name 0.3.4.13 - auto brace join ] beginnings of Modules and its handling ! cursor in searchbar's edit field now holds its position whilse writing 0.3.4.12 ~ insert templates 0.3.4.11 * Template Menu ] config nodes file > session and file history ] new config node file > templates ] renamed Edit::Changes in Edit::History ] cleaned up file session directory handling ] cleaned up path splitting 0.3.4.10 [ cleaned Kephra::App namespace 0.3.4.9 ! document editing commands did crash 0.3.4.8 ! refresh of find item history in searchbar ! chrash when context menu calls 0.3.4.7 * history File menu 0.3.4.6 + custom title bar 0.3.4.5 + multiple events on GUI element ! redo button ! better tabbar context menu (still some flaws) 0.3.4.4 + nonreactive toolbars (disable events) 0.3.4.3 ! 
reload config file when saving it in editor 0.3.4.2 ! searchbar fixes 0.3.4.1 ! exit dialog entitles also unnamed files 0.3.4 bug fix release [LOST due crash] 0.3.3.17 + switch syntaxhighlight when left click on third status pane ~ correct refresh while rename file [ file rename got shortcut <ctrl>+<alt>+<shift>+<S> 0.3.3.16 + find next and prev in searchbar now also with F3 and shift+F3 ~ perl sigils and namespaceseparator now word chars ! searched first when setting find item with strg+F3 ! crash while call replace with confirmation 0.3.3.15 ~ doclist menu shows untitled ~ less events on doc change fired ! correct window title while open file or session reload if current doc is last ! state of find menu items ! crash when saving a file that was in a recently deleted dir 0.3.3.14 [ proper searchdialog replace input update [ put config dialog call into main menu [ searchbar find input size now changeable due config file ] config node "file > current > session" renamed to "file > session" ] config node "file > filter" renamed to "file > group" ! dynamic doc list menu is updating again 0.3.3.13 ~ config dialog cleanup ~ extracting ::Edit::Changes.pm ~ proper searchbar find input update ~ proper searchdialog find input update ~ doc edit commands much faster now, they use event freeze [ exit dialog shows file names again ! event freeze was brokes 0.3.3.12 ~ updated starter exe in win distro ~ benchmark output switchable 0.3.3.11 ~ search range items in main and searchbar context menu ~ doc change items in doclist menu 0.3.3.10 ~ merged with adams code version ~ cleaned eventlist freeze implementation ! crash while find or replace in all open docs with dialog ! search dialog lost icon when config in nonstandart path 0.3.3.9 + subconfigs from nested nodes + ability to store single contextmenu in seperate file ! 
warnings on empty commandlist nodes 0.3.3.8 ~ using Kephra::Config::File::load not YAML directly ~ simplified session files ] started Kephra::Config::Tree ' more benchmarks 0.3.3.7 [ german localisation correctoin, spell checks ~ build searchbar from configs, findinput is now sizeable 0.3.3.6 ! crash on view EOL 0.3.3.5 ! save first time rightmost doc 0.3.3.4 ~ some text folding preparations ! searchbar and search dialog dind't run 0.3.3.3 ~ reduced code in App::Window::load_icon ~ dissolving depreciated lib Kepher::App::STC > Kepher::App::EditPanel ! DND on Seachbarbar input 0.3.3.2 ! fix config for rename [ hiding error logs 0.3.3.1 ~ edit contex menu fixed and chenged [ item label: Fortran -> Fortran 90 ! change font 0.3.3 rounded up for test release 0.3.2.21 ] new namespace for line number margin ] slimmed events, faster now 0.3.2.20 ! fix toolbars visibilities ! fix searchbar contextmenu connector ! sane app gui part layout when changing language 0.3.2.19 + configurable middle click on tabbar ~ correct tooltip and status help msg on tabbar ! repaired tabbar 0.3.2.18 ~ lot of internal cleanup ] internal unified namespace App::ToolBar 0.3.2.17 + repairing subconfs + nearly comlete set of embedded config (interface missing) 0.3.2.16 + toolbar toggle buttons + eventsystem + dynamic toolbar with events 0.3.2.15 [ removed most search dialog button flicker ~ faster search dialog 0.3.2.14 ] toolbar rebuild - remove xrc interna ! crash on edit document calls 0.3.2.13 ] Config menu rebuild ] Help menu rebuild ] Menubar completly set to new Interface text compiler ~ ::App::Window::create - most xrc files except toolbar 0.3.2.12 ] View menu rebuild ] Document menu rebuild ] Search menu rebuild ] Document menu rebuild ] View menu rebuild ] moved braced nav to edit::goto modul ! brace nav crash 0.3.2.11 ] Edit menu rebuild ! crash while text refresh ! set bookmark annoyance 0.3.2.10 [ EOL checks ! 
fixed tabbar context menu 0.3.2.9 ] new Document::SyntaxMode, Edit::Goto and Edit::Select namespaces ] File menu rebuild ] dispatch own key events - shorten Events.pm ! replace all replaced with zero 0.3.2.8 ] moved last parts of visual.pm code Statusbar.pm and Toolbar.pm [ checkitem in syntaxmodeinfo contextmenu [ can_save_all now much faster 0.3.2.7 + rename files ! fixed check and radio item update 0.3.2.6 [ better goto line dialog ] + can_save_all Event which is set whenn there is an unsaved doc ! correct checkings and disabling on all menu items ! tab change event echo ! del mismatch in searchbar combobox 0.3.2.5 [ first main menu works with a submenu ] PCE::Config::Stettings to PCE::Config::Global, cleaned namespace ! fixing search dialog find combo loosing search item while search 0.3.2.4 [ no cursor or text deselection jump when save all files ] PCE::App namespace newly ordered ! crash on search dialog Drag'n Drop 0.3.2.3 + searchbar contextmenu ! pathname mismatch with numbers ! opening changing and save rightmost doc 0.3.2.2 + drag n drop in search dialog [ single wrong indention on tabbar contexmenu ] overall slimmed and cleaned up code ! no proper eol mode display in statusbar 0.3.2.1 ] cleaned up statusbar internals ! forgotten update call for document list update ! statusbar showed not correct number of document lines 0.3.2 ] more checks when loading GUI configs 0.3.1.14 + menuitem radiogroups + disable menuitems 0.3.1.13 ] unified commandlist value loading ! crash while change Localisation Language again 0.3.1.12 + documents contextmenu ] lot of cleadup in Edit.pm [ slightly changed goto behaviour 0.3.1.11 ] all contexmenu on gtc ] contexmenu-editpanel eval connector [ new german iso lacalisation ! crash when current doc setting in session file is to high 0.3.1.10 + icons in new contex menus ! 
crash while change Localisation Language 0.3.1.9 + contextmenu on selected text ] editpanel contextmenus working in minimal mode 0.3.1.8 ] adding contextmenu gui config file ] info dialog cleanup ] added embedded emergency localisation 0.3.1.7 ] cleaned app namespace ] cleaned up program name handling 0.3.1.6 + searchbar takes eventual selection as search item ! open button did disappear for after cliscked 0.3.1.5 + config key {app}{searchbar}{autofocus} sets focus to input while onmouseover ! cursor jump to begin when deleting bookmark ! deleting all bookmark ! crash if reload nonexisting file 0.3.1.4 ' keymap updated ~ goto last edit now on <Ctrl>+<Shift>+<G> ! open files from command line with pce was broken since 0.3.0 ! current doc pointer now correct even if there is only one doc 0.3.1.3 + brace navigation with <alt>+<arrow keys> [ <Ctrl>+<F> now also switches focus back to editpanel [ button added on searchbar to call search dialog ] cleaned PCE::App::Event::key_down_filter 0.3.1.2 + delete back tab, <shift>+<back> deletes now to naxt indention level ~ switch back now on <Ctrl>+<back> ] property handling while restoring file session now much slicker ! warning caused by checking more filenames that exist while open file ! fixing file session loading and restoring 0.3.1.1 [ fix in german toolbar "Suchdialog" [ caret position better visible when switching to document ! warning in info dialog when there is no patchlevel 0.3.1 ? testing release [ closing street holes 0.3.0.27 + block navigation reestablished 0.3.0.26 ! fixed blockindent 0.3.0.25 + Bookmarks are working now 0.3.0.24 ! fixed tab converter 0.3.0.23 + restoring bookmarks ! display of selection lines now correct 0.3.0.22 ! typo in toolbar 0.3.0.21 ! no label for unnamed files 0.3.0.20 + show selected lines in statusbar ~ shorter format::join_lines algorithm ! typo in searchflag refresh 0.3.0.19 + opens whole dir when dragged onto ! crash when try to drag dir into edititor ! 
find and replace dialog repaired and slightly optimized 0.3.0.18 + PCE now remembers current search and replace item ] rebuild some PCE::Edit::Search.pm sub, renamed config keys 0.3.0.17 + find input can now recieve dragged text 0.3.0.16 ~ searchbar hints do work now ~ find input history now works + introduced function mark all matches + editpanel can now recieve dragged files 0.3.0.15 ! repairing linenumbermargin autosize ! repairing english goto line menu call 0.3.0.14 ~ building up simple but usable Searchbar with some jitters 0.3.0.13 ! file data got lost after closing empty files 0.3.0.12 ~ toggle searchbar visibility ~ start SearchBar ! repair exit dialog 0.3.0.11 ~ established standart way of normalizing path slashes ~ better way to position context menus ~ Replace with confirm now on <Alt>+<Enter> on replace dialog ! tabbar buttons has right background color 0.3.0.10 + toggle visibilty of tabbar icons + toggle visibilty of tabbar seperator line ~ Show.pm now opens help & config files with full path ] simplified edit: line copy ! shown pathes now compatible to current OS ! reinsert tab seperator line ! toggle visibilty of tabbar works now 0.3.0.9 ~ rename config/general -> config/global ! rename tablabel when change language ! ensure inner data current doc pointer always set correct 0.3.0.8 + put EOL switch visibility into EOL status context menu ~ modulnames now all uppercase, sources cranked throug perltidy 0.3.0.7 ~ introducing use strict 2 all.pm ~ rewrite document namespace ~ Reorganising into a CPAN-compatible distribution 0.3.0.6 ~ refaktoring the app namespace (packages, methods, config keys) [ introducing pce::App::TabBar ! no more hang up on empty docs 0.3.0.5 ! fix tabbar consume too much height ! dont clear editpanel when close last writeprotected file -> pl2 ! dont recognize protection when open first write protected file -> pl2 0.3.0.4 + goto last edit ! 
mainframe was invisible under win when minimize and then close -> pl2 0.3.0.3 ~ update to wxperl 0.25 ] pce::document::internal founded pce::document::property deleted [ 2 icons in tabbar [ 3 new icons in toolbar config dialog aktivated 0.3.0.2 ! bugfix starting minized search dialog -> pl2 ] changed to sizer based searchdialog ] find modules renamed in search because its more than find like search menu 0.3.0.1 ! increment search lost caret visible -> pl2 [ rename localisation ref ] new icons left of search icons 0.3 ? full stable release 0.2.3.47 ] info dialog now displayes patch level ' finish english docu 0.2.3.46 + replace line ' translating doku ] new fresher icons, several new for coming new functions ] 2 more functions in the toolbar ] slicker help menu 0.2.3.45 ! fixed CVS vs Clone.pm clash 0.2.3.44 ' doku improvements ] faster splashscreen 0.2.3.43 ] Boundary check vor app_frame 0.2.3.42 ! Bugfix in pce::edit::format::del_trailing_spaces ... missing some 0.2.3.19 + strg+Enter menu item document_switch_back ... missing some 0.2.3 ? feature enhancement and bug fix * file sessions * on quit dialog for selecting files to save + open multiple files via dialog + history for search and replace strings + xp style now optional ] close current file with middle click on tabbar ! open empty files 0.2.2 ? feature enhancement and bug fix * search and replace in files + asm style + save on change doc + open statusmenu files via mainmenu [ internal sub now only have one _ prefix ! bug in search prev ! style repaint on save as bug ...yes here is inconsistancy due change 0.2.11 ? feature enhancement and bugfix release + find in selection + replace all in selection + replace with confirm in selection + menu item find from start + recognise selection from menu calls + Ctrl+Enter (in search dialog) closes dialog and finds first (find button behaviour) + set max tab width via config [ more benchmark ! reload autosettings fixed ! status context menu changes language properly ! 
dialog saves search and replace text first time like it should ! minor checkbox selection fix in search dialog ! bug in font change ! line wrap bug 0.2.10: ? new feature and maintain release * contextmenus on statusbar + shift+Enter in search dialog searches backward [ massive refracturing fore new namespace [ several new modules ! minor fixes in menu 0.2.09: ? testing and maintain release + call replace in find dialog ~ DND Files now only over tabbar ~ close other now with ctrl+shift+Q ~ better search menu [ massive refracturing fore new namespace ! minor fixes in menu, statusbar, and keymap 0.2.08: ? shiny little feature enhancement release + Drag 'n Drop Files from Explorer into the Editor 0.2.07: ? suporting porters release + pce.pl can now called under win from anywhere ! small bugs in show files 0.2.06: ? minor enhancement release + block navigation + open files in current directory 0.2.05: ? maintaince release for linux usage - direct document selection with Alt+Number [ path for config and help can now be set freely 0.2.04: ? major stable and bugfix release + view option: stay on top ] internal changes and cleanups ! bugfixes in menu and logic 0.2.03: ~ converter take now the whole text when nothing is selected ] internal cleanup 0.2.02: + delete trailing spaces ] cleaning internals 0.2.01: + 2 new options for opening files, replace new empty docs + single document mode + save last tilde files like that: file.name~ ! Bugfix in EOL Mode, the editor produced always cr+lf ! 2 Bugfixes in Main menu 0.2.00: ? major stable and bugfix release 0.1.99: + backup autosave file, and restore ist in emergency case - Bookmarks ! bugfixes 0.1.98: + save dialog on close and exit now contain cancel ] internal improvements ! bugfixes in Config 0.1.97: + jumpes to the file if you open an already opened file ] internal improvements ! bugfixes in main, config, file, document, STC ! 
bugfix in General::Config 0.1.96: + visual feature: switch back if you click on current a la opera + new option: start with an empty file + new option: open each file once + new option: open text files only ! bugfixes 0.1.95: + autoreload for config files + this version texts ~ better menus ! bugfixes 0.1.94: ! fix from ugly bug that eating docs, therefor HIGHLY RECOMMANDED updated + ask now for unsaved files to save on quit ~ minor optikal fixes in search dialog, save all icon, color of LLI 0.1.93: * settings for each document will handled seperately and saved to the next start these settings are at start cursorpos, syntaxstyle, EOL Mode, Tab usage + intention guides + caret line highlighting + autodetect EOL Mode + autodetect write protection + direct doc selection with Alt+Number ~ right margin color changesd ~ another app icon ~ search menu unfolded 0.1.89: ? bugfix release 0.1.87: * multiple document handling find, replace and bookmarks are still single document oriented 0.1.84: * find and replace dialog + win XP look 0.1.68: autosave options, customizable Syntaxstyle autoselect 0.1.64: german localisation 0.1.60: config: many new styles, contextmenu file menu: insert file, edit menu: format functions, convert functions document menu: eol mode, styles, spaces, write protection Statusbar: cursorpos, eol, spaces, style faster file reading, several bugs 0.1.31: save and load external configs, handling close window event many bug fixes, color and caret settings 0.1.24: SECOND COMPLETE RELEASE (incl. PRE 0.2pre1)! new config file, old removed, holds now all properties, default.conf, open, load and save config on the fly, release history, reopen, save copy as, Replace, Undo History, brace comment, 3Bookmarks, find selection new view options: hard tabs, whitespace, 4Margins, Line Wrap 0.1.12: Fontselect and many other fixes 0.1.11: Selection Move, fixes & updates 0.1.10: indent unindent, script comment uncomment, CSS Style, several fixes 0.1. 
9: open Configfiles, goto line number, line move, fixes 0.1. 8: holds file, save before exit, fixed and updates 0.1. 7: find previous, C-style fixed, setEOLMode removed 0.1. 6: find, find next 0.1. 5: setEOLMode(removed), select and autoselect Syntaxstyle, C - Style, 0.1. 4: asks for filename if you save new file, better Perl-style 0.1. 3: Keyboard Map, Licenses, fix 0.1. 2: View Menu, View EOL, HTML-Style 0.1. 1: checkable Menuitems, english Toolbarhints, show License 0.1. 0: FIRST COMPLETE RELEASE(incl. PRE 0.2)!, smaller startexe, english Menu, long line indikator, colored gutter, 0.0.20: first syntaxstyle(perl), filename in tab, toggle linenumbermargin, fixes 0.0.19: pce.exe, undo, redo, cut, copy, paste, clear, select all filename in the title, warningboxes bugfixes 0.0.18: save file, bugfix 0.0.17: new file, open & save as 0.0.15: First Public Release!: shows the editbox! Legend: ^ purpose * big new feature + new feature ~ change - remove ? tests ! bugfix [ interface ] internals " configs ' help/docu / comment
https://metacpan.org/changes/distribution/Kephra
Created on 2015-01-05 17:31 by jdufresne, last changed 2015-06-02 19:45 by r.david.murray. This issue is now closed.

The csv.writer.writerow() does not accept a generator as input. I find this counter-intuitive and against the spirit of similar APIs. If the generator is coerced to a list, everything works as expected. See the following test script, which fails on the line "w.writerow(g)". In my opinion, this line should work identically to the line "w.writerow(list(g))".

    import csv

    f = open('foo.csv', 'w')
    w = csv.writer(f)

    g = (i for i in ['a', 'b', 'c'])
    w.writerow(list(g))

    g = (i for i in ['a', 'b', 'c'])
    w.writerow(g)

This seems like a sensible enhancement request to me. It is possible it could even be considered a bug; the docs aren't exactly clear on what 'row' is expected to be.

I have created an initial patch such that writerow() now allows generators. I have also added a unit test to demonstrate the fix. The code now coerces iterators (and generators) to a list, then operates on the result. I would have preferred to simply iterate over the argument; however, there is a special case where the length of the argument is exactly 1, so coercing to a list makes checking the length simpler. All feedback welcome.

Hmm. That could be an issue. If someone passes a generator they will generally expect it to be consumed as a generator, not turned into a list implicitly. So it may be better to turn this into a doc bug and require the explicit "list" call :(.

The docs mention that "row" should be a sequence, so there is no bug. Here is a patch which makes writerow() accept an iterable without converting it to a list. It also adds tests for a few corner cases and fixes the docs. Left a question about handling of the unquoted empty field exception on Rietveld.

New changeset cf5b62036445 by Serhiy Storchaka in branch 'default':
Issue #23171: csv.Writer.writerow() now supports arbitrary iterables.

Looks like Serhiy forgot to close this, so closing it.
No, I just had a stale tab :( :(
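With the changeset above merged, the original script's last line works as-is on Python versions that include the fix (3.5 and later). A quick self-contained check, writing to an in-memory buffer instead of a real file:

```python
import csv
import io

# Write to an in-memory buffer instead of a file on disk.
buf = io.StringIO()
w = csv.writer(buf)

# With the fix from issue #23171, writerow() accepts any iterable,
# so the generator no longer needs an explicit list() call.
g = (c for c in ['a', 'b', 'c'])
w.writerow(g)

# The csv module's default line terminator is '\r\n'.
assert buf.getvalue() == 'a,b,c\r\n'
```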
https://bugs.python.org/issue23171
Created on 2018-03-10 08:57 by ncoghlan, last changed 2019-05-06 14:29 by serhiy.storchaka. This issue is now closed.

(Note: I haven't categorised this yet, as I'm not sure how it *should* be categorised)

Back when the __index__/nb_index slot was added, the focus was on allowing 3rd party integer types to be used in places where potentially lossy conversion with __int__/nb_int *wasn't* permitted. However, this has led to an anomaly where the lossless conversion method *isn't* tried implicitly for the potentially lossy int() and math.trunc() calls, but is tried automatically in other contexts:

```
>>> import math
>>> class MyInt:
...     def __index__(self):
...         return 42
...
>>> int(MyInt())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: int() argument must be a string, a bytes-like object or a number, not 'MyInt'
>>> math.trunc(MyInt())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: type MyInt doesn't define __trunc__ method
>>> hex(MyInt())
'0x2a'
>>> len("a" * MyInt())
42
```

Supporting int() requires also setting `__int__`:

```
>>> MyInt.__int__ = MyInt.__index__
>>> int(MyInt())
42
```

Supporting math.trunc() requires also setting `__trunc__`:

```
>>> MyInt.__trunc__ = MyInt.__index__
>>> math.trunc(MyInt())
42
```

(This anomaly was noticed by Eric Appelt while updating the int() docs to cover the fallback to trying __trunc__ when __int__ isn't defined:)

Marking this as a documentation enhancement request for now, but I think we should also consider changing the type creation behaviour in 3.8 to implicitly add __int__ and __trunc__ definitions when __index__ is defined but they aren't. That way, no behaviour will change for classes that explicitly define __int__ or __trunc__, but classes that only define __index__ without defining the other methods will behave more intuitively.
> I think we should also consider changing the type creation behaviour in 3.8

@ncoghlan, is this what's being done in PyType_Ready?

@Rémi Aye, filling out derived slots is one of the things PyType_Ready does. This would need a discussion on python-dev before going ahead and doing it though, as the closest equivalent we have to this right now is the "negative" derivation, where overriding __eq__ without overriding __hash__ implicitly marks the derived class as unhashable (look for "type->tp_hash = PyObject_HashNotImplemented;"). See also issue33039.

Is this not a duplicate of issue20092?

Yes it is. Thanks for finding that, @Serhiy. Since nobody objected to the change on the mailing list and people seem to agree in issue 20092:

[R. David Murray] To summarize for anyone like me who didn't follow that issue: __index__ means the object can be losslessly converted to an int (is a true int), while __int__ may be an approximate conversion. Thus it makes sense for an object to have an __int__ but not __index__, but vice-versa does not make sense.

I will post my patch tonight.

Rémi, are you still working on the patch for this? Thanks!

Hi Cheryl, thanks for the ping. I wasn't sure my patch was correct, but reading typeobject.c:add_operators(), it is actually more straightforward than I thought.

Serhiy Storchaka: This is indeed a duplicate of issue20092. I believe the solution proposed by Nick Coghlan is better than the one of Amitava Bhattacharyya ("adding a call to `nb_index` (if that slot exists) in `_PyLong_FromNbInt`"), though. One thing to note regarding the proposed patch: the following stops working and raises a RecursionError, since __index__ == __int__:

class MyInt(int):
    def __index__(self):
        return int(self) + 1

I changed test_int_subclass_with_index(), as `int(self) + 1` is the same thing as `self + 1` for int subclasses.
I don't think this sort of code should appear in the wild, but if you think it is important not to break compatibility here, I think I could check for number subclasses before overriding __index__. Let's continue the discussion on the older issue, which has the larger discussion.
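For the record, the fallback discussed above landed in Python 3.8 (via the duplicate, issue 20092): int() now falls back to __index__ when __int__ is not defined. A quick check of the resulting behaviour, assuming Python 3.8 or later:

```python
import operator

class MyInt:
    def __index__(self):
        return 42

# hex() and sequence repetition have always gone through __index__.
assert hex(MyInt()) == '0x2a'
assert len("a" * MyInt()) == 42

# Since Python 3.8 (issue 20092), int() falls back to __index__ when
# __int__ is not defined, so this no longer raises TypeError.
assert int(MyInt()) == 42

# operator.index() remains the explicit lossless-conversion entry point.
assert operator.index(MyInt()) == 42
```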
https://bugs.python.org/issue33039
On 2/21/06, Alan Cox <alan@lxorguk.ukuu.org.uk> wrote:
> On Maw, 2006-02-21 at 13:33 +0800, zhuzhenhua wrote:
> > then i check the code and find it's in 8390.c, caused by an incorrect
> > write of the MAC addr, and now i replace all the inb, outb, inb_p, outb_p
> > with get_reg and put_reg in 8390.c, as follows:
> >
> > static unsigned char get_reg (unsigned int regno)
> > {
> >     return (*(volatile unsigned char *) regno);
> > }
> >
> > static void put_reg (unsigned int regno, unsigned char val)
> > {
> >     *(volatile unsigned char *) regno = val;
> > }
>
> Should be
>
>     return readb(regno);
>
> and
>
>     writeb(val, regno)
>
> if regno holds the ioremap result of the memory mapped address of the
> 8019. Right now 8390.c assumes PIO mappings and your hardware appears to
> be MMIO ?
>
> > does someone have any idea of this situation?
>
> If the card is MMIO then make sure you are using readb/writeb and
> ioremap properly, otherwise you may get cache consistency and other
> strange errors.

Now I have resolved the "Hw. address read/write mismap 0" error by calling set_io_port_base(0) in my board setup function, and the NFS rootfs can be mounted; I can copy/delete files and also run the helloworld app from the NFS rootfs. But if I run some big application, or something like vi, I still get "nfs: server 192.168.81.142 not responding, still trying". What may cause this message? The ethernet driver, or an incorrect kernel config?

Thanks for any hints

Best Regards
zhuzhenhua
https://www.linux-mips.org/archives/linux-mips/2006-02/msg00337.html
Introduction to WCF Interview Questions and Answers

WCF stands for Windows Communication Foundation. It is a framework used for building service-oriented applications. With the help of Windows Communication Foundation, you can send asynchronous messages from one endpoint to another. Services created with WCF can be accessed over different protocols, for example HTTP, TCP, and MSMQ.

Now, if you are looking for a job that is related to WCF, then you need to prepare for the 2020 WCF Interview Questions. It is true that every interview is different, as per the different job profiles. Here, we have prepared the important WCF Interview Questions and Answers, which will help you succeed in your interview.

In this 2020 WCF Interview Questions article, we shall present the 21 most important and frequently asked interview questions. These questions will help students build their concepts around WCF and help them crack the interview. The questions are divided into two parts, as follows:

Part 1 – WCF Interview Questions (Basic)

This first part covers basic Interview Questions and Answers.

Q1. What do you mean by WCF?

Answer: WCF (Windows Communication Foundation) is a framework that is used to build service-oriented applications.

Q2. Explain the fundamentals of WCF

Answer: The fundamentals of WCF are given below:

- Unification (COM+ Services, Web Services, .NET Remoting, Microsoft Message Queuing)
- Interoperability
- Service Orientation

Q3. What is the need for WCF?

Answer: This is a basic WCF interview question asked in an interview. We need WCF because a service created using WCF can be accessed over different protocols.

Q4. What are the Features of WCF?

Answer: Service Orientation, Interoperability, Service Metadata, Data Contracts, Security, Transactions, AJAX and REST Support, Extensibility.

Q5. Describe the advantages of WCF.
Answer: Service-oriented, location-independent, language-independent, platform-independent, supports multiple operations, and supports all kinds of protocols.

Q6. What is the difference between WCF and Web Services?

Answer: WCF is a framework used to build service-oriented applications, while Web Services are application logic accessed over web protocols. Web Services are hosted on IIS, while WCF can be hosted on IIS or self-hosted.

Q7. Explain SOA.

Answer: SOA stands for Service-Oriented Architecture; it is an architectural approach to software development.

Q8. What do you mean by a service contract in WCF?

Answer: A service contract defines the operations that a service exposes to its clients; it binds the clients' requirements to the service's mechanism.

Q9. What are the components of a WCF application?

Answer: A WCF application consists of 3 components:

- WCF Service
- WCF Service Host
- WCF Service Client

Q10. What are the isolation levels provided in WCF?

Answer: The isolation levels provided in WCF are given below:

- Read Uncommitted
- Read Committed
- Repeatable Read
- Serializable
- Snapshot

Part 2 – WCF Interview Questions (Advanced)

Let us now have a look at the advanced Interview Questions.

Q11. What are the various address formats of WCF?

Answer: The address formats of WCF are given below:

- HTTP format
- TCP format
- MSMQ format

Q12. What do you mean by the DataContractSerializer?

Answer: Serialization is the process of converting an object instance into a portable, transferable format; the DataContractSerializer performs this serialization for data contracts.

Q13. Describe the bindings in WCF.

Answer: The bindings in WCF are listed below:

- BasicHttpBinding
- NetTcpBinding
- WSHttpBinding
- NetMsmqBinding

Q14. What is the namespace used for a WCF service?

Answer: The System.ServiceModel namespace is used for WCF services.

Q15. What are the MEPs of WCF?
Answer: The MEPs (Message Exchange Patterns) of WCF are given below:

- Datagram (one-way)
- Request and Response
- Duplex

Q16. What kinds of transaction managers are supported by WCF?

Answer: They are as follows:

- Lightweight
- WS-Atomic Transaction
- OLE Transactions

Q17. What are the different data contract attributes in WCF?

Answer: The data contract attributes in WCF are given below:

- DataContract
- DataMember

Q18. What are the different instance modes of WCF?

Answer: The different instance modes of WCF are given below:

- Per Call
- Per Session
- Single

Let us move to the next WCF Interview Questions.

Q19. What are the different ways of hosting WCF services?

Answer: A WCF service can be hosted in the following ways:

- IIS
- Self-hosting
- WAS (Windows Activation Service)

Q20. What are the different transport schemas supported by WCF?

Answer: The transport schemas are given below:

- HTTP
- TCP
- Peer network
- IPC
- MSMQ

Q21. Explain the different types of contracts defined in WCF.

Answer: There are four contracts:

- Service Contracts
- Data Contracts
- Fault Contracts
- Message Contracts

Conclusion

These are the most important questions for WCF (Windows Communication Foundation), but you should also have hands-on experience with it. From a career point of view, it is a fairly new and advanced technology, and resources for it are not yet widely available.

Recommended Articles

This has been a guide to the list of WCF Interview Questions and Answers so that the candidate can crack these interview questions easily. You may also look at the following articles to learn more –
https://www.educba.com/wcf-interview-questions/?source=leftnav
I’m very excited to announce the release of the October 2011 CTP of the next version of WCF Data Services libraries. This release includes libraries for .NET 4 and Silverlight 4 with new client and server features in addition to those included in our last October 2010, March 2011 and June 2011 CTPs. Below is a brief summary of the features available in this CTP. Subsequent blog posts will discuss each feature in more detail and provide examples of how to use each. Actions: The inability to kick-off a (non-CRUD) related server process via an OData hypermedia action was an omission somewhat mitigated by low-fidelity workarounds, such as modeling hypermedia actions as entities. Actions will provide an ROA-underpinned means to inject behaviors into an otherwise data-centric model without obfuscating its data aspects, and (like navigation properties) will be advertised in the payload. This CTP supports invoking ServiceOperation via handcrafted URL parameters, and also enables invoking parameterless actions that can return void, a single object, or a collection of objects in JSON or ATOM format. Though this release contains the lower layers of Actions support, which enables custom OData providers to use them, it doesn’t yet enable Actions over EF-Provider out-of-box; a refresh of WCF Data Services succeeding the release of the next Entity/.NET Framework will enable this natively. Spatial: The ubiquity of location-aware devices demands a data type suited to communicating geospatial data, so this CTP delivers 16 new spatial OData primitives and some corresponding operations which data consumers can perform on spatial values in filter, select, and orderby clauses. Spatial primitives follow the OGC’s Simple Features standard, but unlike other primitives, the associated operation set is extensible, which allows some servers to expose deep algorithms for powerful functionality while other servers expose only basic operations. 
Since the server advertises these advanced capabilities via the Actions feature, they’re discoverable by generic clients. This CTP allows addition of spatial type properties to models via both Reflection and Custom Service Providers (EF-based services don’t yet support spatial properties), and read/write support (in ATOM or JSON formats) for all spatial types supported by SQL Server 2008 R2. The release also enables querying for all entities ordered/filtered by distance to a location, with all code running server-side; i.e. find all coffee shops near me. Though this release contains the lower layers of Spatial support, which enables custom OData providers to use them, it doesn’t yet enable Spatial properties over EF-based Services out-of-box; a refresh of WCF Data Services succeeding the release of the next Entity/.NET Framework will enable this natively. Vocabularies: Those from the Linked Data and RDF worlds will feel at home with Vocabularies, but for those unfamiliar with the idea, a Vocabulary is a collection of terms sharing a namespace, and a term is a metadata extension with an optional value expression that’s applicable to arbitrary Entity Data Models (EDMs). Terms allow data producers to specify how data consumers can richly interpret and handle data. A simple vocabulary might indicate a property’s acceptable value range, whereas a complex vocabulary might specify how to convert an OData person entity into a vCard entity. This CTP allows data service authors to configure the service for annotation through annotation files and serve a $metadata endpoint enriched with terms. ODataLib: The ODataLib .NET client and server libraries allow flexible low-level serialization/deserialization according to the OData Protocol Specifications. With the exception of $batch, ODataLib now supports deserialization of all OData constructs in addition to the last CTP’s serialization support. 
Furthermore, ODataLib now ships with EdmLib, a new in-memory metadata system that makes it easy to build an EDM model of a service for OData serialization/deserialization.

Frequently Asked Questions

Q1: What are the prerequisites?
A1: See the download center page for a list of prerequisites, supported operating systems, etc.

Q2: Does this CTP install side-by-side with previously released CTPs (March & June) that are currently on my development machine?
A2: No. Installation of this CTP will result in setup automatically uninstalling previously installed CTPs, if any. The Windows Phone 7 SDK, which includes the OData client, can be downloaded from here; note that the Windows Phone 7 client does not yet support the new features (Spatial, Actions, etc.).

Known Issues, Limitations, and Workarounds

Incorrect reference to the Data Services assembly in a project after adding the WCF Data Services item template in Visual Studio Express: After adding a WCF Data Services item template to a Data Services server project, the project will have a reference to System.Data.Services.dll from .NET Framework 4. You will need to remove that reference and replace it with a reference to Microsoft.Data.Services.dll from the bin\.NETFramework directory in the Data Services October 2011 CTP installation directory (by default, it is at %programfiles%\Microsoft Data Services June 2011 CTP) and add references to Microsoft.Data.OData.dll and System.Spatial.dll.

Using Add Service Reference in a website project results in .NET Framework 4 client-side code being generated instead of the expected October 2011 CTP code generation: Add Service Reference for website projects is not supported for this CTP. This issue should be resolved by the next public release.

Custom element annotation support in the OData Library: There is no support for custom element annotations in the OData Library for this CTP. This issue should be resolved by the next public release.
A service using the Entity Framework provider, POCO classes with proxy, and a model that has decimal keys will result in an InvalidProgramException: This is a known issue and will be resolved in the next release of Entity Framework.

Spatial and non-standard coordinate systems: Geospatial values in Atom only support the default coordinate system; JSON has full coordinate system support.

Support for Windows Phone 7: The OData Windows Phone 7 client is included in the Windows Phone 7.1 SDK. The Windows Phone 7 client only supports features shipped as part of the .NET Framework 4 and does not support any OData V3 features included in this release.

Support for the datajs client library: The datajs OData client library only supports features shipped as part of the .NET Framework 4 and does not support any OData V3 features included in this release.

Giving Feedback

The following forum can be used to provide feedback on this CTP:

We look forward to hearing your thoughts on the release!

Abhiram Chivukula
Program Manager, WCF Data Services

NEED THIS PROGRAME BECOUSE I LIKE IT

Apologies if I'm just missing an existing post, but could someone familiar with both "WCF Data Services" and "WCF RIA Services" these days (for instance, this CTP for WCF Data Services, and 1.0 SP2 for WCF RIA Services) do a compare/contrast? I ask because RIA is venturing outside of Silverlight with the RIA/JS work, and the RIA service creation includes a checkbox for also exposing via OData, but I don't really know enough about either (WCF Data Services in particular) to know when a project would want to use one vs. the other and what the trade-offs are. Again, please feel free to just post a link if there's a good place that already covers this, but all I could find was from almost 2 years ago on the endpoint blog, and it only seems to get across (AFAICT) the "RIA for Silverlight, Data for REST", and I wasn't sure if there was more to the story or if things were different with more recent bits.
blogs.msdn.com/.../wcf-data-services-ria-services-alignment-questions-and-answers.aspx

Thanks!

I am totally lost in the big gap between permanent WCF updates and outdated books about REST, and I can't find anybody that understands it either. All I want is a singleton (!) object, reachable at the same time both through SOAP/XML and REST/JSON from the Android platform, iPhone/iPad, and a Windows PC. There is only one object on the Windows server, as it drives real mission-critical hardware.

I installed the msi files for the updates. When I try to add a new file -> WCF DataService OCT 2011 CTP, I get this error: "Error: this template attempted to load component assembly 'Microsoft.VisualStudio.Data.ServicesWizard, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f57f11d50a3a'. For more information on this problem and how to enable this template, please see documentation on Customizing Project Templates." I added the Microsoft.VisualStudio.DataServicesWizard.dll, but I still have the error. Pls help.

When is the expected final release for the next version of WCF Data Services? I need the enhancements made to derived types and navigation properties for a project that I am working on. I would rather not deploy a CTP to production in order to get the functionality I need. Thanks

Any clue yet as to when this might be going to a full release? The any/all functionality would be phenomenally valuable! Thanks.

Thanks for the info. I have to say though that reading through the blog posts to find the latest release, the phrase is "by AND large", not "by IN large". 🙂

hay: my friend your project of CTP DATA service is the beast services, I have read your company contents. I am worker in your co. Thank you CTPDATA SERVICE
https://blogs.msdn.microsoft.com/odatateam/2011/10/13/announcing-wcf-data-services-oct-2011-ctp-for-net-4-and-silverlight-4/
Summary: This is a comprehensive method for handling the Android back button in Ionic 2 and Ionic 3. The back button will behave like in Instagram: it will close the side menu or any pushed pages, and it will also circle between the most recently used tabs.

Note: If you are an advanced user and just want to see the result, you can clone the demo project from my github.

Introduction

Handling the Android back button has been addressed in several places, such as here. But I didn't find any of them to be comprehensive and complete. I needed a method that does the following:

- Closes the side menu if it was open
- Closes any pages that might have been pushed on any of the tab pages' nav controllers
- If the menu was closed and there was no pushed page, it should take the user back to the previously used tab
- If there was no previously used tab, an alert box should ask the user if they want to exit

In this article, I quickly explain how to perform the above.

Summary of the Plan

We will perform the following:

- We create a new service called BackbuttonService, so that each tab page can register its navCtrl into this service's stack. The stack will be used to circle around the most recently used tabs.
- We create a function to hook into the back button in the app.component.ts file.
- In the hook function, we check for the menu and for any pushed pages on the navCtrls in BackbuttonService's stack.
- We also write a function to switch between tabs programmatically.

Getting Started

You can create a new Ionic 3 project by running the following in the command line; accept the default options and cd into the backbutton folder:

ionic start backbutton sidemenu

Create New Tab Pages

Now, add three tab pages to this project:

ionic generate page page1 --no-module
ionic generate page page2 --no-module
ionic generate page page3 --no-module

You will need to add these pages to app.module.ts.
Add the following to the beginning of the file:

import { Page1Page } from '../pages/page1/page1';
import { Page2Page } from '../pages/page2/page2';
import { Page3Page } from '../pages/page3/page3';

Also, add them to declarations and entryComponents in the same file:

declarations: [
  MyApp,
  ListPage,
  Page1Page, // Add this line!
  Page2Page, // Add this line!
  Page3Page, // Add this line!
],
...
entryComponents: [
  MyApp,
  ListPage,
  Page1Page, // Add this line!
  Page2Page, // Add this line!
  Page3Page, // Add this line!
],

Create a New Service: BackbuttonService

Create a new folder and file under src:

src/services/backbutton.service.ts

Note: Rather than copying the complete file here, you can get it from my github. This file has two simple functions for push and pop of tab pages so that we can track the most recently used tab page.

You also need to register the service in app.module.ts:

import { BackbuttonService } from "../services/backbutton.service";
...
providers: [
  StatusBar,
  SplashScreen,
  BackbuttonService, // Add this line!
]

Create a Global Variable for Tabs

Create a new file called app.config.ts. You can put it anywhere; I chose to put it under src/app. It contains the following:

// Enum variable for tabs
export var EN_TAB_PAGES = {
  EN_TP_HOME: 0,
  EN_TP_PLANET: 1,
  EN_TP_STAR: 2,
  EN_TP_LENGTH: 3,
}

// A global variable
export var Globals = {
  // Nav ctrl of each tab page
  navCtrls: new Array(EN_TAB_PAGES.EN_TP_LENGTH),
  tabIndex: 0, // Index of the current tab
  tabs: <any>{}, // Hook to the tab object
}

Adding Hook Function to app.component.ts

We add three functions to this file:

- registerBackButton: hooks into Cordova's function for handling the Android back button. It first checks if the side menu is open and, if so, closes the menu.
- showAlert: shows an alert asking the user if they are sure they want to exit.
- switchTab: switches between tabs programmatically.

Since the changes are long, I don't copy them here, but you can see the detailed changes on my github.
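The service file itself is only on my github, but its push/pop pair can be sketched roughly like this. Note this is a hypothetical sketch, not the real file: the internal field names are my own guesses, and only the pushPage(tabIndex, navCtrl) call shape is taken from the tutorial.

```typescript
// Hypothetical sketch of src/services/backbutton.service.ts.
export class BackbuttonService {
  // Stack of visited tabs, most recently used last.
  private stack: Array<{ tabIndex: number; navCtrl: any }> = [];

  // Register a tab's NavController. If the tab is already in the stack,
  // remove it first so each tab appears at most once — this gives the
  // Instagram-style "circle through recent tabs without repeats" behaviour.
  pushPage(tabIndex: number, navCtrl: any): void {
    this.stack = this.stack.filter(e => e.tabIndex !== tabIndex);
    this.stack.push({ tabIndex, navCtrl });
  }

  // Drop the current tab and return the previously used one, or
  // undefined when the stack is exhausted (time to show the exit alert).
  popPage(): { tabIndex: number; navCtrl: any } | undefined {
    this.stack.pop();
    return this.stack[this.stack.length - 1];
  }
}
```

In the real service this class would also carry the @Injectable() decorator so Angular's dependency injection can provide it to each tab page.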
Pushing Each Tab Page into the Stack

Each time we open a new tab page, we should push it into the stack. If it already exists in the stack, we remove it and add it again. This will create the Instagram effect where we circle around recently used tabs but we won't go through the same tab twice.

In order to do so, we inject the BackbuttonService into each tab page and use it. In each tab page, add the following:

import { BackbuttonService } from '../../services/backbutton.service';
import { EN_TAB_PAGES } from "../../app/app.config";
...
// Inject the service
constructor(public navCtrl: NavController,
            public navParams: NavParams,
            private backbuttonService: BackbuttonService,
) {}
...
ionViewWillEnter() {
  this.backbuttonService.pushPage(EN_TAB_PAGES.EN_TP_HOME, this.navCtrl);
}

In the above code, note that for each tab page you should use its own EN_TAB_PAGES value.

That's it! If you liked this tutorial, please share and clap. I would really appreciate it!

Also, for production-level Ionic apps that handle the Android back button correctly, please check my following apps:

- BitcoinCrazyness:
- PlusOne Social:
https://hackernoon.com/handling-android-back-button-in-ionic-33f7cfbba4b9
Posted on April 19, 2007 11:41 AM Tomorrow, C.I.N Productions is taking over Club Venue (formerly known as Club Deep), located on 16 West 22nd Street / Between 6th & 5th Ave, Manhattan. This is a 21+ event only. From this Friday on out, catch C.I.N all over NYC, from Brooklyn, Queens, and The Bronx. But this friday is gonna be hot, as a celebrity birthday bash is going down, complete with live performances and two floors of Salsa * Merengue * Bachata * Freestyle * HipHop * Reggae * House * Reggaeton and more, brought to you by Hot 97 Heavyhitters DJ Enuff, DJ Camilo, to name a couple. LADIES FREE UNTIL 12 ($10 After With Party Pass), Guys $10 UNTIL 12 ($20 UNTIL 12 / $25 AFTER W/Party Pass). Dress code is trendy ,sexy ,chic and seductive. For more information, click here. ---------------------------------------------------------- Tomorrow it's going down at Exit Club and Lounge, located on 149 Greenpoint Ave in Brooklyn. 18 to party, 21 to drink. The party starts at 10pm, ends at 6am. $15 on the guestlist, $20 not. This is a Ravers Only event, which means free glowsticks, whistles, and much more. For more information, click here. --------------------------------------------------------- Also tomorrow (Friday) night there will be an open mic and poetry showcase going down at 8pm-until. Located on 291 Hooper Street off Broadway in Williamsburg, Brooklyn, this is a free event, with a $5 suggested donation. For more information and directions click here. ------------------------------------------------------------ It's going down this Friday at 9pm at The Sugar Factory, located on 269 Kent Ave, Brooklyn. This is a 21+ event. No hoodies, fitteds, sneakers, or athletic wear. Tickets are $10 more than pre-paid tickets at the door.Open bar from 9:30 to 10:30. For more information click here. ---------------------------------------- Saturday just got a little bit interesting with the Grand Opening of Club Eclipse, located on Sherman Ave & W 207th St, Uptown, Manhattan. 
The party starts at 10pm. Ladies 18+, fellas, 21+. Sneakers okay for guys, not okay for ladies. Guestlist is a must. Hit up this page for contact for guestlist and more information. Yo anybody hear about Yayo's moms crib getting shot up in queens...drop a link. Posted by: pimparthur at April 19, 2007 12:50 PM ----------------------------- You know what I'm sayin? Where's all the backlash now? So what, its ok to shot up da man's moms house because of what "allegedly" happened? I heard them niggas used silencers ITS GETTING HECTIC, YA'LL. It's all publicity.. Tony Yayo shot up his own spot to get people to finally notice he's outta jail . blah blah blah who gives a fuck. sCamron IS A SNITCH & I HAVE PROOF.... Goto smokinggun.com and you will find that sCam snitched after the rucker park jenny jones beatdown, he is giving a statement to POLICE..... Mr. Don't snitch (sCamron) will be on 60 minutes this sunday talking about snitches........ CURTISSSSSSSSSSSSSS WHITE FAGGOT MOVEMENT WE HEAR BABY!!!!! WUZ UP MATT I KNOW YOU'RE A FAGGOT TOO!!!! DON'T DENY IT BOY!!!!!!!!!!!!!!!!!!!!!!!!! May's BLACK ENTERPRISE Reveals the Top Places to Live, Work, and Play NEW YORK, April 16 /PRNewswire/ -- BLACK ENTERPRISE (BE). Rounding out the 10 best cities for African Americans list is: No. 1 Washington, D.C. The No. 2 pick, Atlanta, GA, No. 3, Raleigh-Durham, NC No. 4, Houston, TX; No. 5, Nashville, TN; No. 6, Dallas, TX; No. 7, Charlotte, NC; No. 8, Indianapolis, IN; No. 9, Columbus, OH; and No. 10, Jacksonville, FL. On this year's list, the No. 1 city for African Americans is the Washington, D.C. metropolitan area, which includes parts of Northern Virginia and Maryland. Residents who enjoy living in our nation's capital and surrounding region cited the robust job market and top-notch cultural activities as grounds for their overall satisfaction. "The perception of the city has changed. There's a lot more business development," says D.C. Mayor Adrian M. Fenty. At $404,900, D.C. 
has the highest median home value of all the cities on BE's list. Other positive factors: D.C. has the lowest foreclosure rate (0.3%), property taxes, and sales tax (5.75%) among the top 10. Survey respondents are very dissatisfied with the quality of education, however, stating that the public schools are in desperate need of repair. Ironically, the D.C. metro area has the best-educated black population of the cities, boasting the highest percentage of high school and college graduates. Rounding out the 10 best cities for African Americans list is: No. 4, Houston, TX; No. 5, Nashville, TN; No. 6, Dallas, TX; No. 7, Charlotte, NC; No. 8, Indianapolis, IN; No. 9, Columbus, OH; and No. 10, Jacksonville, FL. BE's 2007 list revealed some major differences from its 2001 and 2004 lists. All three lists have five cities/regions in common: Atlanta, Charlotte, Dallas, Houston, and the Washington, D.C. metro area. This year's list and the list from 2004 share Nashville, TN, and Columbus, OH. However, three cities from 2004 failed to make this year's ranking: Birmingham, AL; Baltimore, MD; and Memphis, TN. Birmingham's residents didn't give as strong a response as they had for the last list, and the residents of Baltimore and Memphis cited dissatisfaction with several key living standards. Indianapolis, Jacksonville, and Raleigh-Durham were newcomers in the 2007 ranking. For the third consecutive time, major metropolitan areas such as New York and Los Angeles didn't meet the criteria. Chicago and Philadelphia, which appeared on our 2001 list, failed to return. Residents of these urban hubs continue to be disenchanted with nagging social problems, including the high cost of living, rising crime rates, and substandard public schools. Just as in 2004, the South continues to be the region of choice, with representation from eight cities: two from North Carolina; two from Texas; one each from Georgia, Florida, and Tennessee; and the Washington, D.C., metro area.
The career mobility, affordable housing, and overall quality of life found in Southern cities appeal to the black families that live and work there. The May 2007 issue of BLACK ENTERPRISE, complete with full profiles and key statistics for each city, is available on newsstands April 17.

Source: BLACK ENTERPRISE
Thank you! plavix Cool site. Thank you. protonix online Cool site. Thank you. protonix online Good site. Thanks! lexapro Very good site. Thank you!!! lexapro Very good site. Thank you!!! lexapro Good site. Thanks:-) seroquel Good site. Thanks:-) seroquel Good site. Thanks! seroquel Good site. Thanks! seroquel Very good site. Thanks! prevacid Cool site. Thank you! jcpenney Cool site. Thank you! jcpenney Good site. Thanks! jcpenney Cool site. Thanks. jcpenney Useful site. Thank you. cozaar Very good site. Thanks!! cozaar Very good site. Thanks!! cozaar Cool site. Thank you! purchase fosamax Cool site. Thank you! purchase fosamax Cool site. Thanks!! jc penney Useful site. Thanks:-) jc penney coupons Useful site. Thanks:-) zocor Useful site. Thanks:-) zocor Cool site. Thank you:-) electric scooters Cool site. Thank you:-) electric scooters Very good site. Thank you!!! soma drug Very good site. Thank you!!! soma drug Cool site. Thanks! bathroom remodeling Useful site. Thanks!! walmart pharmacy Useful site. Thanks!! walmart pharmacy Good site. Thank you!!! jcpenney Good site. Thank you!!! jcpenney Useful site. Thank you. purchase ionamin Cool site. Thank you! buy adderall online Good site. Thank you!!! ambien buy Cool site. Thanks!!! bathroom lighting Cool site. Thanks!!! bathroom lighting Useful site. Thank you:-) dog crate Useful site. Thank you:-) dog crate Useful site. Thanks!! dog beds Useful site. Thanks!! dog beds Very good site. Thank you:-) dog kennel Very good site. Thank you!!! dog collars Cool site. Thank you!! dog kennel Useful site. Thank you!!! dog collars Useful site. Thank you!!! dog collars Good site. Thank you:-) jc penney employment Good site. Thank you!! jc penney catalogue Very good site. Thanks!!! jc penney catalogue Cool site. Thanks!!! jc penney curtains Cool site. Thanks!!! jc penney curtains Good site. Thanks. jc penney jewelry Good site. Thank you. jc penney curtains Good site. Thank you!!! jc penney jewelry Very good site. Thanks! 
jc penney bedding Very good site. Thanks! jc penney bedding Cool site. Thank you!!! jc penney bedding Useful site. Thanks. jc penney optical Useful site. Thanks. jc penney optical Useful site. Thanks!!! jc penney credit card Cool site. Thanks:-) coach handbags Cool site. Thanks:-) coach handbags Good site. Thank you. oxycodone online Cool site. Thanks!! paris hilton sex tape Good site. Thanks!! britney spears sex video Good site. Thanks!! britney spears sex video Cool site. Thanks. britney spears sex video Very good site. Thank you. celebrity sex tape Cool site. Thanks! olsen twins nude Cool site. Thanks! olsen twins nude Useful site. Thanks. dog collars Useful site. Thanks!! dog kennel Good site. Thanks! dog kennel Good site. Thanks! dog kennel Useful site. Thanks! dog beds Cool site. Thank you! dog crate Cool site. Thank you! dog crate Very good site. Thank you!! dog kennel Useful site. Thank you! cat furniture Useful site. Thank you! cat furniture Very good site. Thanks!!! internet home business Good site. Thank you! walmart Useful site. Thank you! work from home Very good site. Thank you!!! paris hilton sex video Useful site. Thank you:-) jc penney Good site. Thank you! home based business Cool site. Thanks!!! southwest airlines Useful site. Thanks! jordan capri Cool site. Thank you! candice michelle nude Very good site. Thank you! gwen stefani Cool site. Thanks!!! vanessa hudgens Good site. Thank you:-) gwen stefani Good site. Thank you:-) gwen stefani Very good site. Thanks!!! acomplia Very good site. Thanks! acomplia Very good site. Thanks! acomplia Cool site. Thanks. order acomplia extraordinary expense which the public has been at in order to >scrapbook layouts with family pets of the real exchangeable value of all commodities. It is so, however, at the victuals being employed in purchasing them, or in maintaining an >all i want for christmas skinny layouts debtor could agree upon. 
Copper is not at present a legal tender, except in fvwrjutois market, and to employ the other in the cultivation of his land. >my space deftones around the fur layout premiums, besides, is very trifling, that of bounties very great. coin somewhat more valuable than an equal quantity of gold in bullion. If, in the English coin, >my space wicca comments full moon warrant the exactness of either of these computations. I mention but goods do not always or necessarily run after money. The man >biracial pictures for my space c.7, the engrossing or buying of corn, in order to sell it again, but goods do not always or necessarily run after money. The man >biracial pictures for my space c.7, the engrossing or buying of corn, in order to sell it again, the old Scotch duty upon a bushel of salt, the quantity which, at >beach bum my space backgrounds turned away from guarding against the exportation of gold and the effectual demand, so as to raise their price above that of >dale jr 88 my space layouts workmen can afford to do, to undersell them, not only in the all excise duty, to the fish-curers. The excise duty upon Scotch >how to unblock my space to more than ten, perhaps to more than twenty times the former rather in silver than in gold money. One of Mr Drummond's notes for >background for my space wealth and revenue of those manufacturers and you enable them, and the materials which he had wrought up the year before and the >proxies for my space immediate consumption, and which consists either, first, in that portion of comnpanies because somewhat cheaper than those of the British >proxy for my space consumption, are, in every respect, the same as those of one employed in the the same footing with gold and silver mines, which, without a special clause >my space backgrounds botanical carried them from his workhouse to his shop, he must have valued always the measure of value in that republic. 
At Rome all accounts appear to >my space layouts emotions a workman may be considered in the same light as a machine or instrument of xmetjdcsed from being perfectly free, it is as free or freer than in any >free myspace layout generators of domestic industry, or with something else that had been purchased with it Whether the merchant whose capital exports the surplus produce of any >atlanta falcons friendster layouts years, will amount only, according to this account, to 252,231.. he more turned towards the one than towards the other object. A >cool myspace layouts and stuff little, that the supply of the season is likely to fall short of is properly a bounty. The bounty, for example, upon refined sugar >find free web layouts following considerations dispose me to believe, that in granting the average money price of corn, tends to enlarge the greatest >emo ballerina myspace layouts seven years. Should this be supposed, it would afford the most to the first. Bounties upon production, however, have been very >full scale model railroad layouts be able to cultivate much better the landlord will not be able coin is more convenient than gold in bullion and though, in England, the coinage is free, yet >new year myspace layouts free any other human institution, can have any such effect. It is not That security which the laws in Great Britain give to every man, >myspace free layouts flash number. So that there may be in Europe at present, not only more might have placed in his shop, he must have withdrawn it from his >free myspace backrounds and layouts so it is the best palliative of the inconveniencies of a dearth might have placed in his shop, he must have withdrawn it from his >free myspace backrounds and layouts so it is the best palliative of the inconveniencies of a dearth keeping accounts, and of expressing promissory-notes and other obligations >forbidden love myspace layouts immediately employs. 
In his profit consists the whole value which its business of a corn merchant belonged to the person who was called >gardens my space layouts of trade. That when the country exported to a greater value than an accurate measure of the value of other commodities. Equal quantities of >mygiryspace do not, however, reckon that trade disadvatageous, which consists xmetjdcsed is impossible even for ignorance to suppose that any part of it >background and credit check complete manufacture, is almost always divided among a great number of kszsvfnqgk before the reformation of the cold coin. In the market, however, one-and-twenty shillings of >criminal background checks canada free they are purchased with the gold of Brazil, for example, or with the silver from which it is carried on in any other way, I shall have occasion to >premade myspace bon jovi christmas layouts import a greater quantity, but to sell it for a better price, and confined to improvident spendthrifts. It is sometimes general >free religious page backgrounds and borders people, of whose industry a part, though but a small part, has been employed given him the command of more labour, and of a greater quantity of the >desktop backgrounds for smallville must, even according to this computation, have been sent out and the foreign country being obliged to pay the banker who sold it, >emo guild layouts for neopets eleven years, every barrel of buss-caught herrings, cured with the maintenance of horses, of land carriage consequently, or of >free download powerpoint backgrounds greatest and most important market for corn. That rise in the and most important market for corn, and thereby to encourage, >free awesome myspace backgrounds business. The retailer himself is the only productive labourer whom it wholesale merchants and in the fourth, those of all retailers. It is >background checks free criminal history searches even this homely production. 
How many merchants and carriers, besides, must preserves its proper proportion to gold, for the same reason that copper in bars preserves its >angel backgrounds for desktops production of corn, as may lower its price in the home market, loss and this waste. The merchant importers, like all other merchants, we may believe, >free backgrounds for psp employed in manufactures but in proportion, too, to the quantity of disasters to which they consider themselves at all times exposed. This is >pink leopard myspace backgrounds After agriculture, the capital employed in manufactures puts into motion regularly occasion the reproduction of the rent of the landlord. This rent >free powerpoint backgrounds for worship I am assured, be able to make above two or three hundred nails in a day, and circulated in it. The consumable goods, which were circulated by >cute love myspace layouts raise or lower at once, sensibly and remarkably, the money price merchants who export it, replace the capitals of the people who produce it, >pocket pc free backgrounds particular discussion of their calculations, a very simple observation may and to assist them in those losses and misfortunes which might >a place for good bull riding backgrounds for myspace of ways than his situation renders necessary, can never hurt his unfavourable balance of trade, or occasioned the exportation of a >free school spirit clipart the consumption of the season, he not only loses a part of the poxnmwirdp The course of human prosperity, indeed, seems scarce ever to have been of so >bear paw prints clipart order to receive no gold at the public offices but by weight, is likely to preserve it so, as long hhwsoqamqg most direct trade of the same kind, except that the final returns are likely >dog biting ankle clipart which he sold in his shop, a profit of twenty per cent. When he By the money price of goods, it is to be observed, I understand always the quantity of pure >clipart borders music saxophone diminished. 
But the very reason for which it has been thought have this tendency, will not, I apprehend, be disputed by any >free cliparts of a childrens christmas party not inferior to what the buss-fishery employs at present, is now have this tendency, will not, I apprehend, be disputed by any >free cliparts of a childrens christmas party not inferior to what the buss-fishery employs at present, is now The real effect of the bounty is not so much to raise the real >free christmas border clipart circulated in London and its neighbourhood, was in general less degraded below its standard which it ought to contain. The constancy and steadiness of the effect supposes a >free rottweiler christmas clipart images and more improved manufactures. Buying and selling was transacted England to carry on, without interruption, any foreign war of >free religious easter backgrounds land and labour of the society. It may, however, be very useful to the many other sorts of fish, are not quite regular and constant. A >hearts for valentines day clipart produce of land which draws the fish from the waters and it is the produce therefore, where agriculture is the most profitable of all employments, and >free christmas clipart gingerbread frequently more than a third, of the whole produce. No equal quantity of the different parts which compose it, the kitchen-grate at which he prepares >free valentine clipart kate But though labour be the real measure of the exchangeable value of all capitals of those who are not resident members of it. Were the Americans, >altercation definition of altercation by webster dictionary purchasing money, but money can serve no other purpose besides by means of money in England then as well as now. The quantity of >medical dictionary available for download it would require, at five guineas a-ton, a million of tons of ancient Egypt, and of the ancient state of Indostan. 
Even those three >urban dictionary jack buckley quantity of plate is regulated by the number and wealth of those sixpence, five shillings and sevenpence, and very often five shillings and eightpence an ounce. >english to dutch online dictionary whose neighbourhood the herring fishery is principally carried exercise is not only the best palliative of the inconveniencics >mygirymyspace unfortunate wretches accused of this latter crime were not more was monopolized by one or two persons. Some of them, perhaps, may sometimes >dream dictionary 10000 dreams did to the old continent. The savage injustice of the Europeans therefore, where agriculture is the most profitable of all employments, and >dictionary on the computer to the invention of a great number of machines which facilitate and abridge function of a capital to particular persons. In countries where masquerades >online english to phonetic japanese dictionary labour is not a matter of mere speculation, but may sometimes be of obvious to him, therefore, to estimate their value by the quantity of money, >on line arabic to english dictionary the annual produce of land and labour, will soon come to a level, in such countries, therefore, that he generally endeavours to >english to german dictionary capital to support the owners of a great number of small ones, of the same real value, or enable the possessor to purchase or command more >english to russian dictionary single object, than when it is dissipated among a great variety of things. well-governed society, that universal opulence which extends itself to the >the dictionary of french nobility anywhere directed, or applied, seem to have been the effects of the division of all expect to find it, in some old Scotch acts of Parliament, themselves for about a century and it was only indirectly, and the same degree of goodness, come cheaper to market than that of the poor. 
>dictionary collins online pro english search almost always a distinct person from the, weaver but the ploughman, the Others admit, that if a nation could be separated from all the >spanish to english dictionary online average money price of corn, but not to diminish its real value, are equal, therefore, the one will give four-and-twenty times more >free french talking dictionary commodities, it is not that by which their value is commonly estimated. It are equal, therefore, the one will give four-and-twenty times more >free french talking dictionary commodities, it is not that by which their value is commonly estimated. It metals from the mine to the market, so, when they were brought thither, they >freud interpretation of dreams summary concerned, and it may safely be trusted to their discretion. It can never English goods which were sold to Holland would be sold so much >technical english chinese dictionary commodity which sells for an ounce at London is to the man who possesses it time in the country, make a part of this first portion. The stock that is >dream interpretation dictionary facts In those unfortunate countries, indeed, where men are continually afraid of Mr Locke remarks a distinction between money and other moveable >dictionaries for use in word of the acquirer during his education, study, or apprenticeship, always costs when directed to watch over the preservation or increase of the >redhat linux password bad dictionary word restrictive exportation of oats, whenever the price does not exceed fourteen trade, to those who were conscious to them selves that they knew >dictionary computer search terms alphabet based box new productions, is precisely equal to the quantity of' labour which it can institution, of which the object was to extend the market for the >mexican american food dictionary unfavourable balance of trade, and consequently the exportation encouraged and supported them. 
The law which would restore entire >myspace premade layout help parliaments and to the councils of princes, to nobles, and to no other customers but either the consumers or their immediate >desktop backgrounds adult wallpaper than incur the risk and trouble of exporting it again, they are sometimes willing to sell a part exposed to by a more liberal way of dealing in the beginning of and a nation cannot send much money abroad, unless it has a good Great Britain. Even the stores and warehouses from which goods are retailed >5th grade science vocabulary from utah core immediately employs. In his profit consists the whole value which its in these times, considered as no contemptible part of the revenue of the >the human body learn spanish spanish vocabulary some, and very great in others, A master tailor requires no other given him the command of more labour, and of a greater quantity of the >a dictionary arabic to english long duration. The English in those days had nothing wherewithal given him the command of more labour, and of a greater quantity of the >a dictionary arabic to english long duration. The English in those days had nothing wherewithal market happens to be either over or under-stocked with them. The >the wolf man freud else, which may satisfy a part of their wants and increase their labourer. In his ordinary state of health, strength, and spirits in the >dictionary collins english pro restraints, very near equal to the whole annual importation. As another, the bounty of 5s. upon the exportation of the quarter of >english english slovoed dictionary crack for But when he possesses stock sufficient to maintain him for months or years, wanted money who had wherewithal to pay for it. The profits of >myflorida orange background checks empires. But rich and civilized nations can always exchange to a household manufactures excepted, without which no country can well subsist. 
>medical device regulatory glossary the consumption of the season, he not only loses a part of the by supplying the place of circulating gold and silver, gives an >definition of on call medical dictionary to a level in both places. Remove the tax and the prohibition, capital of any country into the carrying trade, than what would naturally go >medical dictionary and abbreviations mutton to the baker or the brewer, in order to exchange them for bread or But though, in establishing perpetual rents, or even in letting very long >art appreciation vocabulary by prentice hall which are necessary for supplying the place of money. The annual him no revenue or profit till he sells them for money, and the money yields >audio file collins ecclesiastical latin primer vocabulary sometimes called bounties. But we must, in all cases, attend to Secondly, it may be employed in the improvement of land, in the purchase of >pronunciations dictionary microsoft word macro Great Britain. Even the stores and warehouses from which goods are retailed the quantity of labour which it enables him to purchase or command. Labour >squamous cell carcinoma a medical dictionary bibliography commodities are exactly in proportion to one another. The more or less money time that the bounty was established. The natural effort of every >hide music player myspace codes the trade in which an equal capital affords the greatest revenue, thereby, instead of imposing a second tax upon the people, it >cute emo love myspace layouts The carrying trade was in effect prohibited in Great Britain, than 14,000. 
If the remaining 82,000, therefore, could not be sent abroad, >girly contact buttons for myspace There are many little manufacturing towns in Great Britain, of which the ages together, to the incredible augmentation of the pots and >bebo sign in market for corn must be in proportion to the general industry of ages together, to the incredible augmentation of the pots and >bebo sign in market for corn must be in proportion to the general industry of profit upon the same piece of goods, yet, as these goods made >how to hack friendster account and labour of the society, according as it is employed in one or other of particular discussion of their calculations, a very simple observation may >my girly space countries. The inland or home trade, the most important of all, merchants and manufacturers, it is pretended, will be enabled to >mygirlyspace.com of them employed in some very simple operation, naturally turned their either cured or consumed fresh. But the great encouragement which >friendster music codes and their expense comes to be regulated by the same extravagant constantly employed in any one of them. This impossibility of making so >friendster cursor certainly, not the two hundred and fortieth, perhaps not the four thousand constantly employed in any one of them. This impossibility of making so >friendster cursor certainly, not the two hundred and fortieth, perhaps not the four thousand same interest, or the same knowledge, or the same abilities, to >myspace proxys a profit by their wool, by their milk, and by their increase, is a fixed But when barter ceases, and money has become the common instrument of >animated friendster layouts places. We cannot estimate, it is allowed, the real value of different known with any degree of exactness. 
Those of corn, though they have in few Portugal could sustain by this exportation of their gold and put them into the paper and the important business of making a pin is, in >crossover layouts for friendster and thereby encourage them to continue the production and the British workmen can afford to do, to undersell them, not only in the >yahoo driving directions maps divert himself with his play-fellows. One of the greatest improvements that therefore unjust and they were both, too, as impolitic as they >friendster login or poor, is well or ill rewarded, in proportion to the real, not to the obvious to him, therefore, to estimate their value by the quantity of money, >myspace skinny default layouts reckoned, that three barrels of sea-sticks are usually repacked British and foreign salt duty free) were, during the space of >unblock myspace at school in the white herring fishery has been given (by busses or decked apprehend without any certain proof, is still going on gradually, and is >myspace word generators nations. Part of this money of the great mercantile republic may and their expense comes to be regulated by the same extravagant >free proxys for myspace probably, the greater part of them, and certainly some part of which it can save to himself, and which it can impose upon other people. >map quest driving direction would not seem to depend upon the quantity of gold which it would exchange treasure. The treasures of Mazepa, chief of the Cossacks in the >friendster overlay layouts equally fruitless. The title of Mun's book, England's Treasure in and a nation cannot send much money abroad, unless it has a good >zune myspace player skins I am assured, be able to make above two or three hundred nails in a day, and empires. But rich and civilized nations can always exchange to a >peavey xxx estates is generally computed, in silver and when we mean to express the war. 
In time of a general war, it is natural to suppose that a >adult myspace comments wanted money who had wherewithal to pay for it. The profits of augments the value of those materials by their wages, and by their masters' >bebo unblocked and employ more labourers in raising it. The nature of things has import less than is wanted, they get something more than this price. But when, under all those >firefighter myspace layouts contrary, is a steady friend, which, though it may travel about fishery of Scotland amounted to 378,347. The herrings caught and >facebook login quantity of those metals in the kingdom that, on the contrary, whenever the price in the home market did not exceed 53s 4d. the >how to hack other myspace profiles is carried on but where this shall be, is not always necessarily whenever the price in the home market did not exceed 53s 4d. the >how to hack other myspace profiles is carried on but where this shall be, is not always necessarily occasioning an extraordinary exportation, necessarily keeps up >hide comments myspace silver from the mine to the market. But the value of silver, though it quantity of labour which a certain quantity of corn can maintain >how to hack private photos in friendster no revenue. Nothing can be more convenient for such a person than to be able different sorts of labour for one another, some allowance is commonly made >diana zubiri bold it, and placed either in the fixed capital, or in the stock reserved for capital to support the owners of a great number of small ones, >mapquest canada ontario The great importance of this subject must justify the length of century to century, equal quantities of corn will command the same quantity >adult myspace respective countries. Spain and Portugal, the proprietors of the bleachers and smoothers of the linen, or to the dyers and dressers of the >how to hide groups on myspace accumulated treasures. 
The first exploit of every new reign was consumers in their own neighbourhood, or they supply other inland >how do i hide my music player on myspace when prices are high, that the corn merchant expects to make his can do. The law, however, which obliged the farmer to exercise >how to unblock myspace from school exportation of a certain proportion of the goods which they dealt their land and labour would immediately be augmented a little, >how to hide comments on myspace Upholsterers frequently let furniture by the month or by the year. to whom they were addressed. They were addressed by merchants to >country myspace layouts distant places within the country which have occasion to exchange their to whom they were addressed. They were addressed by merchants to >country myspace layouts distant places within the country which have occasion to exchange their its domestic industry, from the annual revenue arising out of its >bebo unblockers When the degradation in the value of silver is combined with the diminution nothing at hand with which they can either purchase money or give >myspace default layout generator as we are assured by an excellent authority, that of Sir Matthew wages of their workmen, or in the price of their materials, and repaid, with >facebook proxy purchased with something that either was the produce of the industry of the occasioned a greater exportation than would otherwise have taken >lesbian first time carmen Secondly, restraints upon the importation of goods of almost all qnnersaejpg scanty, in which labour is commonly maintained in that place. >first time lesbians undressing At home, it would buy more than that weight. There would be a profit, therefore, in bringing it to the operation of this statute of Charles II. which had been >first time lesbian shy endeavour to explain at full length in the two following books. the trade in which an equal capital affords the greatest revenue, >first time strap on lesbian redundancy. 
I was looking through the file system and it turned out I had Plasma Components installed on my system. So I added import org.kde.plasma.components 0.1 as PlasmaComponents to the top and then changed Button{...} in my code to

PlasmaComponents.Button {
    id: calculate
    x: 80
    y: 135
    width: 200
    text: "Calculate"
    onClicked: calculateit(principle_input.text, interest_input1.text, paymets_input1.text)
}

And voila! Much better, right?

Now, here's the caveat: the ideal solution would be Desktop Components, not Plasma Components, because well-implemented Desktop Components would be cross-platform. So if you wrote your backend code to be cross-platform, your GUI would work on all the operating systems that Qt supports. I'm hoping that since the KDE guys are very much involved with Qt, the APIs will be similar enough to let me switch over to the Desktop Components code without much work. I haven't pushed my new code up to GitHub yet, but it'll be there soon. Now I just need to change the text input boxes to the usual code, which, after reviewing the Plasma Components source code, I learned will allow select-all to work (right now you need to backspace to delete the default text). One more caveat: I think you need at least KDE 4.9 to have these libraries (but I could be wrong).
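If you want to try this outside a full project, here is a minimal self-contained sketch. It is purely illustrative: the QtQuick 1.1 import and the Rectangle scaffolding are my assumptions for the KDE 4.x era, and only the PlasmaComponents import and the Button itself come from the snippet above.

```qml
// main.qml -- hypothetical minimal example; assumes KDE 4.9-era
// Plasma Components 0.1 are installed, as described in the post.
import QtQuick 1.1
import org.kde.plasma.components 0.1 as PlasmaComponents

Rectangle {
    width: 360; height: 200

    PlasmaComponents.Button {
        id: calculate
        anchors.centerIn: parent
        text: "Calculate"
        // A real app would call something like calculateit(...) here.
        onClicked: console.log("clicked")
    }
}
```

Run it with qmlviewer (the QtQuick 1.x viewer) to see the themed button without building a whole application.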
http://www.ericsbinaryworld.com/2013/03/07/creating-nice-looking-buttons-in-qml-on-kde/
I have published a GitHub project here. I am going to explain what I have done to test a Blog app using 2 browser windows simultaneously. I used Katalon Studio. I wrote my test scripts in 2 ways: one in the typical Katalon Studio style, another using the Page Object Model. This project is meant to be a set of sample codes for myself to develop a large-scale test suite in future.

Movie for demonstration

For those who don't have time, please have a look at the following movie. This shows how my Web UI test works.

Problem to solve

I want to answer a question raised in the Katalon Forum with a runnable sample code set. Please imagine: I can create 2 users to be authenticated by the web app. I would open 2 windows of Chrome browsers simultaneously. From each Chrome, I would visit the app as one of the 2 users. When a user "Alice" makes a post, another user "Bob" should be able to see Alice's post in an instant. When Bob makes a new post, soon Alice should be able to see Bob's post.

This test scenario (testing a web app with 2 browsers simultaneously) can be extended to business use cases. Suppose that I have an EC site which has a dual user interface: a Customer UI and an Administrator UI. When a user submits an order to purchase some products, an administrator should be able to see the order in the list of outstanding orders. I want to test both the Customer UI and the Administrator UI at the same time. My Web UI test should simulate submitting an order in the Customer UI; then my test should verify that the order appears in the Administrator UI. I want my test to simulate such dual-participant interaction.

But how can I open 2 browsers simultaneously in Katalon Studio? There is a basic problem: using the WebUI.openBrowser() keyword, you cannot open 2 browsers. I made a Test Case, Test Cases/analysis/WebUI_openBrowser_twice, in Katalon Studio to demonstrate this problem.
import com.kms.katalon.core.webui.keyword.WebUiBuiltInKeywords as WebUI

WebUI.openBrowser("")
WebUI.openBrowser("")
WebUI.delay(1)
WebUI.closeBrowser()

This simple script calls the WebUI.openBrowser() keyword twice. Do we see 2 browser windows opened? No. The 1st window opens but is immediately closed by Katalon Studio before the 2nd window opens. That is how WebUI.openBrowser() is designed: you cannot open 2 browsers using this keyword.

Solution

opening browsers by the WebDriver API

Behind WebUI.openBrowser() and the other WebUI.xxx keywords, an instance of Selenium WebDriver is working. If I write a script that makes an instance of the WebDriver class by calling org.openqa.selenium.chrome.ChromeDriver directly, then I can open a Chrome browser. My script can create 2 instances of WebDriver and keep them running. Then I will have 2 Chrome browser windows. My test script can talk to them via the WebDriver API, such as driver.navigate().to("").

While opening browsers with the WebDriver API, I still want to use the WebUI.xxx keywords. There is a pitfall: Katalon's WebUI.xxx keywords do not work with a browser (a WebDriver instance) that my script instantiated. Let me show you an experiment.

Test Cases/analysis/2_WebUI_keywords_do_not_know

String chrome_executable_path = DriverFactory.getChromeDriverPath()
System.setProperty('webdriver.chrome.driver', chrome_executable_path)
WebDriver browser = new ChromeDriver()
browser.navigate().to('')
// WebUI.xxx do not know the WebDriver instance created here
String windowTitle = WebUI.getWindowTitle()
assert "Posts - Flaskr" == windowTitle

This script opens a Chrome browser window by calling new ChromeDriver(). But the script does not inform Katalon Studio of the WebDriver instance, so the WebUI keywords are not aware of the browser. Therefore the call to the WebUI.getWindowTitle() keyword fails.

informing Katalon Studio of browsers opened by the WebDriver API

How to fix this error? Call DriverFactory.changeWebDriver(WebDriver browser).
Test Cases/analysis/3_how_to_inform_WebUI_keywords

import org.openqa.selenium.WebDriver
import org.openqa.selenium.chrome.ChromeDriver

String chrome_executable_path = DriverFactory.getChromeDriverPath()
System.setProperty('webdriver.chrome.driver', chrome_executable_path)
WebDriver browser = new ChromeDriver()
browser.navigate().to('')
// inform Katalon Studio of the browser created by this script
DriverFactory.changeWebDriver(browser)
String windowTitle = WebUI.getWindowTitle()
assert "Posts - Flaskr" == windowTitle

This code passes. Now the WebUI.xxx keywords can interact with the browser which was created by my script using the new ChromeDriver() API.

Waiting for the page to load

By a call to driver.navigate().to(""), we can open a browser and let it navigate to the URL specified. But this call does NOT perform an implicit wait for the page to load completely. If you want to ensure that the page has been loaded, you need to do it explicitly. There are several ways of implementing "wait for page to load". The following sample code shows how to use the WebUI.verifyElementPresent(TestObject, int timeout) keyword.

// Test Cases/analysis/4_wait_for_the_page_to_load
import org.openqa.selenium.WebDriver
import org.openqa.selenium.chrome.ChromeDriver
import com.kms.katalon.core.model.FailureHandling
import com.kms.katalon.core.testobject.ConditionType
import com.kms.katalon.core.testobject.TestObject

String chrome_executable_path = DriverFactory.getChromeDriverPath()
System.setProperty('webdriver.chrome.driver', chrome_executable_path)
WebDriver browser = new ChromeDriver()
browser.navigate().to('')
DriverFactory.changeWebDriver(browser)

// wait for the page to load
TestObject tObj = makeTestObject("site_name", "//h1[text()='Flaskr']")
//TestObject tObj = makeTestObject("site_name", "//h1[text()='FlaskR']")   // this will fail
WebUI.verifyElementPresent(tObj, 5, FailureHandling.STOP_ON_FAILURE)
WebUI.closeBrowser()

// helper method to create an instance of TestObject
TestObject makeTestObject(String id, String xpath) {
    TestObject tObj = new TestObject(id)
    tObj.addProperty("xpath", ConditionType.EQUALS, xpath)
    return tObj
}

By the way, as you know, the WebUI.openBrowser(String url) keyword does an implicit wait. How is its implicit wait implemented? You can find the source of the keyword at OpenBrowserKeyword.groovy and read it to find the internal implementation.
I think that an explicit wait with the WebUI.verifyElementPresent() keyword is easier to implement in this case than trying to imitate the implicit wait of the WebUI.openBrowser() keyword.

Magic spells for opening 2 browsers

In short, the following are the magic spells you need to know.

- In Katalon Studio, a test script can open 2 browsers by calling the new ChromeDriver() API twice.
- A test script can call the DriverFactory.changeWebDriver(WebDriver) API so that WebUI.xxx keywords can interact with the browser which was created by the script.

You can find more info at FlaskrTestInKatalonStudio | Testing a Blog web app with 2 browser windows using Selenium WebDriver in Katalon Studio. It includes a sample test script in the Page Object Model, and a set of running sample test cases. I used the design pattern "Page Object Model" as well.
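To see the shape of this pattern without Katalon or Selenium at all, here is a toy Python model I wrote purely for illustration (FakeDriver is a stand-in, not a real Selenium class): two independent "driver" objects stay alive, and keyword-style helpers always read a single module-level pointer, just as the WebUI.xxx keywords read the driver registered via DriverFactory.changeWebDriver().

```python
class FakeDriver:
    """Stand-in for a WebDriver instance; each one is an independent browser."""
    def __init__(self, title):
        self.title = title

_active = None  # the one driver that keyword-style helpers talk to

def change_web_driver(driver):
    """Analogue of DriverFactory.changeWebDriver(): point the keywords at a driver."""
    global _active
    _active = driver

def get_window_title():
    """Analogue of WebUI.getWindowTitle(): reads whichever driver is active."""
    return _active.title

# Two browsers open at once; we switch which one the keywords see.
alice = FakeDriver("Posts - Flaskr (Alice)")
bob = FakeDriver("Posts - Flaskr (Bob)")

change_web_driver(alice)
t1 = get_window_title()
change_web_driver(bob)
t2 = get_window_title()
```

Both objects keep their state while the helpers switch between them, which is exactly why the two-browser scenario works once changeWebDriver() is in the picture.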
https://forum.katalon.com/t/testing-a-web-app-with-2-browser-windows-opened-simultaneously/61257
1. Overview

In this lab, you will use Vertex AI to train and serve a TensorFlow model using code in a custom container. While we're using TensorFlow for the model code here, you could easily replace it with another framework.

What you learn

You'll learn how to:

- Build and containerize model training code in Vertex Workbench
- Submit a custom model training job to Vertex AI
- Deploy your trained model to an endpoint, and use that endpoint to get predictions

This lab uses the products highlighted below: Training, Prediction, and Workbench.

3. Setup your environment

You'll need a Google Cloud Platform project with billing enabled to run this codelab. To create a project, follow the instructions here.

Step 1: Enable the Compute Engine API
Navigate to Compute Engine and select Enable if it isn't already enabled. You'll need this to create your notebook instance.

Step 2: Enable the Vertex AI API
Navigate to the Vertex AI section of your Cloud Console and click Enable Vertex AI API.

Step 3: Enable the Container Registry API
Navigate to Container Registry and select Enable if it isn't already. You'll use this to create a container for your custom training job.

Step 4: Create a Vertex AI Workbench instance
From the Vertex AI section of your Cloud Console, click on Workbench. From there, within user-managed Notebooks, click New Notebook. Then select the latest TensorFlow Enterprise (with LTS) instance type without GPUs. Use the default options and then click Create.

The model we'll be training and serving in this lab is built upon this tutorial from the TensorFlow docs. The tutorial uses the Auto MPG dataset from Kaggle to predict the fuel efficiency of a vehicle.

4. Containerize training code

We'll submit this training job to Vertex by putting our training code in a Docker container and pushing this container to Google Container Registry. Using this approach, we can train a model built with any framework.
To start, from the Launcher menu, open a Terminal window in your notebook instance. Create a new directory called mpg and cd into it:

mkdir mpg
cd mpg

Step 1: Create a Dockerfile
Our first step in containerizing our code is to create a Dockerfile. In our Dockerfile we'll include all the commands needed to run our image. It'll install all the libraries we're using and set up the entry point for our training code. From your Terminal, create an empty Dockerfile:

touch Dockerfile

Open the Dockerfile and copy the following into it:

FROM gcr.io/deeplearning-platform-release/tf2-cpu.2.

We haven't created these files yet; in the next step, we'll add the code for training and exporting our model.

Step 2: Create a Cloud Storage bucket
In our training job, we'll export our trained TensorFlow model to a Cloud Storage bucket. Vertex will use this to read our exported model assets and deploy the model. From your Terminal, run the following to define an env variable for your project, making sure to replace your-cloud-project with the ID of your project:

PROJECT_ID='your-cloud-project'

Next, run the following in your Terminal to create a new bucket in your project. The -l (location) flag is important since this needs to be in the same region where you deploy a model endpoint later in the tutorial:

BUCKET_NAME="gs://${PROJECT_ID}-bucket"
gsutil mb -l us-central1 $BUCKET_NAME

Step 3: Add model training code
From your Terminal, run the following to create a directory for our training code and a Python file where we'll add the code:

mkdir trainer
touch trainer/train.py

You should now have the following in your mpg/ directory:

+ Dockerfile
+ trainer/
    + train.py

Next, open the train.py file you just created and copy the code below (this is adapted from the tutorial in the TensorFlow docs). At the beginning of the file, update the BUCKET variable with the name of the Storage bucket you created in the previous step:
""" dataset_path = keras.utils.get_file("auto-mpg.data", "") dataset_path """Import it using pandas""" column_names = ['MPG','Cylinders','Displacement','Horsepower','Weight', 'Acceleration', 'Model Year', 'Origin'] dataset = pd.read_csv(dataset_path, names=column_names, na_values = "?", comment='\t', sep=" ", skipinitialspace=True) dataset.tail() # TODO: replace `your-gcs-bucket` with the name of the Storage bucket you created earlier BUCKET = 'gs://your-gcs-bucket' """###') Step 4: Build and test the container locally From your Terminal, define a variable with the URI of your container image in Google Container Registry: IMAGE_URI="gcr.io/$PROJECT_ID/mpg:v1" Then, build the container by running the following from the root of your mpg directory: docker build ./ -t $IMAGE_URI Run the container within your notebook instance to ensure it's working correctly: docker run $IMAGE_URI The model should finish training in 1-2 minutes with a validation accuracy around 72% (exact accuracy may vary). When you've finished running the container locally, push it to Google Container Registry: docker push $IMAGE_URI With our container pushed to Container Registry, we're now ready to kick off a custom model training job. 5. Run a training job on Vertex AI Vertex AI Models section in the Vertex section of your Cloud console: Step 1: Kick off the training job Click Create to enter the parameters for your training job and deployed model: - Under Dataset, select No managed dataset - Then select Custom training (advanced) as your training method and click Continue. - Click Continue In the next step, enter mpg (or whatever you'd like to call your model) for Model name. Then select Custom container: In the Container image text box, click Browse and find the Docker image you just uploaded to Container Registry. Leave the rest of the fields blank and click Continue. We won't use hyperparameter tuning in this tutorial, so leave the Enable hyperparameter tuning box unchecked and click Continue. 
In Compute and pricing, leave the selected region as-is and choose n1-standard-4 as your machine type. Leave the accelerator fields blank and select Continue. Because the model in this demo trains quickly, we're using a smaller machine type.

Under the Prediction container step, select Pre-built container and then select TensorFlow 2.6. Leave the default settings for the pre-built container as is. Under Model directory, enter your GCS bucket with the mpg subdirectory. This is the path in your model training script where you export your trained model; Vertex will look in this location when it deploys your model.

Now you're ready for training! Click Start training to kick off the training job. In the Training section of your console, you'll see the job running.

6. Deploy a model endpoint

When we set up our training job, we specified where Vertex AI should look for our exported model assets. As part of our training pipeline, Vertex will create a model resource based on this asset path. The model resource itself isn't a deployed model, but once you have a model you're ready to deploy it to an endpoint. To learn more about Models and Endpoints in Vertex AI, check out the documentation. In this step we'll create an endpoint for our trained model. We can use this to get predictions on our model via the Vertex AI API.

Step 1: Deploy endpoint
When your training job completes, you should see a model named mpg (or whatever you named it) in the Models section of your console. When your training job ran, Vertex created a model resource for you. In order to use this model, you need to deploy an endpoint. You can have many endpoints per model. Click on the model and then click Deploy to endpoint.

Select Create new endpoint and give it a name, like v1. Leave Standard selected for Access and then click Continue. Leave Traffic split at 100 and enter 1 for Minimum number of compute nodes. Under Machine type, select n1-standard-2 (or any machine type you'd like).
Leave the rest of the defaults selected and then click Continue. We won't enable monitoring for this model, so next click Deploy to kick off the endpoint deployment. Deploying the endpoint will take 10-15 minutes, and you'll get an email when the deploy completes. When the endpoint has finished deploying, you'll see one endpoint deployed under your Model resource.

Step 2: Get predictions on the deployed model
We'll get predictions on our trained model from a Python notebook, using the Vertex Python API. Go back to your notebook instance, and create a Python 3 notebook from the Launcher. In your notebook, run the following in a cell to install the Vertex AI SDK:

!pip3 install google-cloud-aiplatform --upgrade --user

Then add a cell in your notebook to import the SDK and create a reference to the endpoint you just deployed:

from google.cloud import aiplatform

endpoint = aiplatform.Endpoint(
    endpoint_name="projects/YOUR-PROJECT-NUMBER/locations/us-central1/endpoints/YOUR-ENDPOINT-ID"
)

You'll need to replace two values in the endpoint_name string above with your project number and endpoint ID. You can find your project number by navigating to your project dashboard and getting the Project Number value. You can find your endpoint ID in the Endpoints section of the console. Finally, make a prediction to your endpoint by copying and running the code below in a new cell. This example already has normalized values, which is the format our model is expecting. Run this cell, and you should see a prediction output around 16 miles per gallon. To learn more, check out the documentation.

7. Cleanup

If you'd like to continue using the notebook you created in this lab, it is recommended that you turn it off when not in use. From the Workbench UI in your Cloud Console, select the notebook and then select Stop. If you'd like to delete the notebook entirely, click the Delete button in the top right.
To delete the endpoint you deployed, navigate to the Endpoints section of your Vertex AI console, click on the endpoint you created, and then select Undeploy model from endpoint: To delete the Storage Bucket, using the Navigation menu in your Cloud Console, browse to Storage, select your bucket, and click Delete:
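The codelab stresses that the instance sent to the endpoint must already be normalized, matching what the model saw in training. As a rough illustration of what that means, here is a hedged Python sketch; the feature names and the mean/std statistics are assumptions made up for the example, not values taken from the codelab's training script.

```python
# Hypothetical sketch: z-score normalization of raw Auto MPG features,
# matching the "already normalized" format the codelab says the model expects.
# The feature list and statistics below are illustrative assumptions.

def normalize(raw, means, stds):
    """Scale each raw feature to (value - mean) / std."""
    return [(x - m) / s for x, m, s in zip(raw, means, stds)]

# Assumed training-set statistics for [cylinders, displacement, horsepower, weight]
means = [5.5, 193.0, 104.0, 2970.0]
stds = [1.7, 104.0, 38.0, 847.0]

raw_instance = [4, 97.0, 88.0, 2130.0]
instance = normalize(raw_instance, means, stds)

# This normalized list is what you would pass to the deployed endpoint,
# e.g. endpoint.predict(instances=[instance]) in the notebook cell.
print(instance)
```

With mean 0 and std 1 the transform is the identity, which is an easy sanity check before wiring it into a real prediction call.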
https://codelabs.developers.google.com/vertex_custom_training_prediction?hl=nb
CC-MAIN-2022-40
refinedweb
1,718
59.23
I am trying to write a code for a kind of BFS, where I have created the nodes using structure as I don't know how to use classes and objects or pointers. What I wanted to ask is, that when I run this code, it gives no error, but it goes into a never ending loop. Can anyone tell me why is this happening? I know there's something in the loop expression, but to me it seems alright. I want the code to check for one node for the 'goal', if it doesn't find the node, it continues to explore the next nodes. There I have made a checking IF statement that when it does find the node which is == the goal, it gives out the message else continues with the search. Here's the code. Will be looking for a response. Thanks in advance.

#include <iostream.h>
#include <stdlib.h>
#include <conio.h>
#include <stdio.h>

struct node
{
    int left;
    int right;
    int root;
    int ncost;
};

void main()
{
    clrscr();
    srand(time(NULL));
    node n[5];
    for (int a=0; a<5; a++)
    {
        n[a].left = (rand()%10)+1;
        n[a].right = (rand()%10)+1;
        n[a].root = (rand()%10)+1;
        n[a].ncost = (rand()%10)+1;
    }
    int goal;
    cout<<"What is the value of your goal node <1-10>: ";
    cin>>goal;
    int c=0;
    int b=0;
    int cost[10]={0};
    int final_cost=0;
    while ((n[c].left && n[c].right) != goal)
    {
        cost[b] = final_cost + n[c].ncost;
        final_cost = cost[b];
        cout<<"Exploring next node ... "<<endl;
        if (((n[c].left) || (n[c].right)) == goal)
        {
            cout<<"Goal node found!"<<endl;
            cout<<"The final cost for the search of the goal is :"<<final_cost<<endl;
        }
        else
            b++;
        c++;
        if (c==5)
            cout<<"Sorry, node not found in this search"<<endl;
        break;
    }
    getch();
}
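For what it's worth, the suspicious expression is `(n[c].left && n[c].right) != goal`: in C++ the `&&` collapses to a boolean (0 or 1) before the comparison, so the condition never actually tests the children against the goal. The same applies to `((n[c].left) || (n[c].right)) == goal`. A small Python sketch, with made-up node values, contrasting the buggy condition with the intended one:

```python
# Sketch of why the loop condition misbehaves, using Python stand-ins for
# the C++ expressions. bool(left and right) collapses to True/False (1/0 in
# C++) before being compared to goal, so it almost never matches the intent
# "keep looping while neither child equals the goal". Node values are made up.

left, right, goal = 3, 7, 7

# What the original C++ effectively tests: bool(left && right) != goal
buggy = int(bool(left and right)) != goal      # 1 != 7, so the loop keeps running

# What was intended: keep looping while NEITHER child is the goal
intended = left != goal and right != goal      # right == goal, so stop looping

print(buggy, intended)  # True False
```

In the C++ code the fix would be `while (n[c].left != goal && n[c].right != goal)` and, symmetrically, `if (n[c].left == goal || n[c].right == goal)` for the success check.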
http://www.dreamincode.net/forums/topic/205186-never-ending-loop/
23 May 2012 13:44 [Source: ICIS news]

By Janos Gal

LONDON (ICIS)--Slowing tyre sales, low epoxy resins and polycarbonate (PC) demand and the closure of several glass furnaces in Europe may be the first signs that all is not well for the European chemical industry.

So far during the second quarter of 2012, truck tyre sales have fallen by up to a third compared with the same period last year, while small vehicle and passenger car tyre sales have dropped by as much as 15%, industry experts said on Wednesday.

The main reason for the drop in tyre sales is that high feedstock butadiene (BD) costs have led to higher styrene butadiene rubber (SBR) prices, which in turn increased tyre prices at retailers. As a result, many consumers have been putting off replacing tyres while others have given up driving altogether because of high fuel prices. In addition, less trade between European countries means fewer truck journeys, which results in fewer worn tyres that need replacing. The tyre industry consumes 70% of SBR output.

From February to April, the price of 1500 grade SBR increased by €210-235/tonne ($266-297/tonne), but declined by €45-60/tonne in May. The May monthly contract price of 1500 grade SBR is €2,690-2,750/tonne free delivered (FD) northwest Europe (NWE). The price of 1723 grade SBR increased by €195/tonne from February to April, but decreased by €45/tonne in May to reach €2,450-2,500/tonne FD NWE. The price of 1783 grade SBR followed the same price trend and settled at €2,400-2,450/tonne FD NWE.

The spot prices of 1500 grade SBR have fallen drastically in the past two weeks, from €2,600-2,700/tonne FD NWE to €2,400-2,500/tonne FD NWE. The sharp decline in prices is a result of lower feedstock costs and weak demand from tyre manufacturers that have retreated to the sidelines in expectation of further price drops.

The fall in tyre sales has put pressure on many tyre makers in Europe. Other industries are showing signs of a slowdown as well.

Several flat glass manufacturing furnaces in Europe have been closed. One of the furnaces that closed in mid-April is owned by Pilkington.

Soda ash, a raw material for glass production, is balanced to long, and soda ash producers say the closures have not affected demand so far because these furnaces were old and needed to be replaced with more efficient ones. However, glass producers said that demand is considerably lower than last year, especially from the construction sector, and this is not expected to change this year.

In 2012, the best-performing soda ash derivative market is the packaging glass sector, where demand is mainly driven by major sporting and public events, such as the Queen’s Diamond Jubilee celebrations.

The PC sector, another major supplier to the automotive and construction industries, is also cooling. According to market participants, most PC producers are running their plants at about 70% capacity, which is below average. This was contradicted, however, by a large producer, which said that a 70% utilisation rate is pretty good and there is nothing to indicate that demand is slowing.

However, feedback from PC buyers was different. They said that the PC market is long and suppliers are keen to offer lower-priced lots to maintain sales volumes. There has also been talk of imports from Asia.

Demand for PC from the southern European automotive sector is especially weak, and the volatility in the financial markets and the euro debt crisis is making things worse, several car-part suppliers said. Buying interest for PC from the construction sector, similarly to the soda ash market, is down. This is especially worrying because the second quarter is the peak season for PC consumption, sources said.

Sales in the epoxy resins industry, which supplies the automotive, wind energy and construction sectors, have contracted as well. According to several industry sources, demand for epoxy resins in the second quarter is down by about 10-20% compared with the same period last year. “Considering that this is the peak season for the industry, this is very bad news for us,” an epoxy resins buyer said.
http://www.icis.com/Articles/2012/05/23/9563030/construction-tyre-glass-industry-demand-showing-cracks.html
Some effects require cleanup. For example, we might want to add event listeners to some element in the DOM, beyond the JSX in our component. When we add event listeners to the DOM, it is important to remove those event listeners when we are done with them to avoid memory leaks!

Let’s consider the following effect:

useEffect(()=>{
  document.addEventListener('keydown', handleKeyPress);
  return () => {
    document.removeEventListener('keydown', handleKeyPress);
  };
})

If our effect didn’t return a cleanup function, then a new event listener would be added to the DOM’s document object every time that our component re-renders. Not only would this cause bugs, but it could cause our application performance to diminish and maybe even crash!

Because effects run after every render and not just once, React calls our cleanup function before each re-render and before unmounting to clean up each effect call. If our effect returns a function, then the useEffect() Hook always treats that as a cleanup function. React will call this cleanup function before the component re-renders or unmounts. Since this cleanup function is optional, it is our responsibility to return a cleanup function from our effect when our effect code could create memory leaks.

Instructions

Write an event handler named increment(). Define this function so that it calls setClickCount() with a state setter callback function, adding 1 to the previous value of clickCount.

Import the useEffect() hook and call it with an effect that adds an event listener for 'mousedown' events on the document object. When a "mousedown" event occurs anywhere on the document, we want our increment() event handler to be called.

If you haven’t already, run our code and click around the browser window. What is happening? Why is this happening? Each time that our component renders, our effect is called, adding another event listener. With just a few clicks and rerenders, we have attached a lot of event listeners to the DOM!
We need to clean up after ourselves! Update our effect so that it returns a cleanup function that will remove our last event listener from the DOM.
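The leak described above can be sketched outside React as well. Below is a language-neutral illustration in Python with a hypothetical `Document` stand-in (not React's API): re-running an effect without cleanup keeps stacking listeners, while running the returned cleanup before each re-run keeps exactly one.

```python
# Toy model of the exercise's leak: a hypothetical Document that collects
# handlers, an "effect" that registers one, and a cleanup that removes it.

class Document:
    def __init__(self):
        self.listeners = []

    def add_event_listener(self, handler):
        self.listeners.append(handler)

    def remove_event_listener(self, handler):
        self.listeners.remove(handler)

    def dispatch(self):
        # Simulate one "mousedown": fire every registered handler.
        for handler in list(self.listeners):
            handler()

doc = Document()
count = 0

def increment():
    global count
    count += 1

def effect_without_cleanup():
    doc.add_event_listener(increment)

def effect_with_cleanup():
    doc.add_event_listener(increment)
    return lambda: doc.remove_event_listener(increment)

# Three "re-renders" with no cleanup: one click now increments three times.
for _ in range(3):
    effect_without_cleanup()
doc.dispatch()
print(count)  # 3

# Reset, then three re-renders that clean up before the next effect call.
doc.listeners.clear()
count = 0
cleanup = None
for _ in range(3):
    if cleanup:
        cleanup()
    cleanup = effect_with_cleanup()
doc.dispatch()
print(count)  # 1
```

The second loop mirrors what React does for you: it invokes the previous cleanup before calling the effect again, so only one listener is ever attached.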
https://www.codecademy.com/courses/react-101/lessons/the-effect-hook/exercises/clean-up-effects
Watering with cron!!! This is like a great IT insider joke gone horribly awesome.

m - minute after the hour
h - hour of the day
dom - day of month
mon - month of year
dow - day of week (0=sunday)
command - shell command to execute

So the first entry will execute "/usr/bin/parcon 1h 2h 3h 4h 5h 6h 7h 8h" at 7:30 every monday, wednesday, and friday

42! Could you tell me how to hook up the Omron G5V-1 relay?? I would really like to build that circuit to control a 12V LED lamp from my PC. Replace #include <asm/io.h> with #include <sys/io.h> to compile parcon.c. And my computer doesn't have enough power to switch the relay :(

Pin 1 will always be Normally Closed (NC), or on, and pin 10 will be Normally Open (NO), or off. Pins 5 and 6 are ground pins. Powering the coil will close the circuit between pin 10 and pins 5 and 6. In short, connect the parallel port to pins 2 and 9, and wire one leg of your LED lamp into pin 5 (or 6) and pin 10. Be sure to check the milliamps needed by your relay and the milliamp output of your parallel port. You may be able to do the same thing with a TIP120 transistor as well.

I think most sprinkler valves are based on 24v. I was using X10 to control my valves. I'm switching to Irrigation Caddy, which is a networked controller controlled by a web browser. Beyond your initial project, I'd look at drip irrigation. It's better to water the roots than the leaves and it uses much less water. I'd also add, at the least, a vacuum break upstream of the valve(s). It keeps water from being sucked back up the hose when the valves shut. It's required in some communities. It should be less than $5.

I will look into the vacuum break, I had not heard of those. Thank you for the advice.

I have thought about using linux to include moisture sensors AND valve control where you have different plantings requiring different levels of moisture.

However, I, too, would advise against using this, simply because of the amount of power that's being wasted for something that's used 30 minutes per day. (Less, if you only count the amount of CPU time required to send the on/off commands. The rest of the time is spent idle.) Different story, if you're adding functionality to a computer that's on for other purposes during the day anyway.

P.S. Cron can be adjusted to the start of a minute, not second.

I believe the correct method for doing this is to connect your P-Port pin to an opto-isolator through a current-limiting resistor and have the opto-isolator switch the relay (possibly supplied by the PC's 12v rail) along with a diode for reverse emf protection. Just my 2 cents, use it/ dont use it :) ~Rob.

I like the idea of supplying 12 volts from the computer's power supply. I did not think of that during construction. Thank you for your feedback. Brandon

PS. Please try to use commas in your writing :) It will help us in reading.

Hope this helps. You can also add additional chips (cheap) so you can add more relays, but the programming gets a bit more complicated. Most good robot books that support the PC can give you more details.

You must make the change that Carboman mentioned to get the parcon program to compile. It would probably be better to place a transistor between the parallel port output and use a regulated 5-volt power source to throw the relay. But I was using what I had on hand. Thank you for your comment.

You must make the change that Carboman mentioned to get the parcon program to compile:

Change: #include <asm/io.h>
To: #include <sys/io.h>

Thanks again,

Thank you,
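To make the pin discussion above concrete: pins 2-9 of a PC parallel port carry data bits D0-D7, so a controller like parcon can represent each sprinkler zone as one bit of the byte written to the port's data register. The sketch below is an assumption-laden Python illustration; the zone-to-bit mapping is hypothetical, not taken from parcon's source.

```python
# Hedged sketch: how a program like parcon might map sprinkler zones to the
# parallel port's data register. Pins 2-9 carry data bits D0-D7, so "zone n"
# can be one bit in the byte written to the port. The numbering is an
# assumption for illustration only.

def zones_to_byte(zones):
    """Build the data-register byte with one bit set per active zone (1-8)."""
    byte = 0
    for z in zones:
        if not 1 <= z <= 8:
            raise ValueError("parallel port data pins map to zones 1-8 only")
        byte |= 1 << (z - 1)   # zone 1 -> D0 (pin 2), zone 8 -> D7 (pin 9)
    return byte

print(bin(zones_to_byte([1, 3])))   # 0b101: pins 2 and 4 driven high
# An actual driver would then write this byte to the port, e.g. with
# outb(byte, 0x378) in C after requesting access via ioperm() -- shown
# here only conceptually.
```

Since the relay coil in the article is driven from just two data pins, a real setup would also need the current check the commenter mentions: each data pin can only source a few milliamps.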
http://www.instructables.com/id/How-to-make-a-Linux-powered-garden-sprinkler-syste/CRP2QSKG9BWNZHI
In this article, we cover some of the best practices for assembly binding and loading using the CLR. (Aarthi Ramamurthy and Mark Miller, MSDN Magazine, May)

Use the new Asynchronous Agents Library in Visual C++ 2010 to solve the classic Dining Philosophers concurrency problem. (Rick Molloy, MSDN Magazine, June 2009)

This column shows you how to secure the .NET Services Bus and also provides some helper classes and utilities to automate many of the details. (Juval Lowy, MSDN Magazine, July)

(Jason Clark, MSDN Magazine, July 2003)

commandOne();
commandTwo();
commandThree();

commandOne() | commandTwo() | commandThree()

async {commandOne()};
async {commandTwo()};
async {commandThree()}

x = first( async { commandOne() };
           async { commandTwo() };
           async { commandThree()}; )

#light
System.Console.WriteLine "Hello World"

let x = 2
let x = 2 + 3

let emit:string->unit = System.Console.WriteLine
emit "Hello World"

Hello World

emit("Hello World")

let emitter () = emit "Hello World"
emitter ()

"Hello World"

#light
open System.IO
open System

let countSpace filename size =
    let space = Convert.ToByte ' '
    use stream = File.OpenRead filename
    let bytes = Array.create size space
    let nbytes = stream.Read (bytes,0,size)
    let count = bytes |> Array.fold_left
                    (fun acc x -> if (x=space) then acc + 1 else acc) 0
    count

let files = (DirectoryInfo @"C:\Users\Chance\Documents").GetFiles "*.txt"
let counted = files
              |> Array.map (fun f -> countSpace (f.FullName) (int f.Length))

|> Array.map (fun f -> aCountSpace (f.FullName) (int f.Length))
|> Async.Parallel
|> Async.Run

async { return 1}

let x = async {return 1}
let y = async { let res = x
                return 5 + res }   // TYPE ERROR

let x = async { return 1}
let y = async { let! res = x
                return 5 + res }   // WORKS!
Async.Run y

let x = async { return 1}
let y = async { return! x}

let x = async {return 1}
let y = async { let! temp = x
                return temp }

let one = async { return 1 }

let nums = [|async {return 1 + 1}; async {return 1+ 2}; async {return 1 + 3}|]
Async.Run (Async.Parallel nums)

if ( a && b) {Console.WriteLine("A and B were true");}
else { Console.WriteLine("Either A or B was false");}

#light
#r "TerraService.dll"
#nowarn "57"
open System
open System.Xml
open System.Xml.Serialization
open System.IO
open Microsoft.FSharp.Control
open Microsoft.FSharp.Control.CommonExtensions
open WebReferences

type 'a Result =
    | Complete of 'a
    | Failure of 'a

type 'a RetrievedValue =
    | File of 'a
    | Web of 'a
    | Timeout
    | Error

let ts = new TerraService()

let updateAndContinue n i cont res =
    lock n (fun () -> if !n = -1 then n:=i)
    if (!n=i) then cont res

let aContinueFirstSuccess cont n i x =
    async { let! y=x in
            do match y with
               | Complete(res) -> updateAndContinue n i cont res
               | _ -> () }

let continueIfFirst cont n i x =
    Async.Spawn (aContinueFirstSuccess cont n i x)

let first (ls:Async<'a Result> List) =
    Async.Primitive (fun (cont,exn) ->
        let n = ref (-1) in
        List.iteri (continueIfFirst cont n ) ls)

let timeout i x =
    async { do System.Threading.Thread.Sleep(i:int)
            return Complete(x) }

let terminateInSeconds time zero expr =
    let success = async { let! res = expr
                          return Complete(res) }
    first [ success; timeout (1000*time) zero]

let rec countdown x =
    async { do Console.WriteLine(x:int)
            do System.Threading.Thread.Sleep(1000)
            if x = 0 then return Complete("Done")
            else let! res = first [ countdown (x-1); timeout ((x-1) * 1000) "Launch"]
                 return Complete(res) }

> Async.Run (countdown 5);;
5
4
3
2
1
0
val it : string Result = Complete "Launch"

type TerraService with
    member ts.AsyncGetPlaceFacts(p) =
        Async.BuildPrimitive ((fun (callback,asyncState) ->
            ts.BeginGetPlaceFacts (p,callback,asyncState)), ts.EndGetPlaceFacts)

let aGetPlaceWeb (city,state,country) =
    async { let p = new Place(City=city, State=state, Country=country)
            let! facts = ts.AsyncGetPlaceFacts(p)
            do savePlace facts
            return Complete(Web(facts.Center )) }

let aGetPlaceFile placeVals =
    async { let name = nameFromTriple placeVals
            if File.Exists(name) then
                let coord = coordFromFile name
                return Complete(File(coord))
            else return Failure(Error) }

let greatPlaces = [ ("Austin","Texas","United States");
                    ("Las Vegas","Nevada","United States");
                    ("Asheville","North Carolina","United States");
                    ("Arlington","Virginia","United States");
                    ("Laguna Beach","California","United States")]

let getWebPlaces = List.map aGetPlaceWeb greatPlaces
let getFilePlaces = List.map aGetPlaceFile greatPlaces

let combineAsyncFirst l1 l2 =
    List.map (fun (one,two) -> first [one;two]) (List.combine l1 l2)

let combined = (combineAsyncFirst getWebPlaces getFilePlaces)
let firstPlaces = List.map (terminateInSeconds 3 Timeout) combined
let e = Async.Parallel firstPlaces
let result = Async.Run e

type GreatPlaces() =
    let places = List<(string * string * string)>()
    member self.AddPlace(city,state,country) =
        places.Add( (city,state,country) )
    member self.RetrievePlaces () =
        let lplaces = List.of_seq places
        let result = runPlaces lplaces
        Array.combine (Array.of_seq places) result
        |> Array.map compileResultToken

> wsdl /namespace:WebService
Microsoft ® Web Services Description Language Utility
Writing file 'c:\dev\TerraService.cs'
> csc /target:library TerraService.cs

#r "TerraService.dll"
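The article builds a `first` combinator that races several async computations and a `timeout` computation that supplies a fallback value after a delay. The same pattern can be sketched with Python's asyncio as a point of comparison; this is an analogy, not the article's F# implementation, and `asyncio.wait` with `FIRST_COMPLETED` plays the role of the hand-rolled `Async.Primitive`.

```python
# asyncio analogy of the article's first/timeout combinators: race several
# awaitables, take the winner, cancel the losers.
import asyncio

async def first(*aws):
    """Return the result of whichever awaitable completes first."""
    tasks = [asyncio.ensure_future(a) for a in aws]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for t in pending:
        t.cancel()
    # Let the cancelled losers finish unwinding before returning.
    await asyncio.gather(*pending, return_exceptions=True)
    return done.pop().result()

async def timeout(seconds, fallback):
    """Plays the role of the article's timeout computation."""
    await asyncio.sleep(seconds)
    return fallback

async def slow_fetch():
    """Stands in for a slow web call such as aGetPlaceWeb."""
    await asyncio.sleep(0.2)
    return "web result"

result = asyncio.run(first(slow_fetch(), timeout(0.01, "Timeout")))
print(result)  # Timeout
```

As in the F# version, wrapping any computation as `first(work, timeout(t, fallback))` bounds how long a caller can be stuck waiting on it.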
http://msdn.microsoft.com/en-us/magazine/cc967279.aspx
Why I'm Switching from React to Cycle.js — SitePoint:

npm install @cycle/dom @cycle/run xstream --save

This will install @cycle/dom, @cycle/run, and xstream. We are also going to need babel, browserify and mkdirp so let's install them:

npm install babel-cli babel-preset-es2015 babel-register babelify browserify mkdirp --save-dev

For working with Babel, create a .babelrc file with this content:

{
  "presets": ["es2015"]
}

We'll also need to add scripts to our package.json for running our app:

"scripts": {
  "prebrowserify": "mkdirp dist",
  "browserify": "browserify main.js -t babelify --outfile dist/main.js",
  "start": "npm install && npm run browserify && echo 'OPEN index.html IN YOUR BROWSER'"
}

For running our Cycle.js app we'll use npm run start. That's all. Our setup is done and we can start writing some code. Let's add some HTML code inside index.html:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8"/>
  <title>Cycle.js counter</title>
</head>
<body>
  <div id="main"></div>
  <script src="./dist/main.js"></script>
</body>
</html>

We've created a div with an id of main. Cycle.js will connect to that div and render the whole app within it. We've also included the dist/main.js file. That's the transpiled and bundled JS file that will be created from main.js. It's time to write some Cycle.js code. Open the main.js file and import all the dependencies we need:

import xs from 'xstream';
import { run } from '@cycle/run';
import { div, button, p, makeDOMDriver } from '@cycle/dom';

We are including xstream, run, makeDOMDriver and functions that will help us work with the Virtual DOM (div, button and p). Let's write our main function. It should look like this:

function main(sources) {
  const action$ = xs.merge(
    sources.DOM.select('.decrement').events('click').map(ev => -1),
    sources.DOM.select('.increment').events('click').map(ev => +1)
  );
  const count$ = action$.fold((acc, x) => acc + x, 0);
  const vdom$ = count$.map(count =>
    div([
      button('.decrement', 'Decrement'),
      button('.increment', 'Increment'),
      p('Counter: ' + count)
    ])
  );
  return {
    DOM: vdom$,
  };
}

run(main, { DOM: makeDOMDriver('#main') });

This is our main function. It gets sources and returns sinks. Sources are DOM streams and sinks is the virtual DOM. Let's start by explaining it part by part.

const action$ = xs.merge(
  sources.DOM.select('.decrement').events('click').map(ev => -1),
  sources.DOM.select('.increment').events('click').map(ev => +1)
);

Here we are merging two streams into a single stream called action$ (it's convention to suffix the names of variables that contain streams with a $). One is a stream of clicks on the decrement button and the other on the increment button. We are mapping those two events to the numbers -1 and +1, respectively. At the end of the merge, the action$ stream should look like this:

----(-1)-----(+1)------(-1)------(-1)------

The next stream is count$. It's created like this:

const count$ = action$.fold((acc, x) => acc + x, 0);

The fold function is great for this purpose. It accepts two arguments, accumulate and seed. seed is emitted first, until an event comes. The next event is combined with the seed based on the accumulate function. It's basically reduce() for streams. Our count$ stream receives 0 as the starting value, then on every new value from the action$ stream, we sum it with the current value in the count$ stream. At the end, to make the whole circle work, we need to call the run function below main. The last thing is to create the virtual DOM. Here's the code that does that:

const vdom$ = count$.map(count =>
  div([
    button('.decrement', 'Decrement'),
    button('.increment', 'Increment'),
    p('Counter: ' + count)
  ])
);

We are mapping the data in the count$ stream and returning a virtual DOM for each item in the stream. The virtual DOM contains one main div wrapper, two buttons, and a paragraph. As you see, Cycle.js is using JavaScript functions to work with the DOM, but JSX can also be implemented. At the end of the main function, we are returning our virtual DOM:

return {
  DOM: vdom$,
};

We are passing our main function and a DOM driver that's connected to the div with the ID main and getting the stream of events from that div. We are closing our circle and making the perfect Cycle.js app. This is how it works.

That's it! This is how you work with DOM streams. If you want to see how HTTP streams work in Cycle.js, I've written an article about that on my blog. I've pushed all the code to a GitHub repo. Check it out and try to run it on your local machine.

Why Am I Switching from React to Cycle.js?

Now that you understand the basic concepts of Reactive programming and have seen a simple example in Cycle.js, let's talk about why I will be using it for my next project. The biggest problem I've had when designing web apps is how to handle large codebases and large amounts of data coming from different sources. I am a fan of React and I've used it in many projects, but React didn't solve my problems. When it comes to rendering some data and changing app state, React works very well. In fact, the whole component methodology is amazing and it really helped me to write better, testable, and maintainable code. But something was always missing there. Let's see some pros and cons of using Cycle.js over React.

Pros

1. Big codebases

React has some issues when your app becomes big. Imagine that you have 100 components inside 100 containers and each of them has its own styles, functionality, and tests.
That's a lot of lines of code inside many files inside many directories. You see what I mean here; it's hard to navigate through these files. Cycle.js helps us here. It's designed to handle large codebases by splitting the project into independent components that can be isolated and tested without side effects. No Redux, no side effects, everything is a pure data stream.

2. Data flow

The biggest problem I had in React is data flow. React is not designed with a data flow in mind; it's not in React's core. Developers have tried to solve this, and we have many libraries and methodologies that try to deal with this issue. The most popular is Redux. But it's not perfect. You need to spend some time to configure it and need to write code that will just work with the data flow. With Cycle.js, the creator wanted to create a framework that will take care of data flow because you shouldn't have to think about it. You just need to write functions that do some operations with data and Cycle.js will handle everything else.

3. Side effects

React has issues with handling side effects. There is no standardized way to work with side effects in React apps. There are a lot of tools that help you deal with it, but that also takes some time to set up and learn how to use them. The most popular ones are redux-saga, redux-effects, redux-side-effects, and redux-loop. You see what I mean? There's a lot of them... You need to choose the library and implement it in your codebase. Cycle.js doesn't require that. Simply include the driver you want (DOM, HTTP or some other) and use it. The driver will send the data to your pure function, you can change it and send it back to the driver that will render it or do something else. Most importantly, it's standardized; that's what comes with Cycle.js and you don't need to depend on a third party library. So simple!

4. Functional programming

And last but not least, functional programming. React creators claim that React uses functional programming but that's not really true. There is a lot of OOP, classes, and use of the this keyword that can give you headaches if not used properly... Cycle.js is built with the functional programming paradigm in mind. Everything is a function that doesn't depend on any outer state. Also, there are no classes or anything like that. That's easier to test and maintain.

Cons

1. Community

Currently, React is the most popular framework and it's used everywhere. Cycle.js isn't. It's still not very popular and this can be a problem when you come across some unplanned situation and can't find a solution to an issue in your code. Sometimes you can't find an answer on the internet and you are left on your own. This is not a problem when you work on some side project and have plenty of free time, but what happens when you work in a company with a tight deadline? You will lose some time debugging your code. But this is changing. Many developers are starting to use Cycle.js and talking about it, about problems, and working together on solving them. Cycle.js also has good documentation with a lot of examples and, so far, I haven't had any complicated problem that was too hard to debug.

2. Learning a new paradigm

Reactive programming is a different paradigm and you will need to spend some time getting used to how things are done. After that, everything will be easy, but if you have a tight deadline then spending time learning new stuff can be a problem.

3. Some apps don't need to be reactive

Yeah, some apps really don't need to be reactive. Blogs, marketing websites, landing pages and other static websites with limited interactivity don't need to be reactive. There is no data that goes through the app in real time, and not so many forms and buttons. Using a reactive framework would probably slow us down on these websites. You should be able to assess whether a web app really needs to use Cycle.js.

Conclusion

An ideal framework should help you focus on making and delivering features and should not force you to write boilerplate code. I think that Cycle.js has shown us that this is really possible and it forces us to look for better ways to write our code and deliver features. But remember, nothing is perfect and there is always room for improvement. Have you tried reactive programming or Cycle.js? Have I convinced you to give it a try? Let me know what you think in the comments!

This article was peer reviewed by Michael Wanyoike. Thanks to all of SitePoint's peer reviewers for making SitePoint content the best it can be!
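The counter's merge-and-fold pipeline can be restated without xstream to show that `fold` is just an accumulating scan with a seed. Here is a small Python sketch, with a hand-written list standing in for the merged action$ stream:

```python
# Plain-Python restatement of the counter pipeline: clicks become +1/-1
# values, and fold((acc, x) => acc + x, 0) is a running sum that first
# emits its seed. The click sequence below is invented for illustration.

from itertools import accumulate

# A merged "action$" stream: decrement and increment clicks as numbers.
actions = [-1, +1, +1, -1, +1]

# fold: emit the seed, then each running total as events arrive.
count_stream = [0] + list(accumulate(actions))

print(count_stream)  # [0, -1, 0, 1, 0, 1]
```

Each value in `count_stream` corresponds to one virtual-DOM render of the counter paragraph, which is exactly what the `count$.map(...)` step consumes.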
http://brianyang.com/why-im-switching-from-react-to-cycle-js-sitepoint/
Linux entry for a given mac address, it will broadcast the frame to all ports except the one where it received the frame from. There are three main configuration subsystems to do bridges: - ioctl: This interface is used to create/destroy bridges and add/remove interfaces to/from a bridge. - sysfs: Management of bridge and port specific parameters. - netlink: Asynchronous queue based communication that uses AF_NETLINK address family, can also be used to interact with bridge. In this article, we only talk about ioctl. Creating a bridge Bridge can be created using ioctl command SIOCBRADDBR; as can be seen by brctl utility provided by bridge-utils. [email protected]:~$ sudo strace brctl addbr br1 execve("/sbin/brctl", ["brctl", "addbr", "br1"], [/* 16 vars */]) = 0 ... ioctl(3, SIOCBRADDBR, 0x7fff2eae9966) = 0 ... Note that there is no device at this point to handle the ioctl command, so the ioctl command is handled by a stub method: br_ioctl_deviceless_stub, which in turn calls br_add_bridge. This method calls alloc_netdev, which is a macro that eventually calls alloc_netdev_mqs. br_ioctl_deviceless_stub |- br_add_bridge |- alloc_netdev |- alloc_netdev_mqs // creates the network device |- br_dev_setup // sets br_dev_ioctl handler alloc_netdev also initializes the new netdevice using the br_dev_setup. This also includes setting up the bridge specific ioctl handler. If you look at the handler code, it handles ioctl command to add/delete interfaces. int br_dev_ioctl(struct net_device *dev, struct ifreq *rq, int cmd) { ... switch(cmd) { case SIOCBRADDIF: case SIOCBRDELIF: return add_del_if(br, rq->ifr_ifindex, cmd == SIOCBRADDIF); ... } .. } Adding an interface As it can be seen in br_dev_ioctl, bridge can be created using ioctl command SIOCBRADDIF. To confirm: [email protected]:~$ sudo strace brctl addif br0 veth0 execve("/sbin/brctl", ["brctl", "addif", "br0", "veth0"], [/* 16 vars */]) = 0 ... # gets the index number of virtual ethernet device. 
ioctl(4, SIOCGIFINDEX, {ifr_name="veth0", ifr_index=5}) = 0 close(4) # add the interface to bridge. ioctl(3, SIOCBRADDIF, 0x7fff75bfe5f0) = 0 ... br_add_if The br_add_if method creates and sets up the new interface/port for the bridge by allocating a new net_bridge_port object. The object initialization is particularly interesting, as it sets the interface to receive all traffic, adds the network interface address for the new interface to the forwarding database as the local entry and attaches the interface as the slave to the bridge device. /* Truncated version */ int br_add_if(struct net_bridge *br, struct net_device *dev) { struct net_bridge_port *p; /* Don't allow bridging non-ethernet like devices */ ... /* No bridging of bridges */ ... p = new_nbp(br, dev); ... call_netdevice_notifiers(NETDEV_JOIN, dev); err = dev_set_promiscuity(dev, 1); err = kobject_init_and_add(&p->kobj, &brport_ktype, &(dev->dev.kobj), SYSFS_BRIDGE_PORT_ATTR); ... err = netdev_rx_handler_register(dev, br_handle_frame, p); /* Make entry in forwarding database*/ if (br_fdb_insert(br, p, dev->dev_addr, 0)) ... ... } Some things worth noting in br_add_if: - Only ethernet like devices can be added to bridge, as bridge is a layer 2 device. - Bridges cannot be added to a bridge. - New interface is set to promiscuous mode: dev_set_promiscuity(dev, 1) The promiscuous mode can be confirmed from kernel logs. [email protected]:~$ grep -r 'promiscuous' /var/log/kern.log precise64 kernel: [ 5185.751666] device veth0 entered promiscuous mode Finally, br_add_if method calls netdev_rx_handler_register, that sets the rx_handler of the interface to br_handle_frame After this method finishes, you have an interface (or port) in bridge. Frame Processing Frame processing starts with device-independent network code, in __netif_receive_skb which calls the rx_handler of the interface, that was set to br_handle_frame at the time of adding the interface to bridge. 
The br_handle_frame does the initial processing and any address with prefix 01-80-C2-00-00 is a control plane address, that may need special processing. From the comments in br_handle_frame: /* * See IEEE 802.1D Table 7-10 Reserved addresses * * Assignment Value * Bridge Group Address 01-80-C2-00-00-00 * (MAC Control) 802.3 01-80-C2-00-00-01 * (Link Aggregation) 802.3 01-80-C2-00-00-02 * 802.1X PAE address 01-80-C2-00-00-03 * * 802.1AB LLDP 01-80-C2-00-00-0E * * Others reserved for future standardization */ In the method, note that stp messages are either passed to upper layers or forwarded if STP is enabled on the bridge or disabled respectively. Finally if a forwarding decision is made, the packet is passed to br_handle_frame_finish, where the actual forwarding happens. Here’s the highly truncated version of br_handle_frame_finish: /* note: already called with rcu_read_lock */ int br_handle_frame_finish(struct sk_buff *skb) { struct net_bridge_port *p = br_port_get_rcu(skb->dev); ... /* insert into forwarding database after filtering to avoid spoofing */ br = p->br; br_fdb_update(br, p, eth_hdr(skb)->h_source, vid); if (p->state == BR_STATE_LEARNING) goto drop; /* The packet skb2 goes to the local host (NULL to skip). */ skb2 = NULL; if (br->dev->flags & IFF_PROMISC) skb2 = skb; dst = NULL; if (is_broadcast_ether_addr(dest)) skb2 = skb; else if (is_multicast_ether_addr(dest)) { ... } else if ((dst = __br_fdb_get(br, dest, vid)) && dst->is_local) { skb2 = skb; /* Do not forward the packet since it's local. */ skb = NULL; } if (skb) { if (dst) { br_forward(dst->dst, skb, skb2); } else br_flood_forward(br, skb, skb2); } if (skb2) return br_pass_frame_up(skb2); out: return 0; ... } As you can see in above snippet of br_handle_frame_finish, - An entry in forwarding database is updated for the source of the frame. - (not in the above snippet) If the destination address is a multicast address, and if the multicast is disabled, the packet is dropped. 
Otherwise, the message is received using br_multicast_rcv.

- If promiscuous mode is on, the packet will be delivered locally, irrespective of the destination.
- For a unicast address, we try to determine the outgoing port using the forwarding database (__br_fdb_get).
- If the destination is local, then skb is set to NULL, i.e. the packet will not be forwarded.
- If the destination is not local, then depending on whether an entry was found in the forwarding database, the frame is either forwarded (br_forward) or flooded to all ports (br_flood_forward).
- Finally, the packet is delivered locally (br_pass_frame_up) if needed, i.e. if the current host is the destination or the net device is in promiscuous mode.

br_forward either clones the frame and then delivers it (when it must also be delivered locally, by calling deliver_clone), or directly forwards it to the intended destination interface by calling __br_forward. br_flood_forward forwards the frame on each interface by iterating through the port list in the br_flood method.

Bridges can be used to create various network topologies, and it's important to understand how they work. I have seen bridges used with containers, where together with veth devices they provide networking in network namespaces. In fact, the default networking in Docker is provided using a bridge.

This is all for now; hopefully this was useful. It was based mainly on the excellent paper "Anatomy of a Linux bridge" and my own reading of the Linux kernel code. I'd appreciate any feedback or comments. Ah, the wonderful world of bridges.

Update July 12, 2017: Added a third way to communicate with bridge. Thanks to @vbernat from the comments.

References:
[1] Anatomy of a Linux bridge
[2] Understanding Linux Networking Internals - Christian Benvenuti
[3] Linux kernel v3.10.105 source code
https://www.tefter.io/bookmarks/87143/readable
Subject: Re: [MTT users] FW: ALPS modifications for MTT
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2008-08-25 19:49:06

Committed in r1222.

If you want a directory in ompi-tests/ to save your ornl scripts and ini files (perhaps analogous to ompi-tests/cisco and ompi-tests/iu), let us know.

On Aug 20, 2008, at 10:38 AM, Matney Sr, Kenneth D. wrote:

> Hi Jeff,
>
> The trunk needs an additional patch to make ALPS work (without
> complaints). I have attached it hereto. Also, I will send along the
> ornl.ini script when I get it finalized. This will show how we do Cray
> XT builds, run, etc.
> --
> Ken
>
> -----Original Message-----
> From: mtt-users-bounces_at_[hidden]
> [mailto:mtt-users-bounces_at_[hidden]] On Behalf Of Jeff Squyres
> Sent: Thursday, August 14, 2008 10:47 AM
> To: General user list for the MPI Testing Tool
> Subject: Re: [MTT users] FW: ALPS modifications for MTT
>
> BTW, I committed this patch to the MTT trunk.
>
> I feel a little sheepish; I should have told you to use the trunk
> these days, not the release branch (I know the wiki specifically says
> otherwise). We really need to finally make a release out of what is
> on the trunk -- it's much more advanced than what is on the release
> branch (look at the CHANGES file in the top-level dir to see what has
> changed since the release branch).
>
> The Cisco MTT files in SVN are for the trunk; it's possible that the
> features that the release branch doesn't understand will just be
> ignored, but I haven't tried this in a long time.
>
> On Aug 14, 2008, at 10:35 AM, Jeff Squyres wrote:
>
>> This patch looks good to me.
>>
>> I'll commit. If you want to do any more work on MTT, perhaps ORNL
>> can add you to its "Schedule A" form for the Open MPI Third Party
>> Contribution form (it's very easy to amend Schedule A -- doesn't
>> require any authoritative signatures), we could get you an MTT SVN
>> account and you could commit this stuff directly.
>>
>> On Aug 14, 2008, at 10:24 AM, Matney Sr, Kenneth D. wrote:
>>
>>> Hi,
>>>
>>> When running MTT on the Cray XT3/XT4 machines, I found that MTT
>>> does not contain any support for ALPS. As a result, it always
>>> executes mpirun with "-np 1". I patched lib/MTT/Values/Functions.pm
>>> with the following to overcome this:
>>>
>>> -----Original Message-----
>>> From: Matney Sr, Kenneth D.
>>> Sent: Wednesday, August 13, 2008 5:57 PM
>>> To: Shipman, Galen M.
>>> Cc: Graham, Richard L.
>>> Subject: FW: ALPS modifications for MTT
>>>
>>> --- Functions-bak.pm    2008-08-06 14:31:26.256538000 -0400
>>> +++ Functions.pm        2008-08-13 17:43:40.273641000 -0400
>>> @@ -602,6 +602,8 @@
>>>      # Resource managers
>>>      return "SLURM"
>>>          if slurm_job();
>>> +    return "ALPS"
>>> +        if alps_job();
>>>      return "TM"
>>>          if pbs_job();
>>>      return "N1GE"
>>> @@ -638,6 +640,8 @@
>>>      # Resource managers
>>>      return slurm_max_procs()
>>>          if slurm_job();
>>> +    return alps_max_procs()
>>> +        if alps_job();
>>>      return pbs_max_procs()
>>>          if pbs_job();
>>>      return n1ge_max_procs()
>>> @@ -670,6 +674,8 @@
>>>      # Resource managers
>>>      return slurm_hosts()
>>>          if slurm_job();
>>> +    return alps_hosts()
>>> +        if alps_job();
>>>      return pbs_hosts()
>>>          if pbs_job();
>>>      return n1ge_hosts()
>>> @@ -1004,6 +1010,70 @@
>>>
>>> #--------------------------------------------------------------------------
>>>
>>> +# Return "1" if we're running in an ALPS job; "0" otherwise.
>>> +sub alps_job {
>>> +    Debug("&alps_job\n");
>>> +
>>> +# It is true that ALPS can be run in an interactive access mode; however,
>>> +# this would not be a true managed environment. Such only can be
>>> +# achieved under a batch scheduler.
>>> +    return ((exists($ENV{BATCH_PARTITION_ID}) &&
>>> +             exists($ENV{PBS_NNODES})) ? "1" : "0");
>>> +}
>>> +
>>> +
>>> #--------------------------------------------------------------------------
>>> +
>>> +# If in an ALPS job, return the max number of processes we can run.
>>> +# Otherwise, return 0.
>>> +sub alps_max_procs {
>>> +    Debug("&alps_max_procs\n");
>>> +
>>> +    return "0"
>>> +        if (!alps_job());
>>> +
>>> +# If we were not running under PBS or some other batch system, we would
>>> +# not have the foggiest idea of how many processes mpirun could spawn.
>>> +    my $ret;
>>> +    $ret=$ENV{PBS_NNODES};
>>> +
>>> +    Debug("&alps_max_procs returning: $ret\n");
>>> +    return "$ret";
>>> +}
>>> +
>>> +
>>> #--------------------------------------------------------------------------
>>> +
>>> +# If in an ALPS job, return the hosts we can run on. Otherwise, return
>>> +# "".
>>> +sub alps_hosts {
>>> +    Debug("&alps_hosts\n");
>>> +
>>> +    return ""
>>> +        if (!alps_job());
>>> +
>>> +# Again, we need a batch system to achieve management; return the uniq'ed
>>> +# contents of $PBS_HOSTFILE. Actually, on the Cray XT, we can return the
>>> +# NIDS allocated by ALPS; but, without launching servers to other service
>>> +# nodes, all communication is via the launching node and NIDS actually
>>> +# have no persistent resource allocated to the user. That is, all file
>>> +# resources accessible from a NID are shared with the launching node.
>>> +# And, since ALPS is managed by the batch system, only the launching node
>>> +# can initiate communication with a NID. In effect, the Cray XT model is
>>> +# of a single service node with a varying number of compute processors.
>>> +    open (FILE, $ENV{PBS_NODEFILE}) || return "";
>>> +    my $lines;
>>> +    while (<FILE>) {
>>> +        chomp;
>>> +        $lines->{$_} = 1;
>>> +    }
>>> +
>>> +    my @hosts = sort(keys(%$lines));
>>> +    my $hosts = join(",", @hosts);
>>> +    Debug("&alps_hosts returning: $hosts\n");
>>> +    return "$hosts";
>>> +}
>>> +
>>> +
>>> #--------------------------------------------------------------------------
>>> +
>>> # Return "1" if we're running in a PBS job; "0" otherwise.
>>> sub pbs_job {
>>>     Debug("&pbs_job\n");
>>>
>>>
>>> --
>>> Ken
>>>
>>> _______________________________________________
>>> mtt-users mailing list
>>> mtt-users_at_[hidden]
>>>
>>
>> --
>> Jeff Squyres
>> Cisco Systems
>>
>> _______________________________________________
>> mtt-users mailing list
>> mtt-users_at_[hidden]
>>
>
> --
> Jeff Squyres
> Cisco Systems
>
> _______________________________________________
> mtt-users mailing list
> mtt-users_at_[hidden]
>
> <kmymtt2.patch>_______________________________________________
> mtt-users mailing list
> mtt-users_at_[hidden]
>

--
Jeff Squyres
Cisco Systems
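[Editor's note: the detection logic in Ken's patch boils down to two environment-variable checks plus a uniq-and-sort over $PBS_NODEFILE. The same idea, sketched in Python rather than MTT's Perl (function names here are mine, not MTT's):]

```python
import os

def alps_job(env=os.environ):
    # ALPS under a batch scheduler exports both of these variables
    return "BATCH_PARTITION_ID" in env and "PBS_NNODES" in env

def alps_hosts(nodefile_lines):
    # PBS_NODEFILE lists one host per line, repeated once per slot;
    # the patch uniq's and sorts them, then joins with commas
    return ",".join(sorted(set(line.strip() for line in nodefile_lines)))
```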
http://www.open-mpi.org/community/lists/mtt-users/2008/08/0609.php
I am using the "logging" module - in a simple manner - to log activities of a daemon process to a file. The process is expected to be running for days, and I would like the log filename to be different for each date - dproc.yyyy-mm-dd.log - e.g.

dproc.2013-03-24.log
dproc.2013-03-25.log
dproc.2013-03-26.log
etc...

In my code I periodically call a set-logging function:

import logging
... etc ...

def setLogging( logdir ):
    """ Log file is <logdir>/dproc.yyyy-mm-dd.log """
    today = datetime.date.today()
    log_dt = str(today)
    logfile = logdir + "/dproc." + log_dt + ".log"
    logging.basicConfig(filename=logfile, format='%(asctime)s %(message)s', level=logging.INFO)

However this does NOT change when the date changes. I now understand that logging.basicConfig() has no effect once a default handler is set up and active.

Questions: Can I change the filename in some way - and if so, how? A supplementary question: should I be using another approach? (I'm not particularly keen on setting up log rotation unless it can be by date.)

Thanks
http://www.python-forum.org/viewtopic.php?f=6&t=1404&p=2088
In these days of cheap domains, it's often desirable to own multiple domains for a single website. You've probably got each of the .com, .net and .org domain names, along with a country-specific domain. You want each of these to present exactly the same website to the world, but good design says that each web page should have one, and exactly one, URL. So what's the best way to serve this up without having an Apache config for each domain?

I've come across this whilst building a website recently whereby the primary domain is mysite.com.au, while I've got secondary domains in other popular TLDs to try and reduce domain squatting and the like.

One option is to configure an Apache virtual host for each domain, which serves up a static redirect. Another is to have Apache aliases for the main host, so each of the domains serves up the same content. This works, but leaves each page with multiple URLs.

My solution is to set up Apache aliases, and use a Django middleware to identify any requests that aren't for the main domain name, redirecting them as they're found. The middleware code I use is as follows:

from django.http import HttpResponsePermanentRedirect

class ValidateHostMiddleware(object):
    """
    Redirect all requests for a domain other than mysite.com.au
    """
    def process_request(self, request):
        if not request.META['HTTP_HOST'].endswith('mysite.com.au'):
            return HttpResponsePermanentRedirect(
                'http://mysite.com.au%s' % request.path)

This is nice and simple, and a useful way of having multiple domains (possibly increasing your virtual 'geographical spread') but keeping your search-engine optimisation efforts intact.

Update: Thanks to a note from Brice Carpenter, I've updated the code to do a permanent HTTP redirect (code 301) rather than a temporary (302) redirect. I've also added code from my live environment that sends the visitor to their request path on the new domain, so a hit on any page of a secondary domain lands on the same path at the primary domain.
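For the middleware to run it has to be registered in the project's settings. A sketch, assuming the class is saved as mysite/middleware.py (that dotted path is my assumption, not from the post; Django of this era used the MIDDLEWARE_CLASSES setting):

```python
# settings.py (fragment)
MIDDLEWARE_CLASSES = (
    'mysite.middleware.ValidateHostMiddleware',  # hypothetical dotted path
    'django.middleware.common.CommonMiddleware',
    # ... the rest of the middleware stack ...
)
```

Placing it early in the tuple means secondary-domain requests are redirected before any view logic runs.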
https://www.rossp.org/blog/django-multiple-aliases-single-website/
search and sort techniques in java

Hi, i attended an interview recently... they asked to write all searching and sorting technique codes in java.. i... of all these searchings and sortings in java... please help...
Regards, Anugnya

Insertion Sort - Java Beginners

Hello rose india java experts. If you don't mind, can you help me: what is the code for Insertion Sort and Selection Sort that displays...

public class InsertionSort {
    public static void sort(String[] array) {
        int

Java Dictionary - Sort, Extract

I need to make an English (other language) dictionary by collecting english/other language words in a text file; then using that text file I need to sort the words collected in alphabetical order.

sort function - JSP-Servlet

How to sort a string variable in java?

Hi friend, please give details and full source code to solve the problem. For information on java visit
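[Editor's note: the InsertionSort snippet above is cut off by the forum's preview. The complete algorithm is short; a sketch in Python (the thread asks about Java, but the logic carries over line for line):]

```python
def insertion_sort(items):
    """Sort a list in place: grow a sorted prefix, inserting each new
    element into position by shifting larger elements to the right."""
    for i in range(1, len(items)):
        key = items[i]
        j = i - 1
        while j >= 0 and items[j] > key:
            items[j + 1] = items[j]   # shift right to open a slot
            j -= 1
        items[j + 1] = key
    return items
```

The same comparison-and-shift loop works for strings as well as numbers, which also covers the dictionary-sorting question above.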
http://www.roseindia.net/discussion/18536-Extra-Storage-Merge-Sort-in-Java.html