text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91
values | source stringclasses 1
value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
0
I have a string in Python that is comprised of code in C/C++ syntax. I would like to use regular expressions to search through this string and replace an expression of the form 'pow(a,b)' with 'a^b'
I've attempted to do this, but my code does not seem to be working. What would I have to change to be able to replace the expression?
import re def main(): expr = '5*pow(a,b) + 3*pow(b,c) + 3*a + 4*b' print re.sub('pow(.,.)' , '.^.', expr) if __name__ == "__main__": main()
The output of this example program is
5*pow(a,b) + 3*pow(b,c) + 3*a + 4*b
but I would like the output to be
5*a^b + 3*b^c + 3*a + 4*b
Edited 6 Years Ago by nkinar: Clarification | https://www.daniweb.com/programming/software-development/threads/314522/using-regular-expressions-to-replace-pow-a-b-with-a-b | CC-MAIN-2016-50 | refinedweb | 139 | 74.53 |
The attached program is a prototype for a autodetection function to be put in the choose_medium.c file of dbootstrap. It seems to handle the "new" 2.2 IDE subsystem and SCSI devices since at least 2.0; I've tested it on m68k, alpha and ia32 platforms. Since the IDE system treats DVD-ROMs as CD-ROMs, they are properly detected (though it's possible /proc/ide/hd?/media may report "dvd-rom" or something else in 2.3+; I haven't checked). Note the use of a bitmask for the IDE detection; SCSI devices are simply counted since they are numbered 0..n at boot time. Non-IDE and non-SCSI CD-ROMs aren't handled at all. My suggested use would be for the routine to "mark" the list presented by choose_cdrom(), although (presumably) if only one matching device is detected, it could bypass the menu altogether. Anyway, it's packaged as a standalone program so people can test it and let me know if it fails to detect a CD-ROM it should, or detects something wrongly. Chris -- ============================================================================= | Chris Lawrence | Your source for almost nothing of value: | | <quango@watervalley.net> | | | | | | Grad Student, Pol. Sci. | Visit the Lurker's Guide to Babylon 5: | | University of Mississippi | <*> <*> | =============================================================================
#include <stdio.h> #include <sys/stat.h> #include <glob.h> struct cdroms_detected { int scsi_count; int ide_mask; }; static struct cdroms_detected autodetect_cdrom(void) { FILE *fp; struct stat st; glob_t gl; static struct cdroms_detected cdr = {0,0}; size_t sz=100; char *buf; buf = (char *)malloc(sz); if(!stat("/proc/ide", &st) && S_ISDIR(st.st_mode)) { if(!glob("/proc/ide/hd?/media", 0, NULL, &gl)) { int i; for( i = 0; i < gl.gl_pathc; i++ ) { if(fp = fopen(gl.gl_pathv[i], "r")) { if(getline(&buf, &sz, fp) >= 0 && !strncmp(buf, "cdrom", 5)) { printf("CD-ROM: /dev/hd%c\n", gl.gl_pathv[i][12]); cdr.ide_mask |= 1<< (gl.gl_pathv[i][12] - 'a'); } /* printf("%s: %s", gl.gl_pathv[i], buf); */ fclose(fp); } } globfree(&gl); } } if(!stat("/proc/scsi/scsi", &st) && S_ISREG(st.st_mode)) { if(fp = fopen("/proc/scsi/scsi", "r")) { while(!feof(fp) && getline(&buf, &sz, fp) >= 0) { /* fputs(buf, stdout); */ if(!strncmp(buf, " Type: CD-ROM", 16)) { printf("CD-ROM: /dev/scd%d\n", cdr.scsi_count); cdr.scsi_count++; } } } } return cdr; } int main(void) { autodetect_cdrom(); } | https://lists.debian.org/debian-boot/1999/12/msg00349.html | CC-MAIN-2015-40 | refinedweb | 376 | 62.44 |
trackback fixed
Thanks to Bob Wyman for pointing out my trackback was busted. Normal transmission resumed.
" />
« January 2004 | Main | March 2004 »
Thanks to Bob Wyman for pointing out my trackback was busted. Normal transmission resumed.
You'd think speccing a feed format would be straightforward, but the way things are going on the atom-syntax list over the last few days, Atom will have make a best-effort to address the versioning and naming problems to proceed.
Now, where's the cache-invalidation thread at?
As for whether the Sun's approach of just providing interfaces instead of concrete for XML parsing was such a great thing in Java I'd claim that it's been hit and miss. - Dare Obasanjo, on XML factory loading
I think we'd agree in Java-land that cross-platform APIs have been a mistake (except perhaps for SAX). As for the whole factory and dynamic loading model for raw parsers, well that can get extremely messy in Java. Most of us have run into some form of Xerces hell at some point. To be fair that is usually a problem induced by the Java classloading architecture (what architecture?) rather than XML APIs. I suspect that the .NET loading model isn't much better, but .NET has the luxury of having fewer things to load as Dare points out:
The funny thing is that even if we shipped functionality where we looked in the registry or in some config file before figuring out what XML parser to load it's not as if there are an abundance of third party XML parsers targetting the .NET Framework in the first place.
Really, who's going to use something other than System.Xml/MSXML?
XML support for Java however is fantastic, after you muddle through the options. To name just a few, SAX2, XOM, JDOM, XmlPull, XmlBeans and Jaxen are all really very good libraries (and open source). To be fair to Sun, the JAX* set of APIs had to evolve piecemeal and thus are not always consistent, coherent or without mistakes - a case of putting the wheels on a moving car. .NET APIs have had the luxury of coming a bit later.
All in all, I see the use of interfaces or not as a red-herring here. It comes down to what value cross-platform APIs have (if any), how dynamic implementation loading is managed in a static context and whether you actually need multiple implementations in the first place.
I got into an email conversation with Bob Wyman a while back about the PubSub feed aggregator. With his permission I'm blogging about the PubSub architecture and internal processing model.
Bob asked that I don't paint a negative picture of being anti-XML and I hope I haven' t done that - PubSub doesn't strike me as anything other than great service. For those of you that aren't XML obsessives, Bob has taken some heat in the XML community over the last year for promoting binary infoset approaches. So when I asked if he was using binfosets, he responded:
It's not binfoset exactly. What we've got is set of machines that talk to the outside world and convert XML to and from the ASN.1 PER binary encodings that we use internally. (We use OSS Nokalva's ASN.1 tools) The result is great compression as well as extremely fast parsing. In an application like ours, we have to do everything we can to optimize throughput and XML while XML is really easy for people to generate, it just takes too much resource to push around and parse. Currently, we're monitoring over 1 million blogs. Since we're still pretty new, we've still got fewer than 10,000 subscriptions, so there is no real load on the system. We're usually matching at a rate of about 2.5 to 3 billion matches per day and the CPU on our matching engine is basically idling. (i.e. 3-5% most of the time). This is, of course, in part due to the work we put into optimizing the real-time matching algorithm (we need to match *every* subscription against every new blog entry). However, it is also in part because the matching engine never needs to do the string parsing that XML would require.
It's worth noting that all this is internal to PubSub; the public server I/O is XML.
On XML v Binfosets and the processing model:
My comments should not be read as "anti-XML". I'm simply pointing out a method of working with XML in a high volume environment. Just as people will often convert XML to DOM trees or SAX event streams when processing within a single process or box, what we do is convert to ASN.1 PER when processing within our "box." The fact that our "box" is made up of multiple boxes is, architecturally, no different from what would be the case if we had one thread parsing XML and another working with the DOM or binfoset that resulted from the parse. Our "threads" are running on different machines connected via an internal high-speed network and we pass data between the "threads" as ASN.1 PER-encoded PDUs -- not DOM trees or SAX events.
On PubSub metrics:
As it turns out, the problem of monitoring blog traffic is much easier than it might look. Imagine, if you will, that every one of 1 million blogs was updated twice a day -- giving 2 million updates (much more than what really happens). That is still only an average of 23 updates per second. 23 updates per second isn't a tremendous amount of traffic to handle. It is likely that even an "all XML" service could handle such load although such a system would have much less "headroom" than our system does and would need to scale to multiple matching engines sooner than we will. But, hardware is cheap... For most people, buying more hardware will be more cost effective than going through all the complexity and algorithm tuning that we've had to do. We spend a great deal of time working on the throughput since we expect to be getting much higher volumes of traffic from non-blog sources in the future.
The hardware statement is interesting; it seems to align with the Google view of using commodity boxes while keeping the smarts into software.
On scalability:
There are certainly many examples of XML based systems that handle reasonable amounts of traffic with no problem. Thus, it is likely that there aren't going to be many applications that require the kind of optimization effort that we're forced to make. Nonetheless, it should be recognized that there comes a point where it becomes wise to do something other than process XML directly at all points in a system.
On the value of XML:
I'd also like to make sure you know that there is no question about my appreciation of the strengths of XML. There is no question that if we required all our inputs to be in anything other than XML, we would have virtually no input to work with. XML is so easy for people to generate that the net is literally overflowing with the stuff and there is still much more to come. It may be malformed, filled with namespaced additions (which is often no more than noise...), etc. but we can still manage to make sense of most of what we receive. Things would be cleaner if all data came to us in more strictly defined formats, but it is better to get messy data then no data.
On future interfaces into PubSub:
We will, in fact, be asking some high volume publishers to send us their data using ASN.1 encodings. However, the encodings we ask for will be directly mappable to XML schemas and XML will always be considered a completely interchangeable encoding format. In this we try to stay encoding-neutral. Also, we are already seeing that more compact encodings may be appropriate when delivering data to devices that are on the end of low-bandwidth connections or that have resource requirements that demand ease of parsing. Also, we'll be sending ASN.1 encoded stuff two and from clients that we write ourselves (while allowing XML to be used if one of those clients talks to someone else's XML based server.) Thus, anyone who wants to view our system as "XML only" will be able to do so and anyone who wishes to treat it like an ASN.1 based system will also be able to do so. We will be, as I said before, encoding-neutral.
The main thing I take from Bob's explanations is that PubSub, along with being a fine service, is doing a good job of separating interoperability issues from performance ones, by sticking to XML at the system/web boundary and leveraging ASN.1 PER internally. That helps reduce XML-Binfoset controversy to a kerfuffle. PubSub not the only one working along these lines - Antartica (Tim Bray is on the board) also consumes and produces XML, but internally converts the markup to structures optimized for the algorithms required for generating visual maps. Similarity Systems Athanor allows you to describe data matching plans in XML, but again is converting to optimized data structures when it comes to making matches. The key mistake in interop terms seems to be wanting to distort XML to fit the binary/api worldview or replace it wholesale at the system edges.
I'm curious how RDF honestly helps in search. Watching RSS, most people generate crap feeds. Honestly. Expecting people to magically generate good RDF descriptions of their sites is almost laughable. And the obvious gambit of writing some ai pixy dust to automatically generate RDF from someone's ramblings is enough to keep me chuckling for most of the afternoon.
Honestly yes, I am looking to build tools will generate the RDF (indexes and metadata). I want to scrape RDF metadata from structured data, analogous to the way spiders today scrape indices from unstructured data. It's much the same issue, but I figure the signal to noise ratio will be better in the former - at least I don't see how it could be worse. It already looks like one of the first things I'll have to do is recast http server logs and syslog as RDF triples.
Part of this project is about exercising RDF in a domain I understand. After it, I expect to know whether RDF has value outside academia and standards worlds and what that value is. I was a huge huge fan of the technology, even serving on the working group for the best part of year, before becoming deeply disenchanted with where that process and the community at large was going (models, models, models) to the point where I felt I had little to contribute other than ranting from the sidelines. For the record, I'm still a fan, on my third reading of Shelly's book, am waiting for Danny's, and despite my opinions on the process, still have enormous respect for the work RDFCore has done. But I take a strong view that RDF metadata should layer on top of statistical and automated magma, not manual data entry; that is pixie dust. This hetereogenity is what we know works in robotics, reinforcement learning* and hybrid AI or for any technique that has to live outside a closed environment. So I see much less need for the tidy substrate and attention to good modelling the current RDF model-think presupposes. I also think the semweb cake is missing or willfully ignoring a key layer that the search engines are thriving in - the environmental noise of the web.
It's not metacrap, it's meta living on crap.
As for the AI pixie dust, I don't see computing RDF from structured data being any more pixiesh that computing pagerank from a page or computing a spam filter from spam (did I say I like hybrid techniques? :). The truth is, I'm at least as skeptical as Brett, but it's like being skeptical about what a computer can do in light of the halting problem - yes there's a hard limit, but you can still do something useful before you get there.
* and will be needed for IBM's autonomic computing feedback loops, but I digress...
[roni size: heroes]
Professor David Parnas, one of the world leaders in software engineering, has formally joined the University thanks to a major grant funded by Science Foundation Ireland (SFI).
Awesome. So much better David Parnas, than the fool of a Took who taught me down there (and killed my interest in programming for many years afterwards).
[opm: el capitan]
Web search blows goats. Local search totally blows goats.
For the web case: we need to decentralize search by passing queries around from site to site (trackback chains, mod-pubsub, or hack the bejeesus out of mod_backhand) and allowing sites to generate metadata locally and publish it instead of having spiders reverse engineer from HTML (duh). No matter how fast you can do it; downloading the Web into a cluster and indexing it - in what possible world is that a good idea?
For the local case: same thing, except we do the indexing and monitoring by hanging listeners onto the OS. The plumbing and UI is different but the index material, metadata and plugin models for listerners and indexers should be much the same. We could do lan-wide index sharing over zeroconf, that would be fun, as would a tuplespaces model instead of using mqs or interrrupts. We can of course upload indices to the web or onto your phone.
Let's use RDF for the data. Having seen that people figure using SOAP envelopes is not insane for UDP discovery broadcasts, content management or systems integration, I figure RDF is as production worth a technology as any for search and query. Or possibly an RDF that uses WikiNames instead of URIs.
But basically, a) my continuous build thingy is going to be done in the next two months; b) I can't think of a fun mobile devices project, c) wiki, my favourite web technology is now owned by confluence and snipsnap, d) I badly need better search over all my stuff.
So I'm going to give this 12-18 months. Cool names solicited.
[air: alpha beta gaga]
Someday, your neighbours' brats will try to crack your fridge, run denial of service attacks on the washing machine and own your toaster, perhaps defacing your toast in the process.
Ain't life grand.. However, most vendors do listen to their customers. - Chris Ferris
>applause<. Finally, it's time to get off CVS.
I'm wondering how long it will be before everybody's completely reinvented RDF in the search for what it had all along. - Edd Dumbill
As long as RDF/XML is the sanctioned syntax.
From the alt-tab-up-enter-alt-tab department.
I've been using the IDEA 4 eap this weekend. So far it's better than 3.0x - the interface is cleaner with a nippier response. I like modules (I think). But it seems IDEA doesn't support Ant 1.6 (specifically import). I've been fooling around with jarfiles for the last hour - this is like being back in 2001. Anyone got it working? I'm loath to go back to entity inclusions, plus I want to move some projects onto 1.6 at work.
[note: I bounced the date forward on this entry. It seems javablogs is picking up my feed again, so perhaps someone out there has a hack for this]
Eugene Kuleshov asks, Why can't IDEs use Ant as their project files?
Somewhat less ambitious: why can't IDEs (and Java itself) use the Ant classpath declaration structure?
[See also: java -cp classpath.xml]
AI is often said to be largely useless, but if you had done enough of it you would already know this:
In conclusion, it is clear that Semantic Web can be used to map between XML vocabularies however in non-trivial situations the extra work that must be layered on top of such approaches tends to favor using XML-centric techniques such as XSLT to map between the vocabularies instead. Dare Obasanjo
Among other things, you would also know that the an important lesson folks picked up after the AI winter (who's 15 year anniversary cannnot be far away) is that how you model the inert data is key; that's one reason why all the SUO and WebOnt folks are so hung up on getting the ontologies just so, and that there remain a wasteland of decent tools and syntax (they just don't matter as much as abstract data models in the scheme of things). So I guess AI ain't so bad after all; if nothing else it'll keep you out of the weeds.
As for mapping the complicated stuff; we've been doing that for years in Propylon. Our CTO, Sean McGrath, can wax lyrical on this. It's called pipelining, and it's the way to go for systems integrations in general, not just munging a date format (perl will do just fine there). The main advantage of pipelines are an ability to keep recomposing as requirements change. In short - you can keep changing the transformation as fast as the business changes its mind. Try doing that with an XSLT write-only trainwreck.
I see Clemens Vasters has has caught the pipelining bug and that .NET has had it going a good while back in no small way due to Tim Ewald - WSE 1.0 supported a kind of in-memory pipeline for SOAP Envelopes; for Java folks it's not a million miles away from servlet filter chains.
Perhaps that's not representative, but it does seem that the .NET crowd gets the pipeline model. I'll go out on a limb here- I suspect that has something to do with MS programming culture been less inured to object orientation and object patterns..
The pipeline is the most important pattern/idiom in XML programming. The difference between it and the semantic web outlook, is that any good XML hacker knows that transformation is also primary stuff, not something to be cast aside as a small matter of programming because the model theories can't support it.
It would help to have RDF available to us when defining protocol extensions. Mark Baker.
I used to say that about SOAP ;)
Protocols typically work with a predicate(aka header)/value tuple. If you've ever worked with triples though, you quickly realize the problem with a double; it's not everything you need to know. In this case, you don't know the subject; what has a media type, what is of length 342 bytes, what is identified by this URI? Most protocols, therefore, define their subjects not in the message, but in the specification. For example, here are some possible subjects of an HTTP message. This is fine when you're predefining headers in a spec; there's still a self-descriptive path from bits-on-the-wire to what-is-the-subject-of-this-header. But for extension headers, it doesn't cut it; you've got a predicate (the header name) and an object (the header value), but no subject.
I'm curious to see where Mark's going with this. Like Mark I have a related issue that I can't discuss for various reasons.
In the HTTP case, many of the header tuples are metadata about the representations. But, representations are dark matter - they aren't first class objects on the web since they don't have URIs. Ironically, representations are about as real a thing as you can get on the web (they'll come into your computer if you let them, resources never do that). This 'issue' pops up in RDF circles from time to time. Yet RDF in itself is limited in how it can help with the unamed parts of web architecture, or anything that doesn't have a URI moniker.
Jon Udell: Analyzing blog content
I've heard this way of using CSS described as semantic markup. But I can see an army of RDFers wishing Jon used URIs instead of free text inside his class attributes. I don't know if CSS will take URI syntax as tokens, but WikiWords would be a good compromise.
Reciting chapter and verse of a 12 year old spec? I don't give a flying rats ass what that spec says. Russell Beattie
Back in the day, before we understood the value of standards that would've been the attitude I'd expect from, say, a large software company ;-)
Anyway...
Russell is rightly annoyed about all this, but he's rightly annoyed at the wrong things, the wrong people, and that's understandable given how we got here. I take the opposite view to Russ; not having mainstream availability of PUT and DELETE is the singularly most broken aspect of web technology today.
Let's go back. There is a broken spec in this story, but it's not Atom and and it's not HTTP. It's, wait for it... HTML. The reason technologies like SOAP and the Midp and Flash only use those verbs is because HTML only allowed POST and GET in forms. That's where the rot started.
What's the big deal? Well, the hoops you have to go through to do basic messaging on the web are frankly, ridiculous, and it results directly from inheriting the web forms legacy of abusing POST. For example, consider reliable messaging done over the web. The absence of client infrastructure that supports a full verb complement gives leeway to invent a raft of over-complicated non-interoperable reliable message specs. Reliable messaging, by the way, is one area that WS vendors can't seem to agree to standardize - perhaps that's because it's critical to enterprises (read, there's real money in it). But, the point is that there should never been an opportunity to make things complicated in the first place. In my job, we design and build messaging infrastructure, a lot of it happens over HTTP. There's a good amount of pressure to make that infrastructure fit with web forms technology and existing client stacks. Now, to do RM over the web, and do it cleanly, you want the full complement of HTTP verbs at your disposal (esp. PUT and DELETE). With them you can uniquely name each exchange and use the verbs to create a simple state machine or workflow operating over that exchange. Without them you have to use multiple resources to name one exchange plus clients and servers will typically have to inspect the URLs to find out what's going on. Operators and software have to be able to manage this, know all the URLs involved in the exchange, plus the private keys you're using to bind them together behind the firewall. Oh, did I mention that you'll have to reinvent these verbs in your content anyway and then get your partners to agree on their meaning? POST driven exchanges become to a small degree non-scalable, to some degree insecure, and to a large degree hard to manage.
Trust me, it's not an academic issue, and it's not limited to RM; basic content management is in scope too. For those of you that don't monkey about with HTTP for a living, I can sum up the problem of the problem of not having PUT and DELETE like this - imagine dealing with a subset of SQL that doesn't suppport UPDATE and DELETE or Java Collections that didn't have an add() method. It's an insanely stupid way of working. But if you never knew SQL had UPDATE to begin with, and it was useful, perhaps that wouldn't be as apparent.
The irony is, that while some of us are left to compromizing with the fallout from uninformed specs, a number of people think that PUT and DELETE are some sort of esoterica that only a spec wonk could care about. And now, over on the Atom list some people are talking about workarounds. To heck with that. Get Sun to fix the Midp and the W3C TAG to fix HTML/XForms. The latter is worth emphasizing - as far as I can tell, this issue isn't even on the TAG's radar.
Russ, sorry; Atom is not the broken spec and the REST folks are not being intransigent nerds (this time). Argung for a subset of HTTP is not the way to go here, even if it's the expedient way right now for J2ME. Sure there are hundreds of millions of broken clients out there, but what worries me are the next billion clients, not the early adopters.
I've done usual change-poll-time-and-update-bounce-dance, but for some reason, my blog feed is not being refreshed by javablogs. The last update seems to be ~Jan 31st. I validated the feed, checked against a few aggregators and as far as I can tell there's no encoding weirdness in the feed.
Anyone else having trouble? I tried posting to the javablogs forum but got a reponse code 500 with a stacktrace... oh well, maybe someone can point the Atlassian massive at this entry instead ;)
We could either optimize every last scintilla of performance out of one of those machines or we could get lots of them working on the data in parallel. The former route costs us lots of time and money in terms of labor costs (developers) and capital costs for a small number of top-of-the-range computers. Also, the outcome of the investment in terms of improved throughput is uncertain. The latter route - lots and lots of cheap "throwaway" machines - will cost us a fixed amount of money (low labor costs as we are not optimizing any algorithms) and we can accurately measure the improvements in throughput we expect to see. Sean McGrath
I put links to my weblog categories on the frontpage yesterday. Turns out this lets me inbound link to myself in Technnorati each time I add an entry. I'm taking the categories off for the time being.
WebServices are mostly implemented in the most modern environments like Java and .NET. Can anyone point me to a C binding for SOAP, or even a more difficult one that's implemented for Cobol? Carlos Perez
No, but would we want ever expose such things via a direct binding? Legacy systems living deep in the enterprise, in general don't seem to require web service interfaces; they require web service gateways with data transformation pipelines that can be dynamically brought into the delivery channels. While I think there are avenues of exploration, deep integrations aren't yet something than can be push-button automated by tools. But there are ways to get the job done faster and better.
Consider this - exposing a 24x7 web interface into a mon-fri batch nightly COBOL job system. The COBOL system works correctly and is mission critical to the enterprise in question; not something to be tinkered with. Our answer to that scenario was to accept data and queries as async calls over HTTPS in XML form encapsulated by a standard XML envelope. The back of the web service is a series of pipelines. The pipelines entail auditing, structural validations, content validations, code mappings, pre-matching, data cleansing, statistical capture and conversion to the job a format before leaving the job in a repository. A second process running on a schedule uploads the jobs to VMS vis FTP. A third process collects FTP'd responses from the COBOL systems and proceeds to reconcille the responses with the submissions and fulfil those back the sender of the original XML message via another pipeline as well as publishing the results to other subscribers. The setup has proven flexible, and robust to changes in requirements and semantics.
The idea of blowing out the service from the COBOL; that would be problematic. Herein lies a key issue with the way middleware webservices toolsets prefer to be used - codifying a domain model, then generating web service stubs and wsdl descriptors for deployment to the DMZ tier is, in terms of software process, precisely backwards for deep integrations or repurposing for service oriented architectures. Neither the tools or the local object models should be driving the integration process, they should be supporting it. There are some subtle gotchas. Web interfaces and batch processes are working to different timeframes; this entails an asynchronous gateway, but also impacts system adminstration and operation. There is usually no canonical domain model in an enterprise and perhaps more importantly no time or possibility for agreement on one - I see this as being a serious issue for efforts like the OMG's MDA and the consequent modelling toolsets coming down the line.
Having built the web service, it would be fine to expose, say, WSDL if someone really wanted it, but this is driven from the service design, not the provisions in some RDB/OO/WSDL mapper. Personally, I would see generating WSDL as being a publication exercise more than a design exercise.
As for the Achilles heel of web services :) I've complained about toolkits, but it's not that (as understanding of enterprise integrations grows, the tools will get better). In early 2004, the Achilles heel of web services is the complexity resulting from the sheer volume and lack of coherence in the web services specs and a lack of architectural guidance from the folks generating them - hence the title of this blog. Witness the current ASF list.
[Update: Mark Baker laments the passing of the W3C Web Services Architecture group. Me too - there was some confort to be had by having the likes of Mike Champion, Eric Newcomer, and Frank McCabe thrashing this stuff out. ] | http://dehora.net/journal/2004/02/ | CC-MAIN-2017-09 | refinedweb | 5,049 | 68.7 |
Author: Florent Hivert <florent.hivert@univ-rouen.fr>, Franco Saliola <saliola@gmail.com>, et al.
This tutorial is an introduction to basic programming in Python and Sage, for readers with elementary notions of programming but not familiar with the Python language. It is far from exhaustive. For a more complete tutorial, have a look at the Python Tutorial. Also Python’s documentation and in particular the standard library can be useful.
A more advanced tutorial presents the notions of objects and classes in Python.
Here are further resources to learn Python:
In Python, typing is dynamic; there is no such thing as declaring variables. The function type() returns the type of an object obj. To convert an object to a type typ just write typ(obj) as in int("123"). The command isinstance(ex, typ) returns whether the expression ex is of type typ. Specifically, any value is an instance of a class and there is no difference between classes and types.
The symbol = denotes the affectation to a variable; it should not be confused with == which denotes mathematical equality. Inequality is !=.
The standard types are bool, int, list, tuple, set, dict, str.
The type bool (booleans) has two values: True and False. The boolean operators are denoted by their names or, and, not.
The Python types int and long are used to represent integers of limited size. To handle arbitrary large integers with exact arithmetic, Sage uses its own type named Integer.
A list is a data structure which groups values. It is constructed using brackets as in [1, 3, 4]. The range() function creates integer lists. One can also create lists using list comprehension:
[ <expr> for <name> in <iterable> (if <condition>) ]
For example:
sage: [ i^2 for i in range(10) if i % 2 == 0 ] [0, 4, 16, 36, 64]
A tuple is very similar to a list; it is constructed using parentheses. The empty tuple is obtained by () or by the constructor tuple. If there is only one element, one has to write (a,). A tuple is immutable (one cannot change it) but it is hashable (see below). One can also create tuples using comprehensions:
sage: tuple(i^2 for i in range(10) if i % 2 == 0) (0, 4, 16, 36, 64)
A set is a data structure which contains values without multiplicities or order. One creates it from a list (or any iterable) with the constructor set. The elements of a set must be hashable:
sage: set([2,2,1,4,5]) set([1, 2, 4, 5]) sage: set([ [1], [2] ]) Traceback (most recent call last): ... TypeError: unhashable type: 'list'
A dictionary is an association table, which associates values to keys. Keys must be hashable. One creates dictionaries using the constructor dict, or using the syntax:
{key1 : value1, key2 : value2 ...}
For example:
sage: age = {'toto' : 8, 'mom' : 27}; age {'toto': 8, 'mom': 27}
Quotes (simple ' ' or double " ") enclose character strings. One can concatenate them using +.
For lists, tuples, strings, and dictionaries, the indexing operator is written l[i]. For lists, tuples, and strings one can also uses slices as l[:], l[:b], l[a:], or l[a:b]. Negative indices start from the end.
The len() function returns the number of elements of a list, a tuple, a set, a string, or a dictionary. One writes x in C to tests whether x is in C.
Finally there is a special value called None to denote the absence of a value.
In Python, there is no keyword for the beginning and the end of an instructions block. Blocks are delimited solely by means of indentation. Most of the time a new block is introduced by :. Python has the following control structures:
Conditional instruction:
if <condition>: <instruction sequence> [elif <condition>: <instruction sequence>]* [else: <instruction sequence>]
Inside expression exclusively, one can write:
<value> if <condition> else <value>
Iterative instructions:
for <name> in <iterable>: <instruction sequence> [else: <instruction sequence>]
while <condition>: <instruction sequence> [else: <instruction sequence>]
The else block is executed at the end of the loop if the loop is ended normally, that is neither by a break nor an exception.
In a loop, continue jumps to the next iteration.
An iterable is an object which can be iterated through. Iterable types include lists, tuples, dictionaries, and strings.
An error (also called exception) is raised by:
raise <ErrorType> [, error message]
Usual errors include ValueError and TypeError.
Note
Python functions vs. mathematical functions
In what follows, we deal with functions is the sense of programming languages. Mathematical functions, as manipulated in calculus, are handled by Sage in a different way. In particular it doesn’t make sense to do mathematical manipulation such as additions or derivations on Python functions.
One defines a function using the keyword def as:
def <name>(<argument list>): <instruction sequence>
The result of the function is given by the instruction return. Very short functions can be created anonymously using lambda (remark that there is no instruction return here):
lambda <arguments>: <expression>
Note
Functional programming
Functions are objects as any other objects. One can assign them to variables or return them. For details, see the tutorial on Functional Programming for Mathematicians.
Example:
sage: L = [3, Permutation([5,1,4,2,3]), 17, 17, 3, 51] sage: L [3, [5, 1, 4, 2, 3], 17, 17, 3, 51]
Exercise: Create the list \([63, 12, -10, \text{``a''}, 12]\), assign it to the variable L, and print the list.
sage: # edit here
Exercise: Create the empty list (you will often need to do this).
sage: # edit here
The range() function provides an easy way to construct a list of integers. Here is the documentation of the range() function:.
Exercise: Use range() to construct the list \([1,2,\ldots,50]\).
sage: # edit here
Exercise: Use range() to construct the list of even numbers between 1 and 100 (including 100).
sage: # edit here
Exercise: The step argument for the range() command can be negative. Use range to construct the list \([10, 7, 4, 1, -2]\).
sage: # edit here
List comprehensions provide a concise way to create lists from other lists (or other data types).
Example We already know how to create the list \([1, 2, \dots, 16]\):
sage: range(1,17) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]
Using a list comprehension, we can now create the list \([1^2, 2^2, 3^2, \dots, 16^2]\) as follows:
sage: [i^2 for i in range(1,17)] [1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225, 256]
sage: sum([i^2 for i in range(1,17)]) 1496
Exercise: [Project Euler, Problem 6]
The sum of the squares of the first ten natural numbers is
The square of the sum of the first ten natural numbers is
Hence the difference between the sum of the squares of the first ten natural numbers and the square of the sum is
Find the difference between the sum of the squares of the first one hundred natural numbers and the square of the sum.
sage: # edit here
sage: # edit here
sage: # edit here
A list can be filtered using a list comprehension.
Example: To create a list of the squares of the prime numbers between 1 and 100, we use a list comprehension as follows.
sage: [p^2 for p in [1,2,..,100] if is_prime(p)] [4, 9, 25, 49, 121, 169, 289, 361, 529, 841, 961, 1369, 1681, 1849, 2209, 2809, 3481, 3721, 4489, 5041, 5329, 6241, 6889, 7921, 9409]
Exercise: Use a list comprehension to list all the natural numbers below 20 that are multiples of 3 or 5. Hint:
sage: # edit here
Project Euler, Problem 1: Find the sum of all the multiples of 3 or 5 below 1000.
sage: # edit here
List comprehensions can be nested!
Examples:
sage: [(x,y) for x in range(5) for y in range(3)] [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (2, 2), (3, 0), (3, 1), (3, 2), (4, 0), (4, 1), (4, 2)]
sage: [[i^j for j in range(1,4)] for i in range(6)] [[0, 0, 0], [1, 1, 1], [2, 4, 8], [3, 9, 27], [4, 16, 64], [5, 25, 125]]
sage: matrix([[i^j for j in range(1,4)] for i in range(6)]) [ 0 0 0] [ 1 1 1] [ 2 4 8] [ 3 9 27] [ 4 16 64] [ 5 25 125]
Exercise:
A Pythagorean triple is a triple \((x,y,z)\) of positive integers satisfying \(x^2+y^2=z^2\). The Pythagorean triples whose components are at most \(10\) are:
Using a filtered list comprehension, construct the list of Pythagorean triples whose components are at most \(50\):
sage: # edit here
sage: # edit here
Project Euler, Problem 9: There exists exactly one Pythagorean triple for which \(a + b + c = 1000\). Find the product \(abc\):
sage: # edit here
To access an element of the list L, use the syntax L[i], where \(i\) is the index of the item.
Exercise:
Construct the list L = [1,2,3,4,3,5,6]. What is L[3]?
sage: # edit here
What is L[1]?
sage: # edit here
What is the index of the first element of L?
sage: # edit here
What is L[-1]? What is L[-2]?
sage: # edit here
What is L.index(2)? What is L.index(3)?
sage: # edit here
To change the item in position i of a list L:
sage: L = ["a", 4, 1, 8] sage: L ['a', 4, 1, 8]
sage: L[2] = 0 sage: L ['a', 4, 0, 8]
To append an object to a list:
sage: L = ["a", 4, 1, 8] sage: L ['a', 4, 1, 8]
sage: L.append(17) sage: L ['a', 4, 1, 8, 17]
To extend a list by another list:
sage: L1 = [1,2,3] sage: L2 = [7,8,9,0] sage: print L1 [1, 2, 3] sage: print L2 [7, 8, 9, 0]
sage: L1.extend(L2) sage: L1 [1, 2, 3, 7, 8, 9, 0]
sage: L = [4,2,5,1,3] sage: L [4, 2, 5, 1, 3]
sage: L.reverse() sage: L [3, 1, 5, 2, 4]
sage: L.sort() sage: L [1, 2, 3, 4, 5]
sage: L = [3,1,6,4] sage: sorted(L) [1, 3, 4, 6]
sage: L [3, 1, 6, 4]
To concatenate two lists, add them with the operator +. This is not a commutative operation!
sage: L1 = [1,2,3] sage: L2 = [7,8,9,0] sage: L1 + L2 [1, 2, 3, 7, 8, 9, 0]
You can slice a list using the syntax L[start : stop : step]. This will return a sublist of L.
Exercise: Below are some examples of slicing lists. Try to guess what the output will be before evaluating the cell:
sage: L = range(20) sage: L [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
sage: L[3:15] [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
sage: L[3:15:2] [3, 5, 7, 9, 11, 13]
sage: L[15:3:-1] [15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4]
sage: L[:4] [0, 1, 2, 3]
sage: L[:] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
sage: L[::-1] [19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
Exercise (Advanced): The following function combines a loop with some of the list operations above. What does the function do?
sage: def f(number_of_iterations): ... L = [1] ... for n in range(2, number_of_iterations): ... L = [sum(L[:i]) for i in range(n-1, -1, -1)] ... return numerical_approx(2*L[0]*len(L)/sum(L), digits=50)
sage: # edit here
A tuple is an immutable list. That is, it cannot be changed once it is created. This can be useful for code safety and foremost because it makes tuple hashable. To create a tuple, use parentheses instead of brackets:
sage: t = (3, 5, [3,1], (17,[2,3],17), 4) sage: t (3, 5, [3, 1], (17, [2, 3], 17), 4)
To create a singleton tuple, a comma is required to resolve the ambiguity:
sage: (1) 1 sage: (1,) (1,)
We can create a tuple from a list, and vice-versa.
sage: tuple(range(5)) (0, 1, 2, 3, 4)
sage: list(t) [3, 5, [3, 1], (17, [2, 3], 17), 4]
Tuples behave like lists in many respects:
Trying to modify a tuple will fail:
sage: t = (5, 'a', 6/5) sage: t (5, 'a', 6/5)
sage: t[1] = 'b' Traceback (most recent call last): ... TypeError: 'tuple' object does not support item assignment
“Tuple-comprehensions” do not exist. Instead, the syntax produces something called a generator. A generator allows you to process a sequence of items one at a time. Each item is created when it is needed, and then forgotten. This can be very efficient if we only need to use each item once.
sage: (i^2 for i in range(5)) <generator object <genexpr> at 0x...>
sage: g = (i^2 for i in range(5)) sage: g[0] Traceback (most recent call last): ... TypeError: 'generator' object has no attribute '__getitem__'
sage: [x for x in g] [0, 1, 4, 9, 16]
g is now empty.
sage: [x for x in g] []
A nice ‘pythonic’ trick is to use generators as argument of functions. We do not need double parentheses for this:
sage: sum( i^2 for i in srange(100001) ) 333338333350000
A dictionary is another built-in data type. Unlike lists, which are indexed by a range of numbers starting at 0, dictionaries are indexed by keys, which can be any immutable objects. Strings and numbers can always be keys (because they are immutable). Dictionaries are sometimes called “associative arrays” in other programming languages.
There are several ways to define dictionaries. One method is to use braces, {}, with comma-separated entries given in the form key:value:
sage: d = {3:17, "key":[4,1,5,2,3], (3,1,2):"goo", 3/2 : 17} sage: d {3/2: 17, 3: 17, (3, 1, 2): 'goo', 'key': [4, 1, 5, 2, 3]}
A second method is to use the constructor dict which admits a list (or actually any iterable) of 2-tuples (key, value):
sage: dd = dict((i,i^2) for i in xrange(10)) sage: dd {0: 0, 1: 1, 2: 4, 3: 9, 4: 16, 5: 25, 6: 36, 7: 49, 8: 64, 9: 81}
Dictionaries behave as lists and tuples for several important operations.
sage: d[10]='a' sage: d {3/2: 17, 10: 'a', 3: 17, (3, 1, 2): 'goo', 'key': [4, 1, 5, 2, 3]}
A dictionary can have the same value multiple times, but each key must only appear once and must be immutable:
sage: d = {3: 14, 4: 14} sage: d {3: 14, 4: 14}
sage: d = {3: 13, 3: 14} sage: d {3: 14}
sage: d = {[1,2,3] : 12} Traceback (most recent call last): ... TypeError: unhashable type: 'list'
Another way to add items to a dictionary is with the update() method which updates the dictionary from another dictionary:
sage: d = {} sage: d {}
sage: d.update( {10 : 'newvalue', 20: 'newervalue', 3: 14, 'a':[1,2,3]} ) sage: d {'a': [1, 2, 3], 10: 'newvalue', 3: 14, 20: 'newervalue'}
We can iterate through the keys, or values, or both, of a dictionary:
sage: d = {10 : 'newvalue', 20: 'newervalue', 3: 14, 'a':[1,2,3]}
sage: [key for key in d] ['a', 10, 3, 20]
sage: [key for key in d.iterkeys()] ['a', 10, 3, 20]
sage: [value for value in d.itervalues()] [[1, 2, 3], 'newvalue', 14, 'newervalue']
sage: [(key, value) for key, value in d.iteritems()] [('a', [1, 2, 3]), (10, 'newvalue'), (3, 14), (20, 'newervalue')]
Exercise: Consider the following directed graph.
Create a dictionary whose keys are the vertices of the above directed graph, and whose values are the lists of the vertices that it points to. For instance, the vertex 1 points to the vertices 2 and 3, so the dictionary will look like:
d = { ..., 1:[2,3], ... }
sage: # edit here
Then try:
sage: g = DiGraph(d) sage: g.plot()
Example: Construct a \(3 \times 3\) matrix whose \((i,j)\) entry is the rational number \(\frac{i}{j}\). The integers generated by range() are Python int‘s. As a consequence, dividing them does euclidean division:
sage: matrix([[ i/j for j in range(1,4)] for i in range(1,4)]) [1 0 0] [2 1 0] [3 1 1]
Whereas dividing a Sage Integer by a Sage Integer produces a rational number:
sage: matrix([[ i/j for j in srange(1,4)] for i in srange(1,4)]) [ 1 1/2 1/3] [ 2 1 2/3] [ 3 3/2 1]
Try to predict the results of the following commands:
sage: a = [1, 2, 3] sage: L = [a, a, a] sage: L [[1, 2, 3], [1, 2, 3], [1, 2, 3]]
sage: a.append(4) sage: L [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
Now try these:
sage: a = [1, 2, 3] sage: L = [a, a, a] sage: L [[1, 2, 3], [1, 2, 3], [1, 2, 3]]
sage: a = [1, 2, 3, 4] sage: L [[1, 2, 3], [1, 2, 3], [1, 2, 3]]
sage: L[0].append(4) sage: L [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
This is known as the reference effect. You can use the command deepcopy() to avoid this effect:
sage: a = [1,2,3] sage: L = [deepcopy(a), deepcopy(a)] sage: L [[1, 2, 3], [1, 2, 3]]
sage: a.append(4) sage: L [[1, 2, 3], [1, 2, 3]]
The same effect occurs with dictionaries:
sage: d = {1:'a', 2:'b', 3:'c'} sage: dd = d sage: d.update( { 4:'d' } ) sage: dd {1: 'a', 2: 'b', 3: 'c', 4: 'd'}
For more verbose explanation of what’s going on here, a good place to look at is the following section of the Python tutorial:
While loops tend not to be used nearly as much as for loops in Python code:
sage: i = 0 sage: while i < 10: ... print i ... i += 1 0 1 2 3 4 5 6 7 8 9
sage: i = 0 sage: while i < 10: ... if i % 2 == 1: ... i += 1 ... continue ... print i ... i += 1 0 2 4 6 8
Note that the truth value of the clause expression in the while loop is evaluated using bool:
sage: bool(True) True
sage: bool('a') True
sage: bool(1) True
sage: bool(0) False
sage: i = 4 sage: while i: ... print i ... i -= 1 4 3 2 1
Here is a basic for loop iterating over all of the elements in the list l:
sage: l = ['a', 'b', 'c'] sage: for letter in l: ... print letter a b c
The range() function is very useful when you want to generate arithmetic progressions to loop over. Note that the end point is never included:
sage: range?
sage: range(4) [0, 1, 2, 3]
sage: range(1, 5) [1, 2, 3, 4]
sage: range(1, 11, 2) [1, 3, 5, 7, 9]
sage: range(10, 0, -1) [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
sage: for i in range(4): ... print i, i*i 0 0 1 1 2 4 3 9
You can use the continue keyword to immediately go to the next item in the loop:
sage: for i in range(10): ... if i % 2 == 0: ... continue ... print i 1 3 5 7 9
If you want to break out of the loop, use the break keyword:
sage: for i in range(10): ... if i % 2 == 0: ... continue ... if i == 7: ... break ... print i 1 3 5
If you need to keep track of both the position in the list and its value, one (not so elegant) way would be to do the following:
sage: l = ['a', 'b', 'c'] ... for i in range(len(l)): ... print i, l[i]
It’s cleaner to use enumerate() which provides the index as well as the value:
sage: l = ['a', 'b', 'c'] ... for i, letter in enumerate(l): ... print i, letter
You could get a similar result to the result of the enumerate() function by using zip() to zip two lists together:
sage: l = ['a', 'b', 'c'] ... for i, letter in zip(range(len(l)), l): ... print i, letter
For loops work using Python’s iterator protocol. This allows all sorts of different objects to be looped over. For example:
sage: for i in GF(5): ... print i, i*i 0 0 1 1 2 4 3 4 4 1
How does this work?
sage: it = iter(GF(5)); it <generator object __iter__ at 0x...> sage: it.next() 0 sage: it.next() 1 sage: it.next() 2 sage: it.next() 3 sage: it.next() 4 sage: it.next() Traceback (most recent call last): ... StopIteration
sage: R = GF(5) ... R.__iter__??
The command yield provides a very convenient way to produce iterators. We’ll see more about it in a bit.
For each of the following sets, compute the list of its elements and their sum. Use two different ways, if possible: with a loop, and using a list comprehension.
The first \(n\) terms of the harmonic series:
sage: # edit here
The odd integers between \(1\) and \(n\):
sage: # edit here
The first \(n\) odd integers:
sage: # edit here
The integers between \(1\) and \(n\) that are neither divisible by \(2\) nor by \(3\) nor by \(5\):
sage: # edit here
The first \(n\) integers between \(1\) and \(n\) that are neither divisible by \(2\) nor by \(3\) nor by \(5\):
sage: # edit here
Functions are defined using the def statement, and values are returned using the return keyword:
sage: def f(x): ... return x*x
sage: f(2) 4
Functions can be recursive:
sage: def fib(n): ... if n <= 1: ... return 1 ... else: ... return fib(n-1) + fib(n-2)
sage: [fib(i) for i in range(10)] [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
Functions are first class objects like any other. For example, they can be passed in as arguments to other functions:
sage: f <function f at 0x...>
sage: def compose(f, x, n): # computes f(f(...f(x))) ... for i in range(n): ... x = f(x) # this change is local to this function call! ... return x
sage: compose(f, 2, 3) 256
sage: def add_one(x): ... return x + 1
sage: compose(add_one, 2, 3) 5
You can give default values for arguments in functions:
sage: def add_n(x, n=1): ... return x + n
sage: add_n(4) 5
sage: add_n(4, n=100) 104
sage: add_n(4, 1000) 1004
You can return multiple values from a function:
sage: def g(x): ... return x, x*x
sage: g(2) (2, 4)
sage: type(g) <type 'function'>
sage: a,b = g(100)
sage: a 100
sage: b 10000
You can also take a variable number of arguments and keyword arguments in a function:
sage: def h(*args, **kwds): ... print type(args), args ... print type(kwds), kwds
sage: h(1,2,3,n=4) <type 'tuple'> (1, 2, 3) <type 'dict'> {'n': 4}
Let’s use the yield instruction to make a generator for the Fibonacci numbers up to \(n\):
sage: def fib_gen(n): ... if n < 1: ... return ... a = b = 1 ... yield b ... while b < n: ... yield b ... a, b = b, b+a
sage: for i in fib_gen(50): ... print i 1 1 2 3 5 8 13 21 34
Todo | http://sagemath.org/doc/thematic_tutorials/tutorial-programming-python.html | CC-MAIN-2014-15 | refinedweb | 3,968 | 68.5 |
Docs |
Forums |
Lists |
Bugs |
Planet |
Store |
GMN |
Get Gentoo!
Not eligible to see or edit group visibility for this bug.
View Bug Activity
|
Format For Printing
|
XML
|
Clone This Bug
I installed Kaffe-1.1.4 and tried to compile a standard Java "Hello World"
program:
public class hello {
public static void main(String[] args) {
System.out.println("Hello, World!");
}
}
However, it failed with the following error message:
$ javac hello.java
Internal error: caught an unexpected exception.
Please check your CLASSPATH and your installation.
java/lang/NullPointerException
Aborted
I compiled the program using another Java compiler (from Blackdown JDK) and
tried to run it with Kaffe, but I got the same error message:
$ java hello
Internal error: caught an unexpected exception.
Please check your CLASSPATH and your installation.
java/lang/NullPointerException
Aborted
I thought it might be a problem with Kaffe itself, but version 1.1.4 installed
from sources works out-of-the-box, so I guess there must be something wrong
with the ebuild.
While trying to work around it, I also noticed that I am able to run the
compiled program if I launch it using the plain kaffe-bin binary with a
specific LD_LIBRARY_PATH:
$ LD_LIBRARY_PATH=/opt/kaffe-1.1.4/jre/lib/i386 kaffe-bin hello
Hello, World!
This might suggest that the problem is caused by incorrect path settings.
Reproducible: Always
Steps to Reproduce:
1. Create hello.java
2. Try to compile it with Kaffe's javac
Please try 1.1.5 (~x86). If your problem persists, reopen this bug. We'll mark
it stable very soon.
The problem is still there for Kaffe 1.1.5. It does compile the hello.java
program, but only because version 1.1.5 uses a different Java compiler (jikes).
When I try to execute the compiled hello.class, I get the same error message.
But I think I tracked down the problem: it is obviously caused by my CFLAGS
settings (-O2 -march=pentium4 -pipe -fomit-frame-pointer). When I emerged Kaffe
1.1.4 using its default CFLAGS taken from the original Makefile (-g -O2 -Wall
-Wstrict-prototypes), the problem was gone and both compilation and execution
worked fine.
Could you please use your original CFLAGS, but without -fomit-frame-pointer.
(There seems to be a common misconception that -fomit-frame-pointer will give a performance boost. I recommend you remove it from your CFLAGS, since it breaks a lot of stuff, and also hampers debugging.)
Yes, removing -fomit-frame-pointer did it.
since -fomit-frame-pointer is used by many people i've added a strip-flags lime
to the ebuild. therefor i'm going to mark this one as fixed. thanks for
reporting and for supporting our development. | http://bugs.gentoo.org/88330 | crawl-002 | refinedweb | 453 | 59.6 |
Ok, as promised yesterday on #dragonflybsd, I tried to build ghc-7.0.4 on both 32bit and 64bit DragonFly. Since I was not able to spend much time on this, I just run the build on the available machines with their current setup, and took the results without much testing. See below for possible problems. ======================================================================== For i386 you need (sorry, you have to use exactly those URLs; no access to the dfly directory): For x86_64: The sources are: Both errno_ptr-<arch>.tar.xz and ghc-7.0.4-<arch>.tar.xz should be extracted below /usr/local. The cabal-<arch>.xz ist a cabal(-install) binary built with above compilers, for easier installation of further libraries. ======================================================================== Possible problems: 1. Since I have this errno_ptr stuff also below /usr/pkg, the build process might have picked those, and you need to move them there or set LD_LIBRARY_PATH and maybe also use (3). 2. I am not sure, what the configure script found on both machines, and the result might depend on different sets and versions of shared libraries like libiconv and libgmp. 3. You probably have to set "extra-include-dirs: /usr/pkg/include" and "extra-lib-dirs: /usr/pkg/lib" in $HOME/.cabal/config or use similar commandline arguments to build stuff using native code. Please tell me, if there are other problems, or if I could help with doing the build differently. ======================================================================== Maybe a few words about the porting. The 32bit port was very easy, since DragonFly at this time was able to execute FreeBSD4 binaries. Porting was merely a searching for every "defined(__FreeBSD__)" and appending a " || defined(__DragonFly__)", with one exception, where I have only a workaround but not real solution until now. The ghc compiler creates object files for a static executable, but has also a simple Linker (rts/Linker.c) that can load those object files at runtime. This is neccessary for the interactive ghci and for Template Haskell (not much fun without). This linker can only handle the most common relocations, and is not able to load anything that accesses our "extern __thread int errno;". I have put some (limited) time into trying to understand how this thread local storage stuff works, but decided to go with this workaround: Put the function "int *errno_ptr(void) { return (&errno); }" into a shared library "liberrno_ptr" and patch the ghc sources to replace the errno access with calling this function. The 64bit port took much more time and was frustrating, but also very easy looking back afterwards. At first I used dfly/i386 to bootstrap x86_64, and succeeded with the build after only some small fixes to the documented bootstrap process. But the compiler crashed immediately during initialization. It took me some time to notice that it was a general problem, that the bootstraped compiler used only 32bit of pointers for native calls from Haskell to C. Since I saw no simple way, to fix this with my level of "knowledge" about the generated code, I decided to try FreeBSD/amd64 next. The build succeeded following the same steps as before. The result was a little better but was totally confused about the file system because "struct stat" on FreeBSD and DragonFly differ. Took some time to understand the generated code, but then it was enought (as far as I remember) to change a single bit shift value at only one place, to get a compile that was able to do the bootstrap (probably not much more). ======================================================================== Have still no good idea, how to integrate this into pkgsrc. Have also not seen a good place to integrate this errno access wrapper into a patch against the ghc sources only. So if you have a good idea, or even better if you know how to fix the rts/Linker.c problem with this "unhandled ELF relocation(Rel) type 15" please let me/us know... -- Goetz | http://leaf.dragonflybsd.org/mailarchive/users/2012-01/msg00027.html | CC-MAIN-2014-35 | refinedweb | 647 | 61.77 |
:
PRINT "How many fibonacci numbers do you want?" INPUT nums LET a = 0 LET b = 1 WHILE nums > 0 REPEAT PRINT a LET c = a + b LET a = b LET b = c ENDWHILE
This program prints out terms of the fibonacci sequence based on the user's input: 0 1 1 2 3 5 13...
Our language will allow a variety of the basic operations that you'd expect from a programming language. In particular, it will support:
Although this is a standard subset of features, you may notice that there are no functions, no arrays, no way to read/write from a file, and not even an else statement. But with just this small set of constructs, you can actually do a lot. It will also setup the compiler in such a way that many other features will be straight forward to add later.
Our compiler will follow a three step process that is illustrated above. First, given the inputted source code, it will break the code up into tokens. These are like words and punctuation in English. Second, it will parse the tokens to make sure they are in an order that is allowed in our language. Just like English sentences follow specific structures of verbs and nouns. Third, it will emit the C code that our language will translate to.
We will use these three steps as the main organization for our code. The lexer, parser, and emitter will each have their own Python code file. This tutorial is broken up into 3 parts based on these steps as well. If you were to extend the compiler, there are some additional steps you would add, but we will hold off on discussing those.
The first module of our compiler is called the lexer. Given a string of Teeny Tiny code, it will iterate character by character to do two things: decide where each token starts/stops and what type of token it is. If the lexer is unable to do this, then it will report an error for an invalid token.
The figure demonstrates an example input and output of the lexer. Given the Teeny Tiny code, the lexer must determine where the tokens are along with the type (e.g., keyword). You can see that spaces aren't recognized as tokens, but the lexer will use them as one way to know when a token ends.
Let's finally get into some code, starting with the structure of the lexer in the file lex.py:
class Lexer: def __init__(self, input): pass # Process the next character. def nextChar(self): pass # Return the lookahead character. def peek(self): pass # Invalid token found, print error message and exit. def abort(self, message): pass # Skip whitespace except newlines, which we will use to indicate the end of a statement. def skipWhitespace(self): pass # Skip comments in the code. def skipComment(self): pass # Return the next token. def getToken(self): pass
I like to sketch out all the functions that I think I will need, then go back and fill them in. The function getToken will be the meat of the lexer. It will be called each time the compiler is ready for the next token and it will do the work of classifying tokens. nextChar and peek are helper functions for looking at the next character. skipWhitespace consumes the spaces and tabs that we don't care about. abort is what we will use to report an invalid token.
The lexer needs to keep track of the current position in the input string and the character at that position. We will initialize these in the constructor:
def __init__(self, input): self.source = input + '\n' # Source code to lex as a string. Append a newline to simplify lexing/parsing the last token/statement. self.curChar = '' # Current character in the string. self.curPos = -1 # Current position in the string. self.nextChar()
The lexer needs the input code and we append a newline to it (this just simplifies some checks later on). curChar is what the lexer will constantly check to decide what kind of token it is. Why not just do source[curPos]? Because that would litter the code with bounds checking. Instead we do this in nextChar:
# Process the next character. def nextChar(self): self.curPos += 1 if self.curPos >= len(self.source): self.curChar = '\0' # EOF else: self.curChar = self.source[self.curPos]
This increments the lexer's current position and updates the current character. If we reach the end of the input, set the character to the end-of-file marker. This is the only place we will modify curPos and curChar. But sometimes we want to look ahead to the next character without updating curPos:
# Return the lookahead character. def peek(self): if self.curPos + 1 >= len(self.source): return '\0' return self.source[self.curPos+1]
We should make sure these functions work. Let's test them by create a new file teenytiny.py:
from lex import * def main(): input = "LET foobar = 123" lexer = Lexer(input) while lexer.peek() != '\0': print(lexer.curChar) lexer.nextChar() main()
Run this and the output should be every character of the input string, LET foobar = 123, on a new line:
L E T f o o b a r = 1 2 3
But we don't just want characters, we want tokens! We need to plan how combining individual characters together makes a token, which works much like a state machine. Here are the main lexer rules for the Teeny Tiny language:
Next we will start our getToken function in our Lexer class:
# Return the next token. def getToken(self): # Check the first character of this token to see if we can decide what it is. # If it is a multiple character operator (e.g., !=), number, identifier, or keyword then we will process the rest. if self.curChar == '+': pass # Plus token. elif self.curChar == '-': pass # Minus token. elif self.curChar == '*': pass # Asterisk token. elif self.curChar == '/': pass # Slash token. elif self.curChar == '\n': pass # Newline token. elif self.curChar == '\0': pass # EOF token. else: # Unknown token! pass self.nextChar()
This will detect a few possible tokens, but doesn't do anything useful yet. What we need next is a Token class to keep track of what type of token it is and the exact text from the code. Place this in Lex.py for now:
# Token contains the original text and the type of token. class Token: def __init__(self, tokenText, tokenKind): self.text = tokenText # The token's actual text. Used for identifiers, strings, and numbers. self.kind = tokenKind # The TokenType that this token is classified as.
To specify what type a token is, we will create the TokenType class as an enum. It looks long, but it just specifies every possible token our language allows. Add import enum to the top of Lex.py and add this class:
# TokenType is our enum for all the types of tokens. class TokenType(enum.Enum): EOF = -1 NEWLINE = 0 NUMBER = 1 IDENT = 2 STRING = 3 # Keywords. LABEL = 101 GOTO = 102 PRINT = 103 INPUT = 104 LET = 105 IF = 106 THEN = 107 ENDIF = 108 WHILE = 109 REPEAT = 110 ENDWHILE = 111 # Operators. EQ = 201 PLUS = 202 MINUS = 203 ASTERISK = 204 SLASH = 205 EQEQ = 206 NOTEQ = 207 LT = 208 LTEQ = 209 GT = 210 GTEQ = 211
Now we can expand getToken to actually do something when it detects a specific token:
# Return the next token. def getToken(self): token = None # Check the first character of this token to see if we can decide what it is. # If it is a multiple character operator (e.g., !=), number, identifier, or keyword then we will process the rest. if self.curChar == '+': token = Token(self.curChar, TokenType.PLUS) elif self.curChar == '-': token = Token(self.curChar, TokenType.MINUS) elif self.curChar == '*': token = Token(self.curChar, TokenType.ASTERISK) elif self.curChar == '/': token = Token(self.curChar, TokenType.SLASH) elif self.curChar == '\n': token = Token(self.curChar, TokenType.NEWLINE) elif self.curChar == '\0': token = Token('', TokenType.EOF) else: # Unknown token! pass self.nextChar() return token
This code sets up the lexer to detect the basic arithmetic operators along with new lines and the end of file marker. The else clause is for capturing everything that won't be allowed.
Let's change main to see whether this works or not so far:
def main(): input = "+- */" lexer = Lexer(input) token = lexer.getToken() while token.kind != TokenType.EOF: print(token.kind) token = lexer.getToken()
If you run this, you should see something like:
TokenType.PLUS TokenType.MINUS Traceback (most recent call last): File "e:/projects/teenytiny/part1/teenytiny.py", line 12, in
main() File "e:/projects/teenytiny/part1/teenytiny.py", line 8, in main while token.kind != TokenType.EOF: AttributeError: 'NoneType' object has no attribute 'kind'
Uhoh! Something went wrong. The only way getToken returns None is if the else branch is taken. We should handle this a little better. Add import sys to the top of lex.py and define the abort function like:
# Invalid token found, print error message and exit. def abort(self, message): sys.exit("Lexing error. " + message)
And replace the else in getToken with:
else: # Unknown token! self.abort("Unknown token: " + self.curChar)
Now run the program again...
TokenType.PLUS TokenType.MINUS Lexing error. Unknown token:
There is still an issue, but now we can make a little more sense of it. It looks like something went wrong after the first two tokens. The unknown token is invisible. Looking back at the input string, you may notice we aren't handling whitespace! We need to implement the skipWhitespace function:
# Skip whitespace except newlines, which we will use to indicate the end of a statement. def skipWhitespace(self): while self.curChar == ' ' or self.curChar == '\t' or self.curChar == '\r': self.nextChar()
Now put self.skipWhitespace() as the first line of getToken. Run the program and you should see the output:
TokenType.PLUS TokenType.MINUS TokenType.ASTERISK TokenType.SLASH TokenType.NEWLINE
Progress!
At this point, we can move on to lexing the operators that are made up of two characters, such as == and >=. All of these operators will be lexed in the same fashion: check the first character, then peek at the second character to see what it is before deciding what to do. Add this after the elif for the SLASH token in getToken:
elif self.curChar == '=': # Check whether this token is = or == if self.peek() == '=': lastChar = self.curChar self.nextChar() token = Token(lastChar + self.curChar, TokenType.EQEQ) else: token = Token(self.curChar, TokenType.EQ)
Using the peek function allows us to look at what the next character will be without discarding the curChar. Here is the code for the remaining operators which work the same way:
elif self.curChar == '>': # Check whether this is token is > or >= if self.peek() == '=': lastChar = self.curChar self.nextChar() token = Token(lastChar + self.curChar, TokenType.GTEQ) else: token = Token(self.curChar, TokenType.GT) elif self.curChar == '<': # Check whether this is token is < or <= if self.peek() == '=': lastChar = self.curChar self.nextChar() token = Token(lastChar + self.curChar, TokenType.LTEQ) else: token = Token(self.curChar, TokenType.LT) elif self.curChar == '!': if self.peek() == '=': lastChar = self.curChar self.nextChar() token = Token(lastChar + self.curChar, TokenType.NOTEQ) else: self.abort("Expected !=, got !" + self.peek())
The only operator that is a bit different is !=. That is because the ! character is not valid on its own, so it must be followed by =. The other characters are valid on their own, but the lexer is greedy and will accept it as one of the multi-character operators if possible.
We can test these operators by updating the input to "+- */ >>= = !=" which should give you the following output when you run the program:
TokenType.PLUS TokenType.MINUS TokenType.ASTERISK TokenType.SLASH TokenType.GT TokenType.GTEQ TokenType.EQ TokenType.NOTEQ TokenType.NEWLINE
The program now accepts all of the language's operators. So what is left? We need to add support for comments, strings, numbers, identifiers, and keywords. Let's work through these one by one and test as we go.
The # character will indicate the start of a comment. Whenever the lexer sees it, we know to ignore all the text after it until a newline. Comments are not tokens, but the lexer will discard all this text so that it can find the next thing we care about. It is also important that we don't throw away the newline at the end of the comment since that is its own token and may still be needed. Fill in skipComment:
# Skip comments in the code. def skipComment(self): if self.curChar == '#': while self.curChar != '\n': self.nextChar()
Easy enough! Now call it from nextToken, such that the first few lines of the function look like:
# Return the next token. def getToken(self): self.skipWhitespace() self.skipComment() token = None ...
Test it out with the input "+- # This is a comment!\n */" and you should see:
TokenType.PLUS TokenType.MINUS TokenType.NEWLINE TokenType.ASTERISK TokenType.SLASH TokenType.NEWLINE
Notice that the comment is completely ignored!
Our language supports printing a string, which starts with a double quotation mark and continues until another quotation mark. We won't allow some special characters to make it easier to compile to C later on. Add the following code to getToken's big block of else if statements:
elif self.curChar == '\"': # Get characters between quotations. self.nextChar() startPos = self.curPos while self.curChar != '\"': # Don't allow special characters in the string. No escape characters, newlines, tabs, or %. # We will be using C's printf on this string. if self.curChar == '\r' or self.curChar == '\n' or self.curChar == '\t' or self.curChar == '\\' or self.curChar == '%': self.abort("Illegal character in string.") self.nextChar() tokText = self.source[startPos : self.curPos] # Get the substring. token = Token(tokText, TokenType.STRING)
You'll see the code is just a while loop that continues until the second quotation mark. It'll abort with an error message if any of the invalid characters are found. Something different from the other tokens we have covered so far: we set the token's text to the content of the string (minus the quotation marks).
Update the input again with "+- \"This is a string\" # This is a comment!\n */" and run the program:
TokenType.PLUS TokenType.MINUS TokenType.STRING TokenType.NEWLINE TokenType.ASTERISK TokenType.SLASH TokenType.NEWLINE
Moving right along to numbers. Our language defines a number as one or more digits (0-9) followed by an optional decimal point that must be followed by at least one digit. So 48 and 3.14 are allowed but .9 and 1. are not allowed. We will use the peek function again to look ahead one character. Similar to the string token, we keep track of the start and end points of the numbers so that we can set the token's text to the actual number.
elif self.curChar.isdigit(): # Leading character is a digit, so this must be a number. # Get all consecutive digits and decimal if there is one. startPos = self.curPos while self.peek().isdigit(): self.nextChar() if self.peek() == '.': # Decimal! self.nextChar() # Must have at least one digit after decimal. if not self.peek().isdigit(): # Error! self.abort("Illegal character in number.") while self.peek().isdigit(): self.nextChar() tokText = self.source[startPos : self.curPos + 1] # Get the substring. token = Token(tokText, TokenType.NUMBER)
Test it out with the input "+-123 9.8654*/" and you should see:
TokenType.PLUS TokenType.MINUS TokenType.NUMBER TokenType.NUMBER TokenType.ASTERISK TokenType.SLASH TokenType.NEWLINE
Great, we are almost done with the lexer!
The last big thing is to handle identifiers and keywords. The rules for an identifier is anything that starts with a alphabetic characters followed by zero or more alphanumeric characters. But before we call it a TokenType.IDENT, we have to make sure it isn't one of our keywords. Add this to getToken:
elif self.curChar.isalpha(): # Leading character is a letter, so this must be an identifier or a keyword. # Get all consecutive alpha numeric characters. startPos = self.curPos while self.peek().isalnum(): self.nextChar() # Check if the token is in the list of keywords. tokText = self.source[startPos : self.curPos + 1] # Get the substring. keyword = Token.checkIfKeyword(tokText) if keyword == None: # Identifier token = Token(tokText, TokenType.IDENT) else: # Keyword token = Token(tokText, keyword)
Fairly similar to the other tokens. But we need to define checkIfKeyword in the Token class:
@staticmethod def checkIfKeyword(tokenText): for kind in TokenType: # Relies on all keyword enum values being 1XX. if kind.name == tokenText and kind.value >= 100 and kind.value < 200: return kind return None
This just checks whether the token is in the list of keywords, which we have arbitrarily set to having 101-199 as their enum values.
Alright, test identifiers and keywords with the input string "IF+-123 foo*THEN/"
TokenType.IF TokenType.PLUS TokenType.MINUS TokenType.NUMBER TokenType.IDENT TokenType.ASTERISK TokenType.THEN TokenType.SLASH TokenType.NEWLINE
And what the output looks like from the terminal:
There we have it. Our lexer can correctly identify every token that our language needs! We have successfully completed the first module of our compiler.
If you think this is underwhelming, don't give up yet! I think the lexer is actually the most tedious yet least interesting part of compilers. Next up we will parse the code, that is make sure the tokens are in an order that makes sense, and then we will emit code.
The source code so far can be found in its entirety in the Github repo.
Continue on to part 2 of this tutorial (SOON!). Other recommended reading: | http://web.eecs.utk.edu/~azh/blog/teenytinycompiler1.html | CC-MAIN-2020-24 | refinedweb | 2,933 | 60.72 |
So is there any way to exclude result prefixes in XSLT 1.0? The documentation does discuss exclude result prefixes attribute for XSLT 1.0 On Tue, Jan 7, 2014 at 1:38 PM, Graydon <graydon@xxxxxxxxx> wrote: > On Tue, Jan 07, 2014 at 11:29:01AM -0800, Martin Holmes scripsit: >> On 14-01-07 11:26 AM, Graydon wrote: >> >Especially since #all is a keyword for _modes_, not prefixes; if you've >> >got something in your code where a namespace prefix starts with an >> >octothorpe (#) that's not going to work anywhere, prefixes can have the >> >characters of the Name production except for >> >> Surely #all is an allowed value for exclude-result-prefixes? It's in >> the example code here: >> >> <> > > Whups! > > And so is #default. > > Though both are still strictly XSLT 2.0 and thus incomprehensible to > Xalan. Which probably explains why Xalan is looking for a #all prefix, > under the circumstances. > > -- Graydon | http://www.oxygenxml.com/archives/xsl-list/201401/msg00031.html | CC-MAIN-2018-05 | refinedweb | 154 | 62.27 |
int[] x = new int[10]; Scanner scan = new Scanner(System.in); for(int i = 0; i < x.length; i++){ System.out.println("Enter number " + (i+1)); x[i] = scan.nextInt(); }
Event-driven programming is different, though. Rather than placing so much control on gathering input, we focus on responding to that input. So basically, when a button is clicked, we write the code to respond. When a key is pressed, or the mouse moved, etc., we respond to those respective events. This is how GUI programming is, as well as a lot of Game Programming. So if the user doesn't select something, then our program never responds. This may not be extremely clear right now, but after we examine the components of an Event-Driven model, I think the differences will become more apparent.
In any Event-driven model, we have three components: the trigger, the listener, and the response. The trigger is usually something the users interact with (like a JButton), but not always (in the case of Swing Timer). Basically, it fires an event. This is where the Listener comes in. As a trigger cannot listen for its own events, we need a separate component to do such. The Listener basically invokes the response.
An analogy for the event-driven model would be a light switch. We have the trigger (the switch), the Listener (the wiring), and the response (the light turning on). In addition, it is there for the user to click at will, but we are not notifying and prompting them to turn on the light.
A basic example illustrating the event-driven model in Java, commented to identify the trigger, listener, and response.
//we have a class which will display a window //and listen for the button clicks public class MyFrame extends JFrame implements ActionListener{ private JButton button = new JButton(); private int count = 0; public MyFrame(){ this.setSize(200,200); this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); this.add(button); //we are telling button that it has a listener //for its ActionEvent, marking the JButton as the trigger //and this object as the listener button.addActionListener(this); this.add(button); this.setVisible(true); } //this is our response //when the listener is notified of an event //being fired, it will invoke this method public void actionPerformed(ActionEvent e){ button.setText("I have been clicked " + (++count) + " times"); } } | http://www.dreamincode.net/forums/topic/179290-event-driven-programming/ | CC-MAIN-2017-22 | refinedweb | 388 | 57.67 |
Section (1) setsid
Name
setsid —
−−fork option.
OPTIONS
−c, −−ctty
Set the controlling terminal to the current one.
−f, −−fork
Always create a new process.
−w, −−wait
Wait for the execution of the program to end, and return the exit value of this program as the return value of setsid.
−V, −−version
Display version information and exit.
−h, −−help
Display help text and exit.
AUTHOR
Rick Sladkey <[email protected]>
Section (2) setsid
Name
setsid — creates a session and sets the process group ID
Synopsis
#include <sys/types.h> #include <unistd.h>)
−1 is returned, and
errno is set to indicate the error.
ERRORS
- EPERM
The process group ID of any process equals the PID of the calling process. Thus, in particular,
setsid() fails if the calling process is already a process group leader.
NOTES
A child created via fork(2) inherits its parent_zsingle_quotesz_zsingle_quotesz) | https://manpages.net/detail.php?name=setsid | CC-MAIN-2022-21 | refinedweb | 143 | 68.16 |
Sub-object for geometric objects. More...
#include <sobject.h>
Inherits dyn_collectable.
Inherited by rtl_groupmem and rtl_object.
List of all members.
This class is an abstract definition of the basic elements of a geometric object. All classes that are renderable should inherit from this class and implement the intersection and normal functions.
[virtual]
Intersects a ray with this object. This object is the default behaviour all objects shoud call this function in their intersect routine. If the default does no return NOHIT then do not continue intersection. This function transforms the ray into object space and calls lclintersect. The bool should be true if this class is a re-hit from a csg class.
Reimplemented in rtl_triangle, rtl_soft::rtl_softvoxel, rtl_polyhedron, rtl_object and rtl_groupobj.
[pure virtual]
Returns the normal at the position of the last successfull intersection calculation.
Reimplemented in rtl_triangle, rtl_soft::rtl_softvoxel and rtl_object.
[virtual]
Returns true if this object intersects with the given axis aligned box. The default is true because this function is used to cull the object from a subdivision of space, true means the object will never be culled.
Reimplemented in rtl_triangle, rtl_polyhedron and rtl_object.
Return the bounding box for this object (in world space).
[virtual]
Returns the material for this sub-object.
Reimplemented in rtl_triangle, rtl_polyhedron, rtl_object and rtl_groupmem.
Returns the flags for this object.
Returns the flags for this object.
[virtual]
Returns the sub-object that was struck by the ray during the last call to intersect. The return of this function should only really differ from this in the case of a grouped object (rtl_groupobj). You should never need to override this function.
Reimplemented in rtl_groupobj.
[virtual]
Returns the object that contains this subobject. If this object is a sub-class of rtl_groupmem then the return will not be valid, otherwise the object cannot be contained and will return 0.
Reimplemented in rtl_groupmem.
[virtual]
Set the bounding box for this object (in object space). This function should not be called after an object has been added to a rtl_world object.
Reimplemented in rtl_object.
[protected]
[static]
On if the illumination model should negate the normal.
[static]
On if the object has a defined bounding box, off by default. All objects must have a defined bounding box if they are to be added to scenes with space subdivision.
[protected]
[protected]
Reimplemented in rtl_superellipsoid.
[protected]
[protected]
Reimplemented in rtl_object. | http://pages.cpsc.ucalgary.ca/~jungle/software/jspdoc/rtl/class_rtl_subobject.html | crawl-003 | refinedweb | 392 | 52.46 |
Just.
With .asmx Ajax services, you create a class in a .asmx file (or associated code behind file), attribute it with the [ScriptService] attribute, reference the .asmx endpoint in the Services section of your ScriptManager control on your .aspx page, and you're off and running. The new WCF Ajax-enabled service support works fundamentally the same way: parameters and results are serialized using a JSON serializer, JavaScript proxy classes are automatically generated when you reference the .svc endpoint with a /js at the end of the URL, and the invocation mechanism on the client stays the same using the familiar asynchronous callback model developers have become accustomed to with .asmx script services. The actual details of creating and referencing the service in Visual Studio 2008, however, is quite different from how .asmx script services work, and because the .asmx script service model is still in place and available, I have seen quite a few developers stick with .asmx instead of shifting their services over to .svc endpoints for familiarity. It's fine if you want to stick with .asmx endpoints, but the web service story at Microsoft is all about WCF these days, and will continue to be so in the future, so if you have already adopted the 3.5 .NET runtime, I highly recommend migrating your script web services over to the WCF model. Which brings me to the main point of this post – there has been a fair amount of confusion about how to create script-enabled WCF web services throughout the betas of 3.5, and a search on the web bring up many misleading results..
If you add a new service named WeatherService using this template, it will do three things for you:
Let's start by removing the DoWork() method in WeatherService.cs and adding in a more appropriate GetForecast() method.
[ServiceContract(Namespace = "")]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public
class
WeatherService
{
static
Random _rand = new
Random();
[OperationContract]
public string GetForecast(string zip)
{
switch (_rand.Next(3))
{
case 0:
return "Sunny and warm";
case 1:
return "Cold and rainy";
case 2:
return "Windy with a chance of snow";
default:
return "Invalid";
}
}
}
Now we're ready to call the service from script, so add a ScriptManager to the page you want to make the service call from, and add a ServiceReference to the Services element:
<asp:ScriptManager
<Services>
<asp:ServiceReference
</Services>
</asp:ScriptManager>
Then wire up some client-side action to initiate the call – here's a sample script and piece of HTML that invokes our service:
<script
type="text/javascript">
function OnGetForecast()
WeatherService.GetForecast($get("zip").value, OnGetForecastComplete, OnError);
function OnGetForecastComplete(result)
$get("weatherResult").innerText = result;
function OnError(result)
alert(result.get_message());
</script>
…
Enter zip: <input
type="text"
id="zip"
/>
<input
type="button"
value="get forecast"
onclick="OnGetForecast()"
/><br
/>
<span
id="weatherResult"></span>
So far, it feels pretty much the same, no? Things begin to feel a bit different if you begin making other changes, however. For example, let's specify a real namespace in our service definition (which was defaulted to an empty string).
[ServiceContract(Namespace = "")]
This actually affects the client-side script proxy that is created – it will now be in the namespace, so we need to adjust our client script accordingly:($get("zip").value, OnGetForecastComplete, OnError);
Note this is quite different from the way .asmx script services worked – there the class namespace was used in the client proxy, not the web service namespace. In .svc script services, the client proxy is always encapsulated in the web service namespace and the namespace of the implementation class never enters the picture. This was rather confusing in earlier releases of the WCF Ajax-enabled template which left the ServiceContract unadorned with a namespace, which meant that the web service lived in the default tempuri.org namespace. In order to reference your web services from the client side proxy, you would have to type tempuri.org.WeatherService… even though the string tempuri.org didn't show up anywhere in your code! (Keep this in mind if you ever see a script-enabled WCF service with an empty ServiceContract attribute).
Ok, the next difference you're bound to run into is also related to namespaces. If you encapsulate your web service class in a namespace (C# or VB namespace this time), you will need to make changes in both the .svc file as well as your web.config file to accommodate the name change. Let's try encapsulating our class in a namespace:
namespace Pluralsight
[ServiceContract(Namespace = "")]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public
class
WeatherService …
}
Now we modify the WeatherService.svc file to reflect the new class (just like we would have if it were a .asmx file):
<%@
ServiceHost
Language="C#"
Debug="true"
Service="Pluralsight.WeatherService"
CodeBehind="~/App_Code/WeatherService.cs"
%>
And finally, we need to make two changes to the web.config which also references the class in the endpoint description:
<system.serviceModel>
...
<services>
<service
name="Pluralsight.WeatherService">
<endpoint
address=""
behaviorConfiguration="WeatherServiceAspNetAjaxBehavior"
binding="webHttpBinding"
contract="Pluralsight.WeatherService" />
</service>
</services>
</system.serviceModel>
The one last thing you might find yourself missing as you migrate from .asmx-based script services to .svc script services, is the handy test page that you see when you access the .asmx endpoint directly from the browser. In fact, if you point a browser to our WeatherService.svc file, you will see a notification that "Metadata publishing for this service is currently disabled", along with a lengthy description of the configuration elements necessary to enable it. Unfortunately there is no auto-generated POST-based test page for WCF services, so you're probably best just using the JavaScript proxy to try invoking the methods as a test. There is a test client available (WcfTestClient.exe) but it is designed to work with a service that has been compiled into an assembly, so it is not easily used with the App_Code model of an ASP.NET Web site.
If you are using Web Application Projects instead of the Web site model, you will have an assembly to test your service from if you like, which may be the topic of a future post...
I run into the same problem as Rama. When I host the service in the same application (using the build in Cassini webserver) everything works fine. But when I, in my case, host the service in it's own process (windows service) I get an 'Access is denied' failure.
Rama, were you able to work arround this?
Is this 'by design'? It must be related to some securtiy setting, but I have already tried to configure the security mode to 'None' at the hosting side.
That's a NICE post. I had been working with Visual Web Developer 2008 Express edition and all goes well. How ever I am not able to make any sense what JSON stands for here. I mean what is the role JSON played in this code. I see only the ASP.Net Ajax and WCF code here and nothing looks to be as JSON!!!
VWD Express is not compiling the asembly. How this is going to affect me while uploading the WCF service to any production grade application??
Thanks
Good introduction. I have a question though, Does WCF handles concurrent client calls?, Let's imagine that the client calls methodA that takes a long time to execute. And before methodA returns, the client calls methodB. It appears that methodB will not be processed until methodA returns. Is there a way to allow for methodA and methodB to be processed concurrently?
I have a question. In this sample it seems the service and client are on same virtual folder Path="~/WeatherService.svc". How does this works when my service is at for eg..
Rakesh
Pingback from ASP.NET MVC Archived Buzz, Page 1
"Access Is Denied", have you set the default document in IIS to point to an .svc when a Default.aspx isn't present? Just curious. I've run into this, as well.
Hi,
Excellant post, I'm just tinkering about with jquery, json and WCF. Its really interesting and can develop some really powerful UI's
My concern is though how to secure calls to the WCF service? For example, an add event is triggered and the data needs to be stored in the main db but say some one has hijacked the json and altered it... the wcf service would just process it as long as it deserialised into the object.
Is there a secure or proper way to implement jquery json and WCF?
Hi,
I have implemented this and its working fine. Its working if the service and the client is in the same site. But if the service and the client are in the different site then its not able to fine the method defined in the service. though i have added
<asp:ServiceReference Path="localhost/.../mathservice.svc />
anup@asterglobalservices.com
Please help.
Thanking
Very nice post.I am not sure how to implement security. Also it did not work too well in firefox and in a multiclient/multi domain environment. An real attached sample project would have very helpful though. I am still a novice to the jquery, json and WCF technologies, I will follow your articles in future. THanks | http://www.pluralsight.com/community/blogs/fritz/archive/2008/01/31/50121.aspx | crawl-002 | refinedweb | 1,538 | 57.37 |
islower()
Test a character to see if it's a lowercase letter
Synopsis:
#include <ctype.h> int islower(lower() function tests for any lowercase letter a through z.
Returns:
Nonzero if c is a lowercase letter; otherwise, zero.
Examples:
#include <stdio.h> #include <stdlib.h> #include <ctype.h> char the_chars[] = { 'A', 'a', 'z', 'Z' }; #define SIZE sizeof( the_chars ) / sizeof( char ) int main( void ) { int i; for( i = 0; i < SIZE; i++ ) { if( islower( the_chars[i] ) ) { printf( "Char %c is a lowercase character\n", the_chars[i] ); } else { printf( "Char %c is not a lowercase character\n", the_chars[i] ); } } return EXIT_SUCCESS; }
produces the output:
Char A is not a lowercase character Char a is a lowercase character Char z is a lowercase character Char Z is not a lowercase character
Classification:
Last modified: 2014-06-24
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/i/islower.html | CC-MAIN-2014-42 | refinedweb | 150 | 56.15 |
Controlling Devices with Relays>
Listing 1. ar-2.c
#include <sys/types.h> #include <sys/ioctl.h> #include <fcntl.h> #include <errno.h> #include <stdlib.h> #include <unistd.h> #include <stdio.h> #include <signal.h> /* Main program. */ int main(int argc, char **argv) { int fd; int sleep_time; /* < 3) { fprintf(stderr, "Usage: ar-2 <device> " "<bits-to-set> <hold-time>\n"); exit(1); } /* Open monitor device. */ if ((fd = open(argv[1], O_RDWR | O_NDELAY)) < 0) { fprintf(stderr, "ar-2: %s: %s\n", argv[1], sys_errlist[errno]); exit(1); } /* Get the bits to set from the command line. */ sscanf(argv[2], "%d", &set_bits); /* get delay time from command line. */ sscanf(argv[3], "%d", &sleep_time); ioctl(fd, TIOCMSET, &set_bits); sleep(sleep_time); close(fd); } | http://www.linuxjournal.com/article/6693?page=0,1&quicktabs_1=2 | CC-MAIN-2014-10 | refinedweb | 119 | 56.93 |
29 September 2006 17:33 [Source: ICIS news]
LONDON (ICIS news)--Sabic’s $700m petrochemicals deal with Huntsman highlights the way in which the chemicals landscape is being re-drawn.
The power base in petrochemicals is shifting, faster than some realise. Companies without the size or access to advantaged feedstock are having to seek new models for growth if they want to continue to play on the global scene.
Huntsman president and chief executive officer, Peter Huntsman, rightly called the $700m (€550m) divestment of ?xml:namespace>
Speaking to ICIS news, Huntsman said that a number of strategic players were interested in his company’s
When completed the changes at Huntsman would represent the biggest transformation in
The realignment will mean that the company’s natural gas purchases will drop by more than 90%; its purchases of natural gas liquids and naphtha by between 90% and 95%. Huntsman will see its capital expenses cut almost in half.
Re-focusing mid-sized companies on customers and technology is the name of the game across much of the sector. Huntsman has built a sizeable diversified business, for instance. And when it exits bulk petrochemicals – it will keep TiO2 – it will be a $10bn player.
The company is also expanding geographically. It has major businesses in
Companies growing into new markets, whether geographically or technically, face different challenges compared with those wedded to the chemical markets upstream.
Sabic has become a major player in petrochemicals in only a short period and as chief executive Mohamed Al-Mady says, has developed the experience to do business overseas.
Sabic wants a lot from its European business – it expects growth of between 5% and 10% a year Al-Mady said in an ICIS Radio interview. The company also wants to drive away from ethane chemistry and introduce more sophisticated products into the portfolio.
Sabic is on a path towards becoming a more diversified petrochemicals company as well as an international powerhouse. It has looked at acquisition targets in the
Growth and realignment are twin themes in chemicals and reflect the changing face of the sector. As assets change hands the ambitions of key industry players change with | http://www.icis.com/Articles/2006/09/29/1095026/INSIGHT-Sabic-and-Huntsman-play-to-advantage.html | CC-MAIN-2014-35 | refinedweb | 360 | 50.67 |
One of the more complicated concepts to get your head around as a new programmer is classes and objects. Once you know how to use classes in Python though, you will be ready to build significantly more powerful and complex code.
Also read: What is object oriented programming?
Read on to learn how to use classes in Python, and when you should!
Introducing classes in Python
For those that are unfamiliar with the concept of classes and who want to learn more about how they work, keep reading. If you just want the syntax for classes in Python, you can skip to the next section!
So, what is a class? A class is a piece of code that describes a “data object.” This is an object just like you find in the real world, except that it has no tangible presence: it only exists in concept!
Like real objects though, data objects can have properties (size, weight, height, number of lives, speed), and they can have functions (move forward, jump, turn up the heat, delete).
In a computer game, for instance, a bad guy could be described in the code as a data object. This would keep track of how much health the bad guy had, where it was in relation to the player, and how aggressive it would behave. We could then call a bad guy’s “shoot” function to fire projectiles, or their “destroy” function to remove them from the game.
(Except that we call functions “methods” when they appear inside classes in Python!)
You’d then simply use your graphics routines to draw those bad guys to the screen, based on the information provided by the class.
When to use Python classes
If you know how to use variables in Python, this works similarly: except instead of storing one piece of data as an integer, you are storing custom information about an object you conceived.
Also read: How to use strings in Python
The great thing about classes in Python, is that they can create multiple “instances” of a single thing. That means we only need to write one “BadGuy” class in order to create as many individual bad guys as well like!
What else might you use classes in Python for? A class could be used to describe a particular tool within a program, such as a score manager, or it could be used to describe entries in a database of clients. Any time you want to create lots of examples of the same “thing,” or any time you want to handle complex code in a modular and easily-exported fashion, classes are a great choice.
How to use classes in Python
So, now you know what the deal with classes is, you may be wondering how to actually use classes in Python.
Getting started is relatively simple, got to love Python! You will create a class in just the same way you create a function, except you will use “class” instead of “def.” We then name the class, add a colon, and indent everything that follows.
(Note that classes should use upper-case camel case to differentiate them from variables and functions. That means “BadGuy” and not “badGuy” or “bad_guy.”)
Also read: How to define a function Python
So, if we wanted to create a class that would represent an enemy in a computer game, it might look like this:
class BadGuy: health = 5 speed = 2
This bad guy has two properties (variables) that describe its health and its movement speed. Then, outside of that class, we need to create a BadGuy object before we can access those properties:
bad_guy_one = BadGuy() print(bad_guy_one.health) print(bad_guy_one.speed)
Note that we could just as easily create a bad_guy_two and a bad_guy_three, then show each of their properties!
bad_guy_one = BadGuy() bad_guy_two = BadGuy() print(bad_guy_one.health) print(bad_guy_two.health) bad_guy_one.health -= 1 print(bad_guy_one.health) print(bad_guy_two.health)
Here, we have changed the value of one bad guy’s health, but not the other! We have edited one instance of the bad guy.
Understanding instances
In order to really tap into the power of classes in Python though, we need to understand instances and constructors. If you create two bad guys from the same BadGuy class, then each one of these is an “instance.”
Ideally, we might want to create two bad guys with different starting health. Moreover, we might want to alter that health from within the BadGuy class.
To do this, we need a special type of method (function in a class) called a “constructor.”
The constructor is called as soon as you create a new instance of an object (when you “instantiate” the object) and is used predominantly to define the variables as they relate to that specific instance of the object. Though, of course, you can do other things here too: such as sending welcome messages.
So, for example:
class BadGuy: def __init__(self, health, speed): print("A new badguy has been created!") self.health = health self.speed = speed bad_guy_one = BadGuy(5, 2) bad_guy_two = BadGuy(3, 5) print(bad_guy_one.health) print(bad_guy_two.health)
This code creates two bad guys. One is strong but slow (health 5, speed 2), the other is weak but fast (3, 5). Each time a new bad guy is created, a message pops up to tell us that has happened.
The constructor method is always called __init__ and will always have “self” as the first argument. You can then pass whatever other arguments you want to use in order to set-up your object when you first initialize it.
The term “self” simply means that whatever you’re doing is referring to that specific instance of the object.
How to use functions in classes in Python
As mentioned, a function in Python is technically referred to as a method.
We can create methods within a class just as we normally create functions, but there are two different types of method:
- Instance methods
- Static methods
An instance method will only affect the instance of the object it belongs to. Thus, we can use this as a more convenient way to damage individual enemies:
class BadGuy: def __init__(self, health, speed): print("A new badguy has been created!") self.health = health self.speed = speed def shoot_badguy(self): self.health -= 1 print("Ouch!") bad_guy_one = BadGuy(5, 2) bad_guy_two = BadGuy(3, 5) def display_health(): print(bad_guy_one.health) print(bad_guy_two.health) display_health() bad_guy_one.shoot_badguy() display_health()
A static method, on the other hand, is designed to act globally. To make static methods, we remove the “self” argument and instead use the @staticmethod decorator just above the method name.
In the following example, we create a static method to generate a random number, then we subtract this amount from the enemy’s health. The method doesn’t need to specifically relate to the instance of that object, so it can simply act like a normal function we gain access to when we use the class.!") bad_guy_one = BadGuy(5, 2) bad_guy_two = BadGuy(3, 5) def display_health(): print(bad_guy_one.health) print(bad_guy_two.health) display_health() bad_guy_one.shoot_badguy() display_health()
Note that we can also use the following line at any point in our code to get a random number:
print(bad_guy_two.random_generator())
If, for whatever reason, we want to prevent this from happening then we simply need to prefix our method name with a double underscore.
@staticmethod def __random_generator():
This is how to create a private method in Python, and it will prevent us from accessing the method outside of that class.
Closing up
Finally, the last thing you might want to do is place your class in a separate file. This will keep your code tidy, while also letting you easily share the classes you’ve made between projects.
To do this, simply save the class as it is in a new file:!")
Be sure to give the file the same name as the class. In this case: “BadGuy.py” is the name of the file. It also needs to be saved in the same directory where you save your main Python file.
Now you can access the class and all its properties and methods from any other Python script:
import BadGuy bad_guy_one = BadGuy.BadGuy(5, 2) bad_guy_two = BadGuy.BadGuy(3, 5) def display_health(): print(bad_guy_one.health) print(bad_guy_two.health) display_health() bad_guy_one.shoot_badguy() display_health()
And there you have it! That’s how to use classes in Python! This is an extremely valuable skill and one that will allow you to build all kinds of amazing things in future.
At this point, you are probably ready to take your skills to the next level. In that case, why not try one of these amazing online Python courses:
Coding with Python: Training for Aspiring Developers will provide you with a comprehensive introduction to Python that will take you from the basics of coding to high-level skills that prepare you for a career in Python development. This course usually costs $690 but is available to Android Authority readers for just $49!
Alternatively, you can see how classes fit into the big picture by checking out our comprehensive beginners’ guide to Python: | http://www.nochedepalabras.com/how-to-use-classes-in-python.html | CC-MAIN-2020-40 | refinedweb | 1,519 | 71.04 |
NAME
ksql_cfg_defaults—
set defaults for a ksql configuration
LIBRARYlibrary “ksql”
SYNOPSIS
#include <sys/types.h>
#include <stdint.h>
#include <ksql.h>void
ksql_cfg_defaults(struct ksqlcfg *cfg);
DESCRIPTIONThe
ksql_cfg_defaultsfunction initialises cfg with useful defaults: the
KSQL_EXIT_ON_ERRand
KSQL_SAFE_EXITflags are set, which means that any errors in using database routines will trigger an exit; and upon exiting in such a state, the database will be properly cleaned up. The
ksqlitemsg() and
ksqlitedbmsg() functions are set as error message loggers. These output to
stderrthe full error message and error code for both regular and database errors. The struct ksqlcfg structure consists of the following:
- void *arg
- The private argument passed to err and dberr.
- ksqldbmsg dberr
- A function that will be invoked upon a database error, for example, if sqlite3_step(3) does not return an
SQLITE_DONEor
SQLITE_ROWcode.
- ksqlmsg err
- Supply a function that will be invoked upon a non-database error, for example, memory allocation.
- unsigned int flags
- A bit-field which may consists of
KSQL_EXIT_ON_ERR, which causes the system to exit(3) if any database errors occur;
KSQL_FOREIGN_KEYS, which causes the database to be subsequently opened with foreign key support; and
KSQL_SAFE_EXIT, which causes the library to register an atexit(3) hook to free the database if it hasn't be freed prior to exit. The
KSQL_SAFE_EXITflag will also cause the
SIGABRTand
SIGSEGVsignals to be caught and siglongjmp(3) into an exit handler, which will then close out open databases.
- struct ksqlroles roles
- Role-based access control configuration. Roles map a caller role to stored statements in stmts and set the availability of further role transition with ksql_role(3).
- struct ksqlstmts stmts
- Structure containing stored statement information. If stored statements are provided, ksql_stmt_alloc(3) and ksql_exec(3) will only draw from the stored statements.
ksqlmsg(void *arg, enum ksqlc code, const char *file, const char *msg);, with arg being the private argument, argc being the error code in question, the database file (which may be
NULL), and msg being an ASCII string describing the error (in English). The ksqldbmsg function is void
ksqldbmsg(void *arg, int sqlerr, int sqlexterr, const char *file, const char *msg);, which also has the sqlerr and sqlexterr SQLite error and extended error code, and and the SQLite string error message msg. The stmts variable configures stored statements. These provide an extra measure of security for ksql_alloc_child(3) contexts where the protected child process manages pre-set SQL statements that cannot be changed by the caller. It contains the following:
- const char *const *stmts
- An array of SQL statement strings, none of which may be
NULL.
- size_t stmtsz
- The number of entries in stmts.
- struct ksqlrole *roles
- The role array. Each struct ksqlrole entry consists of roles, a list of possible roles that may be subsequently set with ksql_role(3) from the current role; flags, a bit-field consisting only of
KSQLROLE_OPEN, which indicates that the role may open databases; and stmts, a list of all possible statements. The index of a statement in stmts and the role in roles corresponds to the id passed to ksql_stmt_alloc(3) and ksql_exec(3). If it zero if false (the role may not execute the statement, or the role may not be entered from the given role), non-zero if it may.
- size_t rolesz
- The length of roles.
- size_t defrole
- The index of the default role set upon ksql_alloc(3) or ksql_alloc_child(3).
EXAMPLESIn this simple example, a default configuration is extended with stored statements, then the connection is started in split-process mode and the caller sandboxes. (The sandboxing is only available on OpenBSD.) For brevity, no error checking is performed.
struct ksqlcfg cfg; struct ksql *sql; const char *const stmts[] = { "INSERT INTO test (foo) VALUES (?)", "SELECT foo FROM test" }; ksql_cfg_defaults(&cfg); cfg.stmts.stmts = stmts; cfg.stmts.stmtsz = 2; sql = ksql_alloc_child(&cfg, NULL, NULL); pledge("stdio", NULL); | https://kristaps.bsd.lv/ksql/ksql_cfg_defaults.3.html | CC-MAIN-2021-21 | refinedweb | 637 | 61.97 |
16 minutes ago, DeadX07 wrote.
Which is terrific. What I do not understand is the shift to native over managed. I thought the thinking on performance had been that the next generation of hardware would solve that problem.
And I do not follow why the JITer and GC the .NET framework cannot smply be improved to address performance problems. I am impressed with how quickly the shell namespace extension I wrote recently in C++ starts up. The preview handler I wrote starts instantly. But why would a .NET version have to be slower? Can there be prestarted .NET processes, which somehow are able to latch onto a set of preloaded .NET assemblies at process start ( if assembly loading is the cause of slow starting .NET apps. )? | http://channel9.msdn.com/Forums/Coffeehouse/has-Anders-commented-on-WinRT/11e474911cbd4aa89db8a0aa0159fa72 | CC-MAIN-2014-35 | refinedweb | 126 | 78.55 |
I've been using Windows Forms for the last 8 years or so and, while it has its quirks, it's by and large been pretty straightforward to use. However, the one case where
Windows Forms causes me real pain occurs when I need to merge a .resx file from two branches of a source control system. It turns out that the Windows Forms implementation
in Visual Studio reorders (apparently randomly) the elements in the .resx file when changes are made to a form or custom control. While this doesn't affect the behavior
of the form or control (the sequence is irrelevant), it wreaks havoc on any merge/diff tools you are using because every resequencing is treated as a change.
This project is written with .NET 3.5 and has not been tested with any other configurations that support LINQ. If you used this successfully
with .NET 3.0 or .NET 2.0 with SP1, please add a message outlining your experience so that others can benefit from it.
When merging a Windows Forms form or control between two branches of a source control system, the merge tool typically displays many false conflicts
because insignificant changes in element sequences are treated as significant.
This article describes a simple console application that can be used as a pre-comparison conversion to sort the elements in a .resx file by name attribute.
While you could use this filter standalone to modify the .resx files before comparison, many merge/diff utilities can be configured to run the conversion prior
to comparison and merge.
Because both files are sorted in a deterministic way, the merge/diff utility can accurately determine what elements are new and changed without introducing
false conflicts because of circumstantial differences in element location.
I originally thought it would be simplest to implement an XSLT transform to sort the various elements in the .resx XML, but discovered, to my delight,
that I could use LINQ to achieve the same result trivially. The entire code for the project follows:
using System;
using System.Collections.Generic;
using System.Linq;
using System.IO;
using System.Text;
using System.Xml;
using System.Xml.Linq;
namespace SortRESX
{
// Assume two inputs, a source .resx file path and a target .resx file path.
// The program reads the source and writes a sorted version of it to the
// target .resx file.
class Program
{
static void Main(string[] args)
{
// Check parameters
if (args.Length != 2)
{
ShowHelp();
return;
}
try
{
XDocument doc = XDocument.Load(args[0]); // Create a LINQ XML document
// from the source file.
XDocument sortedDoc = SortDataByName(doc); // Create a sorted version
// of the XML
sortedDoc.Save(args[1]); // Save it to the target file
}
catch (Exception ex)
{
Console.Error.WriteLine("Error loading resx file {0}" +
"or error saving it to {1}: {2}", args[0], args[1], ex.Message);
}
return;
}
// Use LINQ to sort the elements. The comment, schema,
// resheader, assembly, metadata, data appear in that order,
// with resheader, assembly, metadata and data elements sorted by name attribute.
private static XDocument SortDataByName(XDocument resx)
{
return new XDocument(
new XElement(resx.Root.Name,
from comment in resx.Root.Nodes() where comment.NodeType ==
XmlNodeType.Comment select comment,
from schema in resx.Root.Elements() where schema.Name.LocalName ==
"schema" select schema,
from resheader in resx.Root.Elements("resheader")
orderby (string) resheader.Attribute("name") select resheader,
from assembly in resx.Root.Elements("assembly")
orderby (string) assembly.Attribute("name") select assembly,
from metadata in resx.Root.Elements("metadata")
orderby (string)metadata.Attribute("name") select metadata,
from data in resx.Root.Elements("data")
orderby (string)data.Attribute("name") select data
)
);
}
// Write invocation instructions to stderr.
private static void ShowHelp()
{
string sExeName = System.Diagnostics.Process.GetCurrentProcess().ProcessName;
Console.Error.WriteLine("Command line format\n{0} <input resx file>
<output resx file>", sExeName);
}
}
}
There's not much to it. The input file is used to initialize a LINQ XDocument XML document. The SortDataByName() method
is called to return a sorted version of the document, and the converted file is saved to the target path. The real work is done
by the SortDataByName() method, which contains a single LINQ statement.
XDocument
SortDataByName()
The LINQ statement constructs a new document with a single root element with the same name as the root element in the source document. The contents
of that element are defined by the six LINQ queries corresponding to the groups of nodes desired in the target:
And that's it. The SortRESX.exe program is now ready for integration with a merge/diff utility.
Many commercial merge/diff utilities provide a mechanism that permits you to pre-process files with specified extensions using an external program before they
are compared or used for merging. By associating SortRESX.exe with .resx file types, the comparison is done against sorted files,
which enables only significant changes to appear. To associate SortRESX.exe with .resx file types for such a tool,
you can copy SortRESX.exe into the installation directory for the utility, then use the instructions provided in the utility to associate
it with the ".resx" file type.
The same .resx sorting code used in this project could probably be used as an add-in to Visual Studio that would perform the sort every
time the .resx file is saved. This alternative approach has the advantage that it enables all diff/merge tools to behave non-pathologically
if all .resx files had been thus sorted. The disadvantage of this approach is that if the merge involved files created before the add-in was adopted,
the merge problem described above would still manifest | http://www.codeproject.com/Articles/37022/Solving-the-resx-Merge-Problem?fid=1541806&df=10000&mpp=25&sort=Position&spc=Relaxed&tid=3070356 | CC-MAIN-2014-10 | refinedweb | 924 | 59.4 |
The!
My friend Zach has this old ASP site over at bodyblog.com. Back in the day, it was a pretty active community of bodybuilders and regular people just getting in shape.
A couple months ago, I agreed to help him dust it off and get it working again, and since the code was pretty bad and ASP is a dead technology (replaced by ASP.net), I advised him to port it to a newer technology.
Then the question was, which one? I've been tinkering with my own little python web framework for years now, and while I'm proud of it, I couldn't in good conscience advise Zach to build his business around it.
If you're building a business you want something approaching an industry standard, because the more people working on and with a technology, the more developers you have available to hire, and the more tools they have to help them work faster.
Java and ASP.net were certainly viable options, but given my own areas of expertise, and the fact that I'd be doing most of the work, it came down to three choices:
Of the three, I had extensive experience with drupal, no experience with django (but over a decade of python experience) and only a vague familiarity with rails (having worked through a book a couple years ago and not touching it since)
It wasn't much of a contest.
Drupal is impressive, but it's a case of bright people making the best of a pretty clunky technology (PHP).
Django is... Well, it's written in python, lots of people seem to like it, and I can't think of one bad thing to say about it. Django is OK.
Rails, on the other hand... Rails sucks. In fact, rails sucks hard. The Railiens make a big fuss about favoring convention over configuration, and of course they picked the most bone-headed, moronic conventions imaginable, like coupling data objects to the database, using pluralized table names, and... and...
And none of those choices make the slightest bit of difference in the long run.
Rails is something people either love or hate with a passion. It has a personality. It's opinionated software, as they say, and it turns out that was a pretty brilliant marketing move.
A year or two when I worked through that rails book, the system I saw left me wondering "so what?" But what rails has going for it is a network effect. It drew in this core of very passionate users (especially corporate users) and they contributed their passion, and their resources, and it just kept growing.
We picked rails because it's reached its tipping point. The industry is behind it, there's a huge freelance developer population, and there's plenty of really great tool support.
Once I got past the culture shock and accepted that the conventions are just a matter of taste, and once I got past all the hassles of installing and configuring the dang thing (I finally gave up trying to get ruby talking to MySQL on my home windows box and just moved everything over to linux) I noticed that I was actually getting a hell of a lot done in not much time at all.
For the longest time, I held on to this idea that I could become some kind of tech leader. Some kind of Prometheus complex, maybe: part of me really wanted to be the one stealing fire from the gods and leading everyone into enlightenment.
But for the past few months... I dunno. I just don't care about that anymore. I don't feel like I have to leave my mark on the world. At least, not as a web framework developer.
So yeah. Long story short: rails won.
python that looks like haskell:
from arlo import declare, _ exec declare('sig let Ord a where y xs x ') qsort = [ sig. qsort == Ord.a >> [a] >> [a] , let. qsort ( [] ) == [] , let. qsort ( _[x:xs] ) == ( _.qsort.lesser + [x] + _.qsort.greater , where ( _.lesser == [ y | y << xs , y < x ], _.greater == [ y | y << xs , y >= x ])) ]
The first line imports a function (
declare) and an object (
_). from the module
arlo.
The second line calls
declare to convert a string of identifiers into a string of python variable declarations, and then runs the generated code.
The remaining lines create a list of three objects, and assign it to a new variable called
qsort.
The point of this program isn't to emulate haskell in python, or even to implement quicksort, but rather to demonstrate how
arlo lets you hack the python syntax to express interesting ways.
expressions that look like statements:
from arlo import declare, _ exec declare("CLASS YIELD RETURN SET DEF SELF INIT") print repr( CLASS. MyClass ( _.object ) [ DEF. INIT (SELF, _.name ) [ SET. SELF.name == _.name ], DEF. hello (SELF, _.name) [ YIELD. _("hello %s") % SELF.name ]])
The first two lines are the same as before, except for the names being declared.
The rest of the program builds up a single object and then prints out a string representation of that object. It looks like this (except scrunched together on one long line):
_.CLASS.MyClass(_.object)['_.DEF.INIT(_.SELF,_.name) ['(_.SET.SELF.name == _.name)'],_.DEF.hello(_.SELF, _.name)["(_.YIELD._(_('hello %s')) % _.SELF.name)"]']
What's interesting is that if you loaded that output into a python string, evaluated it with python's
eval() function, and then printed the resulting representation, you'd get that same output
again.
the reification rule:
# for any arlo.Expr object s: assert repr(eval(s)) == s
In other words,
arlo lets you create
python expressions that model the syntax used to create them.
Why would you want to do this? Well, as you can see from the examples above, by stripping the meaning from the various python operators, you can create your own mini-languages on the fly, without needing to write your own parser.
arlo at the prompt:
You can also combine these expression objects directly at the python prompt, and switch back and forth between the expression language and pure python. For example, notice how python immediately evaluates 2 + 3 in the fourth line below, whereas the other addition operations get quoted into the expression object.
>>> x = _.x >>> x _.x >>> x + 1 (_.x + _(1)) >>> x * ( 2 + 3 ) + 2 ((_.x * _(5)) + _(2)) >>> x.y() + _.z (_.x.y() + _.z)
Another way of saying this is that
arlo
is a generic combinator library for building expressions.
Combinator libraries are common in functional
languages like haskell, but they're not exactly
new to python. For example, pyparsing uses the approach to build parsers, while Stan overrides method invocation (
x(y)) and subscripting (
x[y]) to provide a pure-python syntax for building XML documents,
and the SQLAlchemy expression language uses the combinator concept for building SQL queries.
What is new (at least as far as I know) is the idea of creating a generic combinator library for expressions.
We've already seen that calling
repr() on
these objects produces the python syntax necessary to recreate them. But if you call
str(), you get the equivalent python expression, which you can then pass to
eval():
Sort of like a
lambda:
>>> y = _.m * _.x + _.b >>> y # repr() ((_.m * _.x) + _.b) >>> print y # str() ((m * x) + b) >>> m, x, b = 1, 2, 3 >>> eval(str(y)) 5 >>> x = 4 >>> eval(str(y)) 7
As you can see, you can use
arlo and
eval() as an alternative to defining named functions or using
lambda.
About two years ago, I wrote a post called "what python looks like naked" that demonstrated how you could remove all of python's control structures from a program and replace them with functions and delaying every expression by wrapping it in a
lambda.
If you can replace python's control structures, that
means you can create your own control structures. For example, you could use the technique to add a prolog-style inference engine to python. Then, in addition to simple propositional logic (
if/
elif/
else and the boolean operators), your program could make choices based on logical deduction from a set of facts and rules.
Of course you can implement these things in python
already, or call foreign libraries, but it would be nice
to have these things as first-order objects in the language, and that's what a reification technique like wrapping everything in
lambda gives you.
The only problem is,
lambdafied python is an ugly, ugly mess.
The combinator appproach goes a long way to solving the syntax problem, and with
arlo, python developers can now use a generic syntax for building combinator libraries.
Anyway, this is an idea I've been kicking around for a long time, and now that it's working, I'd love to hear what other folks have to say about it.
try it out!
(it's open source under a python-style license)
You can browse or download the code from the Trac links below:
You might also want to look at wherewolf, a simple language
for building expressions for matching against tables of data. Thee expressions can either be evaluated directly in python for querying against data in memory, or compiled down to SQL
WHERE clauses (like a very tiny version of SQLAlchemy).
I've been sick in bed with the flu all week. Last week I was in Texas with my family. Given past history, my world should basically be upside down right now.
I do have a larger-than-normal backlog of stuff to do right now, but I actually feel like I'm on track. Once I was finally able to breathe and sit up this morning, I got to work, and I expect to be caught up completely within the next day or two.
That's a big change. In years past, the backlog (especially with email) would persist for days or even weeks.
Forcing myself to sit down and write The Flake Effect last month was one of the smartest things I've ever done. I didn't really come up with a book so much as a long, rambling journal focused on productivity. But I covered a lot of ground, and I verbalized, clarified, or outright solved a bunch of problems that had been holding me back for years.
It's going to be a long time before I have a second draft worthy of showing off to anyone, but I do want to share some of the key insights I gained, especially in the context of planning the next year.
First off, I hit the official nanowrimo wordcount in the first week. I thought I was aiming for 100,000 words but it turns out it was only 50,000. It's not really a novel. More like narrative nonfiction.
Second, part of the way I was attempting to handle "characters" was to talk about ideas with my friends in chat and upload the logs (I stripped out the repeating names from my wordcount totals). That was really the first time I tried to explain my ideas to my friends directly... At least in a long time, and when they weren't explicitly asking for help with something. It made for some interesting conversations and I think it helped me understand my friends a little better too.
Did I actually learn anything new about productivity and getting things done?
I feel like I did, but right now it's hard to explain concisely. I certainly compiled a bunch of techniques, like:
I also am going to have quite a bit to say about attention management in general, especially with regards to the idea that attention is fractal.
The fractal metaphor is only going to go so far, but there are a lot of parallels between the questions "how long is a coastline?" and "how much attention can you pay to something in an hour?" In both cases, the answer keeps increasing the closer you look.
In other words, attention isn't really measured in time directly but something like details per second, square-inch-seconds of brain activity, or maybe flops if you're talking about a cpu. We don't really have tools to measure it outside of MRI lab (though several portable consumer EEG machines are close to market, and they come a little closer), but the point is attention management and time management are two totally different things.
Also, you can't make more time but you can create more attention through the miracle of neuroplasticity.
But anyway... There'll probably be a whole chapter on that, so no point rewriting it here.
The other major insight that came to me is the distinction between two types of problems. This kind of fits in with the fractal idea, though at the moment, the idea is still very fuzzy in my mind. In fact, it may not even be a distinction between problems but between problem solving methods.
When I say "deep" I'm picturing a deep canyon that has been carved by millions of years of water flowing over the same course. An example would be mastering a specific skill like aikido or playing poker or painting a picture, or developing a framework to solve a particular programming problem. Or building a fully automated business.
Or for example, I sometimes make the distinction between writing to explain things to myself. It doesn't matter if I ramble and take 10 pages to get there because the only thing that matters is that the problem winds up solved in my own head. It's like casting a wide net and hoping to catch something good.
But if I want to explain the idea to someone in particular, I need to go back and harvest that idea. Rewriting it, simplifying the mental path for my audience and filling in the intuitive leaps.
Anyway, I'm a whole lot better at the "wide" approach than at the "deep" approach. I wind up knowing a little bit about a lot of different things, but I don't have any real expertise.
That means I can look at a new problem from a bazillion different points of view, and come up with a solution rather quickly, but then I have trouble actually implementing a solution. (Which, when combined with a tendency to follow lots of little tangents means I wind up with a whole slew of unimplemented ideas - the very definition of the flake effect.)
Anyway, so I see now that I could benefit tremendously from working on the "deep" habits - daily practice, refinement, continuous improvement, etc.
Well, first of all, I want to put the ideas into practice in my own life. I've made a start on that, but these are things that need to be habits, not just one-time exercises.
Since this is "deep" stuff and I suck at that, there will probably be a lot more to write along those lines. In fact, I may wind up writing enough for a second book before I figure out how to tame the first book.
But I do want to tame the book. Go back and refactor the ideas. Create an actual organizing structure.
And then what? I don't know. Blog about it. Maybe sell it as an ebook. Maybe work with some clients or start an online community so I'm sure I have a system that works for other people too.
I feel like I've got a complete idea now, and it could be a book as-is, but I don't really know if it's useful to anyone besides me, and I also don't know if it'll continue to be useful to me in the months ahead without further improvement. (I need to test it against a requisite variety of situations, as my friend Leslie might say.)
Anyway, my immediate goal is to use the tools I have now to get caught up on my various responsibilities, and then start working on some goals and plans for next year.
Well, I'm finally doing it.
I didn't want to say anything because I didn't really know whether or not I'd I'd follow through, and I sort of felt guilty about spending time on this when there's so much else I'm supposed to be doing for other people.
But it turns out I am following through (in fact, I'm way ahead of the already aggressive schedule I set for myself)...
And also, it's helping me sort through the problems that have kept me from delivering for my clients, customers, and even myself.
Long story short: I signed up for NaNoWriMo, and I'm finally writing The Flake Effect - the book about getting things done that I promised myself I'd write some day, whenever I figured out how to get things done.
Since I don't really know how to get things done (despite reading countless books and struggling with the problem for years), that's the central conflict of the book. It's about problem solving, bootstrapping, neuro-linguistic programming, creativity, the theory of constraints, running a business, and basically all the stuff I normally write about anyway. :)
I posted the rough first chapter. It's 10,000 words and full of my usual rambling. It doesn't really fit with the rest of what I've written, and I'm not planning on posting any more text until it's done, but you can read it and my little status blog on the writing effort over at flakeeffect.com.
"There's way too much information to decode the Matrix. You get used to it, though. Your brain does the translating. I don't even see the code. All I see is blonde, brunette, redhead. Hey uh, you want a drink?"--Cipher
This is the kind of thing my brain comes up with when I forget to take my vyvanse.
I will live each day cheerfully and with a sense of adventure...
Thus begins my personal mission statement - the document designed to guide my every decision.
And yet... I have to admit I have not been very cheerful lately.
If you (like me) [ heh heh heh ] need a reminder of what it means to be cheerful, please direct your attention to the following lecture by rock star and particle physicist, Dr. Brian Cox:
I'm just rambling here. You can tell by the way there's no little hand drawn picture on this post.
Which is actually funny, because what I'm rambling about is Quality, which is largely a byproduct of having a Process (love the capital letters) and the real gist of this post is that by sticking to your Process your quality goes up and your headaches go down.
So I've been doing client work, and part of what I'm doing is trying to help fix the broken development process in an otherwise awesome company.
I've been working long hours. I have the dark circles under my eyes. It's 10:47pm right now and there's stuff I ought to be doing for the morning and it's probably not going to get done.
Basically, a product got launched before there was any e-commerce software to back it up. The shopping cart wasn't finished. The code to talk to the fulfillment center hadn't been written. There were no automated tests to be found. I spent several hours the first night manually diffing and merging files into a new CVS repository because there were multiple people working on the site and they didn't even have version control set up.
No... Actually it was worse than that. The company has a version control system but the developers in charge refused to use it.
Anyway, I've been doing a lot of inter-department diplomacy the past couple weeks. Everything's in subversion now, but we're still putting out fires on a daily basis.
I found out today I started one of those fires. I rushed some code out the door because we were in such a hurry and it had to be done right now and so I wrote this really fancy SQL query.
The first time it came back, I found out it was grouped wrong, so I fixed that. Only it was a quick fix because I was overwhelmed with all the other fire-fighting work.
That was a week ago. Today it comes back again, after people have acted on the data, and it turns out there was a massive bug in my SQL:
select if(some_string, use_this, else_this) ...
Do you see the bug? I sure didn't. And I read the if() docs right before I wrote the query.
Turns out that strings are evaluated as false, regardless of their content. (In python and many other languages, the empty string "" is false and all other strings are true)... So basically the "use_this" value never got used.
I'm not ashamed of that mistake. The dang query still looks right to me.
But I am ashamed that I let myself get caught up in the frenzy and shipped mission-critical code with only a cursory visual inspection.
All along, I've been thinking: Let's just get through this launch, and then we can start doing things right.
But after today?
After today, I'm honestly worried that these problems are just going to keep on going. We're so busy fighting fires that we're not doing anything to prevent new ones.
Hell, I'm already worried about the next launch slipping. Everyone seems to think we have plenty of time. But I think we're three weeks late unless we make some massive changes NOW, and this is a project we haven't even started yet.
What's really funny is that on everything else - like the ability to change processes and fix communications problems - I've been scoffed at for being overly optimistic. :)
Anyway, I made the decision today that this has to end. From now on, we write unit tests for every line of production code we create.
That is absolutely the right decision.
And yet I felt like a complete jerk today, telling the guy that just worked overtime 4 days in a row on a poorly spec'd app we absolutely had to have right now that no, we can't ship this after all until we go back and build unit tests.
So this is management, huh?
Actually, I'm having fun.
It's stressful and time consuming, but everyone's been remarkably understanding that it's the process at fault, not individual people. It really is a neat company.
Anyway, I just needed to vent here... I guess I cheered myself up. Optimism comes in handy sometimes. :)
Back to work.. :/
Here's a screenshot of the client site I've been working on:
The site is American Program Bureau. They manage the speaking engagements for lots of big name celebrities. So if you want Dan Rather or Alan Alda to drop by your next meeting, they're the guys to call.
I wish I could take credit for the design, but I didn't have much to do with that. I built the drupal backend, cleaned up and extended some actionscript for the various flash widgets, and got their old data into the system.
Anyway, we just brought it live, so I figured I'd share. :) | http://withoutane.com/feed.atom | crawl-002 | refinedweb | 3,969 | 71.04 |
import random import time badmStudentsF = ["FNAME1","FNAME2","FNAME3","FNAME4","FNAME5"] badmStudentsM = ["MNAME1","MNAME2","MNAME3","MNAME4","MNAME5","MNAME6","MNAME7","MNAME8", "MNAME9", "MNAME10","MNAME11"] random.shuffle(badmStudentsM, random.random); random.shuffle(badmStudentsF, random.random); print "HackLab Groups : " + time.strftime("%Y-%m-%d %H:%M") print "++++++++++++++++++++++++++" for i in range(0,len(badmStudentsM),1): if i == 4 or i == 8 or i == 12 or i == 16 : print "------------------------------" for i in range(0,len(badmStudentsF),2): if i == 4 or i == 8 or i == 12 or i == 16 : print "------------------------------"# print badmStudentsF[i] print badmStudentsM[i] print "++++++++++++++++++++++++++"
I am trying to make a script that when it runs it will make 4 groups of 4.
Each group must have atleast one female name 'FNAME' in each print out.
Thank you for the help in advance. | https://www.daniweb.com/programming/software-development/threads/503029/random-generator-python | CC-MAIN-2018-26 | refinedweb | 130 | 60.24 |
Wikibooks:Manual of Style
From Wikibooks, the open-content textbooks collection
The Wikibooks Manual of Style is intended to describe the ideal construction of textbooks on Wikibooks in order to establish consistency between books and thus to improve usability of Wikibooks as a whole.
[edit] Titling
Wikibooks aims for a certain method of titling module pages, depending on the aspect of the book. The aspect of a book can be said to be the target audience and/or aims of the book. Several books on one topic but with differing aspects can exist (see Wikibooks:Forking policy for details).
On Wikibooks aiming for a generic aspect, that is, a wide-ranging aspect, or alternatively, a book with an aspect similar to a print textbook, a generic title may be used. For example, a book on woodwork aimed at the general public may be entitled "Woodworking", or, a book aimed at woodworking students may also be entitled "Woodworking". A book on financial mathematics may be entitled "Actuarial studies".
For books aiming for a narrower aspect, for example, a mathematics text for commerce students, a more specific title can be used, so, in the previous example, "Mathematics for commerce" or "Commercial mathematics" may be used.
Some people feel that the casing for the titles should match that of Wikipedia's scheme for titles, viz., the first letter of the title and all proper nouns should be capitalized, and no other letters should be capitalized. Others prefer the use of titlecase for titles; all words are capitalized except for minor words (articles and conjunctions mainly) not appearing at the beginning. Please follow the existing style for the book you are editing.
See Wikibooks:Naming policy for more information on how to and how not to name books and their chapters.
[edit] Structure
Books should follow the following structure.
In accordance with the titling specification, a contents page should be created in the main namespace using the book title. A page should be created with a short introduction (a longer introduction can always be placed elsewhere, see below), a header "== Contents ==", followed by a bulleted list of divisions of the book as a whole. Additional bulleting/indenting can be used to introduce structure. Alternative structuring may also be used.
The way structure is reflected in the module names has been decided, and the results of that decision can be found at Wikibooks:Naming policy.
[edit] Title Pages
A title page is similar to a front and/or back cover of a book. You can create a title page separate from the table of contents to provide a short description of the book's scope. A book's title page should describe the purpose of the book, explain what the book is trying to teach, and the topics that it may cover. In short, the title page should summarize the entire book to the best of your ability. You can also use a cover image for decorative purposes.
If the title page isn't the first page a reader sees and your book is named Book, as per naming conventions, the title page should be named Book/About or Book/Cover.
[edit] Creation of Title Pages
As a rule, a link to the table of contents should be placed on the title page. The title page should also link to a book's specific manual of style. There should also be a link to a list containing the names of all the users actively working on the project - it can be placed in the manual of style, in a foreword or on a separate "Authors" page. This page may also be linked to from the table of contents.
It's good to point to a central discussion page for the whole book, so that contributors have one place for discussing issues that concern the book as a whole.
If a print version or PDF version of your book is available, you can announce it using the {{print version}} and {{PDF version}} templates or just add a normal link to them.
Cover images should be kept to a minimum, and should not dominate your title page. Then, if your book is nominated as a featured book, the cover image is used to represent your book on the main page.
If a book is listed in categories and has interwiki links, all of them should be placed on the title page, not the table of contents.
[edit] Linking Title Pages
If a book has a title page, its table of contents should not contain any interwiki links or categories. You should provide a link to the title page at the top of book's table of contents. This may already exist for books using the slash-convention. If you use custom navigation templates, you should not link to the title page.
External links, like links from bookshelves, should point to the book title page, not the table of contents. book must be categorized. Currently, the card catalog office organizes books by their bookshelf, by their name, by Subject,.
[edit]:
[edit] Internal
Module pages may require structure in the layout of content.
[edit] Linking
Modules using a large amount of substructure should include links in each content page for navigation within the structure. A suggested method is to create a template with the requisite links, and to include this template on the necessary pages.
[edit] Headers
Headers should be used to indicate structure in.
[edit] Content layout
It is suggested that:
- like Wikipedia, the first use of the title words or words that are akin in sentiment to the title words are marked in bold.
- when a new term is introduced, the term is italicised.
[edit] Mathematics
Use HTML/Wiki-formatting when using variables or other simple mathematical notation within a sentence, and use <math></math> tags when using "display" mathematics. If the notation is too complicated for a module where notation is used within a sentence, consider switching to using "display" style (see below).
[edit] HTML guidelines
Italicise variables: a+b, etc. Do not italicise Greek letter variables, nor italicise function names or the brackets.
[edit] Display guidelines
To introduce mathematics notation in a display format, move to the start of the new line, add a single colon, and then the notation within the <math></math> tags. If punctuation follows the notation, do so inside the tags.
For example,
is correctly used for "display" guidelines.
For consistency, if some notation does not render as PNG, force it so by adding a \, at the end of the formula. Ensure that the way the notation is displayed is consistent.
[edit] Footnotes and References
Wikibooks has a really nice way to do footnotes and references. Much better than HTML, or most word processors! First put in a reference in the form:
"This hedgehog will live with us!"[1]
(Click "edit" to see the code for this.) That makes a superscript number that your readers can click on. Then at the end of your file put in the note:
- ^ Chapayev, V.I., "Sesquipedalian Obfuscation in Late Early Middle Wikibooking," Journal of Very Specialized Research, 2005.
Remember the pound sign (#) to make the matching number appear. | http://en.wikibooks.org/wiki/WB:MOS | crawl-001 | refinedweb | 1,194 | 61.36 |
Created on 2008-01-01 16:39 by gvanrossum, last changed 2013-08-22 01:14 by python-dev. This issue is now closed..
__cmp__ is not coming back.
Bumping priority.
Ping
Can someone other than me test and apply this? It seems still relevant,
and __cmp__ is not coming back.
Additionally, there are still lots of references to __cmp__ in the
library which should be ripped out.
Bumping priority even further. This shouldn't make it past rc.
Guido's patch breaks these tests:
test_descr test_hash test_long test_richcmp test_set
Since we are making 3.0 issues deferred blockers dropping the priority.
> Guido's patch breaks these tests:
>
> test_descr test_hash test_long test_richcmp test_set
It looks like all these are easily fixed: all these tests were making
outdated assumptions and needed updating.
Here's a patch that fixes these tests.
One other detail:
In test_descr.py, there are tests for 'overridden behavior for static
classes' and 'overridden behavior for dynamic classes', using test classes
'Proxy' and 'DProxy'; apart from the name change, the 'dynamic' code is
identical to the 'static' code, so I removed it. I guess this had to do
with the __dynamic__ class attribute, which is ancient history, no?
Thanks, Mark! Applied in r66920.
The library still has __cmp__ functions defined here and there, e.g. in
xmlrpc/client.py.
Presumably any nonzero entries for tp_compare in type initializers
should be looked at closely, as well?
I see nonzero tp_compare entries in:
Modules/_tkinter.c
Modules/parsermodule.c
Objects/cellobject.c
Objects/descrobject.c
PC/winreg.c
Objects/setobject.c
(but that last one just raises an error pointing out that cmp can't be
used to compare sets, so maybe that's okay).
Let's lower the priority on this.
cmp() the function is also still there.
Bah. That means we'll have to start deprecating cmp() in 3.1, and won't
be able to remove it until 3.2 or 3.3. :-)
I hope the smiley really indicates a joke...
Well what would you suggest we do?
I'd release 3.0.1 quickly, maybe also with the in-development
improvements to the io library that alleviate the factor-1000 slowdowns.
Since cmp() was documented as removed in whatsnew3.0, it may be fair
game to complete the removal, treating it as a bugfix.
If as Georg suggests it is done *very* quickly in 3.0.1 (say in the next
week or so), it is likely harmless and better than living with a
half-removed feature for the next three years. Also, Georg intimated,
we will likely need to do 3.0.1 almost immediately anyway (the current
state of affairs with IO borders on the unusable.
A quick ballot on #python-dev resulted in 3 of 3 votes for removing
cmp() in a quick bug fix release.
About the io misery:
My quick fix for the buffer reallocation schema has made the situation a
bit better. For the 3.0.x series I like to discuss if we shouldo declare
critical performance fixes as bug fixes and not as new features.
OK, remove it in 3.0.1, provided that's released this year.
Performance fixes are always fair game for bugfix releases.
Please don't "fix" the what's new document (or undo the fix).
I do hope cmp() was already undocumented elsewhere.
Do the API functions PyObject_Compare() and PyObject_Cmp() also go away?
Do we also get rid of the tp_compare type slot? 6 types still use it,
they should convert this to a tp_richcompare slot
Yes, tp_compare should go.
tp.
Either take out the C-API functions or re-define them as:
lambda a, b: (a > b) - (a < b)
r67627 : complete the dedocumenting of cmp().
Amaury, I have a non-metaclass solution for you. Will post on ASPN. It
uses a class decorator to sniff-out *any* defined rich ordering
comparisons and the automatically fills-in those that are missing.
FWIW, the tp_compare slot in setobject.c serves only say the cmp() isn't
defined. It can be ripped-out straight-away.
+1 for a speedy removal of cmp and tp_compare
We shouldn't remove the tp_compae slot in 3.0. That's going to break too
much 3rd party software. Instead of removing it, a deprecation warning
should be printed when the slot isn't 0.
Here is a longer patch that removes cmp, PyObject_Compare and cmpfunc.
The slots has been renamed to tp_reserved. If the slot is used by a type
an exception is raised to notify the user about possible issues.
Missing:
* fix unit tests
* add error checks to PyObject_RichCompareBool calls
* Remove/replace the last PyUnicode Compare ASII function
I'll work on fixing the unit tests if that's helpful.
Can you fix the decimal module and tests? You know more about the module
than me. I'm half through most of the others modules.
Decimal fixes
I've integrated Mark's patch and fixed more tests. Who wants to pick it
up from here?
remove_cmp3.patch adds to Christian's patch to fix 6 more test_failures:
(test_distutils, test_kqueue, test_list, test_sort, test_sqlite,
test_userlist).
On OS X, the only remaining test failure is test_unittest.py.
About unittest:
unittest.TestLoader has an attribute "sortTestMethodsUsing", which it
expects to be an old-style comparison.
Should this attribute be updated (and renamed?) to expect a key function
instead, or left as is?
On Sun, Dec 7, 2008 at 13:26, Mark Dickinson <report@bugs.python.org> wrote:
>
> Mark Dickinson <dickinsm.
The latest patch removes PyUnicode_Compare as well as lots of __cmp__
functions under Lib/. It also renames and redefines
PyUnicode_CompareWithASCIIString(). The simpler
PyUnicode_EqualToASCIIString() function is easier to use, too.
Update patch to include fix for unittest.py and test_unittest.py.
I've created a new branch to work on the issue:
svn+ssh://pythondev@svn.python.org/python/branches/py3k-issue1717. It's
easier to work on a branch than exchanging monster patches.
Please put the PyUnicode_Compare() API back in there.
Removing __cmp__ really doesn't have anything to do with removing often
used helper functions for comparing certain object types and only
cripples the C API in a needless way.
Thanks.
Instead of removing cmp(a, b) and replacing all uses with (a>b)-(b<a) I
think it's better to turn cmp() into a helper that applies this
operation in C rather than Python..
Instead I'm proposing PyUnicode_Equal() which returns -1 for errors, 0
for unequal and +1 for equal. The function follows the semantic of nicely.
About your proposal for cmp(), where should we put the method? I'm -0.5
on builtins.
IMO PyUnicode_Compare() should be replaced by a hypothetical
PyObject_RichCompare(), which allows to take shortcuts when comparing
strings of different length and a Py_EQ or Py_NE comparison is requested.
(see #3106 too)
On 2008-12-09 00:12, Christian Heimes wrote:
> Christian Heimes <lists@cheimes.de> added the comment:
>
>.
For Unicode objects you don't have the issues with general rich
comparisons. Unicode objects only know <, = and >, so the C approach
of using -1, 0, +1 for these results will work just fine.
It doesn't in the general case, where comparisons can return arbitrary
objects.
> Instead I'm proposing PyUnicode_Equal() which returns -1 for errors, 0
> for unequal and +1 for equal. The function follows the semantic of
> nicely.
Yes, but for sorting Unicode (in C) and other quick checks that rely on
lexical ordering you need checks for < and > as well.
It's better to have just one C API than 3 or 5 to cover all comparison
cases.
> About your proposal for cmp(), where should we put the method? I'm -0.5
> on builtins.
Good question.
It is in the builtins in Python 2.x, so keeping it there would make
porting easier, but then it's probably better to put it somewhere else
to make people aware of the fact that the underlying machinery has
changed.().
On 2008-12-09 00:58, Antoine Pitrou wrote:
> Antoine Pitrou <pitrou@free.fr> added the comment:
>
>().
What for ? What does this have to do with removing __cmp__ ?
Why do you want to break the C API for no apparent reason ?
I've designed the Unicode C API to be a rich API and would like
it to stay that way.
> It is in the builtins in Python 2.x, so keeping it there would make
> porting easier, but then it's probably better to put it somewhere else
> to make people aware of the fact that the underlying machinery has
> changed.
from __past__ import cmp
?
More seriously, if cmp were to go into the standard library somewhere,
perhaps Raymond's class decorator (for filling in missing rich
comparisons) could go into the same place?.
On 2008-12-09 10:59, Raymond Hettinger wrote:
> Raymond Hettinger <rhettinger@users.sourceforge.net> added the comment:
>
>.
The idea was to have one implementation of the work-around (a>b) - (b<a)
instead of 10 or so instances of this snippet in the Python stdlib
and probably a few hundred places in other code.
Indeed, the motivation is to have one obvious way to write this
work-around :-)
Note that cmp() doesn't make __cmp__ come back, but it does help
porting code that uses cmp().
Besides, cmp() is available builtin in 3.0, so it's too late to just
remove it anyway. We'd have to go through the usual deprecation
process.
>?
On 2008-12-09 16:06, Antoine Pitrou wrote:
> Antoine Pitrou <pitrou@free.fr> added the comment:
>
>>?
It implements the DRY principle.
Guido approved removing __builtin__.cmp() in 3.0.1. It was supposed to
have been taken out but was forgotten.
With respect to the DRY principle, I disagree about its utility here.
The code is so simple that it doesn't warrant cross-module
factorization. It is a better goal to minimize dependencies.
Still to do:
pybench needs updating to remove a cmp reference; since the changes
required for pybench are a little bit more substantial than simple cmp
replacement, I've broken this out into a separate issue: issue 4704.
There are many uses of cmp still in the Demos directory. How much do we
care?
The documentation in Doc/extending/newtypes.rst needs updating. I can
have a go at this, but Georg would almost certainly do a better job.
Should Py_CmpToRich (in object.c) disappear?
It's currently used in longobject.c, but nowhere else as far as I can
tell. It shouldn't be too hard to update longobject.c to remove the use.
> It shouldn't be too hard to update longobject.c to remove the use.
I'll do that one.
> > It shouldn't be too hard to update longobject.c to remove the use.
>
> I'll do that one.
Done in r67871, r67873.
Please remember to add back PyUnicode_Compare() if that hasn't already
been done.
Thanks,
--
Marc-Andre Lemburg
eGenix.com
________________________________________________________________________ patch (against py3k) generated from the current state of the
py3k-issue1717 branch, for ease of review and testing.
The patch needs serious review; it should be considered a first draft,
and there are probably many more changes still to be made. I don't
think I can do much more for now without getting input from others.
Known places in the source tree where cmp/__cmp__ still lingers on:
Demos/<many>
Doc/extending/newtypes.rst
Misc/cheatsheet
Misc/python-mode.el
Misc/Vim/python.vim
Parser/spark.py # (I don't know what this does. Anyone?)
Tools/<various> # (notably pynche and pybench)
Apart from the newtypes.rst, all of the above files are somewhat out of
date in other ways. In my opinion it's only the doc fixes that stop the
patch from being complete enough.
Doc/extending/newtypes.rst
I'll fix this one after the patch has landed.
Misc/cheatsheet
This needs a general overhaul.
Misc/python-mode.el
I think this should be removed from the distribution; it's maintained
externally.
Parser/spark.py # (I don't know what this does. Anyone?)
This is used by asdl_c.py which generates Python-ast.c -- it should be
updated.
[Georg Brandl, on spark.py]
> This is used by asdl_c.py which generates Python-ast.c -- it should be
> updated.
The only issue here is a single comment, which reads:
# GenericASTMatcher. AST nodes must have "__getitem__" and "__cmp__"
Still, I'm reluctant to remove or alter the comment without
understanding why it was there in the first place. If someone more
familiar with AST stuff can confirm that __cmp__ is definitely no
longer needed, I'll remove the mention of it from the comment.
> Misc/cheatsheet
>
> This needs a general overhaul.
I'll open a separate issue for this, if that's okay with everyone.
Presumably 3.0.1 doesn't need to wait for Misc/cheatsheet to be updated.
I'm wondering how to move forward with this issue. Might it make sense to
break the current monster patch into a series of more-easily reviewed
patches? I was thinking of something like:
- patch 1 (Python code): remove all uses of cmp from std. lib. and tests
- patch 2 (C code): remove all uses of tp_compare from Modules and
Objects
- patch 3 : rename tp_compare to tp_reserved in all Modules and Objects
(should be a very simple patch, but touches a lot of files)
- patch 4 : fix docs, and look for any other loose ends that need
cleaning up.
Here's patch 1: remove uses of cmp from library and tests (and also from
Tools/unicode/makeunicodedata.py). It involves essentially no user-
visible changes. (At least, assuming that users aren't trying to access
e.g., UserString.__cmp__ directly.)
The patch is against py3k. All tests pass on my machine with this patch
applied.
Anyone interested in reviewing this?
Quick comments on your patch:
- two files have unwanted modifications on non-ASCII characters
(Lib/heapq.py and Lib/sqlite3/test/hooks.py)
- you haven't renamed "__cmp__" to "_cmp" in Lib/xmlrpc/client.py:
deliberate?
- in Lib/heapq.py, there's a "_cmp__" which should be "_cmp"
Thanks, Antoine. Here's a new patch.
> - two files have unwanted modifications on non-ASCII characters
> (Lib/heapq.py and Lib/sqlite3/test/hooks.py)
Fixed, I think. Could you double check?
> - you haven't renamed "__cmp__" to "_cmp" in Lib/xmlrpc/client.py:
> deliberate?
Not deliberate! This __cmp__ should simply be removed, I think---the rich
comparison methods are already present.
> - in Lib/heapq.py, there's a "_cmp__" which should be "_cmp"
Replaced "_cmp__" (which was in a comment) with "comparison".
> Fixed, I think. Could you double check?
It's ok!
Stage 1 committed in r69025 (py3k) and r69026 (release30-maint).
Can anyone who uses tkinter give me some advice? Does PyTclObject in
_tkinter.c need to have its tp_richcompare method implemented? And if so,
how do I go about testing the implementation? It seems that PyTclObjects
aren't directly exposed to Python under 'import tkinter'.
I'll hold off on any more checkins until the 3.0.1 thread on python-dev
has resolved itself.
Mark,
I'm not a very huge user of tkinter, but I can tell you it would be
tricky to try getting a PyTclObject. It needs to be exposed if you want
to test it without relying on Tcl, but, to me they are just a
"temporary" object that serves to indicate the caller that it needs to
be converted to something else since _tkinter itself wasn't able to do
it. For this reason I would actually prefer to them not be comparable.
Thanks, Guilherme.
> For this reason I would actually prefer to them not be comparable.
That's fine with me, so long as we can be sure that there's no existing
code that depends on them being comparable. I can't figure out whether
there's any legitimate way that the tp_compare slot (which is currently
implemented) of a PyTclObject could ever be called.
I'm not going to get more time to work on this before
the weekend, so if anyone else wants to take over please
feel free.
Still to do for stage 2: cell objects and slot
wrapper objects need to have tp_richcompare
implemented, to replace the tp_compare slots that
are being removed.
For 3.0, are you going to keep tp_compare slot in existence and just
assert that it is NULL? Then in 3.1, remove the slot entirely?
If I understand Christian's plan correctly, it was to:
(1) raise TypeError for non-NULL tp_compare, and
(2) rename tp_compare to tp_reserved (with type void *).
and both of these would happen with 3.0.1, so no difference between
3.0.1 and 3.1.0.
It seems to me that if tp_compare is actually going to be removed then
that should be done in 3.0.1; else third-party stuff that works with
3.0.x will fail with 3.1, due to the various tp_* fields being out of sync.
It would be nice not to have tp_reserved hanging around for the duration
of 3.x. (Similarly for nb_reserved.)
Instead of tp_reserved, the name should be tp_deprecated_compare.
There should be a python-dev discussion around when to actually remove
the slot:
3.0.1 -- binary incompatibility between minor releases (BAD)
3.1.0 -- uncomfortable for writers who have to add #ifdefs
4.0.0 -- no pain, just wasted space
3.1.5 -- just plain mean :-)
Actually, I would like to repurpose tp_compare as tp_bytes for the
__bytes__ method..
On Thu, Jan 29, 2009 at 2:39 PM, Brett Cannon <report@bugs.python.org> wrote:
>
> Brett Cannon <brett@python.org> added the comment:
>
>?
Not that I can remember, but I'm sure someone will correct me if I'm wrong.
Here's stage 2: remove uses of tp_compare from Objects and Modules, and
replace uses of PyObject_Compare with PyObject_RichCompareBool.
PyObject_Compare, cmp and friends still haven't been removed at this
stage.
In detail:
- for cell objects, method wrapper objects, PyTclObjects, and
PyST_Objects (in the parser module), remove the defined tp_compare
methods and implement tp_richcompare instead.
- add tests for cell comparisons and PyST_Object comparisons;
reenable
tests for method wrapper comparisons. There are no tests for the
PyTclObject comparisons.
- remove tp_compare method from sets (all it did was emit an error
message about the nonsensicality of doing order comparisons on sets)
- in Objects/rangeobject.c and ElementTree, replace uses of
PyObject_Compare with PyObject_RichCompareBool
I haven't stared very closely but it looks ok.
("spanish armada" might be replaced with "spanish inquisition", though)
Thanks for the review, Antoine.
Stage 2 applied to py3k in r69181, merged to 3.0 in r69182.
cmp, PyObject_Cmp and PyObject_Compare removed in r69184 (py3k) and r69185
(release30-maint).
There's still the rename of the tp_compare slot to deal with, and a fair
amount of cleaning up and documentation fixing to do.
All relevant changes from the py3k-issue1717 branch have now been merged into the py3k
branch (and from there into the 3.0 maintenance branch), in a series of revisions.
Here they are, listed in py3k/release30-maint pairs:
r69188, r69189,
r69190, r69191,
r69192, r69193,
r69214, r69215,
r69218, r69221,
r69224, r69225.
The idea to raise TypeError on non-NULL tp_compare was abandoned after Martin pointed
out that it would be a binary incompatible change _[1], but Nick Coghlan suggested a
DeprecationWarning when tp_compare is non-NULL and tp_richcompare is NULL _[2], and
remarked that the warning should also be implemented for Python 2.7 when run with the -
3 option.
Still to do before this can be closed:
- fix up Doc/extending/newtypes.rst; I think Georg has said he'll
take care of this, so once everything else is done I'll just reassign
this issue to him and let him get on with it.
- add Nick Coghlan's suggested DeprecationWarning.
- grep through the sources looking for tp_compare, __cmp__, cmpfunc,
PyObject_Cmp, PyObject_Compare and cmp, and check all remaining
references are legitimate.
- update pybench; there's a separate issue already open for this.
(issue 4704), and it looks like Antoine and Marc-André are on the
case.
- anything else that I've forgotten.
Thanks everyone for your help so far, especially Christian for much of
the original code and Antoine for code and review.
[1]
[2]
Deprecation warning for types that implement tp_compare but not
tp_richcompare added in r69431, r69432.
Just the doc fixes in Doc/extending/newtypes.rst left. Assigning to Georg
and reducing priority.
Removing cmp() breaks distutils. I get the following exception, for
example using the just released version 3.0.1:
Traceback (most recent call last):
File "setup.py", line 318, in <module>
classifiers = classifiers)
File "c:\Python30\lib\distutils\core.py", line 149, in setup
dist.run_commands()
File "c:\Python30\lib\distutils\dist.py", line 942, in run_commands
self.run_command(cmd)
File "c:\Python30\lib\distutils\dist.py", line 962, in run_command
cmd_obj.run()
File "c:\Python30\lib\distutils\command\build.py", line 128, in run
self.run_command(cmd_name)
File "c:\Python30\lib\distutils\cmd.py", line 317, in run_command
self.distribution.run_command(command)
File "c:\Python30\lib\distutils\dist.py", line 962, in run_command
cmd_obj.run()
File "c:\Python30\lib\distutils\command\build_ext.py", line 306, in run
force=self.force)
File "c:\Python30\lib\distutils\ccompiler.py", line 1110, in new_compiler
return klass(None, dry_run, force)
File "c:\Python30\lib\distutils\cygwinccompiler.py", line 314, in __init__
if self.gcc_version <= "2.91.57":
File "c:\Python30\lib\distutils\version.py", line 64, in __le__
c = self._cmp(other)
File "c:\Python30\lib\distutils\version.py", line 341, in _cmp
return cmp(self.version, other.version)
NameError: global name 'cmp' is not defined
Fixed in r69682.
Darn. That's really very annoying. Apologies for missing this one.
Thanks for the quick fix, Benjamin.
Documentation updated in r70863.
New changeset 64e004737837 by R David Murray in branch '3.3':
#18324: set_payload now correctly handles binary input. | https://bugs.python.org/issue1717 | CC-MAIN-2017-47 | refinedweb | 3,673 | 68.77 |
the problem is factions. i have a server setup with only factions on it and i cant get tnt cannons to work, cant find a setting for it either
package me.farkinklown;
import org.bukkit.Material;
import org.bukkit.Sound;
import org.bukkit.entity.Item;
import...
why not just make a command block, so when they walk into an area it tps them to the prison and tells then not to be naughty ???
CMG
or:
if these help at all ???
crunkazcanbe The Gaming Grunts if either of you know much about using java to send emails, would much appreciate the input. im in need of this...
Tehmaker Tehmaker
sorry tehmaker, just reading and had to point out your "words of knowledge" "1. If you do not know Java, learn it first"
lol...
i knew basic java 10 years ago. would have to relearn it again now.
shuddup :p i havent coded with java for ages. and my head hurts at the moment :p
CMG Awesome
:( i hate waiting :P
CMG Assist
any idea when the api will be updated
CMG im not going to make out i know this much, but from what i can gather, it may be your base64coder
i dont know if its recognisiong that as...
can u link it into here please. or pm me :p
Still having troubles with sending emails ?? Maybe post it and see if anyone can help finish it off with you joint effort
do we have a beta yet :P
sounds good, but do me a favour, change <p> to <player>
makes it a little more uniformed to other plugins and commandblocks.
its a little more...
Separate names with a comma. | https://dl.bukkit.org/members/frkinklown.90820949/recent-content | CC-MAIN-2022-33 | refinedweb | 280 | 83.05 |
See:
Description
Sample use of interface:
The home team sends an object
ii satisfying interface
InputInitialI to the outsourcing team. The outsourcing
team does not know the internal representation of the object (it
could use DemeterJ datastructures) but they know that
getPairs returns a set of objects satisfying interface
PairI.
The outsourcing team sends back an object satisfying interface
OutputI. Again the home team does not know the internal
representation the outsourcing team uses but they know that there is
a method
getMaxBias() to get back the maximum bias. Etc.
For the update case, we use interface
InputUpdateI for
input to the outsourcing team and again
OutputI for
output from the outsourcing team.
The outsourcing team will create a class
edu.neu.ccs.satsolver.SATSolverUtil with two methods
that the home team will call:
IllegalArgumentExceptionif:
PairIfractions do not add to 1.0, or
PairIrelation number outside the range of 0 to 255 inclusive, or
PairIfraction is outside of the range of 0.0 to 1.0 inclusive, or
PairIobjects in the set have the same relation number
IllegalArgumentExceptionif:
PairIadded fractions do not equal the subtracted fractions (to within some amount), or
PairIrelation number is outside the range of 0 to 255 inclusive, or
PairIfraction is outside the range of 0.0 to 1.0 inclusive, or
PairIobjects in the set have the same relation number
Because these are static methods, you can use them this way:
import edu.neu.ccs.satsolver.SATSolverUtil; ... <in some method> ... InputInitialI input_initial = new InputInitial( ... ); OutputI output = SATSolverUtil.calculateBias(input_initial); ... do stuffwith output ... InputUpdateI input_update = new InputUpdate( ... ) OutputI output = SATSolverUtil.updateBias(input_update);
Ahmed wrote a
Relation module that the outsourcing
team is using. The outsourcing code requires that the
relation numbers you pass into the outsourcing code are computed
the way we're expecting.
The truth table for a relation number is this:
x2 x1 x0 -- -- -- 1 0 0 0 2 0 0 1 3 0 1 0 4 0 1 1 5 1 0 0 6 1 0 1 7 1 1 0 8 1 1 1
The way to map literals in a constraint to a column of the truth table (also known as "variable number") is this:
Or( x y z ) x = x2, y = x1, z = x0 Or( x y ) x = x1, y = x0 Or( x z ) x = x1, z = x0 Or( x ) x = x0 Or( y ) y = x0
To compute R, you can bitwise-or the magic numbers of the literals.
Example: Given
Or( !x y ) we have:
!xis varnum x1
yis varnum x0
So the relation number for
Or( !x y ) is 187.
For a more complete specification, refer to: CSU 670 Fall 2006 Project Description and also your email because we'll be making updating the specification based on conversations. | http://www.ccs.neu.edu/home/lieber/courses/csg270/sp07/guaraldi/satsolver-1.1/api/edu/neu/ccs/satsolver/package-summary.html | CC-MAIN-2017-43 | refinedweb | 460 | 62.78 |
On Mon, 08 Feb 2010 15:10:21 +0200 Aggelos Economopoulos <aoiko@cc.ece.ntua.gr> wrote: > Max Herrgård wrote: > > Den 2010-02-08 09:00:02 skrev Steve O'Hara-Smith <steve@sohara.org>: > >> Hi, > >> > >>. > > The statement is correct. One could fix fpurge() (though I bet it breaks > other assumptions as well) or could cast the FILE* to __FILE_public* > and use it's _flags and _w fields. > > > hi. rumko filed a pr for this: > >. is it the > > same issue? > > Yes. Like I exlained above, I don't think removing the code for > DragonFly is the correct solution. Unless someone takes the time to > audit fpurge(), I suppose using __FILE_public is the safest "fix" since > it brings us back to how things were. So this patch (which works for me): --- fpurge.c.orig 2010-02-08 17:31:18 +0000 +++ fpurge.c 2010-02-08 17:32:31 +0000 @@ -61,8 +61,13 @@ If this invariant is not fulfilled and the stream is read-write but currently writing, subsequent putc or fputc calls will write directly into the buffer, although they shouldn't be allowed to. */ +#if defined __DragonFly__ + if ((((struct __FILE_public *) fp)->_flags & __SRD) != 0) + ((struct __FILE_public *) fp)->_w = 0; +#else if ((fp->_flags & __SRD) != 0) fp->_w = 0; +#endif # endif return result; -- Steve O'Hara-Smith | Directable Mirror Arrays C:>WIN | A better way to focus the sun The computer obeys and wins. | licences available see You lose and Bill collects. | | http://leaf.dragonflybsd.org/mailarchive/users/2010-02/msg00058.html | CC-MAIN-2015-22 | refinedweb | 243 | 74.49 |
#include <deal.II/grid/tria_accessor.h>
A class that provides access to objects in a triangulation such as its vertices, sub-objects, children, geometric information, etc. This class represents objects of dimension
structdim (i.e. 1 for lines, 2 for quads, 3 for hexes) in a triangulation of dimensionality
dim (i.e. 1 for a triangulation of lines, 2 for a triangulation of quads, and 3 for a triangulation of hexes) that is embedded in a space of dimensionality
spacedim (for
spacedim==dim the triangulation represents a domain in \(R^{dim}\), for
spacedim>dim the triangulation is of a manifold embedded in a higher dimensional space).
There is a specialization of this class for the case where
structdim equals zero, i.e., for vertices of a triangulation.
Definition at line 127 of file tria_accessor.h.
Propagate alias from base class to this class.
Definition at line 706 of file tria_accessor.h. 3668 of file tria_accessor.h.
Another conversion operator between objects that don't make sense, just like the previous one.
Definition at line 3698.
Test for the element being used or not. The return value is
true for all iterators that are either normal iterators or active iterators, only raw iterators can return
false. Since raw iterators are only used in the interiors of the library, you will not usually need this function.
Pointer to the
ith vertex bounding this object. Throw an exception if
dim=1.
Return the global index of i-th vertex of the current object. The convention regarding the numbering of vertices is laid down in the documentation of the GeometryInfo class.
Note that the returned value is only the index of the geometrical vertex. It has nothing to do with possible degrees of freedom associated with it. For this, see the
DoFAccessor::vertex_dof_index functions.
Return a reference to the
ith vertex. The reference is not const, i.e., it is possible to call this function on the left hand side of an assignment, thereby moving the vertex of a cell within the triangulation. Of course, doing so requires that you ensure that the new location of the vertex remains useful – for example, avoiding inverted or otherwise distorted (see also this glossary entry).
Pointer to the
ith line bounding this the face with index
face has its normal pointing in the standard direction (
true) or whether it is the opposite (
false). Which is the standard direction is documented with the GeometryInfo class. 180 90 line with index
line is oriented in standard direction.
true indicates, that the line is oriented from vertex 0 to vertex 1, whereas it is the other way around otherwise..
Test whether the object has children.
Return the number of immediate children of this object. The number of children of an unrefined cell is zero.
Compute and return the number of active descendants of this objects. For example, if all of the eight children of a hex are further refined isotropically exactly once, the returned number will be 64, not 80.
If the present cell is not refined, one is returned.
If one considers a triangulation as a forest where the root of each tree are the coarse mesh cells and nodes have descendants (the children of a cell), then this function returns the number of terminal nodes in the sub-tree originating from the current object; consequently, if the current object is not further refined, the answer is one.
Return the number of times that this object is refined. Note that not all its children are refined that often (which is why we prepend
max_), the returned number is rather the maximum number of refinement in any branch of children of this object.
For example, if this object is refined, and one of its children is refined exactly one more time, then
max_refinement_depth should return 2.
If this object is not refined (i.e. it is active), then the return value is zero.
Return an iterator to the
ith child.
Return the child number of
child on the current cell. This is the inverse function of TriaAccessor::child().
Return an iterator to that object that is identical to the ith child for isotropic refinement. If the current object is refined isotropically, then the returned object is the ith child. If the current object is refined anisotropically, the returned child may in fact be a grandchild of the object, or may not exist at all (in which case an exception is generated).
Return the RefinementCase of this cell.
Index of the
ith child. The level of the child is one higher than that of the present cell, if the children of a cell are accessed. The children of faces have no level. If the child does not exist, -1 is returned.
Index of the
ith isotropic child. See the isotropic_child() function for a definition of this concept. If the child does not exist, -1 is returned.
Return the boundary indicator of this object.
If the return value is the special value numbers::internal_face_boundary_id, then this object is in the interior of the domain.
Set the boundary indicator of the current object. The same applies as for the boundary_id() function.
This function only sets the boundary object of the current object itself, not the indicators of the ones that bound it. For example, in 3d, if this function is called on a face, then the boundary indicator of the 4 edges that bound the face remain unchanged. If you want to set the boundary indicators of face and edges at the same time, use the set_all_boundary_ids() function. You can see the result of not using the correct function in the results section of step-49.
Do as set_boundary_id() but also set the boundary indicators of the objects that bound the current object. For example, in 3d, if set_boundary_id() is called on a face, then the boundary indicator of the 4 edges that bound the face remain unchanged. In contrast, if you call the current function, the boundary indicators of face and edges are all set to the given value.
This function is useful if you set boundary indicators of faces in 3d (in 2d, the function does the same as set_boundary_id()) and you do so because you want a curved boundary object to represent the part of the boundary that corresponds to the current face. In that case, the Triangulation class needs to figure out where to put new vertices upon mesh refinement, and higher order Mapping objects also need to figure out where new interpolation points for a curved boundary approximation should be. In either case, the two classes first determine where interpolation points on the edges of a boundary face should be, asking the boundary object, before asking the boundary object for the interpolation points corresponding to the interior of the boundary face. For this to work properly, it is not sufficient to have set the boundary indicator for the face alone, but you also need to set the boundary indicators of the edges that bound the face. This function does all of this at once. You can see the result of not using the correct function in the results section of step-49.
Return whether this object is at the boundary. Obviously, the use of this function is only possible for
dim>structdim; however, for
dim==structdim, an object is a cell and the CellAccessor class offers another possibility to determine whether a cell is at the boundary or not.
Return a constant reference to the manifold object used for this object.
As explained in the Manifold description for triangulations module, the process involved in finding the appropriate manifold description involves querying both the manifold or boundary indicators. See there for more information.
Return the manifold indicator of this object.
If the return value is the special value numbers::flat_manifold_id, then this object is associated with a standard Cartesian Manifold Description.
Read the user flag. See GlossUserFlags for more information.
Set the user flag. See GlossUserFlags for more information.
Clear the user flag. See GlossUserFlags for more information.
Set the user flag for this and all descendants. See GlossUserFlags for more information.
Clear the user flag for this and all descendants. See GlossUserFlags for more information.
Reset the user data to zero, independent if pointer or index. See GlossUserData for more information.
Set the user pointer to
p.
See GlossUserData for more information.
Reset the user pointer to a
nullptr pointer. See GlossUserData for more information.
Access the value of the user pointer. It is in the responsibility of the user to make sure that the pointer points to something useful. You should use the new style cast operator to maintain a minimum of type safety, e.g.
A a=static_cast<A>(cell->user_pointer());.
See GlossUserData for more information.
Set the user pointer of this object and all its children to the given value. This is useful for example if all cells of a certain subdomain, or all faces of a certain part of the boundary should have user pointers pointing to objects describing this part of the domain or boundary.
Note that the user pointer is not inherited under mesh refinement, so after mesh refinement there might be cells or faces that don't have user pointers pointing to the describing object. In this case, simply loop over all the elements of the coarsest level that has this information, and use this function to recursively set the user pointer of all finer levels of the triangulation.
See GlossUserData for more information.
Clear the user pointer of this object and all of its descendants. The same holds as said for the recursively_set_user_pointer() function. See GlossUserData for more information.
Set the user index to
p.
Reset the user index to 0. See GlossUserData for more information.
Access the value of the user index.
See GlossUserData for more information.
Set the user index of this object and all its children.
Note that the user index is not inherited under mesh refinement, so after mesh refinement there might be cells or faces that don't have the expected user indices. In this case, simply loop over all the elements of the coarsest level that has this information, and use this function to recursively set the user index of all finer levels of the triangulation.
See GlossUserData for more information.
Clear the user index of this object and all of its descendants. The same holds as said for the recursively_set_user_index() function.
See GlossUserData for more information.
Diameter of the object.
The diameter of an object is computed to be the largest diagonal. This is not necessarily the true diameter for objects that may use higher order mappings, but completely sufficient for most computations.
Return a pair of Point and double corresponding to the center and the radius of a reasonably small enclosing ball of the object.
The function implements Ritter's O(n) algorithm to get a reasonably small enclosing ball around the vertices of the object. The initial guess for the enclosing ball is taken to be the ball which contains the largest diagonal of the object as its diameter. Starting from such an initial guess, the algorithm tests whether all the vertices of the object (except the vertices of the largest diagonal) are geometrically within the ball. If any vertex (v) is found to be geometrically outside the ball, a new iterate of the ball is constructed by shifting its center and increasing the radius so as to geometrically enclose both the previous ball and the vertex (v). The algorithm terminates when all the vertices are geometrically inside the ball.
If a vertex (v) is geometrically inside a particular iterate of the ball, then it will continue to be so in the subsequent iterates of the ball (this is true by construction).
see this and [Ritter 1990]
Return the smallest bounding box that encloses the object.
Notice that this method is not aware of any mapping you may be using to do your computations. If you are using a mapping object that modifies the position of the vertices, like MappingQEulerian, or MappingFEField, then you should call the function Mapping::get_bounding_box() instead.
Definition at line 1437 of file tria_accessor.cc.
Length of an object in the direction of the given axis, specified in the local coordinate system. See the documentation of GeometryInfo for the meaning and enumeration of the local axes.
Note that the "length" of an object can be interpreted in a variety of ways. Here, we choose it as the maximal length of any of the edges of the object that are parallel to the chosen axis on the reference cell.
Definition at line 1459 of file tria_accessor.cc.
Return the minimal distance between any two vertices.
Return a point belonging to the Manifold<dim,spacedim> where this object lives, given its parametric coordinates on the reference
structdim cell. This function queries the underlying manifold object, and can be used to obtain the exact geometrical location of arbitrary points on this object.
Notice that the argument
coordinates are the coordinates on the reference cell, given in reference coordinates. In other words, the argument provides a weighting between the different vertices. For example, for lines, calling this function with argument Point<1>(.5), is equivalent to asking the line for its center.
Definition at line 1566 of file tria_accessor.cc.
This function computes a fast approximate transformation from the real to the unit cell by inversion of an affine approximation of the \(d\)-linear function from the reference \(d\)-dimensional cell.
The affine approximation of the unit to real cell mapping is found by a least squares fit of an affine function to the \(2^d\) vertices of the present object. For any valid mesh cell whose geometry is not degenerate, this operation results in a unique affine mapping. Thus, this function will return a finite result for all given input points, even in cases where the actual transformation by an actual bi-/trilinear or higher order mapping might be singular. Besides only approximating the mapping from the vertex points, this function also ignores the attached manifold descriptions. The result is only exact in case the transformation from the unit to the real cell is indeed affine, such as in one dimension or for Cartesian and affine (parallelogram) meshes in 2D/3D.
For exact transformations to the unit cell, use Mapping::transform_real_to_unit_cell().
Definition at line 1684 of file tria_accessor.cc.
Center of the object. The center of an object is defined to be the average of the locations of the vertices, which is also where a \(Q_1\) mapping would map the center of the reference cell. However, you can also ask this function to instead return the average of the vertices as computed by the underlying Manifold object associated with the current object, by setting to true the optional parameter
respect_manifold. Manifolds would then typically pull back the coordinates of the vertices to a reference domain (not necessarily the reference cell), compute the average there, and then push forward the coordinates of the averaged point to the physical space again; the resulting point is guaranteed to lie within the manifold, even if the manifold is curved.
When the object uses a different manifold description as its surrounding, like when part of the bounding objects of this TriaAccessor use a non-flat manifold description but the object itself is flat, the result given by the TriaAccessor::center() function may not be accurate enough, even when parameter
respect_manifold is set to true. If you find this to be case, than you can further refine the computation of the center by setting to true the second additional parameter
interpolate_from_surrounding. This computes the location of the center by a so-called transfinite interpolation from the center of all the bounding objects. For a 2D object, it puts a weight of
1/2 on each of the four surrounding lines and a weight
-1/4 on the four vertices. This corresponds to a linear interpolation between the descriptions of the four faces, subtracting the contribution of the vertices that is added twice when coming through both lines adjacent to the vertex. In 3D, the weights for faces are
1/2, the weights for lines are
-1/4, and the weights for vertices are
1/8. For further information, also confer to the TransfiniteInterpolationManifold class that is able to not only apply this beneficial description to a single cell but all children of a coarse cell.
Definition at line 1713 of file tria_accessor.cc.
Return the barycenter (also called centroid) of the object. The barycenter for an object \(K\) of dimension \(d\) in \(D\) space dimensions is given by the \(D\)-dimensional vector \(\mathbf x_K\) defined by
\[ \mathbf x_K = \frac{1}{|K|} \int_K \mathbf x \; \textrm{d}x \]
where the measure of the object is given by
\[ |K| = \int_K \mathbf 1 \; \textrm{d}x. \]
This function assumes that \(K\) is mapped by a \(d\)-linear function from the reference \(d\)-dimensional cell. Then the integrals above can be pulled back to the reference cell and evaluated exactly (if through lengthy and, compared to the center() function, expensive computations).
Definition at line 1415 of file tria_accessor.cc.
Compute the dim-dimensional measure of the object. For a dim-dimensional cell in dim-dimensional space, this equals its volume. On the other hand, for a 2d cell in 3d space, or if the current object pointed to is a 2d face of a 3d cell in 3d space, then the function computes the area the object occupies. For a one-dimensional object, return its length.
The function only computes the measure of cells, faces or edges assumed to be represented by (bi-/tri-)linear mappings. In other words, it only takes into account the locations of the vertices that bound the current object but not how the interior of the object may actually be mapped. In most simple cases, this is exactly what you want. However, for objects that are not "straight", e.g. 2d cells embedded in 3d space as part of a triangulation of a curved domain, two-dimensional faces of 3d cells that are not just parallelograms, or for faces that are at the boundary of a domain that is not just bounded by straight line segments or planes, this function only computes the dim-dimensional measure of a (bi-/tri-)linear interpolation of the "real" object as defined by the manifold or boundary object describing the real geometry of the object in question. If you want to consider the "real" geometry, you will need to compute the measure by integrating a function equal to one over the object, which after applying quadrature equals the summing the JxW values returned by the FEValues or FEFaceValues object you will want to use for the integral.
Definition at line 1426 of file tria_accessor.cc.
Return true if the current object is a translation of the given argument.
Like set_boundary_id but without checking for internal faces or invalid ids.
Copy the data of the given object into the internal data structures of a triangulation.
Definition at line 1404 of file tria_accessor.cc.
Set the flag indicating, what
line_orientation() will return.
It is only possible to set the line_orientation of faces in 3d (i.e.
structdim==2 && dim==3).
Set whether the quad with index
face has its normal pointing in the standard direction (
true) or whether it is the opposite (
false). Which is the standard direction is documented with the GeometryInfo class.
This function is only for internal use in the library. Setting this flag to any other value than the one that the triangulation has already set is bound to bring you disaster.
Set the flag indicating, what
face_flip() will return.
It is only possible to set the face_orientation of cells in 3d (i.e.
structdim==3 && dim==3).
Set the flag indicating, what
face_rotation() will return.
It is only possible to set the face_orientation of cells in 3d (i.e.
structdim==3 && dim==3).
Set the
used flag. Only for internal use in the library.
Clear the
used flag. Only for internal use in the library.
Set the
RefinementCase<dim> this TriaObject is refined with. Not defined for
structdim=1 as lines are always refined resulting in 2 children lines (isotropic refinement).
You should know quite exactly what you are doing if you touch this function. It is exclusively for internal use in the library.
Clear the RefinementCase<dim> of this TriaObject, i.e. reset it to RefinementCase<dim>::no_refinement.
You should know quite exactly what you are doing if you touch this function. It is exclusively for internal use in the library.
Set the index of the ith child. Since the children come at least in pairs, we need to store the index of only every second child, i.e. of the even numbered children. Make sure, that the index of child i=0 is set first. Calling this function for odd numbered children is not allowed.
Clear the child field, i.e. set it to a value which indicates that this cell has no children.
Lines along x-axis, see GeometryInfo
Lines along y-axis
Definition at line 1492 of file tria_accessor.cc.
Lines along x-axis, see GeometryInfo
Lines along y-axis
Definition at line 1506 of file tria_accessor.cc.
Lines along x-axis, see GeometryInfo
Lines along y-axis
Lines along z-axis
Definition at line 1521 of file tria_accessor.cc.
Using directive for backwards-compatibility.
Definition at line 1728 of file tria_accessor.h. | https://dealii.org/developer/doxygen/deal.II/classTriaAccessor.html | CC-MAIN-2020-10 | refinedweb | 3,581 | 55.54 |
The pandas’ library is a vital member of the Data Science ecosystem. However, the fact that it is unable to analyze datasets larger than memory makes it a little tricky for big data. Consider a situation when we want to analyze a large dataset by using only pandas. What kind of problems can we run into? For instance, let’s take a file comprising 3GB of data summarising yellow taxi trip data for March in 2016. To perform any sort of analysis, we will have to import it into memory. We readily use the pandas’
read_csv() function to perform the reading operation as follows:
import pandas as pd
df = pd.read_csv('yellow_tripdata_2016-03.csv')
When I ran the cell/file, my system threw the following Memory Error. (The memory error would depend upon the capacity of the system that you are using).
Any Alternatives?
Before criticizing pandas, it is important to understand that pandas may not always be the right tool for every task. Pandas lack multiprocessing support, and other libraries are better at handling big data. One such alternative is Dask, which gives a pandas-like API foto work with larger than memory datasets. Even the pandas’ documentation explicitly mentions that for big data:
it’s worth considering not using pandas. Pandas isn’t the right tool for all situations.
In this article, however, we shall look at a method called chunking, by which you can load out of memory datasets in pandas. This method can sometimes offer a healthy way out to manage the out-of-memory problem in pandas but may not work all the time, which we shall see later in the chapter. Essentially we will look at two ways to import large datasets in python:
One thought on “Loading large datasets in Pandas”
Pandey, I’m a lover of your work.
Thanks for making Data Science easy to learn. | https://parulpandey.com/2020/10/18/loading-large-datasets-in-pandas/ | CC-MAIN-2022-21 | refinedweb | 314 | 62.88 |
Function to calculate new volume size - problem
2013-10-18 12:39 PM
I am attempting to create a function that calculates correct volume size based on a desired percentage used. I am encountering what I believe to be a computational issue inside the MVEL where it doesn't behave consistently. Maybe this is a bug, or maybe the issue is between my ears? I need some help, though.
Here is the code:
def calculateTargetVolumeSize(vol_used,vol_avail,snap_
{
int percent_used = (vol_used / (vol_used + vol_avail)) * 100;
if (percent_used < target_percent ) {
throwException("Percent used " + percent_used + "% is already below designated percentage of " + target_percent + "%.\nThe following values were passed: " + vol_used + ", " + vol_avail + ", " + snap_percent + ", " + snap_used);
}
int new_avail_size = (int)(vol_used / (target_percent / 100));
int new_vol_size = (int)(new_avail_size / (1 - (snap_percent / 100)));
int new_snap_size = new_vol_size - new_avail_size;
if (snap_used > new_snap_size) {
new_vol_size = new_avail_size + snap_used;
}
if (new_vol_size < 20 ) {
new_vol_size = 20;
}
// throwException("new size: " + new_vol_size);
return new_vol_size;
}
I am leveraging this function in a workflow that uses the Resize volume command. The function is in the TargetSize value. It returns the following error:
Function: 'calculateTargetVolumeSize'
With parameters: v1.used_size_mb, v1.available_size_mb, v1.snapshot_reserved_percent, v1.snapshot_used_mb, 70
Threw an Exception with a message: Percent used 9.99997731789257E-4% is already below designated percentage of 70%.
The following values were passed: 237883, 53957, 5, 50035
I cannot for the life of me figure out how ( 237883 / (237883 + 53957)) * 100 = 9.999977331789257E-4
When I run the function as a test using this syntax: calculateTargetVolumeSize(237883,53957,5,50035,70)
It returns the following value correctly: 389867
Any ideas?
Thanks,
Will
Solved! SEE THE SOLUTION
Re: Function to calculate new volume size - problem
2013-10-21 04:36 AM
Thanks for looking into this. Attached is the full workflow.
Re: Function to calculate new volume size - problem
2013-10-21 06:13 AM
Will,
Is V1 volume variable defined by you or found using a finder?
Just asking to confirm that those attributes of V1 have values. I suspect one of them is not getting the right value.
Can you add all those parameters that you pass to the functions to be return parameters so you can see them in the planning?
You might need to add them and disable the function briefly (Just exchange it with some fixed size) to get through planning and see those values.
Very useful for debugging...
Best,
Yaron
Re: Function to calculate new volume size - problem
2013-10-21 06:19 AM
Thanks Yaron,
I did as you recommended and the values returned are in the attached screenshot.
Thanks,
Will
Re: Function to calculate new volume size - problem
2013-10-22 03:56 AM
hey willwalsh,
I would like to know which version of WFA you have installed. is it 1.0 ?
Sinhaa,
I know this is completely off track from what we are looking at here but, I get an error while importing this workflow given by willwalsh in my setup. I have attached the snapshot of the error.
I have WFA 2.0.1.23.8 installed. I saw another thread () on importing workflows from 1.0 to 2.0 and the outcome of the thread says that we need an intermediary installation of 1.1.1 to get it done. Is there no other way of doing it ?
Re: Function to calculate new volume size - problem
2013-10-22 05:52 AM
I am running version 2.1.0.70.32
Re: Function to calculate new volume size - problem
2013-10-22 05:55 AM
I realize that I did not answer part of your question. v1 is found using a finder. I ask the user to input the volume in the format <filer>:/vol/<volume>. I then parse that into the variables filer and volume and pass that to the finder. The values appear to be correct, both as returned values, and when the function throws an exception.
Re: Function to calculate new volume size - problem
2013-10-24 06:35 PM
Hey,
You are running into precision issues with the math. As the input is integer, it is being converted along the way and integers cannot hold decimal places. Hence you get enormous rounding errors. ie. 1/3 = 0.333 as a double, or 0 as an integer.
Try something like this instead, force type-cast as double the variables for the calculations:
int percent_used = (((double)vol_used) / (((double)vol_used) + ((double)vol_avail))) * 100;
This will result in the correct response as the 'double' type can store many decimal places during the calculation. Keep in mind casting is a low precedence operator on MVEL unlike some other languages and needs the brackets. Without them, the first divide would be interpreted with vol_used as an integer instead then converted to double after the divide, but it's already lost the decimal places at this point. It's safest to always use brackets in MVEL when type-converting, and remember that MVEL will automatically type-convert for you which may have unexpected side effects like precision loss.
Regards,
Michael.
Re: Function to calculate new volume size - problem
2013-10-28 08:17 AM
Thanks Michael! That fixed it. Here is the completed workflow. For those interested in using it. It was created in WFA 2.1.
Thanks,
Will | http://community.netapp.com/t5/OnCommand-Storage-Management-Software-Discussions/Function-to-calculate-new-volume-size-problem/m-p/28757 | CC-MAIN-2015-35 | refinedweb | 866 | 64.2 |
Only show links if user can access the view
- here.
Most things I seem to develop have a top-level menu that runs across the top or side of the page, which has the various pages within them. In many cases, some of these will only be accessible to people who have permission to access the view to which they point. That is, I want to selectively display menu items based on if the logged in user can access the page linked to.
Previously, I have been doing this by having a check in the template that is essentially a duplicate of the check in the view. Specifically, the view check is usually a decorator (or multiple decorators), using things like
login_required or
user_passes_test, which takes a callable that runs the test, when passed in the user object.
I already had a
django-menus app, that generates the menu items, based on the view name (it creates the links, and you may optionally pass in a text value). But each of these was wrapped in things like:
{% if request.user.is_staff %} {% menu_item 'foo' 'Foo' %} {% endif %}
The view
foo is wrapped in a decorator:
1 @user_passes_test(lambda user:user.is_staff) 2 def foo(request): 3 # ... view code removed ...
So, I spent quite a while working out how to inspect the decorators wrapping a function, and if they were of the form that might be a test of permission/some other attribute, run the test. I’ve settled on the convention that a decorator function that takes a first argument of
user or
u (for
lambda u: u.is_staff, for instance) is executed.
The project can be found at django-menus. | http://schinckel.net/tags/django-menus/ | crawl-003 | refinedweb | 279 | 70.33 |
These logs document versioned changes to the Graph API and Marketing API that are no longer available. To learn more about versions, please see our Platform Versioning documentation. Use the API Upgrade Tool to determine which version changes will affect your API calls. For more information on upgrading, please see our upgrade guide.
Changelog entries are categorized in the following way:
New Features, Changes, and Deprecations only affect this version. 90-Day Breaking Changes affect all versions.
Breaking Changes are not included here since they are not tied to specific releases.
This.
Released October 7, 2015 | Available until July 13, 2016
Ad, Ad set, and Campaign object read path additions:
effective_statusto reflect the real Ad delivery status. For example for an active adset in a paused campaign, the
effective_statusof this adset is
CAMPAIGN_PAUSED.
configured_statusto reflect the status specified by user.
Insights - The following metrics have been added:
inline_link_clicks
cost_per_inline_link_click
inline_post_engagement
cost_per_inline_post_engagement
Admin Name - Default the
name of the first visible admin to current user if not specified.
Campaign -The following objective names have changed:
WEBSITE_CLICKSto
LINK_CLICKS
WEBSITE_CONVERSIONSto
CONVERSIONS
DPA - Enforce Terms of Service for all Product Catalogues and Dynamic Product Ads users. Advertisers implicitly accept by creating their first catalog through Business Manager. For API developers, Terms of Service need to be accepted through BM when the product catalog is first created.
Local Awareness Ads
- Require
page_idin
promoted_objectat the
/adsetlevel for Local Awareness Ads
- No longer require
custom_locationsfor location targeting, and also allow sub-national, single country targeting. Now all location types (except for country) must be within the same country.
- Change
/adcampaign_groupsto
/campaigns
- Change
/adcampaignsto
/adsets
- Change
/adgroupsto
/ads
- In the write path, change campaign_group_status, campaign_status, adgroup_status to
status
- Response for call to
/search?type=adgeolocationwill contain only the 'City' instead of 'City, State'
- Change
{cpc, cpm, cpa}_{min, max, median}, such as
cpc_minfields in /reachestimate to
bid_amount_min,
bid_amount_median,
bid_amount_max. Change
bid_forto
optimize_for
- Remove
friendly_name
- Remove
USE_NEW_APP_CLICK
- Remove
buying_type
- Remove ability to create campaign with
Objective='NONE'. 'NONE' is still a valid read-only objective. You can continue to edit existing campaigns.
- Remove
REACHBLOCK,
DEPRECATED_REACH_BLOCKfrom campaign
buying_type. For older campaigns using these two
buying_type, the API will return
RESERVEDfor
REACHBLOCKand
FIXED_PRICEfor
DEPRECATED_REACH_BLOCK.
FIXED_CPMis replaced with
FIXED_PRICE.
- Remove filter on
adgroup_id,
campaign_id, or
campaign_group_id. Instead use
ad.id,
adset.idand
campaign.id
- Note Prior to 2.6 We decided to extend the timeline to deprecate Clicks(All) and CPC metrics until the deprecation of API v2.6. In 2.5, removed
cpcand
clicksfields in Insights API. Newly added related metrics include
inline_link_clicks,
cost_per_inline_link_click,
inline_post_engagement,
cost_per_inline_post_engagement. Please note clicks(all) historically counted any engagement taken within the ad unit as a click. Thus, clicks(all) won't be a simple addition of
link_clicksand
post_engagement. The newly added fields are available in v2.4 and v2.5.
Reach Estimate - Require
optimize_for at the
/reachestimate endpoint.
Targeting
- Remove
want_localized_nameparam for
/search?type=adgeolocation.
- Remove
/search?type=adcityand
adregion, use
/search?type=adgeolocationinstead.
- Remove
engagement_specsand
excluded_engagement_specsfrom targeting spec. The similar video remarketing functionality is supported through
/customaudiencesendpoint.
Deprecate or make private certain targeting options:
- Instead of being returned in
/search, private categories will now be returned in
/act_{AD_ACCOUNT_ID}/broadtargetingcategoriesor
/act_{AD_ACCOUNT_ID}/partnercategories.
- Deprecated categories will no longer be returned.
- Updates to ad set targeting using private or deprecated categories will error. It's recommended to query the validation endpoint before updating the ad set targeting to identify which targeting options to remove.
Released July 8th, 2015 | No longer available
In v2.4 we expose a set of new Page Video metrics, such as
page_video_views_paid,
page_video_views_autoplayed, and
page_video_views_organic available from the Graph API via
GET /v2.4/{page_id}/insights/?metric={metric}. These metrics require the
read_insights permission.
The
Video node now contains the following fields in
GET|POST operations to
/v2.4/{video_id}:
content_categorywhich supports categorizing a video during video upload, and can be used for suggested videos. Content categories include: Business, Comedy, Lifestyle, etc and a full list of categories can be viewed on the
Videonode docs page.
unpublished_content_typewill expose 3 new types (Scheduled, Draft, and Ads_Post) which will help coordinate how the video is posted.
expirationand
expiration_typeallows the video expiration time to be set, along with the type (hide, delete).
embeddableboolean flag is now available to control if 3rd party websites can embed your video.
We've simplified how you access content on a person's Timeline. Instead of handling different object types for statues and links, the API now returns a standardized Post node with attachments which represent the type of content that was shared. For more details view the User reference docs.
For Marketing API (formerly known as Ads API) v2.4 new features, see Facebook Marketing API Changelog.
admin_creatorobject of a Post in now requires a Page access token.
POST /v2.4/{page_id}/offersand
DELETE /v2.4/{offer_id}now require a Page access token with
manage_pagesand
publish_pagespermissions.
GET /v2.4/{page_id}/milestones,
POST /v2.4/{milestone_id}, and
DELETE /v2.4/{milestone_id}now require a Page access token with
manage_pagesand
publish_pagespermission.
Pagenode
GET /v2.4/{page_id}/promotable_postsnow require a user access token with
ads_managementpermission or a Page access token.
global_brand_parent_pageobject has been renamed to
global_brand_root_id.
GET /v2.4/{global_brand_default_page_id/insightswill now return only insights of the default Page insight data, instead of the insights for the Global Brand hierarchy. Use the Root Page ID to retrieve the integrated insights of the whole hierarchy.
GET|POST /v2.4/{page_id}/promotable_postshas renamed the field
is_inlineto
include_inline.
limit=100. This will impact
GEToperations made on the
feed,
posts, and
promotable_postsedges.
The default pagination ordering of
GET {user-id}/events now begins with the newest event first, and is ordered in reverse chronological order.
Graph API v2.4 now supports filtering of
GET /v2.4/{user_id}/accounts with new boolean fields:
is_promotablefilter results by ones that can be promoted.
is_businessfilter results associated with a Business Manager.
is_placeInclude Place as results filter.
To try to improve performance on mobile networks, Nodes and Edges in v2.4 requires that you explicitly request the field(s) you need for your
GET requests. For example,
GET /v2.4/me/feed no longer includes likes and comments by default, but
GET /v2.4/me/feed?fields=comments,likes will return the data. For more details see the docs on how to request specific fields.
GET /v2.4/{id}/linksand
GET /v2.4/{id}/statuseswill no longer be available beginning in v2.4. As an alternative, we suggest using
GET /v2.4/{id}/feed.
GET|POST /v2.4/{page_id}/?fields=global_brand_parent_pageis being deprecated in this version and replaced by
/v2.4/{page_id}/?fields=global_brand_root_id.
GET|POST /v2.4/{page_id}/global_brand_default_page_id/global_brand_childrenwill no longer function in v2.4. As an alternative, please use the root page ID.
GET /v2.4/{page_id}/promotable_postswill no longer support the filter and type params in v2.4. For example, a call to
GET /v2.4/{page_id}/promotable_posts?type=STATUSwill return an empty result set.
Eventnode no longer supports
GEToperations on the endpoints
/v2.4/{event_id}/invited,
/v2.4/{event_id}/likes, or
/v2.4/{event_id}/sharedposts.
GET /v2.4/{user_id}/home,
GET /v2.4/{user_id}/inbox, and
GET /v2.4/{user_id}/notificationsoperations as well as
read_stream,
read_mailbox, and
manage_notificationspermissions are deprecated in v2.4.
user_groupspermission has been deprecated. Developers may continue to use the
user_managed_groupspermission to access the groups a person is the administrator of. This information is still accessed via the
/v2.4/{user_id}/groupsedge which is still available in v2.4.
GET /v2.4/{event_id}/?fields=privacyis deprecated in v2.4.
GET /v2.4/{event_id}/?fields=feed_targetingis deprecated in v2.4.
GET /v2.4/{event_id}/?fields=is_date_onlyis deprecated in v2.4.
From October 6, 2015 onwards, in all previous API versions, these endpoints will return empty arrays, the permissions will be ignored if requested in the Login Dialog, and will not be returned in calls to the
/v2.4/me/permissions endpoint.
Released March 25th, 2015 | No Longer available
user_posts Permission - We have a new permission user_posts that allows an app to access the posts on a person's Timeline. This includes the someone's own posts, posts they are tagged in and posts other people make on their Timeline. Previously, this content was accessible with the
read_stream permission. The
user_posts permission is automatically granted to anyone who previously had
read_stream permission.
all_mutual_friends Edge - This Social context edge enables apps to access the full list of mutual friends between two people who use the app. This includes mutual friends who use the app, as well as limited information about those who don't.
Both users for whom you're calling this endpoint must have granted the
user_friendspermission.
If you calling this endpoint for someone not listed in your app's
Roles sectionyou must submit your app for review by Facebook via App Review.
Although this edge is new to Graph API v2.3, to make it easier for you to migrate we also added this edge to v2.0, v2.1, and v2.2.
Debug Mode - Provides extra information about an API call in the response. This can help you debug possible problems and is now in Graph API Explorer, as well as the iOS and Android SDKs. For more information see Graph API Debug Mode.
New Pages Features
-
Real-time Updates - As of March 25, 2015 We now send content in Page real-time updates (RTUs). Previously, only the object's ID was in the RTU payload. Now we include content in addition to the ID including: statuses, posts, shares, photos, videos, milestones, likes and comments. In order for the app to receive these types of updates, you must have enabled the "Realtime Updates v2.0 Behavior" migration in your app's dashboard.
-
Page Reviews now support real-time updates. Apps can subscribe to the
ratingsproperty to receive a ping every time a public review is posted on pages the app is subscribed to. In order for the app to receive this type of update, you must have enabled the "Realtime Updates v2.0 Behavior" migration in your app's dashboard.
-
Page Posts,
admin_creator- All Page Posts now include a new
admin_creatorfield that contains the
idand
nameof the Page Admin that created the post. This is visible when you use a Page access token, or the user access token of a person who has a role on the Page.
-
New Page Fields -
GET|POST /v2.3/{page-id}now supports fetching and updating these fields:
food_styles,
public_transit,
general_manager,
attire,
culinary_team,
restaurant_services,
restaurant_specialties, and
start_info.
-
New Page Settings -
GET|POST /v2.3/{page-id}/settingsnow supports four new settings:
REVIEW_POSTS_BY_OTHER,
COUNTRY_RESTRICTIONS,
AGE_RESTRICTIONS, and
PROFANITY_FILTER.
- Larger Videos with Resumable Upload - We now support larger video sizes uploads up to 1.5GB or 45 minutes long with resumable video upload. See Video Upload with Graph API.
/v2.3/{object_id}/videosedges you can create a new Video object from the web by providng the
file_urlparameter.
- Resumable, Chunked Upload - All
/v2.3/{object_id}/videosedges support resumable, chunked uploads. For more information, see Video Upload with Graph API.
Video Playlists - You can now create and manage video playlists for Pages by using
GET|POST|DELETE on the
/v2.3/{page_id}/videolist edge. you can also add videos to a playlist by
POST /v2.3/{videolist_id}/videos and
GET the videos of a playlist.
Page's Featured Video - You can now set and get the featured_video of a page using
GET|POST /v2.3/{page_id}/featured_videos_collection.
-
Publish Fields - All Video nodes objects at
/v2.3/{video_id}now return the new fields:
published, a boolean which indicates if the video is currently published, and
scheduled_publish_time.
-
Custom Thumbnail - You can now upload and manage custom video thumbnails as JPEGs or PNGs with
GET|POST /v2.3/{video_id}/thumbnail. See Graph API Reference, Video Thumbnail.
-
Targeting Restrictions -
POST /v2.3/{page-id}/videosnow supports targeting restrictions by country, locale, age range, gender, zipcode, timezone and excludin locations.
-
Delete -
DELETE /v2.3/{video_id}removes videos. This is supported if you have edit permissions on the video object.
-
New Read and Write Fields - The Video node now supports retrieving and modifying additional fields:
backdated_time, and
backdated_time_granularity.
-
Subtitles, Localized Captions -
GET|POST /v2.3/{video-id}now supports supplying and retrieving subtitles and localized captions.
-
Visibility of Video - With
POST /v2.3/{page_id}/videosyou can control where the video is seen with
no_storyand
backdated_post.hide_from_newsfeedparameters. These parameters target visibility on feeds and page timelines.
Page Plugin - Is the new name for Like Box Social Plugin and it has a new design.
Comments Plugins Has a new design and it now supports Comment Mirroring (private beta).
Requests are now
GameRequests - Previously this term and object-type created confusion for non-game app developers, and the intended usage of these objects are to invite others to a game Non-game apps should use App Invites. The
/v2.3/{user-id}/apprequests edge is now limited to game apps. See Games, Requests.
read_custom_friendlists - Is the new name for
read_friendlists Login permission. This is to clarify that the permission grants access to a person's custom friendlists, not the full list of that person's friends.
Changes to Page APIs:
-.
-
Page Publish Operations - now accept the type of access token the requests are made with. This includes publishing posts, likes and comments. When a user access token of a Page admin is in the request such as
POST /v2.3/{page-id}/feed, the action occurs with the voice of the user, instead of the Page. To publish as the Page, you must now use the Page access token.
-
Removing Comments on Page posts - As a Page admin with
DELETE /v2.3/{comment-id}now requires a Page access token.
-
POST /v2.3/{page-id}- Now requires country as sub-parameter if you update the
locationfield without specifying the
city_idsub-parameter. The state sub-parameter is also required if country sub-parameter is 'United States'.
-
Countries - The country subfield of the
feed_targetingfield on a Page post is renamed
countrieswhen you make a
POST|GET /v2.3/{page-id}/{post-id}.
-
Page Field Updates -
POST /v2.3/{page-id}- Now supports complete field updates for: hours, parking, payment_options.
Previously, the update behavior on these fields was to replace a specific key/value pair that was specified as part of the POST, leaving all other keys intact, but the new functionality of a POST on one of these fields will replace all key/value pairs with what is posted.
-
Premium Video Metrics - Insights for Page Premium Video Posts are now deprecated.
-
page_consumptions_by_consumption_typeinsight now returns data for a local page instead of a parent - In earlier versions of the insights API for a Page, asking for this insight would return data for a parent of a local page. In v2.3, we now return the data for only that local page.
Picture Error - For the Link, Post, Thread, Comment, Status, and AppRequest nodes, and other nodes which don't have a picture, the
/v2.3/{object}/picture edge now returns an error message. Before, the API returned a placeholder image of a 'question mark' for the picture edge requested on these nodes.
[Oauth Access Token] Format - The response format of returned when you exchange a code for an
access_token now return valid JSON instead of being URL encoded. The new format of this response is
{"access_token": {TOKEN}, "token_type":{TYPE}, "expires_in":{TIME}}. We made this update to be compliant with section 5.1 of RFC 6749.
Consistent Place Data - Content objects tagged with a place now share a consistent place data structure which contains the ID, name, and the geolocation of the place. This includes Events, Photos, Videos, Statuses and Albums. As a result, the venue and location fields have been removed from the Event node. Developers should access the place field instead.
In addition, if someone tags a Photo, Video, Status, or Album at an event, that object will contain an
eventfield.
Serialized Empty Arrays - All Graph API endpoints now consistently serialize empty arrays as
[] and empty objects as
{}. Previously, some empty objects were incorrectly serialized as empty arrays.
Default Result Limit - All edges will now return 25 results per
page by default when the
limit param is not specified.
Ads API now Marketing API - We recently renamed the Facebook Ads API to the Facebook Marketing API. For details on Marketing API changes see the Marketing API v2.3 Changelog.
is now deprecated and will stop returning data from June 23, 2015. Developers should call the Graph API's
/v2.3/{page_id}/feed endpoint instead. This returns JSON rather than RSS/XML.
Social Plugins - The following are now deprecated and will no longer render after June 23, 2015:
- Facepile Plugin
- Recommendations Feed Plugin
- Activity Feed Plugin
- Like Box Social Plugin
Released October 30th, 2014 | No longer Available
POST /v2.2/{comment_id}?is_hidden=trueto hide,
POST /v2.2/{comment_id}?is_hidden=falseto unhide. You can determine if a comment is hidden, or if you have the ability to hide/unhide the comment by checking the
is_hiddenor
can_hidefields on the
commentnode.
token_for_businessfield makes it easier to identify the same person across multiple apps owned by the same business: In addition to the Business Mapping API, there is now a new
token_for_businessfield on the
userobject. This emits a token which is stable for the same person across multiple apps owned by the same business. This will only be emitted if the person has logged into the app. For games developers, this property will also be emitted via the
signed_requestobject passed to a canvas page on load. Note that this is not an ID - it cannot be used against the Graph API, but may be stored and used to associate the app-scoped IDs of a person across multiple apps. Also note that if the owning business changes, this token will also change. If you request this field for an app which is not associated with a business, the API call will return an error.
commentnode has a new
objectfield which emits the parent object on which the comment was made. To get the ID and owner of the comment's parent object (for example, a Page Post) you might call:
/v2.2/{comment_id}?fields=object.fields(id,from)
- In previous versions, any app which was added as a Page Tab also received realtime updates. From v2.2 onwards there is a dedicated endpoint for managing these subscriptions:
-
GET /v2.2/{page-id}/subscribed_appsreturns the apps subscribed to realtime updates of the Page. This must be called with a Page Access Token.
-
POST /v2.2/{page-id}/subscribed_appssubscribes the calling app to receive realtime updates of the Page. This must be called with a Page Access Token and requires the calling person to have at least the Moderator role on the Page.
-
DELETE /v2.2/{page-id}/subscribed_appsstops the calling app from receiving realtime updates of the Page. This may be called with a Page or App Access Token. If called with Page Access Token, it requires the calling person to have at least the Moderator role on the Page.
feed_targetingparameter is now supported when publishing videos to a Page:
POST /v2.2/{page_id}/videos?feed_targeting={targeting_params}. This lets you specify a number of parameters (such as age, location or gender) to help target who sees your content in News Feed. This functionality is already supported on
POST /{page_id}/feedso we're extending this to videos too.
/v2.2/{page_id}:
-
payment_optionstakes an object which lets you specify the payment options accepted at the place.
-
price_rangeaccepts an enum of strings which represent the self reported price range of the Page's business.
-
latitudeand
longitudecan now be specified as properties of the
locationfield in order to let apps programmatically update Page's physical location. Both are floats.
-
ignore_coordinate_warnings(boolean) determines if the API should throw an error when latitude and longitude are specified in location field for updating the Page's location. If set to false, an error will be thrown if the specified coordinates don't match with the Page's address.
-
is_always_openlets apps set the status of the place to “Always Open”. This can only be set to
true, and will clear previously specified hours in
hoursfield. To set specific hours, use
hoursfield.
-
is_publishedtakes a boolean which lets you publish or unpublish the Page. Unpublished pages are only visible to people listed with a role on the Page.
- The Page node has a new readable field to let apps read a Page's information: The following field are now supported with a GET to
/v2.2/{page_id}:
-
name_with_location_descriptorreturns a string which provides additional information about the Page's location besides its name.
- There's a new
APPEARS_IN_RELATED_PAGESsetting on
/v2.2/{page_id}/settings. This boolean determines if your page can be included in the list of suggested pages which are presented when they like a page similar to yours. You may set or read this value.
- You can now read the permissions your app has been approved for via an API. A new edge on your App object, called
/{app-id}/permissions, allows you to view the permissions that your app has been approved for via Login Review.
?ids=ID1,ID2. This reduces the likelihood of timeouts when requesting data about a large number of IDs in a single request.
blockededge on the Page node now requires a Page Access Token - this endpoint can no longer be called with an App or User token. That endpoint is available at:
GET|POST|DELETE /v2.2/{page_id}/blocked?access_token={page_access_token}.
tabsedge on the Page node now requires a Page token for GETs. POSTs and DELETEs on this endpoint already require page tokens. That endpoint is available at:
GET|POST|DELETE /v2.2/{page_id}/tabs?access_token={page_access_token}Calling GET on this edge now works for people in the 'Analyst' role. It previously required the 'Editor' role.
tabsedge on the Page node will now throw an error if the caller does not have permission. Previously it would only return an empty string.
/{page_id}/adminsedge on the Page node has been renamed to
/v2.2/{page_id}/roles. In addition, in the response, the values returned in the role field have been changed:
-
MANAGERhas been renamed to
Admin.
-
CONTENT_CREATORhas been renamed to
Editor.
-
MODERATORhas been renamed to
Moderator.
-
ADVERTISERhas been renamed to
Advertiser.
-
INSIGHTS_ANALYSThas been renamed to
Analyst.
settingsedge on the Page node will no longer include entries for settings where the
valuefield would be null.
POST /{page_id}/settingswill no longer support the
settingand
valueparams. Instead, you should specify the
optionparam which should be an object containing a single key/value pair with the setting enum as the key.
GET /v2.2/{page_id}/notificationsmust use a Page Access Token. Previously this required the
manage_notificationspermission. User Access Tokens will no longer work for this endpoint.
/v2.2/{group_id}/albumsendpoint has changed to match the response of
/{user_id}/albums.
/v2.2/me/friendsendpoint now defaults to 25.
fb:namesocial plugin has been deprecated and will stop working on Jan 28, 2015. Developers should instead use the
FB.api()method of the Javascript SDK to retrieve the names of users.
page_fanFQL table or the
/{user_id}/likes/{app_page_id}Graph API endpoint without needing the
user_likespermission. Starting in v2.2, the
user_likespermission will be required to query these endpoints. Also, we will require the
user_likespermission on versions older than v2.2 starting 90 days from today, on Jan 28, 2015. Facebook will not grant the
user_likespermission solely for the purpose of checking if a person has liked an app's page. This change was announced on August 7, 2014 and will come into effect on November 5, 2014.
/v2.2/{page_id}/feed.
POST /{page-id}/tabsand
DELETE /{page-id}/tabswill no longer support subscribing or unsubscribing an app for realtime updates. This will take effect in all previous API versions on January 28, 2015. To subscribe an app to realtime updates for a page, use the new
/v2.2/{page_id}/subscribed_appsendpoint.
On October 14, 2014, we dropped support for SSL 3.0 across Facebook properties, including the Facebook Platform API and the Real-Time Updates API, after a vulnerability in the protocol was revealed on October 14, 2014. This change helps protect people’s information.
If your HTTP library forces the use of SSL 3.0 rather than TLS, you will no longer be able to connect to Facebook. Please update your libraries and/or configuration so that TLS is available.
Released August 7, 2014 | No longer Available
This is Facebook's first new API update after version 2.0 was launched at the 2014 f8 conference. API versions are supported for two years after the next version is released. This means that:
screennamesedge: this returns a list of the other, non-Facebook accounts associated with the brand or entity represented by a Facebook Page.
The following deprecations will take place on November 5, 2014 - 90 days after the launch of v2.1.
{ “success”: true }
urishould instead use
url.
These policy changes will come into effect on November 5, 2014 - 90 days after the launch of v2.1. They apply to all apps across all API versions.
As of version 2.5, future Marketing API changelogs will be published under the Graph API Changelog. For expired versions please see the Graph API Changelog Archive.
v2.5
Version
v2.4 will be available until April, 2016. Before that, you should migrate your API calls to
v2.5.
Going forward, Marketing API changelogs will be published under the Graph API Changelog.
v2.4
Version
v2.3 will be available until Oct 7th, 2015. Before that date, you should migrate your API calls to
v2.4.
A new API that lets developers tag campaigns/adsets/ads/creatives with arbitrary strings (“labels”) and organize/group their ad objects. The Labels API also allows querying adobjects by labels, including AND/OR queries, and query aggregated Insights by label. Now it is possible to answer questions like “how many clicks did all adsets with the 'US-country' label get?”
Added a new field 'app_install_state' in the ad set's targeting which takes a enum value of {installed, not_installed}. This field is intended to make inclusion/exclusion targeting for apps easier. The field can be used for any objective, but will only take effect if a promoted object is present.
optimization_goal: What the advertiser is optimizing for (video_views, link_clicks, etc...)
billing_event(required): What the advertiser is paying by (pay per impression, pay per link click, pay per video view etc...)
bid_amount: The value the advertiser is bidding per optimization goal
rtb_flagwhich should be used in favor of bid_type=CPM_RTB.
bid_amount: The value the advertiser is bidding per optimization goal
bid_info: will be removed on all write paths but can still be read. Use ad set's bid_amount field instead.
bid_type
bid_info: will be removed on all write paths but can still be read. Use ad's
bid_amountfield instead.
conversion_specs: will be removed on all write paths but can still be read. Use ad set's optimization_goal field instead.
is_autobid
v2.3terms, CPA for Page Post Engagement).
v2.3- definition of CPC will be represented in
v2.4as having
billing_event=POST_ENGAGEMENTand
optimization_goal=POST_ENGAGEMENT
page_typesinto an array of single items, e.g. Instead of
page_types:['desktop-and-mobile-and-external']you would specify:
page_types=['desktopfeed', 'rightcolumn', 'mobilefeed', 'mobileexternal']. The previously available groupings of page_types and validations remain the same.
['mobile']or
['mobilefeed-and-external']must now be replaced with
['mobilefeed', 'mobileexternal']
targetingof an ad set, by default Facebook will include Audience Network
page_types, and may include other
page_typeswithout notice in the future.
conjunctive_user_adclustersand
excluded_user_adclustersin favor of flexible targeting
act_{AD_ACCOUNT_ID}/connectionobjectswill no longer return connections based on advertiser email.
statusfield. Developers should instead use
last_firing_time
subtypewill become a required parameter in custom audience and lookalike creation
{ "tos_accepted": { "206760949512025": 1, "215449065224656": 1 }, return
"tos_accepted": {"web_custom_audience_tos": 1, "custom_audience_tos": 1 }
daily_spend_limitfrom ad account
spend_captype change from uint32 to numeric string
DAILY_BUDGET, BUDGET_REMAINING, LIFETIME_BUDGETtype change from uint32 to numeric string
For the full list of Graph API changes, refer to the Graph API changelog.
v2.3
Version
v2.2 will be available till July 8, 2015. Before that date, you should migrate your API calls to
v2.3.
A new
/insights edge consolidates functionality among
/stats,
/conversions, and
/reportstats edges. It provides a single, consistent interface for ad insights. This edge is provided with every ad object node, including Business Manager. It provides functionality such as grouping results by any level, sorting, filtering on any field, and real time updates for asynchronous jobs.
The new preferred way to upload ad videos is to do it chunk by chunk. When you upload an ad video, especially large ones, this method greatly increases the stability of video uploads. The previous one-step video uploading API still works, but we suggest you to switch to the new method for all videos.
interval_frequency_cap_reset_periodto R&F which allows you to set a custom period of time that a frequency is applied to. Previously the cap was applied to the entire duration of the campaign.
Introduced new lead ad unit designed to capture leads within Facebook's native platforms.
CALL_NOWand
MESSAGE_PAGEcall to action for local awareness ads
The following summarizes all changes. For more information on upgrading, including code samples, see Facebook Marketing API Upgrade Guide.
amount_spent,
balance,
daily_spend_limit, and
spend_capfields for an ad account are changed from integers to numeric strings.
businessfield of
/act_{AD_ACCOUNT_ID}and
/{USER_ID}/adaccountsis now a JSON object.
nameis required while creating a Custom Audience pixel for an ad account using the
/act_{AD_ACCOUNT_ID}/adspixelsendpoint.
pixel_idis required while creating website custom audiences using the
/act_{AD_ACCOUNT_ID}/customaudiencesendpoint.
DELETED.
start_timeand
end_timefields when updating an ad set.
promoted_objectin an ad set for an offer ad, you will provide the
page_idinstead of the
offer_id.
bid_infofield of ad set or ad will not be available if
is_autobidof the ad set is true.
creative_idsfield of an ad is no longer available.
objectivefield of an ad is no longer available.
multi_share_optimizedfield defaults to
truenow. You use this field when you create a Multi Product Ad with
/{PAGE_ID}/feedor
object_story_specin
/act_{AD_ACCOUNT_ID}/adcreatives.
/{APP_SCOPED_SYSTEM_USER_ID}/ads_access_tokenis replaced by
/{APP_SCOPED_SYSTEM_USER_ID}/access_tokens, for Business Manager System User access token management.
paramsis removed from the response of targeting description obtained with
/{AD_GROUP_ID}/targetingsentencelines
mobileand
mobilefeed-and-externalplaces ads on News Feed on mobile as well as on Audience Network. The new option
mobilefeedis for News Feed on mobile only. You specify the placement option with
page_typesfield of targeting specs.
v2.2
This is Facebook's first new API update after versioning was announced. API versions are supported for 90 days after the next version is released. This means that version 2.1 would be available until 90 days from
v2.2, January 28, 2015. However, we have extended the adoption timeline for
v2.2 this time to March 11, 2015. For more information, please see our blog post.
The below is a summarized list of all changes. For more info on upgrading, including code samples, please see our expanded upgrade guide.
targetingand
bid_typewill be required at ad set level, and will no longer be available at ad level.
bid_infowill be required at ad set level, while optional at ad level.
Affected Endpoints:
/act_{AD_ACCOUNT_ID}/adgroups
/act_{AD_ACCOUNT_ID}/adcampaigns
/{CAMPAIGN_GROUP_ID}/adcampaigns
/{CAMPAIGN_GROUP_ID}/adgroups
/{AD_SET_ID}/adgroups
/{AD_SET_ID}
/{AD_ID}
A new field
promoted_object will be required for creating an ad set when the campaign objective is website conversions, page likes, offer claims, mobile app install/engagement or canvas app install/engagement.
promoted_objectwill not be allowed to set a
promoted_object. You must create a new ad set if you want to specify a
promoted_object. Those existing ad sets without that setting will still run, and still can be updated/deleted as usual.
promoted_objectis specified,
conversion_specswill be automatically inferred from the
objectiveand
promoted_objectcombo and cannot be changed/overwritten.
promoted_object.
Affected Endpoints:
/{AD_SET_ID}
target_specsendpoint will be replaced with
target_spec, only allowing for one spec per prediction.
target_specfield returns an object where
target_specsused to return an array.
story_event_type, will be added. This field will be used to specify when an ad set may or may not have video ads and is required when targeting all mobile devices.
app_idsfield is required when
"schema"="UID".
use FacebookAds\Object\CustomAudience; use FacebookAds\Object\Values\CustomAudienceTypes; // Add Facebook IDs of users of certain applications $audience = new CustomAudience(<CUSTOM_AUDIENCE_ID>); $audience->addUsers( array(<USER_ID_1>, <USER_ID_2>), CustomAudienceTypes::ID, array(<APPLICATION_ID>));
from facebookads.adobjects.customaudience import CustomAudience audience = CustomAudience('<CUSTOM_AUDIENCE_ID>') users = ['<USER_ID_1>', '<USER_ID_2>'] apps = ['<APP_ID>'] audience.add_users(CustomAudience.Schema.uid, users, '<APP_ID>'s=apps)
User user = new CustomAudience(<CUSTOM_AUDIENCE_ID>, context).createUser() .setPayload("{\"schema\":\"UID\",\"data\":[\"" + <USER_ID_1> + "\",\"" + <USER_ID_2> + "\"],\"app_ids\":[\"" + <APPLICATION_ID> + "\"]}") .execute();
curl \ -F 'payload={ "schema": "UID", "data": ["<USER_ID_1>","<USER_ID_2>"], "app_ids": ["<APPLICATION_ID>"] }' \ -F 'access_token=<ACCESS_TOKEN>' \<CUSTOM_AUDIENCE_ID>/users
Starting with version 2.2 the following changes will be in affect for the endpoints below.
count,
offset, and
limitwill no longer be returned and you must instead use a cursor-based approach to paging.
total_countis only returned when the flag
summary=trueis set.
Affected Endpoints:
/act_{AD_ACCOUNT_ID}/asyncadgrouprequestsets
/act_{AD_ACCOUNT_ID}/adreportschedules
/{SCHEDULE_REPORT_ID}/adreportruns
/act_{AD_ACCOUNT_ID}/stats
/act_{AD_ACCOUNT_ID}/adcampaignstats
/act_{AD_ACCOUNT_ID}/adgroupstats
/act_{AD_ACCOUNT_ID}/conversions
/act_{AD_ACCOUNT_ID}/adcampaignconversions
/act_{AD_ACCOUNT_ID}/adgroupconversions
/act_{AD_ACCOUNT_ID}/connectionobjects
/act_{AD_ACCOUNT_ID}/partnercategories
/act_{AD_ACCOUNT_ID}/reachfrequencypredictions
/act_{AD_ACCOUNT_ID}/asyncadgrouprequestsets
/act_{AD_ACCOUNT_ID}/broadtargetingcategories
/act_{AD_ACCOUNT_ID}/targetingsentencelines
/act_{AD_ACCOUNT_ID}/ratecard
/act_{AD_ACCOUNT_ID}/reachestimate
/act_{AD_ACCOUNT_ID}/users
/{AD_ACCOUNT_GROUP}/users
/{AD_ACCOUNT_GROUP}/adaccounts
/{CAMPAIGN_ID}/asyncadgrouprequests
/{ADGROUP_ID}/reachesttimate
/{ADGROUP_ID}/keywordstats
/{ADGROUP_ID}/targetingsentencelines
/search?type=adgeolocation (location_types: city, region)
/search?type=adlocale
/search?type=adworkemployer
/search?type=adworkposition
/search?type=adeducationschool
/search?type=adzipcode
/search?type=adcountry
/search?type=adcity
/{CUSTOM_AUDIENCE_ID}/users
/{CUSTOM_AUDIENCE_ID}/adaccounts
/{REACH_FREQUENCY_PREDICTIONS_ID}/
/{ASYNC_REQUEST_SET_ID}/
/{ASYNC_REQUEST_SET_ID}/requests/
/{ASYNC_REQUEST_ID}
/{USER_ID}adaccountswhich has additional changes:
businessreturns an ID rather than an object.
usersreturns a object with fields
idrather than
uid,
permissions, and
role.
created_time, which is the time that the account was created. | https://developers.facebook.com/docs/graph-api/changelog/archive | CC-MAIN-2018-17 | refinedweb | 5,606 | 50.02 |
Kush Colorado January 2011
Colorado's premiere cannabis lifesyle magazine
24 kush 64 40 colorado's premier cannabis lifestyle magazine 90 100 inside 10 | The Health Report by J.T. Gold 14 | Rob's Corner by Robert J. Corry 18 | High Fees by Noelle Leavitt features 24 Going Green: Carpooling 54 Political Roundup 64 KushCon II One way to help the environment is to ride share. Check out sources that can help you go green and contribute to preserving our planet. 30 | Dive In: Dive Bars in Colorado by Charlotte Cruz 34 | Best Albums of 2010 by Heather Gulino State Lawmakers hope to clarify the new medical marijuana laws in the 2011 legislative session. 46 | Seniors Turn to MMJ for Pain Relief by David Downs 52 | Yoga and Mota by Patrick Harrington 56 | This Month in Weed History by Jay Evans Hosted the "United Nations of the Cannabis Industry" as the leaders, movers and shakers converged to educate, illuminate and celebrate the marijuana movement. 90 Hemp Frozen Desserts 100 The Nuggets You scream, I scream we all scream for Hemp Ice Scream. The latest and most delicious form of hemp is now a non dairy frozen treat. With the NBA season in full force, even though the Nuggets are struggling to maintain, check out the action at the Pepsi Center this New Year. 6 74 | The Art & Science of Being a Budtender by J.B. Woods 78 | Grower's Grove by Jade Kine 86 | Silver Surfer Vaporizers by John Green 88 | Strain Review: Heavy Hitter OG by Michael Dillion 92 | We Dig This: Winter Events by Mason Tvert 98 | Cloning Your Way to Success by Tyler C. Davidson 104 | KushCon II Growbot Winner 106 | Colorado Travel: Loveland by Charlotte Cruz 107 | Kushcon II Speaker Review 110 | Colorado Live Music Preview 114 | Chef Herb Recipes 118 | Dailybuds.com Dispensary Directory h from the editors , colorado's premier cannabis lifestyle magazine kush A Division of Dbdotcom LLC appy New Year from Kush Magazine and dailybuds.com! This past year has been a roller coaster ride for many in the cannabis industry. We saw the passing of SB 1284 this past June, which has set forth the rules and regulations for businesses in the medical marijuana field. Colorado sets itself apart from the rest of the nation in passing statewide legislation controlling dispensing, growing, manufacturing of edibles and drinks containing cannabis as well as medical doctor regulations controlling the prescribing of medical cannabis. Steep registration fees and voluminous reporting requirements and regulations control the industry. This is not an industry for the faint of heart. Those who "passed the test" so to speak must continue complying with the rules that now control this booming industry. What we now know for certain is that the cannabis industry is here to stay. This past December, Kush Magazine and dailybuds.com hosted KushCon II in Denver, Colorado (see article p. 64). With over 300,000 square feet and close to 400 vendor booths, this was definitely the greatest cannabis convention ever. Dozens of marijuana activists from all over the country converged for three days to discuss all aspects of cannabis (see article p 107). Hundreds of vendors promoting everything from smoke ware, hydroponics, security systems, clothing, edibles, infused beverages, storage containers, testing labs, hemp products, cannabis and more Publishers | Dbdotcom LLC & Michael Lerner Editor-in-Chief | Michael Lerner Editor | Lisa Selan Business Operations Manager | Bob Selan Business Development | JT Wiegman Art Director | Robb Friedman Director of International Marketing & Public Relations | Cheryl Shuman Director of Colorado Sales | Denise Mickelson Colorado Sales Manager | Christianna Lewis Advertising Sales Reps | Amanda Allen, Audrey Cisneros, Charlene Moran, Rashad Sutton Designers | Marvi Khero, Coco Lloyd, Joe Redmond Photography | Avel Culpa, Robb Friedman, Kirstin Rojo Traffic Managers | Alex Lamitie, Ryan Renkema, Jordan Selan, Rachel Selan Distribution Manager | Alex Lamitie Contributing Writers | Chef Herb, Julie Cole, Robert J. Corry, Charlotte Cruz, Tyler Davison, Michael Dillon, Jay Evans, Valerie Fernandez, J.T. Gold, Heather Gulino, John Green, David Patrick Harrington, Josh Kaplan, Jade Kine, Bud Lee, Noelle Leavitt, Scott Lerner, Cheryl Shuman, Mason Tvert, J.B. Woods Accounting | Dianna Bayhylle Internet Manager Dailybuds.com | Rachel Selan Dailybuds.com Team | JT Kilfoil & Houston Director of KushCon Sales | Michael Douglass SUBSCRIPTIONS KUSH Magazine is also available by individual subscription at the following rates: in the United States, one year 12 issues $89.00 surface mail (US Dollars only). To Subscribe mail a check for $89.00 (include your mailing address) to : DB DOT COM 24011 VENTURA BLVD. SUITE 200 CALABASAS, CA 91302 877-623-KUSH (5874) Fax 818-223-8088 KUSH Magazine and are Tradenames of Dbdotcom LLC. Dbbotcom LLC 24011 VENTURA BLVD. SUITE 200 CALABASAS, CA 91302 877-623-KUSH (5874) Fax 818-223-8088 To advertise or for more information Please contact info@dailybuds.com or call 877-623-5874 Printed in the United States of America. Copyright �2010. All rights reserved. No part of this publication may be reprinted or otherwise reproduced without the written written permission of Dbdotcom LLC. What we now know for certain is that the cannabis industry is here to stay. gathered to display all of the numerous areas the marijuana industry has filtered into. With no medicine on site, the show was attended by over 35,000 participants who wanted to learn about the latest and greatest that the industry has to offer. One lucky conventioneer won a $46,000 GrowBot (see article p 104). Close to $100,000 in prizes were given away by KushCon, not to mention coupons and discounts that the vendors at the show were giving to attendees. The convention also provided non-stop entertainment with concert performances by Mickey Avalon, Asher Roth, The Flobots, Aaron Lewis of Staind, Mix Master Mike, The Dirty Heads, Gregg Rolie lead singer of Santana and Journey, and War. KushCon II applauds and graciously thanks our generous sponsors Full Spectrum Labs, MMJdailydeals.com, Dr. Robert Melamede of Cannabis Science, Jammin 101.5, Hot 107.1 and of course GrowBot who gave away one of the greatest holiday gifts ever! We want to also thank each and every vendor that participated in this amazing event. The accolades we have received and the positive feedback from all who participated and attended should confirm that the cannabis industry is a true lifestyle. In the coming year, Kush Magazine and dailybuds.com will continue to provide all of the latest information to our readers about the cannabis industry and happenings in your locale. So with a new year upon us, we at Kush wish all of you a peaceful, healthy and prosperous new year. Kush Editorial Board, 8 IF YOU'RE ONE OF THE MILLIONS OF PEOPLE WHO HAVE RESOLVED TO MAKE 2011 THE YEAR YOU get healthy, then you probably have done this before. The wagon is so easy to fall of, especially when you're feeling good! So this year, when you're vowing to take some time off of drinking, or are going to lose those 20 pounds, or are finally going to quit smoking, remember that we don't get to the point of needing resolve overnight; results are relative to the amount of work we are willing to put in and should be expected to be a marathon not a sprint. Here are some words of wisdom that should help you achieve your goals for 2011. The best-laid plans are the ones with focus and direction. If your goal is to be healthier, decide what that looks like. Are you going to try to bulk up? Lose weight? Increase stamina? Set a realistic goal for each week and stick to it. If you're just coming off the couch and have been sedentary, take small steps. You may not be running marathons in 3 months, but you could vow to walk 3 miles a day 3 times a week. This can be achieved simply, quickly and without the burden of a gym membership. Walk your way up to jogging and maybe in 6 months time, try a 5k. Sadly, this is true and a lot of us are a double-double with cheese. While the occasional trip to our favorite fast food joint is certainly expected, it's too easy to drive through at lunch. Remember when your mom packed your lunch? She did this because she wanted you to eat well and that wisdom of controlling what you eat should be welcome. Snacking throughout the day helps us to avoid intense hunger pains that often lead us right to the counter demanding the super size option. Snacks like fresh veggies or rice cakes, granola or yogurt keep us full and provide actual nutrients to the fuel the body. After all, that's what food is--fuel; and you wouldn't put kerosene in your car, so why would you put fat and salt in your body? Eating several times a day is recommended by nearly every diet expert and the reasons are simple: you need to metabolize and if you go to long without eating, your body goes into starvation mode and the metabolism slows. So eat well, and eat often. You can't achieve any level of fitness by lying on the couch or sitting on a barstool, so make a deal with yourself that this is the year to get off your ass. Walking is the only exercise we really need to do if you do it long enough and mix it up with hills or terrain change. You can walk anywhere, for free. Swimming is another great way to boost cardio and increase flexibility. If you hate the gym, find something to play and someone to play with and no, video games do not count. Find a buddy to hit golf balls with, play tennis with, jog with or hit the weights with. If you are both accountable, it makes the chore easier when you have someone to share the load with. Getting back in shape after an absence can be daunting. It's so easy to fall into patterns of laziness that the turnaround can feel impossible. Do not despair. The body is an amazing machine and all it takes is for you to make up your mind to live better and take better care of your body. After all, it's the only one we have got and to take good care of it makes you feel better, look better and sets up your chances for long-term health and happiness. Happy 2011 to us all. Live well. 10 12 Q: Rob, I am a Colorado patient, and I often carry medicine in my car. What should I do if I get pulled over by the police? ~ T.S., DenveR A: T.S., ThiS iS An iSSuE that confronts nearly every patient in the state. Being pulled over by the police can be a nerve-wracking experience, especially when there is medicine in the car. Police officers are trained to detect the odor of marijuana, and if they smell some purple kush wafting from your window, they will want to search your vehicle. You can avoid this invasive and embarrassing spectacle by staying calm, being courteous, being prepared, and knowing your rights. Staying calm is important because an officer may suspect you're are hiding something if you are fidgety or seem especially anxious or nervous. Being courteous is also important because acting rude, arrogant, or sarcastic only escalates the tension, and can give some officers a personal reason to prolong your detention or initiate a search of your vehicle. However, being prepared is the most effective and efficient way to protect yourself from the long arm of the law. ALWAYS keep your registry card on you, especially when you are transporting medicine in your vehicle. As they say: "Don't Leave Home Without It." When an officer smells marijuana emanating from your car, he may erroneously think it is still 1999, when marijuana was illegal for even non-medical purposes. The officer may wrongly believe that the smell alone gives him probable cause to search your vehicle for evidence of the so-called "crime" of marijuana possession. Before the officer conducts a search, presenting your registry card shows the officer that you are in lawful possession of marijuana, and that the Colorado law prohibiting possession of marijuana does not apply to you. You are not required to reveal your status as a medical marijuana patient, ever, but at times voluntarily revealing such status can avert a search. If you haven't received your registry card after 35 days from submission to the Department of Health, the Colorado Constitution states that a copy of your marijuana registry application, "including the written documentation and proof of the date of mailing or other transmission of the written documentation for delivery to the state health agency," has the same legal effect as a registry card. Therefore, presenting your application and proof of submission to an officer can also absolve you from criminal liability. If you aren't prepared and don't have your registry card or it's functional equivalent, the officer will have to take your word that you are a patient in legal possession of medical marijuana, and not a dangerous heinous criminal who possesses a plant that grows naturally on God's Earth. If you don't have your paperwork with you and the officer cites you for possession of marijuana, you can still raise the exception and affirmative defense in court. However, presenting proof of your patient status to an officer can save you the expense, time, and stress of defending yourself at trial. Critics of medical marijuana have recently voiced concerns about patients driving while medicated. Colorado lawmakers have responded to these concerns by drafting upcoming legislation that clarifies the "legal limit" for active THC in a driver's blood, much like limits on blood-alcohol levels. This will be a major battle at the Capitol and possibly the courts, because marijuana is not a one-size-fits-all medicine in that one man's impairment can be another man's cure. While possession of your registry card or its functional equivalent can shield you from marijuana possession charges, it will not allow you to drive while impaired. The government must prove beyond a reasonable doubt that marijuana impairs you. Some patients drive better after medicating rather than dealing with the debilitating effects of their medical conditions, however, this is still risky. A safer approach is to never drive while medicated, until our society advances to the point where science becomes part of the law. If you are a patient with marijuana in your vehicle and you get pulled over, stay calm! There is no need to make the police think you have committed a crime, if you are innocent. Be polite and courteous, even though it may be challenging to kiss the ass of someone who is cannabinoid-deficient and thus operating on a lower intellectual level. Police officers carry lethal weapons, and it can be wiser to use kindness than poke a sleeping bear. Police take an oath to uphold the Colorado Constitution, which mandates that criminal marijuana possession laws do not apply to patients in possession of a registry card at the time of a police encounter. If you remember to stay calm, be courteous, and be prepared, your encounter with the police should be no more painful than the symptoms of your debilitating medical condition. Robert J. Corry, Jr. is an Attorney licensed to practice in Colorado, California, and the District of Columbia. This column does not constitute formal legal advice, and should not relied upon as such. Please submit comments or questions to. 14 15 16 17 Dispensaries across the state are struggling to break even as they continue to pay large fees and purchase the expensive equipment currently required for operating legitimate medical marijuana businesses. Employees at Evergreen Apothecary, located in Denver, have worked long hours to ensure it is operating under the medical marijuana guidelines set forth by the state, but it's a daunting task. "Everyone has that idea that if you build it, the money will come," said Jessica McCormick, who does all the administrative work at Evergreen Apothecary. Yet, the dispensary has had to shell out tens of thousands of dollars for the equipment needed just to comply with state law. "You have to have a certain scale, and a 2,000-pound safe and security cameras," McCormick said, adding that those particular purchases add up. "We, and most dispensaries we know, are trying to break even with these fees and requirements." Sen. Pat Steadman, D-Denver, agrees that dispensary operating fees and costs are too high, but he doesn't think that the Colorado General Assembly will tackle that issue this year. Instead, he thinks more time needs to pass in order for lawmakers to know for sure what the right levels of fees and costs should be, so that the laws won't need to be changed yearly. "The fees are a thing we're going to need a little bit more time with under our belt," Steadman said. "The security requirements are pretty expensive, too, but I don't think that's an issue we're going to take on this year." Aside from ensuring that dispensaries have the correct products in house, MMJ businesses have fees they have to pay in order to receive dispensary licenses. It currently costs $1,800 to apply for a dispensary license. And under House Bill 1284, which was passed into law last July, dispensaries have to give the state their first $2,000 in sales to help fund substance abuse programs in Colorado. The money is split between Colorado's Department of Human Services (DHS) and Department of Health Care Policy and Financing. Another issue that Evergreen Apothecary has with the current fee structure and law is that dispensary owners are required to grow 70 percent of their own product, which can be an expensive job. "My boss uses the analogy that it's as if someone goes into all the restaurants and says that you have to grow 70 percent of the beef that you sell," McCormick said. Herbal Remedies owner Carl Wemhoff said he has dished out $28,000 in fees to stay up-to-date with Colorado's medical marijuana regulations and code. "If you don't have enough money, you're going to be forced out of the industry," Wemhoff said. However, he does agree with the stipulation that dispensaries have to grow 70 percent of their own product, as he feels it's the only way to make a real profit. Wemhoff grows 100 percent of his own product, which adds another layer of expenses in itself. He said that he has spent thousands getting his grow facilities up to code. All of the fees collected by the state are used to fund a new department of regulation for the medical marijuana industry in Colorado. by N OE L L IT EAv EL T 18 19 20 21 22 23 Carpooling erideshare.Com carpooling, once limited to school trips in the mornings by neighborhood moms, has become an online enterprise with several companies offering to find riders with similar destinations the chance to hook up and share gas, time, wear and tear on the car, and subsequently the environment. Carpooling is one of those wonderful win-win-win situations where everyone gets the better end of the deal. In cities with Carpool lanes, the value of a fellow passenger is immeasurable. If you have ever sat in traffic and watched the carpoolers' whiz by, you know how lonely and desperate you can suddenly feel, especially when you're late for work. Carpooling cuts expenses, saves on polluting emissions and saves time. This site is a nifty way to find rides 10 minutes or 10 hours away. The sections are broken up into: daily commutes, cross-country travel, errands (medical, grocery, etc) and a groups option where schools, employers, parents, etc can set up ridesharing communities. Erideshare has been around sine 1999 and is a trusted resource for carpooling. You can even view a map that shows how many people are in your area using the service. Craigslist Craigslist is where the world meets. You can buy a sofa, get a job, rent an apartment, find a tennis partner and yes, a ride. The rideshare section is located under Community and is a great place to post for free. You can also search the ads that are already posted and find someone who may be looking for the same exact thing you are! And if not, you can always get lost in the Free section and score some fill dirt and a broken Volkswagen. ridester.Com Ridester is more of an auction �like site where people offering rides post where and when they are leaving, the destination and return (if applicable) and post an asking price for your share of the expenses. The steps are simple and pretty cool. 1. Join Ridester (free) 2. Build a personal profile including your preferences for gender, music, smoking, and age. 3. Enter where you're leaving from and going to and instantly find drivers going your way. You can filter the trips by asking price and trip date. if there are no matches, you can even save your search and get notified automatically by email (or text message) when new trips are going your way. Carpooling is a great way to meet new people, save on costs and help the environment. The sites that are dedicated to ridesharing do a good job of giving you the power to choose who you ride with and a sense of the experience before you commit. Be safe, be smart and be green! by Ch a r lotte Cruz 24 25 28 29 Welcome to the first installment of Dive In, a stumble through the region's hidden treasures--dive bars. No matter what kind of nightlife you enjoy, be it clubbing or gothic trance or fancy wine bars in neighborhoods with parking problems, everyone loves a good dive bar. Everyone needs his or her Cheers. To be clear, a dive bar does not have to be gross. It does not have to have bathrooms that would be considered luxurious for a gas station, nor do they have to be falling apart at the seams to fall into this illustrious category. Every neighborhood has a watering hole where the locals go. Everybody has a place to go where everybody knows your name. Whether you pop in for a cameo appearance or are a regular fixture, dive bars are a part of our culture where you can really get to know the locals. And more often than not, the jukebox is like digging out your old cassettes and cd's and the doors open at 6.a.m. Capitol Hill is where a lot goes on. From the mixture of residents--artists to politicians to the greasy spoon to the gourmet organic caf�, life bustles on the hill. There are endless bars to explore and this will not be the first time Capitol Hill makes the geographical cut on our search for great dives. Without further ado, the first bar on our prestigious list is: THE LANCER LOUNGE 233 East 7th Avenue, Denver, Colorado The Lancer is on E. 7th Ave and it is strongly recommended that if you decide to spend a night out here, don't drive. The drinks are strong, like really strong as they should be in a good dive. If you are planning a long night, you may want to stick with beer because the mixed drinks are real light on the mix. The crowd at The Lancer is everyone. On any given night you may find a hipster crowd mingling with the old-timers who look like they are well rooted in the barstools and probably have been for years. The regulars are always friendly and they take care of one another as if they were family, because they probably are as close as any biological pair may be. If one stumbles out, another will follow to make sure they get home safely. If examining the locals doesn't offer you enough entertainment (it will), there is a pool table and a pinball machine in the back. These two items are also necessary fixtures in a dive bar, by the way. If you're one of those people who plays pool every now and again, you may not want to challenge anyone at The Lancer if you are into maintaining your dignity. Every week is shark week at The Lancer. Considered fancy in our dive bar book, The Lancer has a pretty nice patio. The jukebox will fill your void of Johnny Cash, Heart, Twisted Sister or even Belinda Carlisle. Like The Lancer itself, the jukebox is full of variety, so take your goth-loving, granola-eating, masters-degree-seeking, ballgame pre-funking butts to The Lancer. And don't drive! Stay safe and happy crawling! 30 31 32 33 34 BEST ALBUMS OF 2010 Eminem Recovery Eminem delivers his first grown up album. After a very open and honest confession about his drug addiction, Slim Shady gives us the genius of Marshall Mathers on a record that shows off his wordplay, rap skills and introspection. Gorillaz Plastic Beach The craftiness of Damon Albarn's cartoon band is exploited in the Gorillaz third album with verve and spunk. The slam on consumer society inspires great cameos from Snoop Dogg, Lou Reed and even Bobby Womak and Mos Def join in the fun. Arcade Fire The Suburbs Arcade Fire just keeps getting better and better and 2010's foray into the garages of suburban homes serves as both a reminder of the best and worst times of young musical life. Arcade Fire delivers musically, lyrically and nostalgically. Their best album to date, hands down. Kanye West My Beautiful Dark Twisted Fantasy Some have called it a perfect album and that might be right. MDTF is the journey of a hip-hop lifetime. Collaborators galore and a Kanye West who brings a bruised ego and a need for perfect musicianship to the party. The album slams you, breaks your heart, has fun with you and makes you believe again that Kanye being Kanye is sometimes a near perfect thing. Musically. Big Boi Sir Luscious Left Foot: The Son of Chico Dusty The headier and edgier half of Outkast brings the funk. The production is flawless, as one would expect from the musical stylings with choruses being crooned by Jamie Foxx, Janelle Mon�e and B.o.B. This is a heavy, get-down, bring the funk down hard record that deserves a good party. Robert Plant Band of Joy Robert Plant teams up with Nashville this time around to produce a folksy, misty, Americana album that delivers sultry, country-blues with a little gospel thrown in that is unmistakably Plant and unmistakably incredible. The Black Keys Brothers You definitely get a lot of bang for your buck from this album. The run time is over an hour long and holds 15 tracks. Dan Auerbach, the front man for The Black Keys, delivers pained and poignant lyrics with the same head-bopping grove that brought them early success. Auerbach's newly perfected falsetto lends itself nicely to the album's bipolar mood. Beach House Teen Dream The Baltimore duo have come to shine with Teen Dream. The haunting vocals and eerie melodies feel less emo and more musical this time around as we are watching them grow up before our very ears. While there are still the steamy organ and slide guitar sounds that dig into your soul, the sounds have finally met the songs. 35 36 38 POLITICAL ROUNDUP by NOELLE LEAvITT LOUD& CLEAR State lawmakers hope to clarify medical marijuana policy in 2011 legislative session Despite last year's legal efforts, the battle to regulate Colorado's medical marijuana industry still has a long way to go before cannabis businesses can operate smoothly. In the 2011 legislative session, state lawmakers hope to clarify and "clean up" several provisions in Colorado's complex medical marijuana reforms enacted in 2010. Additionally, a likely new bill will attempt to tackle driving while under the influence of marijuana. Last year, lawmakers spent long hours in committee debating new laws under House Bill 1284 for cannabis dispensaries, medical marijuana card holders, caregivers, growers and doctors. The biggest challenge in regulating Colorado's marijuana industry is the mere fact that it has never been done before. While lawmakers were amending legislation at the state Capitol in Denver, the cannabis industry continued to operate statewide under unclear guidelines as to how to grow and distribute medical marijuana. After the lengthy legislation passed, marijuana proponents began shuffling through the new state laws, working their way toward running legitimate cannabis businesses throughout Colorado. However, state Sen. Pat Steadman, D-Denver, feels that some of the 2010 provisions need to be revised. He and state Rep. Tom Massey, D-Poncha Springs, plan to introduce a "cleanup" bill by the end of the month. 40 (continued on page 42) 41 POLITICAL ROUNDUP by NOELLE LEAvITT ~ Senator Pat Steadman Specifically, Steadman wants to make it faster and easier for cannabis patients to get medicine after they are approved for a medical marijuana card. Under current law, a patient can't buy products from a dispensary unless they have their actual card in hand, and it currently can take up to 35 days for the state health department to mail a card to a patient after approval. "I'm trying to address the issue of being able to purchase at a center after you've been approved," Steadman said. "It should be just like when you leave a doctor's office and get your prescription." The cleanup bill would also address the issue of confidential grow locations. Under current law, it's required that grow operation sites remain confidential, yet Steadman wants that changed. "It just makes it awkward for the planning and zoning departments to do their job," Steadman said. Steadman also wants to revise a few product labeling issues within the current law, making standard rules for all edibles and ganja-infused products. Essentially, Steadman and Massey want to lump these many issues into one bill, which some marijuana constituents embrace. Miguel Lopez, who organizes the annual 420 Rally in Denver, just wants legislators to be fair in how they regulate the industry. "I'm calling for common sense and fair regulation," Lopez said. Another bill that Lopez will be watching closely is the DUID (Driving While Under the Influence of Drugs) bill. "That's going to go down in flames," Lopez said. The bill is being drafted by state Rep. Claire Levy, D-Boulder, who wants to set guidelines on how much marijuana users are legally allowed to consume before getting behind the wheel of a motor vehicle. "The bill will create a per se limit of THC that a driver can have in their blood. It would just create a per se limit to bring clarity to the issue," Levy told Kush Magazine. "An officer has to have probable cause to stop a driver; if it's alcohol, they do a roadside sobriety test. They can do a breathalyzer." That's not the case with marijuana, though. The only way law enforcement is able to test for marijuana impairment is through a blood sample, which makes the issue tricky, Levy said. She's been working closely with the toxicologist at the Colorado Department of Public Health and Environment to create a standard for THC in the blood stream to determine marijuana impairment. What they found is that a driver can have no more than five nanograms of THC per milliliter of whole blood in their system to avoid being classified as "impaired." "I'm trying to pick the standard that has been validated in the lab that has been associated with impairment," Levy said. She feels very strongly that the public perception of marijuana use will be more embraced if there are set standards on driving while impaired. "I think that the public is going to be more accepting if they believe" that driving while you're stoned won't be tolerated, Levy said, adding that she doesn't want to overregulate the industry. Levy is also concerned with the current rule that dispensaries must grow 70 percent of their own product. Although she has no plans on drafting a bill to address the issue, she feels that there should be a provision in the law that allows grow operations to have their own businesses. "There are people that are very good at growing and know that side of the business, and there are dispensary owners that are very good at that part of the business but can't grow," Levy pointed out. One thing is for sure: Levy hopes that Colorado's General Assembly can swiftly and successfully tackle medical marijuana issues at the state Capitol this year. "I hope we don't have a repeat of the marathon sessions that we had in 2010," she said. 42 43 44 45 MEDICAL MARIJUANA HAS BECOME A MORE POPULAR CHOICE FOR PAIN RELIEF AMONG SENIORS AGED 55 and older, bringing a whole new demographic to the cannabis industry. Tom Jones, 61, who suffers from pulmonary lung disease, started using marijuana edibles two years ago to help with pain relief and to cut back on taking pharmaceutical drugs. "I've reduced my pain meds by about 25 percent, and my illness hasn't gone forward in two years," Jones said. "Doctors thought I'd really be deteriorating by now." Jones lives in a retirement community in Aurora, and he thinks that many seniors who have not tapped into the medical marijuana market would find the same relief he did if they gave it a try. "I think that if seniors are willing to take a shot at it, they'll find that it really helps with the pain," Jones said. "The problem with seniors is getting them to realize that it's not being prescribed to them as a recreational drug." He gave an edible to his 85-year-old dad once, and it helped him with pain relief. "He says it helps," Jones said. "If seniors try it, they really do like it." The Colorado Patient Coalition, a dispensary located in Federal Heights, has seen an increase in the number of elders tapping into MMJ for various ailments. "I'VE REDUCED MY P AIN MEDS BY ABOUT 25 PERCENT AND , MY ILLNESS HASN'T GONE FORW ARD IN TWO YEARS" "We're definitely seeing an increase in older people using marijuana. That's kind of been a focus of ours from the beginning," said Shane Tara, owner of Colorado Patient Coalition. The biggest concern most seniors have when considering using MMJ is whether their privacy will be protected, Tara said. It's still not clear how much privacy a medical marijuana cardholder has, making it a difficult decision for many to start using ganja. "For a lot of people that's a really sketchy adventure for them," Tara said, highlighting that a large portion of the older population is scared of losing government-funded support if they become a registered MMJ user. Yet, seniors who have embraced marijuana as an alternative drug seem very pleased with the results, including Bob Melamede, 60. "The people who need cannabis the most are seniors suffering from age-related illness," Melamede said. He had knee surgery in 1989 and was told he could no longer run because of the lack of cartilage in his knee. "Had I listened to the doctors, I would've stopped running 20 years ago. But as soon as I use it, it makes me go exercise, it makes me go stretch," Melamede said. "Cannabis is very important to seniors because of its antiinflammatory properties." 46 46 47 48 49 50 51 Plant Pros & Tree Pose My name is Patrick Harrington and I have been a yoga studio owner in Denver for the last eight years. If you were at KushCon, you might recognize me from Saturday's Cannabis and Hemp Wellness Panel. Kindness is the name of my new studio--opening February 1. Kindness is a donation-based yoga studio in the heart of Cherry Creek North (2727 E. 2nd Ave). I am writing about the benefits of yoga as it relates to medical cannabis patients. This is very exciting to me. Why? Because these two modalities will accelerate your healing on all levels; mind, body and spirit. Sound familiar? For me, the same could be said about medical cannabis; healing on all levels; mind, body and spirit. Being a part of a very large healing community over the years, I've had the unique chance to witness thousands of people heal themselves through yoga. Seriously. Thousands of people. Cannabis and the mindful practices of yoga and meditation will meet you wherever you are at physically, mentally and spiritually. Yoga lowers your stress level. It teaches you to breathe (Take a deep breath now, please...). It helps you sleep, and it helps you wake up, which is interesting. Just as there are different varieties and strains Yoga & Mota: by PATRICK HARRINGTON of cannabis, there are distinct styles to relax and rejuvenate with (Restorative, Nidra, Basics) and styles to invigorate and strengthen you (Power, Anusara, vinyasa, Forest). Yoga increases your range of motion. Flexibility. Coordination. Balance. Focus. Breath. Digestion. Relationships. Everything. The more you do it, the better you are. Period. We all know that medical cannabis can address many symptoms beautifully. It can stimulate the appetites of chemo patients. It can subdue the seizures of epileptics. It can even ease the pain of a migraine. It is certainly not a cure-all though. It has limitations. Stress, whether it's emotional or physical, is the underlying cause of most illness and even aging. Cannabis can definitely reduce or even temporarily relieve stress, but any patient who's honestly committed to getting healthy should consider taking a proactive approach. Addressing the root--the source. What results will open up to you when you start a yoga practice? The possibilities are vast. From the mental to the physical, the spiritual to the literal. Yoga is a game changer. Whatever level of physical fitness you possess, yoga will meet you there and encourage you forward with any and all health and healing goals. Come in to Kindness Yoga for a free week of classes and give it a try for yourself:) KindnessCollective.com BTW, our friends at Root Yoga Center have a similar mission--bringing healing and consciousness to the cannabis world, visit them online at RootYogaCenter.com 53 54 55 this month in weed history January Greatness Is It Possibly In the Stars? This Month in Weed History usually spotlights a particularly memorable moment involving our beloved Marijuana plant whether it be in the continuous battle we all share for its inevitable legalization, the marking of a milestone in that battle, or the celebration of its virtues. We will often remember great moments in its history, by highlighting great concerts (and/or musicians that may have been part of that moment, sometimes with joint in hand). With so many musicians backing the cause, we've compiled a list this month. Not that they all smoked Marijuana per say, but their music sure sounds great "...on weeeeed." Sharing the Capricorn / Aquarius symbols, this list of January standouts is eye-opening. Maybe there is something to the moon and stars...? The greatest thing about comprising this list was thinking about how much weed each and every one of these people may, or may not have smoked during their days on tour, or in the studio. Each artist may have had influences, (or been under the influence) yet not necessarily. In this analysis, it brings to light another subject: the diverse genres and artists making music, and if there is a common thread to great music and the mind-altering affects of Marijuana? Is it possible to think that the use of a common drug may have an effect on whether a person makes great music? Hmm, it seems preposterous, yet so similar in theory to astrology.... There's only one "King," and ours was born Elvis Aaron Presley, in Tupelo Mississippi, on Jan. 8th, 1935. Should we just stop there? How can we stop, with so many more... Janis Joplin Jimmy Page David Bowie Rod Stewart Eddie Van Halen Steven Stills Steve Perry Michael Hutchence Justin Timberlake Dolly Parton Ronnie Milsap Placido Domingo LL Cool J Sade Pat Benatar Kenny Loggins Alicia Keys Phil Collins Joan Baez Sarah MacLauchlan Naomi Judd Aaliyah Jan.19th 1943 Jan. 5th 1945 Jan. 8th 1947 Jan. 10th 1945 Jan. 26th 1955 Jan. 3rd 1945 Jan. 22nd 1949 Jan. 22nd 1960 Jan. 31st 1981 Jan. 19th 1946 Jan. 16th 1943 Jan. 21st 1941 Jan. 14th 1968 Jan. 16th 1959 Jan. 10th 1953 Jan. 7th 1948 Jan. 25th 1981 Jan. 30th 1951 Jan. 9th 1941 Jan. 28th 1968 Jan. 11th 1946 Jan. 17th 1979 Birthday Shoutouts! by BUD LEE and sliding on his knees into the category, (not for his acting abilities, but for his real musical skills with the Blues Brothers) John Belushi - Jan. 24th 1949.... Wow, this list covers many genres - food for thought.... 56 57 588 5 60 61 62 63 Denver anD the rest of the country will never be the same following the three day KushCon II Cannabis Lifestyle Convention that took over the Colorado Convention Center last month. Leading up to the event were busses, billboards. radio and print ads all saying "Have a Kush Day", come to KushCon. With over 35,000 people in attendance, more than 400 booths of vendors and organizations from all over the world, dozens of world-class keynote speakers, and some of the biggest names in the music industry, KushCon II shaped up to be the greatest medical cannabis event in history. And surveying the entirety of over 340,000 square feet at the Colorado Convention Center, the message of the medical marijuana revolution was never more evident: the movement is here, and it is here to stay. KushCon II showcased both the current state and future of the cannabis industry, embracing education, health, lifestyle, diversity, and continual expansion and advancement as its fundamental cornerstones. And all the while, everyone that came to partake in the festivities or to just check out all of the excitement had a great time! Diversity was present in every aspect of KushCon II, and it is one of the qualities that the evolution of this industry in its infancy has sincerely embraced. People attended the event from all over the world and almost every state in the country including medical cannabis states, such as Arizona, Colorado, California, Hawaii, Montana, Michigan, and Rhode Island, as well as non-medical states including New York, Mississippi, Texas, Florida and Arkansas. There were people of all ages, kids to senior citizens, people of all different races, occupations, and economic classes, current medical patients and 64 curious newcomers. There were businesses ranging from medical cannabis dispensary centers to financial services companies, software engineers, cooking classes to legal advisors, security firms to edible manufacturers as well as glass blowers to prominent politicians. It is really not a fair statement to say that conventioneers were primarily comprised of any specific demographic. One medical cannabis testing company, for example, whose booth was continuously busy, is an independent research company that uses their laboratory to identify the particular chemical composition of a particular strain of marijuana. Employing PhDs as well as lab technicians, their scientific research allows dispensaries to more accurately prescribe medicine to fit patient's needs, and at the same time assure the patient that the product is not contaminated with harmful pesticides. Advancement in technology also has come to both the way medical marijuana centers run their businesses to the ways in which patients medicate. The software created by one vendor brings hi-tech internet cloud technology to local dispensaries, ensuring that all of their patient and business records are in kept in strict compliance with state laws. There were numerous beverage and edible companies promoting state-of-the-art manufacturing and distillation processes, in conjunction with lab testing of their products, offering to provide the purest most suitable, and best tasting assortment of infused medicine to their patients. Business acumen and technology present at KushCon II equaled the diversity of the patrons in attendance. Professionals from a wide variety of industries are now bringing their expertise to the medical marijuana industry, expanding the possibilities of the cannabis world like never before. And as the technology expands, so does the user base; and as the amount of the users expands, so does the technology, and the synergy they give to one another is taking the industry to unprecedented levels. 65 Several vendors expressed that they were very pleased with the results they achieved at KushCon II. Many said they completely sold out of the products they brought to sell. Others seeking new relationships and promote their services said they were very happy with their increased patient count following the show. KushCon II was also jam packed with first rate entertainment catering to a wide array of tastes. Musicians from equally diverse backgrounds and genres highlighted the concert series presented daily during all the three days of KushCon II. Day one saw rappers Mickey Avalon and Asher Roth perform alongside Colorado based super group, The Flobots. Saturday's lineup followed with performances by Aaron Lewis of Staind, a set by Mix Master Mike of Beastie Boys fame, and a killer performance to close the evening by Rolling Stone Reggie- Rock breakout band of the year, The Dirty Heads. The mega concert series concluded on Sunday with special old school recording artists featuring Rock n Roll hall-of-fame inductee and former Santana Band founder and lead singer Gregg Rolie, and wrapping up with California funk delivered by 70s legends, War. Fittingly, it was the cannabis revolution that united such a seemingly disparate group of musicians. To accompany this list of artists spanning multiple genres and generations was the most extensive panel of influential activists and community leaders ever assembled to speak about the medical marijuana revolution, and medical cannabis industry ever assembled in the same place at the same time. Over 65 men and women--business owners, entrepreneurs, politicians, horticulturalists, and activists--spoke for more than 12 hours about the current and future state of the cannabis industry, covering financial, social, political and health issues on the national front as well as in Colorado, California and beyond. This panel addressed the desires of the attending public to be educated concerning the many pressing issues surrounding cannabis and combating complex inherent issues with tangible solutions to encourage the spread and sharing of usable and empowering knowledge to keep things moving in a positive direction. Once again, the diversity and breadth of the panel of speakers truly showcased the multiple facets of KushCon II. One panel comprised solely of women from all walks of the movement demonstrated the changes that have resulted due their respective and collective dedication and power that women in the cannabis movement have made and are continuing to make, past, present and future. And in between the vast array of first class entertainment and dissemination of invaluable information and education about Cannabis, KushCon found time to conduct the mega 4:20 give away of well over $100,000 of free gifts to the attendees including a fully equipped $46,000 mobile GrowBot cultivation system. Simply, the breadth and scope of KushCon II was unlike anything the cannabis industry has ever seen. Never before has there been such a large diverse group of people gathered under one roof to be a part of the growing medical cannabis insurgency. The organization and dedication to the cause by the patrons, participants, Kush Magazine, dailybuds.com performers, and speakers, has rightly shown the seriousness of this movement. It has without a doubt exposed to all, the political, social and economic power of cannabis. KushCon II has shown that the synergy of diversity, education, and advancements on every level have been and will continue to be the sources from which the medical marijuana revolution and the efforts to thwart prohibition will continue to thrive and expand into uncharted territories. For complete album of photos from KUSHCON visit DailyBuds.com 66 67 in Aspen... For Free? WhAt More Could You Ask For? Winter X Games by JAY EvANS N ow that the holidays are in our rear-view, we can focus on what's really important: crazy, sick Skiing, Snowboarding, and Snowmobile tricks like Ollies, 360s, Helicopters, BackFlips, McTwists, Chicken Salads, PopTarts, Mule Kicks, Tailfishes, Twisters, Fender Grabs, No Footed Can Cans, Supermans, and No Footers. Whoooooooohooooooooo!!! Yeeeeaaaaaaaahhhhhhhhhhhhhh!!! You get the picture, dude - It's Winter X-Games time!!! And ESPN is coming back to Aspen/ Snowmass Jan. 27th - Jan. 30th, for the glorification of all that is "rad!" The 15th annual winter action sports competition features athletes from around the world, competing for medals, prize money, and maybe most importantly, the respect from their athlete peers. The X-Games, (both Summer and Winter) have grown exponentially in popularity since 1995. The events have become a quasi - Olympics for extreme athletes and earning X-Games Gold can translate into millions in prizes and endorsements. These extreme athletes are household names now, and have found these games to be their main-stage. With some of the most amazing feats of athleticism, these "extremists" continue to push the limits of physics, pulling off more and more difficult and daring tricks with every passing year. What was once only done on a BMX bike, is now being done with a 450 pound snowmobile. Truly incredible! The Winter X-Games will be held at Buttermilk Mountain. For all-inclusive event packages, you can call 888-649-5982, or email at info@ StayAspenSnowmass.com. But guess what, locals? If you can get up to the mountain, the events are FREEEEEEEEEEEEEEEEEEEEEEEEE!!!! With so many killer events, and jaw-dropping action, you just can't miss. This is the weekend to book your stay, grab your board, and get "high".... up the mountain, of course. Here's a list of events to cHoose from: skiing Big Air Skier X Men's Skier X Women's Mono Skier X (Men's & Women's combined) Slopestyle Men's Slopestyle Women's SuperPipe Men's SuperPipe Women's SuperPipe High Air snoWboArd Big Air Slopestyle Men's Slopestyle Women's Snowboarder X Men's Snowboarder X Women's SuperPipe Men's SuperPipe Women's snoWMobile Best Trick Freestyle Knock Out SnoCross Adaptive SnoCross 68 68 69 CAREGIVERS FOR LIFE 70 71 72 73 The Art and Science of being a Budtender by JB Woods Medical marijuana patient Johnny Green recently decided to try a new dispensary after being referred there by a friend. "I wanted to work with budtenders that knew their medicine and I felt comfortable with how they operate," said Green. Medical marijuana patients today have many choices as to where they buy their medicine. If they don't particularly care for the treatment at one dispensary, they easily can drive a mile in any direction-- especially Colorado's Front Range--to find what they are seeking. The level of competition for patients has forced many dispensaries to remodel their locations, demonstrate compliance, and create environments for patients to feel safe and comfortable. Ultimately, the professional service and relationship with patients established by budtenders can make or break a business. The profession of budtender has quickly evolved over the last year into a highly sought after position. Dispensaries seek qualified applicants, and individual job seekers are out to prove that they have what it takes. The work of a budtender is a balancing act between art and science. The "art" is the interaction with patients to develop trust and understand their needs. The "science" is having knowledge of a vast number of marijuana strains, which can top one hundred at some of the larger dispensaries. The title itself was most likely a derivative of the word bartender--the cannabis industry just swapped out "bar" for "bud." Budtender Daniel Sanchez of Pure Medical Dispensary in Colorado Springs remembers when a new patient said, "It's nice to see that you have all of your teeth," referring to his former budtender who apparently lacked some dental hygiene. Besides Sanchez being able to provide service with a smile, Pure Medical goes several steps further by having a dress code, name tags, and refreshments for patients to enjoy. Sanchez believes that patients appreciate the professionalism. Just like pro athletes that train daily, great budtenders must know their product and patient in order to excel at their jobs and provide a valuable service. Mitch Woolhiser of Northern Lights Natural Rx in Edgewater, Colorado remembers visiting a dispensary and asking the budtender,"Can you tell me about white widow?" The response felt like a brick hitting the ground--"Those buds are $25.00 in the jar." replied the employee. This budtender had not learned the important skills of listening and being informative, attributes that many patients require or expect in this competitive industry. Mitch uses this unfortunate experience by offering the complete opposite at Northern Lights. They make it a practice to listen to their patients, know their product well, and understand how medical cannabis affects patients physiologically and emotionally. "I like to look at the history of what the patient has purchased from past visits to help determine if the treatment working," says Mitch. In order to stay informed, many budtenders have books available-- such as The Cannabible--for those occasions when a patient asks about an unfamiliar strain. Initiating further education is common practice for Mike Maes, who is a budtender at Infinite Wellness of Fort Collins. Maes researches strains online through his favorite website,. These resources provide Maes with the product knowledge that is required to be confident behind the counter. There is a strong desire by medical marijuana patients for legitimacy and safety at dispensaries. Even though Maes can easily recognize many of his repeat patients, he will verify their registration card as if they were a new client. "Our patients like the fact that we are legitimate and that we confirm their information," says Maes. "Patients don't want to feel as if they are working with a drug dealer in a backroom or behind a curtain. We also understand that their information is private and should be treated with the respect it deserves," he continued. Assessing how a patient is feeling comes in many ways. Budtenders must have the ability to quickly identify the patient in front of them as to their mood or demeanor, while being careful not to judge them incorrectly. Many patients that appear healthy can be suffering immensely inside. This is where the art of intuition takes over for the best budtenders in the industry. Sanchez from Pure Medical says, "the work I do is about reading people. We have a lot of people who come to our dispensary that are suffering from difficult medical conditions." His job is to be able to read those patients without being intrusive. "I like to ask questions like--what are you looking to solve today?" or "what did you have last time and how did that work for you?" These questions allow patients to feel comfortable enough to share their situation and begin to build trust. The art and science of budtending is tethered with the testimonials from patients they have impacted through the patient and caregiving relationship. Woolhiser from Northern Lights remembers a patient who reluctantly tried an edible after suffering from chronic back pain for 15 years. It was a moment of celebration when his patient expressed that he experienced his first sound sleep in ages. Today his wife purchases canna butter to make cookies for her husband. A patient testimonial remembered by Maes from Infinite Wellness was when he recommended edibles to a cancer patient who uses an oxygen tank to breath. She was skeptical at first, but was ecstatic with the results and thanked Maes for having the knowledge to recommend an alternative form of treatment. It doesn't matter what kind of business it is, as those tried and true qualities of experience, knowledge and a desire to please customers are relevant to success. As Sanchez from Pure Medical said, "honesty still works behind the counter." 74 74 75 76 One of the most prevalent misconceptions regarding how to fertilize cannabis is that more is better. Whether you're asking for advice at your local hydroponic store or searching online grow forums � you'll notice that almost everyone seems to agree that the goal of fertilizing Cannabis is to "force" as much nutrients into the plant as possible. So it doesn't surprise me that 9 out of every 10 growers I meet today over-feed their crops to some degree, many of them significantly. Some of them are otherwise very proficient growers with many years of experience and good looking product to show off, but when it comes time to burn a joint of their pretty herb, the visual appeal is forgotten in a cloud of harsh, heavy smoke that really irritates the throat and lungs. Smoking Cannabis is supposed to be a pleasant experience from the first whiff of a new bag to the last tasty toke off a joint. By understanding what fertilizer is, when the plant wants it and how to know the appropriate amount to feed, growers can yield as much or more than they ever have while improving the quality of their crop significantly. First things first � what we call "plant food" is more appropriately called fertilizer or nutrients. A plant's "food" supply is actually sugars � simple carbohydrates made through photosynthesis. Plants make their own food out of light, air (CO2) and water. Fertilizer - the stuff that we're supplying in those fancy bottles with big claims on the labels - is actually more like multivitamins for humans than it is to actual food. Now, if you take a good cross section of traditional tips regarding Cannabis fertilization and boil them down, you'll get something that goes like this: find the maximum feeding level for your plant by adding increasingly larger amounts of fertilizer until mild symptoms of overfeeding occur (like leaf curl or burned leaf tips), then back off slightly to the point where the symptoms are no longer seen. (continued on page 80) 78 78 79 80 80 plants both receiving the same nutrient solution of 1000 ppm worth of fertilizer. If an indica plant is consuming a gallon of water per week and a sativa plant is consuming 2 gallons of water per week, then the sativa is actually receiving twice as much fertilizer overall because the fertilizer is suspended in the water and the plant has no choice but to drink. For this reason, sativa varieties should be given lower concentrations of fertilizer due to the fact that they typically drink more water. Indica varieties can tolerate higher concentrations of fertilizer in the root zone, but that doesn't necessarily mean that they enjoy it. Indica plants have simply adapted to regions that are more arid. As soils get dry, the nutrients become concentrated in the remaining amount of water. The last few drops of water in a dry soil will be extremely concentrated with fertilizer. (That's why you never want to apply nutrient water to extremely dry soils � always re-wet the media with unfertilized water if they get really dry.) This adaptation gives indicas the ability to withstand higher levels of fertilizer in the root zone than sativas but it's still very important to note that tolerance is not preference. Just because a variety can tolerate the 2000 ppm solution you're determined to give it, doesn't mean that it is performing at its peak or yielding as much as it could. It might be yielding the nutrient companies a big return, but your crop is probably just overfed. What We Want vs. What Our Plant Wants When it comes to Cannabis, what we want is resin � the sticky psychoactive stuff. When plants are properly fed, they produce plenty of flowers and resin. The plants want to produce as many flowers as possible � it's in their best interest and it's what we want as well. The difference between our desires and the plant's is that the plant is trying desperately to reproduce with its flowers and we're trying to stop it from reproducing so that the buds swell with resin instead of seeds. The point here is 2 fold. First of all, you don't have to cram as much nutrients into your plant as possible for it to yield well - the plant wants to get big on its own. Secondly, when a plant is given more fertilizer than it needs to produce its structures, it just keeps storing nutrients as a survival mechanism. If the female plant goes un-pollinated, it's just going to keep storing nutrients in an attempt to hopefully survive a mild winter and re-grow in the spring. Despite being an annual plant, un-pollinated females will frequently live through a mild climate winter (like many places in California) and sprout new vegetative growth when the days start to get longer. So, overfed plants simply keep storing up excess fertilizer in the hopes of later re-growth. At a certain point, the extra fertilizer doesn't contribute to the development of flower structures or the production of resin, it's just building up. The plant doesn't know that we want it to burn cleanly after we harvest it; it's just thinking about how to live long enough to make a seed. As for adding weight, excess fertilizer actually contributes very little and besides, that's not the weight you want. When the plants aren't forced to cope with storing excess fertilizer, they use all their energy and available resources to build as many flower sites as possible (hoping for seeds) and then use their energy to fill the empty seed pods with resin (as a defense mechanism to keep animals from eating them). What we want is resin weight, not fertilizer weight. If you take 2 nugs of equal size and shape, but one is clearly more resinous, then that nug will always weigh more. An excess of fertilizer in the bud contributes little in the form of weight but can essentially ruin otherwise excellent pot. Again, the plant wants to grow big flowers and swell with heavy resin. Growers need to stop thinking about fertilizing in terms of force feeding the maximum amount and start thinking about it as "covering your bases". It's very easy to get lost in all the hype and claims on the nutrient bottles � at times it seems as though you need every product in the store. Don't get overwhelmed. The best gardens I've seen are the product of well managed environments � not the result of a magic bottle. When plants are provided with ample, but not excessive nutrients, in a good environment, their genetic potential is easily realized. If you love to feed your plants, try reducing your fertilizer strength by 20% on a few representative plants and see the results for yourself. If they start doing substantially better, you may want to try another small scale trial of fertilizer reduction and reduce the concentration even more. Not only will you start saving money on nutrients immediately, I bet those are also the plants you smoke first. For those aiming for the highest standard of quality in their medicine, less is more when it comes to fertilizer. In Next Month's Growers Grove: We're going to take a closer look at some common garden styles and the EC values that work best in those conditions. Many factors can be involved in finding just the right nutrient concentration for your crop. Among these are Cation Exchange Capacity, pH and the concentration of fertilizer in the garden's runoff water. With these considerations in mind we'll look at healthy standards We're also going to take a closer look at rinsing agent products to better understand when and how to use them (if at all) and how to know when it's time to rinse. Methods for improvement of outdoor soils and a discussion of outdoor fertilizers will also be covered. Until then, Happy Holidays from the Grove! 81 81 82 83 84 85 Left to Right: Silver Surfer Vaporizer Unit, Da Buddha, "Right Past the Light" Vaporizer designed by Jay Alders SSV In a twist of irony Steve's story represents the quintessential tale of the American entrepreneur. As Silver Surfer vaporizers continues to gain international presence in the world of smoking accessories and beyond, each failure and success of the company ultimately rests with Steve. So far the successes have outweighed the failures, and regardless, Steve has a unique ability to take what might be considered debilitating failure and turn it into great success. It is not surprising over the last few years Silver Surfer has sold thousands of products across the globe and developed a brand recognized for making some of the finest vaporizer products available. Silver Surfer vaporizers is the result of a combination of Steve Kelnhofer's hard work and determination to prove himself the best at what he does, his wits as a business man to know what people want and how to give it to them, and his misfortunes (or fortunes depending how you look at it) due to the legal status of a certain plant. Steve started out his career completing a five year apprenticeship program and working as a union electrician. However, in 2002 he was charged with cultivation of cannabis and was laid off for violating company policy. Out of work and unable to practice his trade, Steve viewed this as an opportunity to begin his own business. He considered starting a clothing line but realized he did not have the necessary start-up capital requirements, so instead he teamed up with a Silver Surfer Vaporizers by JOHN GREEN | photos by ROBB FRIEDMAN friend who worked at an adult toys distribution center, started a company called 7th Floor, bought a book on HTML, and spent three months learning how to code and building a website selling sex toys online. For a few months things were good for Steve and his website. He figured that people like sex and if they were inclined to purchase accessories to make it better they would go to Google, and through whatever inquiry submitted geared towards the indulgence of their fantasies, if his site was among the top rankings for search returns his business would be successful. His site sat near the top and attracted a good deal of traffic until Google changed their algorithm, dropping Steve's site from relevance to the throngs of online sex toy buyers. What seemed like another failure provided Steve with inspiration to consider a new venture, leading to what would become the Silver Surfer. Steve had been using his friend's vaporizer for over a year and loved it compared to the traditional smoking experience, however he noticed issues with the poor design of this vaporizer and became disappointed because it was continuously breaking and needed to go into the shop for repair. Steve decided to use his experience as an electrician to build his own vaporizer. Steve researched the parts he needed for the heater and the tube, and built his first vaporizer from home with his old tools by 86 86 hand. The basic functioning model was complete except for the glass. Steve searched his town for someone to help make the parts he needed, but with no luck turned to the internet where he bought a book on glass blowing. He picked up the proper supplies, set up a small glass blowing studio, and spent a couple months learning how to make the pieces needed to fit his vaporizer. Finally, Steve had the first Silver Surfer model up a running. Steve soon turned his house into a Silver Surfer production center, making six more models of the same version as the first vaporizer for his friends, and using his personal experience of making each product and criticism from his friends to continue to refine the Surfer. After this Steve saw the potential of the distinct vaporizer model he created and began to consider how to bring the product to the greater market and expand his 7th Floor company. Steve went at it for over a year and a half building and selling vaporizers from his house, hiring employees to handle production and assembly, outsourcing part orders to production factories in China and other countries to reduce costs, selling products independently through Ebay, and developing the Silver Surfer brand from the ground up. Demand soon overwhelmed the space available and Steve moved into his own head shop creating a retail outlet for the Surfer and other products. Today Silver Surfer has grown into a major presence in smoking culture focused on delivering unique variations of smoking and vaporizing devices and accessories and selling their products online through a few partnering e-commerce retailers as well as in smoke shops around the world including the USA, Canada, Australia, and countries in Europe and Asia. The Silver Surfer is still the mainstay of the company, featuring top quality in functionality and design in addition to unlimited options for customization of glass knobs, wands, covers, mouthpieces, and logos. The piece is extremely durable, easy to use and maintain, and delivers an excellent vaporizing experience that is healthier than smoking while still producing the desired effect, tasting great, incorporating oil diffusion to enhance the scent, and conserving product in the process. Beyond the Surfer, the company has the Da Buddha vaporizer which is a cheaper alternative to the Silver Surfer appealing to a greater quantity of users still placing importance on function and design, but featuring less options for art and customization. Recently, the Life Saber vaporizer has been introduced as a portable vaporizing option again placing the same importance on durability, function, and design as the rest of the line. The company also sells a wide variety of other accessories from glass to grinders, not to mention their own line of SSv clothing. Steve plans to further expand the Silver Surfer and his other products by overcoming key challenges such as meeting demand for products, continuing to make business operations more efficient, and meeting the funding required for growth by staying true to his original design principals while developing new products and building the Surfer brand through sponsorships of artists and musicians who can help spread the word. If you would like to share some of Steve's success it is highly recommended you check out the Silver Surfer and the other product offerings from his 7th Floor. For more information on these products visit silversurfervap.com The Life Saber Vaporizer 87 Strain review by Michael Dillon Heavy Hitter OG is a kush well worth its title. It's hard hitting cannabis that doesn't disappoint the common, or casual smoker. This strain is really just classic OG kush with a bit of an extra kick, giving it the appropriate `Heavy Hitter' name. It can be hard to trust a name in marijuana, as many strains tend to be boast via title, but HH lives up to the claim in the name. Hea vy Hitter Heavy Hitter has a medium to light green color with a healthy portion of red hairs, and a reasonable amount of trichomes, creating that shine that so many of us connoisseurs know and love. Buds were very dense, making for good grinder material. Smell is sweet, piney, and overall very fresh. The smell is really nice and quite potent, which sort of functions as warning for the strength of the buzz (in case the name wasn't enough). When I first brought this home, within one minute of opening the bag my roommate asked if I just got some new pot, and he was swiffering the floor in the other room. So yeah, it's strong in a lot of ways. The taste is that of pure kush; nothing too sweet, sour, fruity, or anything else. It's just the way you dream of great kush hitting your taste buds. HH has a high that creates a somewhat hazy effect, where it may be hard to communicate, and become easily lost in your own thoughts. For someone who may only smoke occasionally or recreationally, this could put you down for a while. Also it causes giggles, physical soothing, and an overall relaxation that should lighten any previously dark mood. This isn't a strain you'll need to smoke a lot of, which is always nice on the lungs, throat, and wallet. Similar to other OG Kush strains, Heavy Hitter OG is a strong indica, with all the wonderfully mellow, soothing qualities that come along with the indica name. The precise origin wasn't specified, but you can bet your booty that it's grown somewhere here in our fine state of legalized medical marijuana. Can be a little tricky asking for it, as my merchant seemed to think I was just asking for a `heavy hitting' strain of OG Kush.... turned out they didn't have the strain anyway. Never a bad idea to call around to several dispensaries in advance to see if they have some in stock, but it should be available. HH OG is great rainy day weed; perfect for those long afternoons where typical `stoner' activities seem like the only right thing to do. Grab the remote control, order some delicious delivery treats, and suck down that pretty little bowl. A good blanket/snuggie and your favorite sweatshirt are all the company you really need to feel all warm and cuddly the solo way. Really the ideal way to enjoy this kind of buzz because you don't have to talk to anyone, and it's ok to get lost in thought for 15 minutes straight without vocalizing a word. Overall, a big thumbs up to Heavy Hitter. It's economical, enjoyable, and potentially enlightening. Recommended smoking for all the kush lovers out there. 88 88 89 by vALERIE FERNANDEz With so many great Hemp products out there to spotlight, Hempful Hints spends great effort bringing you new and exciting ones to enjoy. We've told you about the many health benefits, and some outstanding products optimizing the strengths of this wonder-plant, and now we want to take a break - not your normal 420 break, but one with a different satisfaction. This break is more the type to enjoy with your kids, or during the holidays with family. We're talking about the Cool Hemp product line, which includes Frozen Desserts, Energy Cookies, and even Cool Hemp Protein Powder, Hemp Balm, and Hempseed. Let's stay focused on the sweet stuff though.... Owners Christina and Robbie Anderman make their non-dairy frozen desserts using 100% organic products, including the innovative use of Hemp oils to reproduce the creaminess of "ice cream", or in this case, frozen dessert. Cool Hemp Frozen Dessert comes in Chocolate, Maple, and Natural. Rich in nutrition, a small 125 ml portion of Cool Hemp provides you with half your daily need of the Essential Fatty Acids Omega 3 and 6, as well as being high in iron and calcium. It's yummy too!!!! What better to accompany a frozen dessert than a delicious cookie, right? Cool Hemp's wheat-free, vegan cookies are a great source of fiber, Iron, Thiamin, and Niacin, and come in Raisin, and Chocolate Chip (of course). Not only is their plant certified by the OCPP/ Pro-Cert which governs all organic foods in our country, but their product line is Kosher - and that is governed by a much higher power. Their packaging is earth friendly with the vision of using hemp fibre packaging in the future. Even their community farm is solar-powered, with wood and solar heat, and they use only recycled or tree free hemp paper. Since they are a Canadian based company, most of their products are only available in Canada. However they do have a limited number of products available online for purchase. For more info on acquiring these Cool Hemp products, go to. Your taste buds will thank you! Living Harvest located in Portland, Oregon also provides great frozen hemp desserts called TemptTM made from hemp milk (filtered water and hulled hemp seeds) available in 5 delicious flavors --vanilla Bean, Mint Chip, Coffee Biscotti, Chocolate Fudge and Coconut Lime. These yummy non-dairy desserts provide all the benefits of the hemp seed featuring the essential fatty acids of Omega 3 and 6. Living Harvest also sells Hemp Milk in five flavors including original (sweetened and unsweetened), vanilla (sweetened and unsweetened) and chocolate. To see a complete list of these earth friendly products check out. While their frozen desserts are not currently available for ordering online, their milk, protein powders and hemp oils are. 90 90 91 We Dig This Winter Events 92 In Colorado by JAY EvANS Colorado has an array of great things to do. There are the beautiful outdoors to take advantage of, with access to some of the best snow, and Winter sports facilities in the world. We also have a great heritage of Western sports, Rodeos, Round-ups, and the wrangling of pretty much anything that runs wild on four legs. And of course there are the health nuts out there, up early, jogging in the cold, and making us all look bad. You know who you are. Don't be ashamed by this public outing. We respect you for what you do - it's just that we don't like doing it. So anyways, we here at KUSH figured we'd cover the gamut in this January's We Dig This. So here's a lil' something for everyone out there, to kick off your New Year. Steamboat Springs will host the 98th Annual Winter Carnival, Feb. 2nd - Feb. 6th. This seasonal hit started in 1914 as a way to deal with "cabin fever," and now this community based event allows us to show off our town, and welcomes all to "Ski Town USA." The festivities include events such as: Ski Jumping competitions, a Snowboarding Jam Session, the Soda Pop Slalom, a Tubing Party, the Diamond Hitch Parade which includes the High School band on skis, the Street Events on Lincoln Avenue, and the Night Extravaganza at Howelsen Hill, including a brilliant fireworks display and the famous Lighted Man. That last Saturday and Sunday, hundreds of people line Lincoln Ave. to witness the Street Events. These events feature children being pulled behind horses down Lincoln, on skis and snowboards. There is the Street Slalom, Ski Joring (whatever that is?), Ring & Spear (sounds dicey), Ring & Box (sounds expensive), and the famous Donkey Jump. This just sounds outright dangerous, yet somehow it's been sanctioned by the city, so... maybe you should plan on seeing it, just to say you were there. Go to for all the details. Now, if livestock, the hunting and/or killing of livestock, the eating of livestock, or the watching of men and women riding, and doing strange things to livestock is your thing, then we have an event for you!!! The National Western Stock Show - taking place January 8th - 23rd. With many venues around town facilitating these events, checking the schedule at is advised. Besides the Rodeo, Horse Shows, Livestock Shows, Trade Show, and great food concessions, there will also be a great Art show with a Red Carpet Reception. This is where the Art world collides with the "High Life." There seems to be something here for just about everyone. Don't miss it. Healthy people, come one! Come all! This part of the article is for you! If you happen to be one of those people, and are still reading KUSH magazine, then keep doing whatever you're doing. Check out these bone numbing events: Chilly Cheeks Duathlon Series at ... Sorry one of these three events already happened, but there's still time to freeze your butt off in two of them, January 15th, and February 26th. These Run/Rides will keep you on your toes. These can even be a part of your New Year's Resolutions, (... if you have those). So check out the info online, and make plans to be there. We have to get through these cold winter months, so let's make the best of them. Before you know it, you'll be hiking Red Rocks in a tank top, basking in the sun. Happy New Year! 93 QUESTION: IS IT SAFE TO SMOKE RESIN? - CHRISTOPHER WELLISH - LONGMONT, CO Buckie: Yes. As a matter of fact, resin can be very useful at bed time - plus it's free! Many people refer to the build-up inside of a cannabis smoking device as resin. This residue is left behind as particles from your cannabis smoke stick on the surface of your smoking device as they pass over. In a previous Ask Buckie, we discussed the process of decarboxylation (dee-car-box-ill-lay-shun). When THC-A goes through this process, it becomes the familiar cannabinoid THC and through further degradation it converts into the cannabinoid CBN. Harvesting resin and putting it through the smoking process again can deliver a very rich dose of CBN. After smoking resin, many cannabis users report a mild effect followed by sleepiness. Only 10% as psychoactive as THC, CBN isn't very effective in tickling the mind but it will help you get to sleep. Resin will appear very dark and feel very sticky if you typically smoke buds and firm hash. Hash oil tends to leave behind a residue similar to rubber cement, ranging from amber to black in color. The first type of resin I mentioned, generated by smoking buds and hash, is fairly easy to remove using a piece of durable glass or metal. Simply scrape the deposits from your smoking device and toss it back in the bowl. Hash oil resin can be tougher due to its viscous nature. If you have a dedicated hash oil smoking device, you can place it on a clean oven-safe glass baking dish and toss it in an oven heated below 200 degrees. Position the device in a way that the resin will drain out into the baking dish. Like mom always said, wait for it to cool down before you touch it. Disclaimer: Make sure to avoid any baking dish with a non-stick coating. QUESTION: I WENT TO MY LOCAL DISPENSARY AND SAW SOMETHING CALLED CAVIAR. WHAT IS IT? -BARISH LESAINE - DENVER, CO Buckie: Caviar is a term used to describe a delicacy comprised of salty fish eg... oh wait a second. In the wonderful world of cannabis, caviar (also known as infused bud) refers to cannabis buds that have been coated in hash oil or loose trichomes (kief). Beware, some care providers use this to cover up poorly grown or mishandled cannabis. Fortunately, caviar is fairly easy to make and can be made at home. If you would like to make your own caviar using kief, simply roll your favorite buds in a pile of kief until well coated. Presto! Kief caviar. Hash oil can be slightly more tricky depending on the consistency of the oil available to you. You're ready to roll if your oil is thin. Hash oil that is sticky or clumpy can be heated very slightly to make it easier to work with. Grab an oven-safe glass baking dish and lay out a bud or two. Warm the hash oil until it reaches a runny consistency and drip it generously over the buds. Try to get most of the hash oil directly on to the buds. Upon cooling, the hash oil can make it difficult to remove the caviar from the glass surface. If you find your caviar glued to the dish, place it in an oven heated to 200 degrees for about 30 seconds or until the buds come off with ease. This is also a good time to collect any hash oil that has collected on the bottom of the baking dish. If you prefer to purchase ready-made caviar and suspect it was made to cover up unsafe cannabis, ask for the test results. 94 95 97 Cloning Y our Way Here we are, staring down the barrel of another winter in Colorado, wondering- at least for some of us- how we're gonna pass the time until the slopes and half-pipes open, and for others, how we're going to avoid the annual case of cabin fever! I have the perfect solution- create your very own `clone army' in your indoor growing space! Think of it- your favorite strains, multiplied by the dozens or hundreds, all marching to the beat of YOUR drum! If that doesn't get you excited, you can always grab the snow shovel... All righty, then- the first thing to do when you set out to take cuttings of your favorite plants is to set up a proper `clone zone'. This is an area in your house where the temperature stays between 72 and 78 degrees and doesn't change much, especially when the furnace kicks on. Remember that the lights you use will generate some heat, so factor in your fluorescent or CFL lighting- do a test setup- and use a good quality thermometer to check your temps. By the way, HID lighting isn't recommended, as cuttings just don't need and can't handle that kind of light intensity. Unless you're growing in a relatively unheated area, you shouldn't need a warming met, so use one only as a last resort. Next, get yourself a tray with a humidity dome- stop by the lawn and garden department or any greenhouse or hydro store and they'll be plentiful. I tend to stay away from the types that need fresh refills or new inserts for every new crop, since it runs into money and isn't necessary if you do your homework on the soil. Speaking of... I use basic indoor potting soil and I add a little bit of rooting accelerator (NOT cloning solution, I'll explain that application in a sec-), possibly a light application of natural pesticide drench, such as a neem oil based product to keep down fungus gnat larvae and spidermites, and some sort of beneficial microbe/micorrhizae inoculant powder, plus plant enzymes and vitamins. Keep the solutions used on the weak side of the manufacturer's recommendationsand for the love of Gaea, do NOT use any fertilizer at this point! Keep in mind that these will be very tender, vulnerable cuttings and that they can't handle much in the way of nutrients until after they've developed roots. Now, onto the setup- wet your soil until it drips a few drops when you clench it 98 to Success, the Easy Way! tightly in your fist. Then, gently place it into the cups of your tray, being careful not to pack it in place. Use a skinny pointy tool, like a chopstick, to poke a hole all the way down the middle of each cup for the stem of your new plant. Now, you're ready to actually take the cuttings. I choose strong, healthy shoots from near the top of my plants, since the ones on the bottom are shaded, usually pale and spindly and often have mildew on them. You want them to be just tall enough to reach the bottom of the hole you made in the cup and still stick up 3 or 4 inches. Take your cuttings using a sharp blade like an exacto-knife, place them in a cup of cool water to soak for a minute or two, then trim off any leaves or side shoots that would be covered by the soil. Then, using the blade, gently scrape the bottom inch or so of the stem to expose the tissue just under the outer covering of the stem. This gives your rooting solution something to absorb into. I use a gel type rooting hormone since the gel sticks to the stem of the cutting better than powder or liquid. Carefully place the stem all the way down the hole into your soil, and tamp the soil in place around the stem with your finger. Gently now, since this is when your plants are at their very most fragile! Keep all your cuttings well misted, and for the next week or so keep the humidity dome in place as well, because without roots the cuttings won't be able to draw any moisture up the stem to keep from wilting. After a week or so, gradually open your dome for an hour at a time until the plants are able to stand on their own. This will be easier- and your temps will be more stable- if you keep the humidity in your growing and cloning area above 50%. Yes, like any worthwhile skill it takes a little practice, but with attention to detail and some patience there is no reason why you should need a multi-zillion dollar super turbo monster cloning machine to achieve excellent results- and remember, clones are an exact copy of the original plant's genetics so once they've hardened off, you're dealing with a known quantity in terms of growth characteristics and horticultural preferences. So this winter, march down into your grow room and take command! Your clone army awaits! indoorcultivationconsulting@gmail.com by T yler C. Davidson 99 100 ately has been a rocky road. It's not that we're doing so badly, it's just that we haven't been able to consistently win since our seven game run from late Nov. into Dec. Since then we'll win one then lose two. We'll win two, then lose one, then win one, then lose three. You get it! We just can't get it together. The tragic loss of Michelle Anthony, (Carmelo's sister) and the understandable five games missed by the all-star didn't help, but we all push on, and will hopefully prevail. My beloved Denver Nuggets have seen better days, and hovering just above .500 (at the time of writing this article), my hopes heading into the New Year were waning. Then I got a jolt of Nugget energy, from the strangest of places: Los Angeles! What!?! Lakers town?!? How is it possible?!? On a recent trip to Los Angeles, I was lucky enough to see a great local Indie-Rock band named The Ventriloquists, at an L.A. staple called The Mint. This group of UCLA art students did not write an ode to Kobe, or another "Magic Johnson" song, but a tribute to our Denver Nuggets. "I Always Root For The Nuggets" (available in iTunes) is a clever homage to our team, and it was exactly what this Nugget fan needed to hear. Combining Funk, Soul, and Hip-Hop, this eclectic group of hometown faves were finishing up a west coast tour, and I was lucky enough to catch it. Their diverse set featured some tasty, soulful songs, with an incredible horn section. New songs from their latest release Bailout! got the crowd moving, but the older hits from their first album titled Safety Meeting got everyone out of their seat. Their tongue in cheek style of blending Funky Soul, with Hip-Hop/R&B is perfect for the Nugget fans. If anyone knows the house DJ at the Pepsi Center, pass this track on...This will get the fans out of their seats. The crafty lyrics and uplifting chorus are infectious. We need a push heading into the All-Star break. I'll pass on three things I learned in Los Angeles: 1."I Always Root For The Nuggets." 2. Don't be a dummy - Listen to The Ventriloquists @ 3. Never under-estimate the Denver Nuggets' appeal. here in Denver. We have Coach Karl back, and thankfully healthy. We have a great home record which should hold up thanks to the great fan base here. Our away record needs a little help, but that will hopefully come with the New Year. Here are some games to check out between January and February: SAT - JAN. 15TH VS. CLEVELAND - PEPSI CENTER SUN - JAN. 16TH @ SAN ANTONIO - ESPN / KRWZ AM 950 WED - JAN. 19TH VS. OKLAHOMA CITY - PEPSI CENTER FRI - JAN. 21ST VS. LOS ANGELES LAKERS - PEPSI CENTER / ESPN SUN - JAN. 23RD VS. INDIANA - PEPSI CENTER TUES - JAN. 25TH @ WASHINGTON - KRWZ AM 950 WED - JAN. 26TH @ DETROIT - KRWZ AM 950 FRI - JAN. 28TH @ CLEVELAND - KRWZ AM 950 SUN - JAN. 30TH @ PHILADELPHIA - KRWZ AM 950 MON - JAN. 31ST @ NEW JERSEY - KRWZ AM 950 / NBA TV WED - FEB. 2ND VS. PORTLAND - PEPSI CENTER FRI - FEB. 4TH VS. UTAH - PEPSI CENTER / ESPN SAT - FEB. 5TH @ MINNESOTA - KRWZ AM 950 MON - FEB. 7TH VS. HOUSTON - KCKK AM 1510 WED - FEB. 9TH @ GOLDEN STATE - KCKK AM 1510 / NBA TV THURS - FEB. 10TH VS. DALLAS - PEPSI CENTER / TNT SUN - FEB. 13TH @ MEMPHIS - KRWZ AM 950 MON - FEB. 14TH @ HOUSTON - KRWZ AM 950 101 102 Medical marijuana patients across the country are losing their jobs after workplace drug tests reveal their confidential medical marijuana treatment therapies to their bosses. Originally designed to find drug abusers, these drug tests are placing State-authorized medicinal marijuana patients in a Catch-22 of the worst kind. Many workers are faced with an untenable "choice." They can resist the drug test and be fired for alleged "insubordination," or they can "voluntarily" submit and be fired for testing positive for medical marijuana remnants that stay in their bodies long after treatment has occurred. These tests are highly unreliable and cannot show impairment � only past treatment. Fifteen states and the District of Columbia recognize the medicinal value of marijuana and provide legal rights for qualified patients to treat a variety of debilitating medical conditions. Yet, since few States' medical marijuana laws provide specific legal protections to workers, some companies are firing excellent workers based on drug policies that supposedly allow private companies to "random" test medical patients, even when they are strong performers with no history of safety issues and solid work records. Your Job or Your Life. Workers are being terminated under outdated corporate drug testing policies that fail to account for State-authorized medicinal treatments and do not distinguish between illicit abuses and valid medical treatments. Although these workers may "fail" a drug test, they should be regarded as "Innocent Positives" when they have treated responsibly at home under their doctors' recommendations and in accordance with their State laws. They should be treated no differently than workers validly treating with pharmaceutical drugs. Even when workers offer their State-issued medical marijuana registration cards - proof that they are certified legal patients registered under their States' Medical Marijuana Compassionate Use laws - many unenlightened employers are treating vulnerable workers with debilitating medical conditions with resistance or outright hostility. Some workers, in light of their undeniable exceptional job performance, are told that they are great employees but that management's "hands are tied" by their corporate policies. Many employers say they can come back to work, if they just stop their medical marijuana treatments and use pharmaceutical drugs, many of which have highly potent side effects. Others simply say, "We don't care about State law. Our company policies and federal law trump your so-called State Rights." But no federal law requires termination of medical marijuana patients. The United States Department of Labor confirms that drug-free workplace programs are not required under regulations for the Occupational Safety and Health Administration or the Mine Safety and Health Administration. Nothing in the federal Controlled Substances Act requires or authorizes employers to fire workers who are validly treating with medicinal marijuana under their States' laws. Although the Department of Transportation makes no allowances for medical marijuana patients in narrowly-defined "safety sensitive positions," nothing in the DOT regulations requires termination. The Drug-Free Workplace Act of 1988 (DFWA) does not apply to most private employers and does not require drug testing. Even for federal contractors, the DFWA requires only that they notify workers that use, distribution, or possession of a controlled substance is prohibited in the workplace. It requires an awareness program about drug abuse in the workplace and mandates notification of available drug abuse counseling services and penalties for abuse violations. It does not require patients to disclose treatments. Instead, covered employees must disclose only criminal drug convictions. Even for convictions, contractors have discretion to impose appropriate sanctions, and termination is not required. Despite unsubstantiated fears that off-site medicinal marijuana treatments could threaten federal contracts, contractors lose funds only when such a high number of employees have been legally convicted for violations occurring in the workplace as to indicate that the contractor has failed to make a good faith effort to provide a drug-free workplace. Even then, federal agencies may grant waivers under appropriate circumstances, which should include Stateauthorized treatments. "NO FEDERAL LAW REQUIRES TERMINATIONOFMEDICALMARIJUANA PATIENTS" Workers' Rights? In Colorado, qualified patients have a Constitutional right to treat debilitating conditions with marijuana in accordance with the law. Yet to date no Colorado court has decided a medical marijuana wrongful termination case, and it remains to be seen how the courts and juries will treat unjust firings of patients who are exercising their Constitutional rights. If faced with a workplace drug test, patients should immediately seek legal counsel to help protect their rights as a productive member of the workplace. /////////////////////////////////////////////////////////////////////////////////////////// Fertile Ground is a monthly column published in KUSH Magazine highlighting the hottest state and national issues surrounding marijuana reform. This column is brought to you by Brian Vicente, the Executive Director of the advocacy group Sensible Colorado, and a partner at Vicente Consulting LLC, a full-service medical marijuana law firm. 103 104 104 104 I'm interested to see where states that are in the red go with reforming marijuana laws, and seeing how the money they can generate from marijuana tax revenue and licensing fees influences the way they view marijuana. The 4:20 mega-giveaway on Sunday, the third and final day of KushCon II at the Colorado Convention Center last month, was the grand prize at the world's biggest medical marijuana convention ever. The GrowBot give away was sponsored by MMJ Daily Deals, with CEO John Molinare present on stage for the selection of the winner of the contest. The winner was Scott Korpas, who unfortunately had to go back to work and was not present during the drawing. Throughout the weekend, contestants entered their names into raffle barrel at the MMJ Daily Deals booth located at the front entry of the exhibition hall in hopes of being selected as the winner of the 28 foot self-contained cultivation growing system from GrowBots. Korpas ended up winning the GrowBot-2800 an "all-in-one, plug-and-grow hydroponic production system . . . a complete seed-to-harvest solution." Housed in what looks like the trailer of a semi-truck, the $46,000 GrowBot was raffled off and awarded to the New York native Korpas. Korpas, 46, has been working as a hydroponics specialist for the last few years in Montrose, Colorado. Originally from the East Coast, Korpas had been working in the pharmaceutical industry before deciding to make the move to Colorado and explore the flourishing Medical Cannabis Industry. "I never thought I would be doing this," Korpas said. "But it's one of the few industries that's growing right now," he said, also mentioning how many of the industries in Northwest states such as Connecticut and New Jersey are diminishing in the current economic depression. He explains, "I'm interested to see where states that are in the red go with reforming marijuana laws, and seeing how the money they can generate from marijuana tax revenue and licensing fees influences the way they view marijuana." He is also fascinated with how far the cannabis industry has come in the last few years, and hopeful of the positive progress that will be made around the country in the coming few years. Korpas is currently making his way to Providence, Rhode Island to watch, learn, and participate in the development of its up and coming medical marijuana market. Working in the cannabis industry has also given Korpas the sort of life he thought he should be living. He is now interacting with customers on a daily basis, enjoying the breadth of humanity not the width of a desk. Scott Korpas is another example of an informed, educated, professional and articulate person who has decided to work in the booming medical cannabis industry. And with his specific interest in helping customers and enthusiasts' alike, growing marijuana in the most effective way possible, winning the GrowBot only serves as a way to better his knowledge of hydroponic growing. Korpas hopes to utilize the GrowBot to further his understanding of growing marijuana and also to help him better his ability to serve patients and growers alike. The Growbot Company is the brain child of Tom Patton, who currently manufactures the GrowBot units in three different configurations in a plant outside of Atlanta, Georgia. When asked to comment about Scott Korpas winning the GrowBot 2800, Patton said he was happy that the winner understood its capabilities and planned on putting the unit to good use. Patton stated that based on the overwhelming initial demand for the Growbot units, it was likely that he would soon be opening a second manufacturing facility somewhere in the western U.S. 105 Loveland by CHARLOTTE CRUz Loveland is a sweet little town with a sweet little. While Loveland is a sweetheart town, it's also a great getaway for a day on the slopes. The Area is one of Colorado's highest ski areas with a summit of 13,010 ft and the second highest lift served areas in North America at 12,697 ft (. The ski area takes its name from Loveland Pass, which separates it from the Arapahoe Basin ski area. With 8 lifts in operation, it's easy to find the terrain that's right for you. Loveland is home to a lot of beginners since there are easy slopes to learn on and a favorite of snowboarders who live for Loveland's powder. If you seek a challenge, The Ridge @ Loveland is the lift served area off chair 9 at an elevation of 12,697 feet and is hikeable to the summit at 13,010 feet (3,970 m). It features almost entirely Black and Double Black runs. It also has 360 degree views that stretch across and beyond the Continental Divide, so make sure you pause a moment to take in the scenery. Loveland is less than an hour from a large percentage of towns in the Denver area and is a favorite among locals for its short lines and less expensive lift tickets. While there may be fewer trails than some of the bigger mountains, Loveland has fantastic powder and a cool local's vibe that makes it a top choice destination for a "sick" day. The price of lift tickets are as $52 at Safeway or $59 at the mountain. Y can also get ou flex tickets that are good for any 4 hours or a half-day ticket (after 11:30) for $46. There is a no frills deli and cafeteria for dining and a bar that's always festive. So even if you don't have a lot of time or a lot of money, Loveland is the great escape to the slopes that won't take long to find and won't break the bank once you do. Colorado Travel ion to e ideal locat th day" end a "sick sp 106 CANNABIS HAS CAPTURED THE ATTENTION OF THE WORLD. FROM DECEMBER 17 THRU THE 19TH OF 2010, the Colorado Convention Center was buzzing with the nation's top medical cannabis political leaders, endocannabanoid experts, cultural movers and shakers in the largest Cannabis lifestyle convention to ever take place on planet Earth -- Kush Con II. The international media capitalized on the "Stiletto Stoners" phenomenon fascinated by women's use of cannabis. Celebrities like Melissa Etheridge and Alanis Morissette are putting a new face on this controversial plant. Highlighting what the media referred to as the "United Nations of the Cannabis Industry" was the newly launched NORML Women's Alliance fundraising weekend that began with a business to business networking event sponsored by the Medical Marijuana Business Alliance and KUSH Magazine on Thursday, December 16th where the elite of the cannabis industry gathered together to celebrate the movement and organize product and service giveaways that raised thousands of dollars for the charity. Heading up the Speaker Power Panels was former Beverly Hills NORML Executive Director Cheryl Shuman. Shuman is the Director of Public Relations and Media for the KUSH Brand including KUSH Magazine, KushCon and DailyBuds.com. . Kushcon II made history by featuring the most incredibly extensive list of guest speakers ever assembled in the cannabis movement. Each of the three days showcased a diverse set of panels that, in their entirety, covered every aspect of the growing cannabis industry. Friday, the first day of the convention kicked off the weekend with the "National Cannabis Political Powerhouse Panel". Headlining the panel was former New Mexico Governor Gary Johnson speaking about his plans to take on a 2012 presidential bid focusing on legalizing cannabis. This informed and dedicated panel featured the elite of the industry including, Keith Stroup, founder of NORML, the National Organization to Reform Marijuana Laws; Steph Sherer, Executive Director for ASA, Americans for Safe Access; Stephen DeAngelo, President of CannBe and Harborside Health Centers; Steve Fox and Aaron Smith of the newly formed NCIA, National (continued on page 108) 107 Cannabis Industry Association; Russ Belville, Host of NORML's Daily Audio Stash and writer for the Huffington Post. The second panel featured on Friday was the "Medical Science Breakthrough Panel" giving incredible insight into the increasing amount of scientific research being performed on the cannabis plant itself. Dr. Robert Melamede, C.E.O. of Cannabis Science, Inc. spoke about breakthroughs in curing cancer and other illnesses that are currently being documented by Craig Sahr, Executive Director of the Phoenix Tears Foundation. Full Spectrum Labs representative, Buckie Minor shared specific research that they are conducting as well as how science is legitimizing the medicinal benefits of cannabis building effective treatment plans. Soon, these companies will have case studies ready to submit to the FDA for possible clinical trials with patient case studies. Harvard trained Dr. Alan Shackleford of Amarimed discussed with the audience specific case studies and the need for the medical community to seriously reevaluate their practices to include cannabis as medicine. Book author, activist and Federal Cannabis patient Irv Rosenfeld shared his experiences being one of only four living patients that is provided medical cannabis through the government for free. Timothy Tipton finalized the panel sharing the work he does with the Cannabis Therapy Institute. Cheryl Shuman shared 25 years of experience working with media, celebrities, marketing and health care in Beverly Hills explained to the audience her personal cancer survival story using cannabis and the importance of using celebrity and media power to spread the word on a mainstream level. . Saturday was host to three extensive panels, including the "Colorado Political Power", "Cannabis and Hemp Wellness" and "Women's Political Powerhouse" panels. Colorado is currently viewed by the media and cannabis experts as the epicenter for Cannabis Law Reform. Governor Gary Johnson again addressed the audience on the probabilities of cannabis legalization benefits as well as discussing the specific role of Colorado's political leaders in forming new policies. Congressman Jared Polis arrived to the panel a few minutes late, but received a standing ovation for his support of the industry and movement. The most respected legal minds and leaders for Colorado soon chimed in with their mission and insight moving towards legalization in 2012 including attorneys, Warren Edson, Matt Kumin and Rob Corry. Powerful Activism leaders including Mason Tvert of SAFER and Brian Vicente of Sensible Colorado shared their vision and game plan for Colorado. Dan Hartman of the State of Colorado greeted the audience to share how Colorado is now the role model for the nation being the first to implement a system working with patients and business owners that works well generating millions in revenue for Colorado. Denver Mayoral candidate Councilman Doug Linkhart shared his vision for the city if he is elected. Legendary growing expert Ed Rosenthal conducted three full days of hour long seminars sharing his expertise in cannabis cultivation for free to the KUSHCON II crowds who were excited to learn every aspect of learning to grow their own medicines focusing on staying green and organic. The Cannabis Hemp and Wellness panels boasted Stephen DeAngelo sharing the business role model of the well respected Harborside Health Centers allowing patients to leave the shadows and enter the light of what is considered to be the finest in the nation by various media sources. Colorado Cannabis Hemp and Wellness Experts speaking included, Chloe Villano of Cloverleaf Consulting; Deanna Gabriel of Plant Magic, Inc.; Kerrie Badertscher of Otoke' Horticulture; Patrick Harrington of Kindness; Bret Bogue of Apothecary Genetics; Vincent Palazzatto of MMAPR; and Vivian McPeak from the world famous Seattle Hempfest. Many speakers spoke of the way the federal government (DEA) is enforcing laws versus current laws that states are passing, and explained the nuances of what the laws are and what people can do to help change them. They also informed the audience of the many new uses of cannabis as medicine, including topicals and edibles, with the overall goal urging everyone to help educate the general public about the cannabis plant. The Women's Political Powerhouse panel showcased the most prominent and influential women in the cannabis movement. Named by the media as one of the top five most influential political activists in the nation-KUSH's Director of Public Relations and Media, Cheryl Shuman opened up the panel sharing her experience as a single mother struggling to survive a terminal illness diagnosis. Shuman's medical case was one of the first cases accepted for study by Dr Robert Melamede of Cannabis Science, Inc. and Craig Sahr of the Phoenix Tears Foundation. Shuman's case if she survives the endocannabanoid therapy program outlined by these organizations could be the first medical case to be accepted for FDA clinical trials. Steph Sherer, Executive Director for ASA, American's for Safe Access spoke about the differences between the legalization of medical cannabis vs. the possibility of full legalization for marijuana for responsible adult recreational use. Founder of the NORML Women's Alliance, Sabrina Fendrick addressed the issue of women in the movement 108 108 and the efforts to "class it up" by KUSH's institution of a dress code for the convention, which was another first in the industry. She also spoke about the importance of more young women becoming activists. Other women representing the diversity in the NORML Women's Alliance included Attorney Anne Davis, Executive Director for New Jersey NORML discussed the issues involved in New Jersey's latest medical marijuana laws. Georgia Edson "came out" of the closet officially regarding her involvement in the cannabis community by not only being the wife of respected attorney Warren Edson and their lifetime of activism work with NORML, but by addressing the issues of being a mother to young children. Greta Gaines, world champion snowboarder and Nashville recording artist discussed the difficulty of openly discussing her activism work with NORML Women's Alliance in an "illegal" state of Tennessee. Amanda Rain flew in from California to discuss her role in Proposition 19 and how California is revamping it's policy to move towards legalization in 2012. Sarah Lovering of MPP, Marijuana Policy Project discussed the importance of women in activism and MPP's role in legalization efforts nationwide. Kandice Hawes flew in from OC NORML and discussed her personal experience being arrested for cannabis in college and how it influenced her to become one of the youngest women in the activism movement in a Republican county. Stephanie Bishop addressed the crowd on the health benefits of hemp seed as well as her role as an organizer in the outrageous successful Seattle Hempfest. Sunday took on a different tone, as both panels were largely concerned with the business and investment side of the cannabis industry. Business to business led the way with a panel including J.B. Woods of Greenpoint Insurance discussed business owners needs; Corky Kyle, The Lobbying Pro spoke about the importance of meeting with state and federal representatives; Joel Russman spoke about the specifics of compliance with HR 1284 in Colorado and Mark Goldfogel of MJ Freeways shared a revolutionary approach to software programs in the cannabis industry. The "Investors and Business Panel" highlighting Stephen DeAngelo described the risk and rewards of becoming a "ganjapreneur,"acknowledging that getting involved in the industry can be a risky business move. But in addition to the risks DeAngelo inspired the crowd by declaring that this is "our opportunity to create a new industry. Not just a new industry but a new KIND of industry." He spoke of his vision of the cannabis industry as a "cauldron of creativity" where the good-nature and strong morality of the cannabis movement carries over into the actual business itself. Bret Bogue, consultant for Apothecary Genetics told the crowd about surviving a rare form of cancer--using his knowledge to build a company that is the only American winner of the world famous High Times Cannabis Cup in Amsterdam. Apothecary Genetics is set to raise millions in capital to expand their brand to other ancillary businesses within the field. Robert Kane shared business plans and the momentum of brands such as KUSH Magazine, KUSHCON and dailybuds.com and how this media powerhouse is changing the future of America's economy through media and social networking. Vivian McPeak of Seattle Hempfest addresses the audience about the importance of respecting the political activists who have lost their lives, homes and children fighting for political changes in policy. McPeak also spoke about the annual gathering of hundreds of thousands of people each year for the Seattle Hempfest and of the importance of forming smart strategic alliances to push the movement forward. Attorney Matt Abel spoke to the audience about emerging markets and business opportunities in new legal medical marijuana states such as Michigan. Paul Stanford, Founder of THC Media Foundation spoke to the audience about how the media will influence our future. "We as a modern society can fiscally improve our budget by moving cannabis from the criminal sector into the lawful sector.," says Cheryl Shuman. "Marijuana prohibition makes the difficult job of parenting even more difficult by the state and federal governments not actually controlling marijuana use, cultivation or distribution -- notably by American youth," states Anne Davis, Executive Director of New Jersey NORML. These diverse speakers brought a contemporary approach to the public policy debate, and proudly represent the interests of modern, mainstream professionals who believe that the negative consequences of marijuana prohibition far outweigh any repercussions from marijuana consumption itself. For complete album of photos from KUSHCON visit DailyBuds.com Thirty Seconds To Mars 1.21.11 @ Fillmore Auditorium Still touring in support of their album This Is War, Jared Leto's band Thirty Seconds To Mars is making their way around the world and comes back to Denver for this show at the Fillmore. While many of us have been critical of Leto's screamo band over the years, they are slowly gaining more respect amongst music snobs across the land. They even did a song with Kanye on this album! It's said to be a legit live show that may leave you laying in bed afterwards pondering love lost, or when the earth will end. So if you're into that kinda thing, this is your ticket! Big Boi w/ Eligh & Scarub 1.28.11 @ Ogden Theatre Big Boi, one part of the amazing hip-hop duo Outkast, has been pumping out solo work for quite some time now. While sometimes not as recognizable or hyped as Andre 3000, Big Boi has always more than held his own. Joining him on this late January evening in Denver are two members of LA's Living Legends group, Eligh & Scarub. Overall, a really solid bill of genuine rap music that should leave you feelin' oh so fresh and so clean., Rebelution People Under The Stairs 1.28.10 @ Fillmore Auditorium Representing Santa Barbara, Rebelution has become leader of the pack in grassroots, independent, touring driven bands reppin' the California reggae scene. Originally formed in 2004, members Eric Rachmany, Rory Carey, Wesley Finley, and Marley Williams met at college in Isla vista, the laidback beach-side community in Santa Barbara. They bring their tremendous live show to the Fillmore on this, their Winter Greens Tour. If you like cannabis friendly music and vibes, this is a must see show. Don't miss it! 1.29.11 @ Marquis Theater People Under the Stairs, or P.U.T.S., came together back in 1997 with the union of MC's Thes One and Double K. They released their 7th studio album back in October 2009, and continue to tour the globe as one of the most prominent underground hip-hop groups of all time. Similar to other LA based hip-hop groups like the Pharcyde and Jurassic 5, P.U.T.S. employs an authentic quality in their rhymes that is both charming and inspiring. Get to Marquis Theater on this Saturday evening for a show that will be unforgettable. George Clinton & Parliament/Funkadelic 2.4.11 @ Fillmore Auditorium George Clinton, the legend who brought the funk alive along with the likes of Sly Stone and James Brown in the 70's and 80's, is still serving up a groovy platter of live funk. He's been inducted into the Rock and Roll Hall of Fame, founded both Parliament & Funkadelic, and has had a long sustaining solo career. Get ready to boogy at the Fillmore on this Friday night to kick off the month of February!, georgeclinton.com Ozzy Osbourne 2.8.11 @ Pepsi Center To be entirely honest, after watching The Osbournes on MTv years ago, I didn't think `The Godfather of Metal' had much left in his tank. But year after year, I'm amazed when I see his latest tour announced to the media. I guess copious amounts of drugs and bats can lead to rockin' stadium shows at age 62. The legendary metal-head that is Ozzy Osbourne brings the Scream Tour to the Pepsi Center in Denver, and it will certainly be a spectacular production. Even if you're not a fan of his newer material, you should get a chance to hear some classics from his Black Sabbath years. Don't miss the Prince of Darkness at the Pepsi Center on February 8th! The Decemberists, Mountain Man 2.10.11 @ Ogden Theatre On January 18th of this year, The Decemberists release their 6th studio album, The King Is Dead. Some of the songs are said to be influenced by REM, and REM's guitarist Peter Buck actually appears on three of the tracks. Touring in support of that album, the Portland indie/ folk rock band comes to Ogden Theatre for a show with Mountain Man, a band with a self described sound of "night noise". Mountain Man met at Bennington College in vermont, and their folk sound should compliment that of The Decemberists quite well. Should be a really good show at Ogden!,. bandcamp.com Sarah McLachlan 2.15.11 @ Paramount Theatre If somehow you forget to get your girlfriend a present for valentine's Day, and find yourself in the doghouse the next day, here is your opportunity at redemption. Sarah McLachlan is the key to many ladies' heart, not to mention she has a wonderful voice and many of you you men out there probably listen to `Adia' or `Building A Mystery' in your room by yourself. It's ok, she's legit... whether your friends will admit it or not. The Lilith Fair founder can be seen live at Paramount Theatre for this peaceful, friendly night of music. Left page: The Decemberists Above from Top: 30 Seconds to Mars, Ozzy Osbourne, George Clinton, People Under The Stairs, Big Boi, Sarah McLachlan 111 112 113 Chef herb go To. To learn More abouT cook with herb & 114 Thai ChiCken Wraps ingREdiEnTS -1/4 cup sugar -1/4 cup creamy THC peanut butter -3 tablespoons soy sauce -3 tablespoons water -2 tablespoons THC vegetable oil -1 teaspoon bottled minced garlic -6, 8- to 10-inch green, red, and/or plain flour tortillas -1/2 teaspoon garlic salt -1/4 to 1/2 teaspoon pepper -12 ounces skinless, boneless chicken breast strips for stir-frying -1 tablespoon THC vegetable oil -4 cups packaged shredded broccoli (broccoli slaw mix) -1 medium red onion, cut into thin wedges -1 teaspoon grated fresh ginger diRECTiOnS For peanut sauce, in a small saucepan combine sugar, peanut butter, soy sauce, water, the 2 tablespoons THC oil, and the garlic. Heat until sugar is dissolved, stirring frequently. Set aside. Wrap tortillas in foil. Bake in a 350 degree F oven about 10 minutes or until heated and softened. Meanwhile, in a medium mixing bowl combine garlic salt and pepper. Add chicken, tossing to coat evenly. In a large skillet heat the 1 THC. To assemble, spread each tortilla with about 1 tablespoon of the peanut sauce. Top with chicken strips and vegetable mixture. Roll up each tortilla, securing with a wooden toothpick. Serve immediately with remaining sauce. Makes 6 servings. sTraWberry and GoaT Cheese brusCheTTa ingREdiEnTS -1 8-oz. baguette -2 Tbsp. THC olive oil -1 4-oz. log goat cheese (chevre) -1-1/2 cups sliced strawberries -1/2 cup arugula -THC Olive oil -Sea salt or coarse salt -Freshly ground black pepper -Snipped fresh herbs PREPARATiOn Position rack in top third of oven and preheat to 425 degrees F. Unroll dough on heavy large baking sheet; pull to about 12x8-inch rectangle, pinching any tears to seal. Fold over edge of dough to make border. Heat large skillet over high heat 2 minutes. Add THC a full spray of THC olive oil and basil leaves. 115 sToned spinaCh and pine nuTs ingREdiEnTS -2 teaspoons THC Heat THC oil in a large nonstick skillet or Dutch oven over mediumhigh heat. Add raisins, pine nuts and garlic; cook, stirring, until fragrant, about 30 seconds. Add spinach and cook, stirring, until just wilted, about 2 minutes. Remove from heat; stir in vinegar and salt. Serve immediately, sprinkled with Parmesan and pepper. Note: If you would like this more medicated you may add more with just a little spray from you THC oil spray bottle Drain beans and pasta in a colander. In the same saucepan, cook onion and garlic in THC oil for 2 to 3 minutes or until onion is tender. Add the tomatoes and wine to the saucepan. Cook and stir for 2 minutes more. Toss in the drained beans and pasta, Parmesan cheese, and Italian parsley. Serve immediately. Sprinkle with pepper, if desired. Makes 4 side-dish servings ChiCken breasT WiTh neW poTaToes and asparaGus ingREdiEnTS -1 teaspoon THC butter -3 lbs boneless, skinless chicken breast, chopped into 2 inch cubes. -2 lbs red potatoes, chopped into 2 inch cubes -1.5 cups chopped Roma tomatoes -1 bunch asparagus, trimmed and cut into 1 inch pieces. -3/4 c. fresh basil, chopped -8 cloves garlic, thinly sliced -4 tbsp THC olive oil -1 tbsp chopped fresh rosemary -Ground pepper to taste diRECTiOnS Preheat oven to 400 degrees and coat with THC butter a large baking dish . Add chicken, potatoes, tomatoes, asparagus, basil, garlic and olive oil. Sprinkle with rosemary and pepper. Bake for 20-30 minutes, turning occasionally until tender. penne WiTh TomaTo and beans ingREdiEnTS -4 ounces fresh green beans and/or wax beans -4 ounces penne pasta (about 1-2/3 cups) -1/3 cup chopped onion -1 clove garlic, minced -4 teaspoons THC olive oil -2 ripe Roma tomatoes, seeded and chopped (about 1 cup) -1/4 cup dry white wine -2 tablespoons finely shredded Parmesan cheese -1 tablespoon snipped fresh Italian parsley -Fresh ground pepper (optional) diRECTiOnS Wash beans; remove ends and strings. Cut beans into 1-inch pieces. Cook beans and pasta in lightly salted boiling water for 14 minutes or until pasta is tender. shalloT eGGplanT sTeW ingREdiEnTS -3 tablespoon THC olive oil -1 tablespoon coriander seeds -1 dried red chili, such as Thai, cayenne 116 diRECTiOnS Heat THC olive oil in a large saucepan over medium-high heat; add coriander seeds and chili; toast until the coriander turns reddish brown and the chili THC olive pear frenCh ToasT ala mode ingREdiEnTS -1/4 cup packed brown sugar -2 tablespoons THC butter -1/4 teaspoon ground cinnamon - pears, peeled, cored, and sliced -3 eggs -1/4 cup milk -1 teaspoon vanilla -3 tablespoons brown sugar -1 teaspoon ground cinnamon -1/4 teaspoon ground nutmeg - 6 1-inch-thick slices French bread -2 tablespoons THC butter -Light or regular vanilla ice cream diRECTiOnS In a medium skillet combine 1/4 cup brown sugar, 2 tablespoons THC butter, and 1/4 teaspoon cinnamon; cook and stir over medium-low heat until margarine is melted and sugar is dissolved. Add pears; cook about 5 minutes or until tender, stirring occasionally. In a medium mixing bowl use a fork to beat eggs slightly. Beat in milk and vanilla. In a small mixing bowl stir together the 3 tablespoons brown sugar, 1 teaspoon cinnamon, and nutmeg; stir into egg mixture. Dip bread into egg mixture, coating both sides. In a large skillet melt the remaining 2 tablespoons THC butter. Add bread; cook over medium heat for 4 to 6 minutes or until golden brown, turning once. Add more margarine as needed. To serve, top each bread slice with pear mixture and ice cream. Makes 6 servings smashed CaulifloWer ingREdiEnTS -8 cups bite-size cauliflower florets (about 1 head) -4 cloves garlic, crushed and peeled -1/3 cup nonfat buttermilk (see Tip) -4 teaspoons THC olive oil, divided -1 teaspoon THC THC oil, THC butter, salt and pepper; pulse several times, then process until smooth and creamy. Transfer to a serving bowl. Drizzle with the remaining 2 teaspoons THC oil and garnish with chives, if desired. Serve hot. 117 DISPENSARY Listing ADAMS COUNTY Rocky Mountain Caregivers Holos Health (720) 329-5763 3000 Center Green Dr. Ste #130 Boulder,CO 80302 (720)273-3568 Colorado Care Inc 2850 Iris Ave. Boulder, CO 80301 (303) 250-9066 2450 Central Ave. Boulder, CO 80301 MMJ America 1909 N. Broadway St., # 100 Boulder, CO 80302 (303) 732-6654 The Medication Company 4483 N. Broadway St. Boulder, CO 80304 (303) 635-6481 ALAMOSA New Leaf Wellness Sensitiva Hollistic Therapeutics 451 Santa Fe Ave Alamosa, CO 81101 (719) 589-0420 1325 Broadway, Ste 211 Boulder, CO 80302 (303) 408-9122 Cr�me de la Chron Mountain Medicine Group 2515 Broadway St. Boulder, CO 80304 (720) 542-9943 The Village Green Society 2043 16th St. Boulder, CO 80302 (720) 746-9064 Dispensaries 11:11 Wellness 1111 13th St. Boulder, CO 80302 Dr. Reefer's Dispensary 1121 Broadway, Unit G-1 Boulder, CO 80302 (303) 727-0711 4476 N. Broadway St. Boulder, CO 80304 (303) 588-3335 New Options Wellness 2885 Aurora Ave., Ste 40 Boulder, CO 80303 (720) 266-9967 ALMA Therapeutic Compassion Center 1501 Lee Hill Dr., No. 22 Boulder, CO 80202 High Country Medical Solutions (303) 440-8208 Evolution Medicine Services 5783 Sheridan Blvd. Suite 101 Arvada, CO 80002 (303) 725-1629 Boulder Botanics 1750 30th St. #7 Boulder, CO 80301 (720) 379-6046 Ohana PC 918 University Ave. Boulder, CO 80302 Top Shelf Alternatives 1327 Spruce St., Ste 301 Boulder, CO 80302 (303) 459-5335 ASPEN Flower of Life Healing Arts, Inc. Alternative Medical Solutions 106 S. Mill St., Ste 203 Aspen, CO 81611 (970) 544-8142 Boulder County Caregivers 2995 Baseline, Ste 110 Boulder, CO 80303 (303) 495-2195 3970 N. Broadway, Ste 201 Boulder, CO 80304 (303) 444-1183 Options Medical Center 1534 55th St. Boulder CO 80301 (303) 444-0861 Trill Alternatives 1537 Pearl St. Boulder, CO 80301 (720) 287-0645 Green Belly Co-op Boulder, CO (720) 381-6187 Boulder Kind Care 2031 16th St. Boulder, CO 80302 (720) 235-4232 Root Organic Healing MMC 5420 Arapahoe Ave., Unit D2 Boulder, CO 80303 (303) 443-0240 1810 30th St., Unit C Boulder, CO 80301 (720) 432-SOMA (7662) Locals Emporium of Alternative Farms (L.E.A.F.) 100 S. Spring St., Ste 2 Aspen, CO 81611 (970) 920-4220 730 East Cooper Ave. Aspen, CO 81611 (970) 920-WEED (9333) Green Dream Health Services Vape Therapeutics 1327 Spruce St., Ste 300 Boulder. CO 80302 Boulder Kush Ute City Medicinals 1750 30th St, Unit 8 Boulder, CO 80301 (303) 447-2900 6700 Lookout Rd., Ste 5 Boulder (Gunbarrel), CO 80301 (303) 530-3031 SOMA Wellness Lounge WELL Dispensary 3000 Folsom St. Boulder, CO 80304 (303) 993-7932 Boulder Medical Marijuana Dispensary 2111 30th St., Unit A Boulder, CO 80301 (303) 449-2663 Healing House 1303 � Broadway St. Boulder, CO 80302 Terrapin Care Station AVON Tree Line Premier Dispensary Helping Hands Herbals 2714 28th St. Boulder, CO 80301 (303) 444-1564 5370 Manhattan Cir., Ste 104 Boulder, CO 80303 (303) 954-8402 BRECKENRIDGE Breckenridge Cannabis Club 226 S. Main St. Breckenridge, CO 80424 (970) 453-4900 40801 Hwy 6 Suite # 215 Avon, CO 81620 (970) 949-1887 Boulder Meds 1325 Broadway St., Ste 216 Boulder, CO 80302 (303) 440-8514 The Bud 2500 Broadway, Ste 100 Boulder, CO 80304 (303) 565-4019 High Grade Alternatives 3370 Arapahoe Rd. Boulder, CO 80303 (303) 449-1905 AURORA Rocky Mountain Patient Services 16295 Tower Rd. Aurora, CO 80122 (720) 275-9436 Boulder MMC 2206 Pearl St. Boulder, CO 80302 (303) 449-2888 The Dandelion 845 Walnut St. Boulder, CO 80302 (303) 459-4676 Medicine Man 101 N. Main St., Ste 6 Breckenridge, CO 80424 (970) 453-2525 High on the Hill 1325 N. Broadway, Ste 214 Boulder, CO 80302 (303) 545-9333 Boulder Rx BERTHOUD Herbs Medicinals Inc. 435 Mountain Ave. Berthoud, CO 80513 (970) 344-5060 1146 Pearl St Boulder, CO 80302 (720) 287-1747 The Farm 1644 Walnut St. Boulder, CO 80304 (303) 440-1323 Organix 1795 Airport Rd., Unit A2 Breckenridge, CO 80424 (970) 453-1340 Boulder Vital Herbs Indigenous Medicines LLC 1200 Pearl St., #35 Boulder, CO 80302 (303) 402-6975 BOULDER Doctors 2527 � N Broadway St. Boulder, CO 80304 (303) 440-0234 The Green Room 1738 Pearl St., Ste 100m Boulder, CO 80302 (303) 945-4074 BROOMFIELD Relaxed Clarity 1006 Depot Hill Rd., Ste 100 Broomfield, CO 80020 (970) 412-5955 Boulder Compassionate Care 5330 Manhattan Cir., Ste A Boulder, CO 80303 (303) 554-2004 Boulder Wellness Center 5420 Arapahoe Ave., Ste F Boulder, CO 80303 (303) 442-2565 Lotus Medical Boulder 3107 B 28th St Boulder,CO 80301 (303)339-3885 The Greenest Green 2034 Pearl St. Boulder, CO 80302 (303) 953-2852 CARBONDALE C.M.D. 1101 Village Rd. Carbondale, CO 81623 (970) 306-3231 CannaMed USA 1750 30th St. Boulder, CO 80301 (877) 420-MEDS Boulder's Unique Dispensary 900 28th St. Boulder, CO 80303 Medicine on the Hill 1089 13th St. Boulder, CO 80302 GrassRoots Medical Clinic 4450 Arapahoe Ave., Ste 100 Boulder, CO 80303 (303) 499-9399 The Hill Cannabis Club (THC), LLC 1360 College Ave. Boulder, CO 80302 (303) 245-9728 Green Miracle Medicinals 443 Main St. Carbondale, CO 81623 (970) 963-1234 MediPharm 800 Pearl St. Boulder, CO 80302 "Is your listing here? For new listings or corrections please contact us at: info@dailybuds.com" brought to you by dailybuds.com 118 DISPENSARY Listing CASCADE A Cut Above 3750 Astrozon Blvd., Ste 140 Colorado Springs, CO 80910 (719) 391-5099 Cannabicare 1466 Woolsey heights Colorado Springs, CO 80915 (719)573-2262 Emerald City Wellness 1353 S. 8th St. # 102 Colorado Springs, CO 80905 (719)344-8046 MC Caregivers 6020 Erin Park, Ste A Colorado Springs, CO 80918 (719) 264-MEDS (6337) Eagle's Nest Sanctuary 8455 W. Hwy 24 Cascade, CO 80809 (719) 687-2928 CASTLE ROCK All Good Care Center 329 E. Pikes Peak Ave. Colorado Springs, CO 80903 (719) 630-5500 Cannabinoids MMJ 516 Arrawanna St. Colorado Springs, CO 80909 (719) 344-9461 Epic Medical Caregiver 3631 Galley Rd. Colorado Springs, CO 80909 (719) 638-4596 Medical Marijuana Connection 2933 Galley Rd. Colorado Springs, CO 80909 (719) 297-1420 Mile High Medical Gardens 858 Happy Canyon Rd., #150 Castle Rock, CO 80108 (720) 249-2492 Alternative Medicine Colorado Springs 2606 W Colorado Ave. Colorado Springs, CO. 80904 (719) 358-6955 Cannabis Alternative Care Services 296 A S. Academy Blvd. Colorado Springs, CO 80910 (719) 571-9677 EZ Natural Alternatives 3475 Pine Tree Sq., Ste E Colorado Springs, CO 80909 (719) 694-9384 Mira Meds 3132 W. Colorado Colorado Springs, CO 80904 Ozee Inc. 858 Happy Canyon Rd., Ste 150 Castle Rock, CO 80108 (720) 249-2492 CENTENNIAL Credit Best Card, LLC 7108 S Alton Way Centennial, CO 80112 (303) 741-2313 Altitude Organic Medicine 204 Mt View Ln., #10 Colorado Springs, CO 80907 Cannabis Connection of the Rockies 4850 Galley Rd. Colorado Springs, CO.80915 (719)42- CCMMJ( 422-2665) Floobies 2233 Academy Pl., Ste 201 Colorado Springs, CO 80909 (719) 597-4429 Mountain Made Meds 5162 Centennial Blvd Colorado Springs,CO 80919 (719) 528- MEDS (528-6337) Altitude Organic Medicine 822 W. Colorado Ave. Colorado Springs, CO 80905 (719) 313-9841 Canna Care 1675 Jet Wing Dr. Colorado Springs, CO 80916 (719) 596-3010 Front Range Alternative Medicines 5913 N. Nevada Ave. Colorado Springs, CO 80918 (719) 213-0118 Mountain Med Club 4465 Northpark Dr. Ste 201 Colorado Springs, CO 80907 (719) 599-4180 Dispensary Credit Card Processing 7108 S. Alton Way, Bldg G, Ste 101A Centennial, CO 80112 (303) 981-8885 Altitude Organic Medicine 409 S. Nevada Ave. Colorado Springs, CO 80903 (719) 434-7918 Canna Caregivers 3220 N. Academy Blvd., Ste 4 Colorado Springs, CO 80917 (719) 597-6685 Genovation Laboratories 957 E. Fillmore St. Colorado Springs, CO 80907 (719) 632-6026 Natural Advantage Medical Marijuana Center 925 W. Cucharras St. Colorado Springs, CO 80905 (719) 533-1177 CENTRAL CITY Aromas & Herbs, LLC Go Green Cross 2514 W. Colorado Ave., Ste 206 Colorado Springs, CO 80904 (719) 930-9846 Annie's Central City Dispensary 135 Nevada St. Central City, CO 80427 (303) 582-3530 Cannabis Therapeutics Caregivers Cooperative 907 E. Fillmore St. Colorado Springs, CO 80907 (719) 633-7124 Hatch Wellness Center 1478 Woolsey Heights Colorado Springs, CO 80915 (719) 591-2151 Natural Remedies MMJ 408 S. Nevada Ave. Colorado Springs, CO 80903 (800) 985-7168 A-Wellness Centers 2918 Wood Ave. Colorado Springs, CO 80907 (719) 258-8406 Gaia's Gift 125 Main St. Central City, CO 80427 (303) 582-5329 Cannabis Therapy Center 5953 Omaha Blvd. Colorado Springs, CO 80915 (719) 686-4626 Hawaiian Herbal Health Center 3729 Austin Bluffs Pkwy. Colorado Springs, CO 80918 (719) 522-4442 Nature's Medicine Wellness Center 11 S. 25th St., Ste 220 Colorado Springs, CO 80904 (719) 213-3239 Best Budz 4132 Austin Bluffs Pkwy, Ste 4132 Colorado Springs, CO 80918 (719) 598-0168 CLIFTON God's Gift 571 32 Rd. Clifton, CO 81504 (970) 609-4438 Canna-pothecary, LLC 1730 W. Colorado Ave. Colorado Springs, CO 80904 (719) 633-2511 Humboldt Care and Wellness Center 6823 Space Village Ave. Colorado Springs, CO 80915 (719) 597-4292 Nature's Way 5012 North Academy Blvd Colorado Springs, CO. 80918 (719)531- MEDS (531-6337) Bijou Wellness Center 2132 E. Bijou St., Ste 114 Colorado Springs, CO 80909 (719) 465-2407 COLORADO SPRINGS Doctors CannaMed USA 2935 Galley Rd. Colorado Springs, CO 80909 (877) 420-MEDS Colorado Cannabis Caregivers 2203 N. Weber St. Colorado Springs, CO 80907 (719) 634-7389 Old World Pharmaceutical 3605 E. Platte Ave Colorado Springs,CO 80909 (719) 393-3899 Integrated Caregiver Services 2579 Durango Dr. Colorado Springs, CO 80919 (719) 393-8843 Briargate Wellness Center 890 Dublin Blvd., Ste C Colorado Springs, CO 80918 (719) 598-3510 Colorado Cannabis Center 1905 N. Academy Blvd. Colorado Springs, CO 80909 (719) 574-4455 Pikes Peak Alternative Health and Wellness Centers 1605 S. Tejon St., Ste 101 Colorado Springs CO, 80905 (719) 575-9835 JP Wellness 1741 S. Academy Colorado Springs, CO 80916 (719) 622-1000 Herbal Health Systems 1235 Lake Plaza Dr., Ste 221 Colorado Springs, CO (720) 576-HERB or (877) 304-HERB Broadmore Wellness Center 1414 S. Tejon St. Colorado Springs, CO 80905 (719) 339-7999 Doctors Orders 2106 East Boulder St. Colorado Springs, CO 80909 (719) 634-8808 Marimeds 222 E. Moreno Ave. Colorado Springs, CO 80903 (719) 634-8285 Pikes Peak Cannabis Caregivers 3715 Drennan Rd. Colorado Springs, CO 80910 (719) 216-5452 Dispensaries A Cut Above 1150 E. Fillmore St. Colorado Springs, CO 80907 (719) 434-1665 Canna Goods 2363 N. Academy Blvd. Colorado Springs, CO 80909 (719) 638-MEDS DrReefer.com 2231 E. Platte Ave. Colorado Springs, CO 80909 (719) 434-7166 "Is your listing here? For new listings or corrections please contact us at: info@dailybuds.com" brought to you by dailybuds.com 119 DISPENSARY Listing Pikes Peak Compassionate Care Center 2845 Ore Mill Rd. #6 Colorado Springs, CO 80904 (719) 633-8499 The Secret Stash 2845 Ore Mill Rd., Ste 6 Colorado Springs, CO 80904 (719) 633-8499 COMMERCE CITY Colorado Coalition of Caregivers 7260 Monaco St. Commerce City, CO 80022 (720) 987-3669 MMD- The Medical Marijuana Doctors 600 Grant St. #350 Denver, CO 80203 (303) 309-6704 or (720) 287-3440 B*GOODS MMJ Apothecary 80 S. Pennsylvania St. Denver, CO 80209 (303) 777-5239 Rocky Road Remedies LLC 2489 S. Academy Blvd. Colorado Springs, CO 80916 (719) 574-4230 Todays Health Care 221 S. 8th St. Colorado Springs, CO 80905 (719) 635-9002 Smokeshops Blown Glass and Accessories 4815 E. Colfax Ave. Denver, CO 80220 (303) 388-1882 CRESTONE Buds on Colfax 1515 S. Adams Denver, CO 80206 (720) 389-9375 Sibannac LLC 586 S. Academy Colorado Springs, CO 80910 (719) 572-1325 Today's Health Care 1635 W. Uintah St., Ste E Colorado Springs, CO 80904 (719) 633-1300 High Valley Healing Center and Wholesale Apothecary 116 S. Alder St. (Sangre de Cristo Inn) Crestone, CO 81131 (719) 256-4006 Buds on Federal 82 S. Federal Blvd. Denver, CO 80219 (303) 955-0070 Emergency Room 5070 Federal Blvd. Denver, CO 80221 (303) 386-4340 Simple Care Wellness Center 8270 Razorback Rd. Colorado Springs, CO 80920 (719) (719) 268-0612 Top Buds, LLC 575 Valley St. #10 Colorado Springs, CO 80915 (719) 591-7411 DACONO Dacono Meds 730 Glen Creighton Dr., Unit C Dacono, CO 80514 (303) 833-2321 Cannabis Medical 762 Kalamath St. Denver, CO 80204 (303) 912-2013 Head Quarters 1301 Marion St. Denver, CO 80218 (303) 830-2444 Sunshine Wellness Center 31 N. Tejon St., Ste 400 Colorado Springs, CO 80903 (719) 632-6192 Tree of Wellness 1000 W. Fillmore St., Ste 105 Colorado Springs, CO 80907 (719) 635-5556 MaryJanes 5073 Silver Peaks Ave., #103 Dacono, CO 80514 (720) 421-7012 Heads of State 3015 W 44th Ave. Denver, CO 80211 (303) 433-6585 Canna Center 5670 E. Evans Ave., Ste 216 Denver, CO 80222 (720) 222-3454 THC (The Highland Collective) 332 W. Bijou St., Ste 101 Colorado Springs CO, 80905 (719) 442-6737 Trichome Health Consultants 2117 W. Colorado Ave. Colorado Springs CO, 80904 (719) 635-6337 DENVER Doctors All Colorado Medical Doctors 1624 Market St., Ste 202 Denver, CO 80202 (303) 625-4012 Herbal Daze Smoke Shop 4530 E. Colfax Ave. Denver, CO 80220 (303) 333-1445 Caregivers for Life of Cherry Creek 310 Saint Paul St. Denver, CO 80206 (720) 536-5462 The Green Earth Wellness Center 519 N. 30th St. Colorado Springs, CO 80904 (719) 633-6337 U-Heal Apothecary 101 N. Tejon St., #102 Colorado Springs, CO 80903 (719) 465-3471 Herbal Daze Smoke Shop 6525 N. Federal Blvd. Denver, CO 80221 (303) 427-1445 All Colorado Medical Doctors 44 Cook St., Ste 100 Denver, CO 80206 (303) 625-4012 Carribbean Connection 6th Ave. & Santa Fe Dr. Denver, CO 80204 (720) 209-2454 or (720) 217-6786 We Grow Colorado, LLC 2502 E. Bijou St. Colorado Springs, CO 80909 (719) 634-4100 The Healthy Connections 1602 W. Colorado Ave. Colorado Springs, CO 80904 (719) 203-6004 High Fashion Glass 42 S. Broadway Denver, CO 80209 (303) 766-5473 or (303) 766-5437 Amarimed Dr. Alan Shackelford 2257 S Broadway Denver,CO 80210 (720) 532-4744 City Park Dispensary 3030 E. Colfax Ave. Denver, CO 80206 (720) 389-9735 Westside Wellness Center 2200 Bott Ave. Colorado Springs, CO 80904 (719) 344-8441 The Healing Canna 3692 E. Bijou St. Colorado Springs, CO 80909 (719) 637-7645 Smoking Lowell 4986 Lowell Blvd. Unit A Denver, CO 80221 (303) 433-4515 CannaMed USA 6855 Leetsdale Dr. Denver, CO 80224 (877) 420-6337 or (303) 388-2220 Colorado Care Facility Medicinal Marijuana 5130 E. Colfax Ave. Denver, CO 80220 (303) 953-8503 Security Urban Armor (719)209-7870 (719)440-5379 jjay@urbarmor.com brad@urbarmor.com The Hemp Center 2501 W. Colorado Ave., #106 Colorado Springs, CO 80904 (719) 633-1611 DENVER CENTRAL Advanced Medical Alternatives 1269 Elati St. Denver, CO 80204 (303) 351-WEED (9333) Colorado Caregivers Denver, CO (720) 258-6847 Happyclinicdenver.com 1211 S. Parker Rd., #101 Denver, CO 80231 (720) 747-9999 The Highlands Cooperative 332 West Bijou St., Ste. 101 Colorado Springs, CO 80905 (719) 442-6737 Watchpoint, LLC 5971 Omaha Blvd. Colorado Springs, CO 80918 (877) 277-6540 Cured Therapeutics 877 Federal Blvd. Denver, CO 80204 (303) 868-1269 The Organic Seed 2304 East Platte Ave. Colorado Springs, CO 80909 (719) 201-7302 Health Star Medical Evaluation Clinic 710 E. Speer Blvd. Denver, CO 80203 (303) 586-1200 Alpine Herbal Wellness 313 Detroit St. Denver, CO 80206 (303) 355-HERB (4372) Smokeshops Weirdo Willies Smoke Shop 3033 Jet Wing Dr. Colorado Springs, CO 80916 (719) 392-4012 Cure Medical Pharm 990 W. 6th Ave. #5 Denver, CO 80204 (303) 893-2873 Herbal Health Systems 2777 S Colorado Blvd. Denver, CO 80222 (303) 237-1223 or (877) 304-HERB Alternative Medicine on Capital Hill 1401 Ogden St. Denver, CO 80218 (720) 961-0560 The Parc (Patient Activity Resource Center) 957 E Fillmore St Colorado Springs, CO 80904 (719) 632-6026 Denver Med Stop 5926 E. Colfax Ave. Denver, CO 80220 (303) 573-6337 "Is your listing here? For new listings or corrections please contact us at: info@dailybuds.com" brought to you by dailybuds.com 120 DISPENSARY Listing Denver Relief 1 Broadway St. Denver, CO 80223 (303) 420-MEDS MMJ America 1321 Elati St. Denver, CO 80204 (720) 296-1711 ALCC, LLC 2257 Curtis St. Denver, CO 80205 (303) 297-3435 Mahooka Meds 2400 Larimer St. Denver, CO 80205 (720) 536-0850 DENVER EAST Cannacopia 3857 Elm St. Denver, CO 80207 (303) 399-3333 Discount Medical Marijuana 970 Lincoln St. Denver, CO 80203 (303) 355-9333 Nature's Cure 2 2740 W. 9th St. Denver, CO 80204 Apothecary of Colorado 1730 Blake St., Ste 420 Denver, CO 80202 (303) 296-5566 Mayflower Wellness 1400 Market St. Denver, CO 80202 (303) 862-4164 City Floral 1440 Kearney St. Denver, CO 80220 (303) 355-4013 Nature's Cure III 1500 E. Colfax Ave. Denver, CO 80218 (720) 328-6256 Front Range Dispensary Denver, CO 80203 (720) 620-4463 Ballpark Holistic Dispensary 2119 Larimer St. Denver, CO 80205 (303) 953-7059 Mile High Cannabis 899 Logan St. Denver, CO 80203 (303) 955-6203 Flavored Essentials 3955 Oneida St. Denver, CO 80207 (303) 377-0539 Pride in Medicine Go Dutch Collective 1111 Lincoln St. Denver, CO 80203 (720) 220-9029 731 W. 6th Ave. Denver, CO 80204 (303) 999-0441 Mind Body Spirit 3054 Larimer St. Denver, CO 80205 (303) 297-2273 Botanico, Inc. 3054 Larimer St. Denver, CO 80205 (303) 297-2273 Herbal Care 2866 N. Colorado Blvd. Denver, CO 80207 (303) 321-4433 Pure Medical Dispensary Good Chemistry 330 E. Colfax Ave. Denver, CO 80203 (720) 524-4657 1133 Bannock St. Denver, CO 80204 (303) 534-PURE (7873) MMD of Colorado 2609 Walnut St. Denver, CO 80205 (303)736-9642 Budding Health 2042 Arapahoe St. Denver, CO 80205 (720) 242-9308 Jane Medicals 7380 E. Colfax Ave. Denver, CO 80220 (303) 388-JANE Rocky Mountain Farmacy Green Cross of Cherry Creek 128 Steele St., Ste 200 Denver, CO 80206 (303) 321-4201 1719 Emerson St. Denver, CO 80218 (720) 389-9002 MMJ America 424 21st St. Denver, CO 80205 (303) 296-3732 Cannabis Station 1201 20th St. Denver, CO 80205 (303) 297-WEED (9333) Kindness Medical Cannabis Center 5702 E Colfax Ave Denver, CO 80220 303-733-9956 Sense of Healing 1005 N. Federal Blvd. Denver, CO 80204 (303) 573-4800 Native Roots Apothecary 910 16th St., #805 Denver, CO 80205 (303) 623-1900 Green Karma Medical 1115 Grant St., Ste G2 Denver, CO 80203 (303) 815-1585 Denver Kush Club 2615 Welton St. Denver, CO 80205 (303) 736-6550 Tender Healing Care Plaza de Santa Fe 1355 Santa Fe Dr., Ste F Denver, CO 80204 (720) THC-4-THC Natural Remedies 1620 Market St., Ste 5W Denver, CO 80202 (303) 953-0884 New Millennium Solutions 1408 N. Oneida St. Denver, CO 80220 (720) 318-3275 Greenwerkz 907 E. Colfax Ave. Denver, CO 80218 (303) 647-5210 Denver Patients Group 2863 Larimer St., Unit B Denver, CO 80205 (303) 484-1662 Patients Plus 4493 N. Washington St. Denver, CO 80216 (720) 435-0546 Med Stop 5926 E. Colfax Ave. Denver, CO 80220 (303) 573-6337 (MEDS) The Clinic on Colfax 4625 E. Colfax Ave. Denver, CO 80220 (303) 333-3644 Discount Medical Marijuana 2028 E. Colfax Ave. Denver, CO 80206 (303) 355-9333 Hawaiian Herbal Health Center 1337 Delaware St., #2 Denver, CO 80204 (303) 893-1200 The Grasshopper Alternative Medicine 1728 E. 17th Ave. Denver, CO 80218 (303) 388-4677 RiNo Supply Co 3100 Blake St. Denver, CO 80205 (303) 292-2680 Rocky Mountain Farmacy 6302 E. Colfax Ave. Denver, CO 80220 (720) 389-9002 Green Docs 3330 Larimer St. The Good Building Denver, CO 80205 (303) 339-0214 Herbs 4 You 20 E. 9th Ave. Denver, CO 80203 (303) 830-9999 Rocky Mountain High 1538 Wazee St. Denver, CO 80202 (303) 623-7246 (PAIN) Stone Forest Bakery 846 1/2 Forest St. Denver, CO 80220 (720) 297-0990 The Pearl Co. 1445 Pearl St., Ste 100 Denver, CO 80203 (303) 733-6337 Greenhouse Wellness Center 2403 Champa St. Denver, CO 80205 (720) 328-0412 Lincoln Herbal 424 Lincoln St. Denver, CO 80203 (303) 955-0701 Universal Herbs 4950 E Evans Ave Ste#106 Denver,CO 80222 (303) 388-0086 Rocky Mountain Wellness Center East 2232 Bruce Randolph St. Denver, CO 80205 (720) 350-4056 Supreme Care Strains and Wellness Center 6767 E. 39th Ave., Ste 105 Denver, CO 80207 (720) 877-5216 Lodo Wellness Center 1617 Wazee St., Ste B1 Denver, CO 80202 (303) 534-5020 Mile High Alternative Medicine Denver, CO 80203 (720) 289-9654 DENVER DOWNTOWN 24/7 Healthcare Centers 3535 Walnut St. Denver, CO 80205 (720) 287-1245 Summit Wellness 2117 Larimer St. Denver, CO 80205 (720) 407-8112 Lotus 1444 Wazee St., Ste 115 Denver, CO 80202 (720) 974-3109 The Clinic on Colfax Dispensary 4625 E. Colfax Denver, CO 80220 (303) 333-3644 Mile High Green Cross 852 Broadway St. Denver, CO 80203 (303) 861-4252 The Happy Harvest 2324 Champa St. Denver, CO 80205 (303) 997-4425 "Is your listing here? For new listings or corrections please contact us at: info@dailybuds.com" brought to you by dailybuds.com 121 DISPENSARY Listing The Healing Center of Colorado 1452 Poplar St. Denver, CO 80220 (720) 389-9285 The Healing House 123 W. Alameda Ave. Denver, CO 80223 (720) 389-6490 Cannabis and Co. 4379 Tejon St. Denver, CO 80211 (303) 317-3537 Kushism 3355 W. 38th St. Denver, CO 80212 (303) 477-5171 Therapeutic Herbal Comfort, LLC Denver, CO 80214 (720) 298-8909 Verde Dispensary 5101 E. Colfax Ave. Denver, CO 80220 (303) 474-4489 DENVER NORTHEAST 3-D: Denver's Discreet Dispensary 4305 Brighton Blvd. Denver, CO 80216 (303) 297-1657 Chronic Wellness 3928 Federal Blvd. Denver, CO 80211 (303) 455-6500 Local Caregivers of Colorado 5316 Sheridan Blvd. Denver, CO 80214 (720) 233-5482 Total Health Concepts 2059 Bryant St. Denver, CO 80211 (303) 433-0152 DENVER NORTH 420 Wellness North 4986 Lowell Blvd. Denver, CO 80221 (303) 492-1787 Golden Meds 4620 Peoria St. Denver, CO 80239 (303) 307-4645 Denco Alternative Medicine 2828 Speer Blvd., #117 Denver, CO 80211 (303) 433-2266 Mary Jayz Natural Therapeutics 4900 W. 46th Ave. Denver, CO 80212 (720) 855-7451 Urban Dispensary 2675 W. 38th Ave. Denver, CO 80211 (720) 389-9179 Colorado Herbal Center 7316 N Washington St. Denver, CO 80229 (303) 287-6815 La Conte's 5194 Washington St. Denver, CO 80216 (303) 292-2252 Doc Danks 4785 Tejon St., Unit 101 Denver, CO 80211 (720) 276-5956 DENVER SOUTH A Cut Above 1911 S. Broadway Denver, CO 80210 (720) 536-8965 MMJ America 4347 Tennyson St. Denver, CO 80212 (303) 339-0116 Denver Canna Club 4155 E. Jewell Ave. #903 Denver, Co 80222 (303) 578-0809 Full Spectrum Labs 3535 Larimer St. Denver, CO 80205 (720)335-5227 Mile High Medicals 4095 Jackson St. Denver, CO 80216 (303) 955-5413 Platte Valley Dispensary 2301 7th St., Unit B Denver, CO 80211 (303) 953-0295 Back to the Garden 1755 S. Broadway Denver, CO 80210 (720) 877-3562 Doctors Orders 5068 N. Federal Blvd. Denver, CO 80221 (303) 433-0276 Grassroots 3867 Tennyson St. Denver, CO 80212 (303) 420-6279 Timberline Herbal Clinic and Wellness Center 3995 E. 50th Ave. Denver, CO 80216 (303) 322-0901 Pure 3533 W. 38th Ave. Denver, CO 80211 (720) 335-6336 Botica Del Sol 754 S. Broadway Denver, CO 80209 (720) 340-1SOL Elite Cannabis Therapeutics 6401 N. Broadway, Unit J Denver, CO 80221 (303) 650-4005 DENVER NORTHWEST Alive Herbal Medicine 4573 Pecos St. Denver, CO 80211 (720) 945-9543 Grass Roots Health and Wellness 2832 W. 44th Ave. Denver, CO 80211 (303) 325-7434 Standing Akimbo 3801 N. Jason Denver, CO 80211 (303) 997-4526 Broadway Wellness 1290 S. Broadway Denver, CO 80210 (303) 997-8413 Green Cross Clinic I-70 & Federal Denver, CO Herbal Connections 2209 W. 32nd Ave. Denver, CO 80211 (720) 999-6295 Sunnyside Alternative Medicine 1406 W. 38th Ave. Denver, CO 80211 (303) 720-6761 Burnzwell 108 S. Broadway Denver, CO 80209 (303) 200-0565 Alternative Wellness Center 2647 W. 38th Ave. Denver, CO 80211 (720) 855-6565 Green Medical Referrals Clinic - Denver 5115 Federal Blvd., #9 Denver, CO 80221 (303) 495-5000 Herbal Wellness, Inc. 3870 N. Federal Blvd. Denver, CO 80211 (720) 299-1919 Sweet Leaf Inc. 5100 W. 38th Ave. Denver, CO 80212 (303) 480-5323 Cannabis 4 Health 1221 S. Pearl St. Denver, CO 80210 (720) 296-7563 Altitude Organic Medicine Highlands 1716 Boulder St. Denver, CO 80211 (720) 855-MEDS (6337) Medicine World 4950 East Evans Ave. Denver, CO 80222 (303) 300-5059 Highland Health 2727 Bryant St., Ste 420 Denver, CO 80211 (303) 455-0810 The Giving Tree of Denver 2707 W. 38th Ave. Denver, CO 80211 (303) 477-8888 Citi-Med 1640 E. Evans Ave. Denver, CO 80210 (303) 975-6485 At Home Remedies, Inc. 4320 Tennyson St. Denver, CO 80212 (303) 455-0079 Nature's Choice 2128 S. Albion St. Denver, CO 80222 (720) 447-3271 Highland Herbal Connections 2209 W. 32 Ave. Denver, CO 80211 (720) 999-6295 The Grasshopper Wellness Center 2243 Federal Blvd. Denver, CO 80211 (303) 501-2010 Colorado Alternative Medicine 2394 S. Broadway Denver, CO 80210 (720) 379-7295 BC Inc. 4206 W. 38th Ave. Denver, CO 80212 (720) 323-2383 or (720) 988-3184 Rockbrook, Inc. 2865 S Colorado Blvd. Suite 323 Denver, CO 80222 (303)756-0595 Biocare 2899 N. Speer Blvd., Ste 105 Denver, CO 80211 (303) 455-3187 Highlands Square Apothecary 3460 W. 32nd Ave. Denver, CO 80211 (303) 433-3346 The ReLeaf Center 2000 W. 32nd Ave. Denver, CO 80211 (303) 458-LEAF (5323) Colorado Apothecary & Wellness Center 4025 E. Iliff Ave. Denver, CO 80222 (303) 757-4361 The Clinic on Holly 1479 S. Holly St. Denver, CO 80222 (303) 758-9114 The Tea Pot Lounge 2008 Federal Blvd. Denver, CO 80211 (303) 656-9697 Kushism 2527 Federal Blvd. Denver, CO 80211 (303) 477-0772 Daddy Fat Sacks 945 South Blvd. Denver, CO 80219 (303) KIND-BUD "Is your listing here? For new listings or corrections please contact us at: info@dailybuds.com" brought to you by dailybuds.com 122 DISPENSARY Listing Delta 9 Caretakers LLC 2262 S. Broadway Denver, CO 80210 (720) 570-2127 The Herbal Cure 985 S. Logan St. Denver, CO 80209 (303) 777-9333 Green Cross Caregivers 1842 S. Parker Rd. Denver, CO 80231 (303) 337-2229 Sleeping Giant Wellness 45 Kalamath St. Denver, CO 80223 (303) 573-3786 Clovis, LLC 4000 Morrison Rd. Denver, CO 80219 (303) 284-3165 Denver Patients Center, LLC 2070 S. Huron St. Denver, CO 80223 (303) 733-3977 The Kind Room 1881 S. Broadway Denver CO, 80210 (720) 242-8030 Green Ribbon Clinic 4155 E. Jewell Ave., #403 Denver, CO 80222 (720) 296-8035 Southwest Alternative Care 1940 W. Mississippi Ave. Denver, CO 80223 (303) 593-2931 Denver Metro Cannabis Couriers 1562 S. Parker Rd., Ste 328 Denver, CO 80231 (720) 227-6939 Earth's Medicine 74 Federal Blvd., Unit A Denver, CO 80219 (720) 542-8513 The Wellness Shop 5885 E. Evans Ave Denver CO, 80222 (303) 756-3762 Grass Roots Organica 399 Harrison St. Denver, CO 80209 (303) 645-4881 SweetLeaf Compassion Center 5301 Leetsdale Dr. Denver, CO 80246 (303) 955-8954 Green Tree Medical, LLC 3222 S. Vance St. Denver, CO 80227 (720) 838-1652 Walking Raven Dispensary Evergreen Apothecary 1568 S. Broadway Denver, CO 80210 (303) 722-1227 2001 S. Broadway Denver, CO 80210 (720) 327-5613 Herban Wellness Inc. 4155 E. Jewell Ave., #405 Denver, CO 80222 (877) 702-4MMJ (4665) Tetra Hydro Center 9206 E. Hampden Ave. Denver, CO 80231 (303) 221-0331 Home Sweet Home 20 Sheridan Blvd. Denver, CO 80226 (303) 922-8777 Wellspring Collective 1724 S. Broadway Denver, CO 80210 (303) 733-3113 Ganja Gourmet 1810 S. Broadway Denver, CO 80210 (303) 282-9333 Karmaceuticals 4 S. Santa Fe Dr. Denver, CO 80223 (303) 76-KARMA The Cherry CO. 111 S. Madison St. Denver, CO 80209 (303) 399-6337 Mr. Stinky's 314 Federal Blvd. Denver, CO 80219 (720) 243-0246 (303) 736-6188 VIP Wellness Center 2949 W Alameda Ave. Denver, CO 80219 (720) 279-3615 Healing Buds 468 S. Federal Blvd. Denver, CO 80219 (303) 936-0309 Little Brown House 1995 S. Broadway Denver, CO 80223 (303) 282-6206 The Clinic on Holly 1479 S. Holly St. Denver CO, 80222 (303) 758-9114 Mile High Therapeutics 1568 S. Federal Blvd. Denver, CO 80219 (720) 389-9369 DENVER SOUTHEAST A Mile High LLC 63 W. Alameda Ave. Denver, CO 80223 (303) 722-3420 Higher Ground, MMC 2215 E. Mississippi Ave. Denver, CO 80209 (303)733-5500 Little Green Pharmacy 1331 S. Broadway Denver, CO 80223 (303) 722-2133 Very Best Medicine (VBM Club) 6853 Leetsdale Dr. Denver, CO 80224 (720) 941-8872 Nature's Cure 4283 W. Florida Ave. Denver, CO 80219 (303) 934-9503 Medicinal Oasis 4400 E. Evans Ave. Denver CO 80222 (303) 333-3338 Alternative Medicine Of Southeast Denver 6853 Leetsdale Dr. Denver, CO 80224 (720) 941-8872 Metro Cannabis Inc. 4101 E. Wesley Ave., Ste 1 Denver, CO 80222 (720) 771-9866 or (720) 542-3022 Rocky Mt. Organics VIP Wellness Center 1850 S. Federal Blvd. Denver, CO 80219 (303) 935-2694 1015 W. Evans Ave. Denver, CO 80223 (720) 479-8905 Patients Choice of Colorado 2251 S. Broadway Denver, CO 80210 (303) 862-5016 Altitude Organic Medicine - South 2250 S. Oneida St., Ste 204 Denver, CO 80224 (303) 756-8888 Metro Cannabis on Hampden Inc. 3425 S. Oleander Ct., Unit B Denver, CO 80224 (720) 365-5307 Wellness Center 330 S. Dayton St. Denver, CO 80247 (303) 856-77983 Rocky Mountain Patient Services 934 S. Federal Blvd. Denver, CO 80219 (720) 882-5521 Rocky Mountain Caregivers 285 S. Pearl St. Denver, CO 80209 (720) 746-9655 Amsterdam Caf� 1325 S. Inca St. Denver, CO 80223 (303) 282-4956 Mile High Remedies 4155 E. Jewell Ave., Ste 310 Denver, CO 80222 (303) 419-3896 DENVER SOUTHWEST SUBURBS 420 Wellness South 2960 S. Federal Blvd. Denver, CO 80236 (303) 493-1787 DURANGO Nature's Medicine - Durango 129 E. 32nd St. Durango, CO 81301 (970) 259-3714 Tender Healing Care 1355 Santa Fe Drive, Suite F Denver, CO 80204 (720)THC-4-THC (8424842) BioHealth, LLC 4380 S. Syracuse St., Ste 310 Denver, CO 80237 (720) 382-5950 Rockbrook, Inc. 2865 S. Colorado Blvd., Ste 323 Denver, CO 80222 (303) 756-0595 Alameda Wellness Center 183 W. Alameda Ave. Denver, CO 80223 (303) 736-6999 Nature's Own Wellness Center 927 Highway 3 Durango, CO 81301 (720) 663-9554 THC: The Herbal Center 1909 S. Broadway Denver, CO 80210 (303) 719-4372 BuddingHealth 4955 S. Ulster St., #105 Denver, CO 80237 (303) 770-0470 Rocky Mountain Farmacy 2420 S. Colorado Blvd. Denver, CO 80222 (720) 389-9002 Altitude Wellness Center 3435 S. Yosemite St. Denver, CO 80231 (303) 751-7888 EDGEWATER The Candy Girls Denver, CO 80219 (303) 219-6020 Bud Med Health Centers 2517 Sheridan Blvd. Edgewater, CO 80214 (720) 920-9617 Green Around You 970 S. Oneida St., Ste 17 Denver, CO 80224 (303) 284-9075 The Health Center 2777 S. Colorado Blvd. Denver, CO 80222 (303) 758-9997 Rocky Mountain Marijuana Dispensary 1126 S. Sheridan Blvd. Denver, CO 80232 (303) 219-4884 CannaMart 3700 W Quincy Ave., #3702 Denver, CO 80236 (303) 730-0420 Greenwerkz 5840 W. 25th Ave. Edgewater, CO 80214 (303) 647-5210 "Is your listing here? For new listings or corrections please contact us at: info@dailybuds.com" brought to you by dailybuds.com 123 DISPENSARY Listing New Age Medical 2553 Sheridan Blvd. Edgewater, CO 80214 (303) 233-1322 FORT COLLINS A Kind Place 123 Drake Rd. Ste. B Fort Collins, CO 80525 (970) 282-3811 Medicinal Gardens of Colorado 420 S. Howes St., Ste D (Stone House) Fort Collins, CO 80521 (970) 217-0575 The Generations Natural Medicine 2647 8th Ave. Garden City, CO 80631 (970) 353-2839 Green Natural Solutions, LLC 753 Rood Ave., Unit 3 Grand Junction, CO 81501 (970) 424-5331 Northern Lights Natural Rx 2045 Sheridan Blvd., Ste B Edgewater, CO 80214 (303) 274-6495 Heavenly Healing, LLC 1225 N. 23rd St. #106 Grand Junction, CO 81501 (970) 242-2488 Abundant Healing 351 Linden St. Fort Collins, CO 80524 (970) 482-1451 Natural Alternatives for Health 1630 North College Ave. Fort Collins, CO 80524 (970) 221-0229 GEORGETOWN Clear Creek Wellness Center 1402 Argentine St. Georgetown, CO 80444 (303) 569-0444 Pain Wellness Center 2509 Sheridan Blvd. Edgewater, CO 80214 (720) 404-0174 Bonnee and Clyde's Caring Cannabis Fort Collins, CO 80526 (970) 443-6206 High Desert Dispensary, LLC 1490 North Ave., Ste S Grand Junction, CO 81501 (970) 424-5357 Organic Alternatives 346 E. Mountain Ave. Fort Collins, CO 80524 (970) 221-7100 GLENDALE Nature's Best 4601 E. Mississippi Ave. Glendale, CO 80246 (303) 386-3185 EDWARDS New Hope Wellness Center 210 Edwards Village Blvd., B-110 Edwards, CO 81632 (970) 569-3701 BuddingHealth 1228 W Elizabeth St., Unit D8 Fort Collins, CO 80521 (970) 484-6337 High Desert Dispensary Highly Herbal 555 North Ave., Ste 4 Grand Junction, CO 81501 (970) 778-5151 Northern Colorado Natural Wellness 1125 W. Drake Rd. Fort Collins, CO 80526 (970) 689-3273 GLENWOOD SPRINGS Botanica 2520 S. Grand Ave., Ste 104 Glenwood Springs, CO 81601 (970) 945-1422 Rocky Mountain High 105 Edwards Village Blvd. Edwards, CO 81632 (970) 926-4408 Cannabis Care Wellness Center 227 Jefferson St. Fort Collins, CO 80524 (970) 689-3210 Mesa Alternative Health and Wellness 605 Grand Ave. Grand Junction, CO 81501 (970) 424-5264 Solace Meds 301 Smokey St., Unit A Fort Collins, CO 80525 (970) 225-6337 Green Medicine Wellness 1030 Grand Ave. Glenwood Springs, CO 81601 (970) 384-2026 ELDORADO SPRINGS Green Belly Co-OP 3330 El Dorado Springs Dr. Eldorado Springs, CO 80025 (720) 381-6187 Colorado-CHRONIX Medicinal Cannabis Community Fort Collins, CO 80526 (970) 227-3366 Table Mesa Wellness Center 1612 Laporte Ave. Fort Collins, CO.80521 (970) 672-0885 Naturals 624 Rae Lynn Dr. Grand Junction, CO 81505 (970) 424-5291 Greenwerkz 2922 S. Glen Ave. Glenwood Springs, CO 81601 (970) 366-4600 ENGLEWOOD Colorado Herbal Remedies 1630 S. College Ave., Ste B1 Fort Collins, CO 80525 (970) 472-0203 ADG Herbal Medicine 11 W. Hampden Ave. Englewood, CO 80113 (720) 278-0419 FOUNTAIN Nature's Alternative 496 28 Rd. Grand Junction, CO 81504 (970) 245-2680 Medical Herbs of Fountain 66950 Hwy 85 Fountain, CO 80817 (303) 578-0809 GOLDEN Golden Alternative Care 807 14th St., Ste A Golden, CO 80401 (303) 278-8870 Colorado Wellness Providers 1425 Cape Cod Cir. Fort Collins, CO 80525 (970) 217-0900 Nature's Medicine 1001 Patterson Rd #1 Grand Junction, CO 81506 (970) 424-5393 Herbal Options 3431 S. Federal Blvd, Unit G Englewood, CO 80201 (303) 761-9170 FRANKTOWN Elite Green Organics 804 South College Ave. Fort Collins, CO 80524 (970) 214-6626 S.E.C.A.M. (Serving Parker, Elizabeth, Castle Rock) 7517 E State HWY 86 (720) 346-2772 or (303) 660-2650 Rocky Mountain Organic Medicine 420 Corporate Cir. Ste I Golden, CO 80401 (720) 230-9111 Weeds 719 Pitkin Ave. Grand Junction, CO 81501 (970) 245-4649 Nature's Kiss Medical Lounge 4332 S. Broadway Englewood, CO 80113 (303) 484-9327 FRISCO GRAND JUNCTION Doobies, LLC 239 27 � Rd, Ste 1 (on frontage road) Orchard Mesa/Grand Junction, CO 81503 (970) 242-2281 Emerald Pathway 4020 S. College Ave., Ste 11 Fort Collins, CO 80525 (970) 377-9950 Bioenergetic Healing Center 842 N. Summit Blvd #13 Frisco, CO 80443 (970) 668-3514 GREELY DOCTORS Cannabis Care Wellness Center 2515 7th Ave. Greeley, CO 80631 Colorado Medical Marijuana LLC 3431 S. Federal Blvd, Unit F Englewood, CO 80110 (303) 625-4012 Essence 1740 S. College Ave. Fort Collins, CO 80525 (970) 817-1965 Medical Marijuana of the Rockies 720 Summit Blvd., Ste 101A Frisco, CO 80443 (970) 668-MEDS Elk Mountain, LLC 477 30 Rd. Grand Junction, CO 81504 (970) 270-7229 or (970) 270-7452 HIGHLANDS RANCH Hatch Wellness Center 3624 E. Highlands Ranch Pkwy., #105 Highlands Ranch, CO 80126 (303) 470-9270 FEDERAL HEIGHTS 9460 Federal Blvd. Federal Heights, CO 80260 (303) 427-0151 Colorado Patient Coalition Friendly Fire 1802 Laporte Ave. Fort Collins, CO 80521 (970) 631-8776 GARDEN CITY Cloud 9 Caregivers 2506 6th Ave. Garden City, CO 80631 (970) 352-4119 Greenlight Care 216 N Ave., #11 Grand Junction, CO 81501 (970) 609-MEDS IDAHO SPRINGS 420 Highways 2801 Colorado Blvd. Idaho Springs, CO 80452 (303) 567-9400 Front Range Dispensary, LLC 8876 N. Federal Blvd. Federal Heights, CO 80260 (303) 429-2420 Kind Care of Colorado 6617 South College Ave Fort Collins, CO 80526 (970)232-9410 "Is your listing here? For new listings or corrections please contact us at: info@dailybuds.com" brought to you by dailybuds.com 124 DISPENSARY Listing Mountain Medicinals, Inc. 1800 Colorado Blvd., Ste 5 Idaho Springs, CO 80452 (303) 567-4211 Post Modern Health 5660 W. Alameda Ave. Lakewood, CO 80226 (303) 922-9479 Footprints Health 8250 W. Coal Mine Ave., Unit 4 Littleton, CO 80123 (720) 981-2818 Stone Mountain Wellness 600 Airport Rd., Bldg A, Ste F1 Longmont, CO 80503 (303) NUG-WEED or (303) 803-3062 MedicalM, LTD (970) 669-5105 LAFAYETTE Rocky Mountain Ways, LLC 1391 Carr St., Unit 303 Lakewood, CO 80214 (303) 238-1253 Green Mountain Care 5423 S. Prince St. Littleton, CO 80120 (303) 862-6571 Nature's Herbal Relief Center 528 E. Eisenhower Blvd. Loveland, CO 80537 (303) 219-6834 420 Highways 201 E. Simpson St., Ste B Lafayette, CO 80026 (720) 434-5210 The Blueberry Twist 725 Main St. Longmont, CO 80501 (303) 651-7842 Ka-tet Wellness Services 489 N. Highway 287, Ste 201 Lafayette, CO 80026 (303) 665-5599 Rocky Mountain Wellness Center 1630 Carr St., Unit C Lakewood, CO 80214 (303) 736-6366 Mother Nature's Miracle 315 W. Littleton Blvd. Littleton, CO 80210 (303) 794-3246 Nature's Medicine 843 North Cleveland Ave. Loveland CO, 80537 (970) 461-2811 The Longmont Apothecary 1314 Coffman St. Longmont, CO 80501 (303) 702-4402 Southwest Alternative Care 2100 W. Littleton Blvd., Suite 50 Littleton, CO 80120 (720) 237-3079 Organic Roots 418 8th St. SE, Unit A6 Loveland, CO 80537 (970) 624-6030 LAKEWOOD The Healing House 10712 W. Alameda Lakewood, CO 80226 (720) 389-6490 The Zen Farmacy 323 3rd Ave., Ste 3 Longmont, CO 80501 (303) 774-1ZEN (1936) Great Scotts Total Care 9187 W Jewel Ave Lakewood,CO 80232 (720)304-5940 The Hemp Center 2430 W. Main St. Littleton, CO 80120 (303) 993-7824 Smithstonian 123 N. Lincoln Ave. Loveland, CO 80537 (303) 578-0809 Doctors Green Meadows Wellness Center 1701 Kipling St., Ste 104 Lakewood, CO 80215 (720) 435-3830 Colorado Medical Marijuana LLC 3600 S. Wadsworth Blvd. Unit B Lakewood, CO 80235 (303) 625-4012 Doctors CannaMed USA 650 2nd Ave, Ste B Longmont, CO 80501 (877) 420-MEDS Doctors Doctors Herbal Health Systems 10475 Park Meadows Dr., Ste 600 Littleton, CO 80124 (720) 279-2379 or (877) 304-HERB GrassRoots Medical Clinic 1635 Foxtrail Dr. Loveland, CO 80538 (303) 499-9399 Green Tree Medical 3222 S. Vance St., #230 Lakewood, CO 80227 (720) 838-1652 Herbal Health Systems 1630 Carr St., Ste A Lakewood, CO 80214 (720) 279-2379 or (877) 304-HERB Smokeshops High Society Smoke Shop 608 9th Ave. Longmont, CO 80501 (303) 502-7620 LYONS Kind Pain Management Inc. 2636 Youngfield St. Lakewood, CO 80215 (303) 237-KIND(5463) LONE TREE Doctors Headquarters Emporium Dispensary 310 Main St. Lyons, CO 80540 Smokeshops Heads of State 9715 W. Colfax Ave. Lakewood, CO 80215 (303) 202-9400 Colorado Medical Marijuana LLC 9233 Park Meadows Dr. Lone Tree, CO 80124 (303) 635-4012 LOUISVILLE AlterMeds 1156 W. Dillon Rd., #3 Louisville, CO 80227 (720) 389-6313 Medicinal Wellness Center 5430 W. 44th Ave. Mountain View, CO 80212 (303) 333-3338 Lakewood Patient Resource Center 7003 W. Colfax Ave. Lakewood, CO 80214 (303) 955-5190 Lazy J's Smoke Shop 10672 W. Alameda Ave. Lakewood, CO 80226 (303) 985-2113 LONGMONT Botanic Labs 1110 Boston Ave., Ste G Longmont, CO 80501 (303) 260-8203 Compassionate Pain Management 1116 W. Dillon Rd., Ste 7 Louisville, CO 80027 (303) 665-5596 MONTROSE ColoMedCenter 4860 N. Townsend Ave. Montrose, CO 81401 (970) 252-8880 Mr. Nice Guys 12550 W. Colfax Ave., Unit 119 Lakewood, CO 80215 (303) 233-6423 LARKSPUR Larkspur Herbal Services (Inside Pony Express-o Cafe) 9080 S. Spruce Mountain Rd. Larkspur, CO 80118 (303) 681-3112 Colorado Patients First 1811 Hover St., Ste G Longmont, CO 80501 (303) 449-1170 LOVELAND Cannabis Care Wellness Center 1505 N. Lincoln Ave. Loveland, CO 80210 (970) 613-1600 MONUMENT Palmer Divide Green Meds (303) 912-2818 Natures Herbal Solution 9699 W. Colfax Ave., Unit A Lakewood, CO 80215 (303) 232-2209 Longmont Cannabis Club 650 2nd Ave, Ste A Longmont, CO 80501 (720) 340-1420 LITTLETON Colorado Canna Care 129 S. Cleveland Ave. Loveland, CO 80537 (970) 593-1180 MOUNTAIN VIEW Berkeley MMC, LLC 4103 Sheridan Blvd. Mountain View, CO 80212 (720) 389-8081 Pain Management of Colorado 9114 W. 6th Ave. Lakewood, CO 80226 (303) 423-7246 Blue Sky Care Connection 1449 W. Littleton Blvd., Ste 10 Littleton, CO 80120 (720) 283-6447 Nature's Medicine 1260 S. Hover Rd., Ste C Longmont, CO 80501 (303) 772-7188 Magic's Emporium 2432 E. 13th St. Loveland, CO 80537 (970) 397-1901 (970) 667-4325 Pain Management of Colorado 3600 S. Wadsworth Blvd. Lakewood, CO 80232 (303) 423-7246 CannaMart 72 E. Arapahoe Rd. Littleton, CO 80122 (303) 771-1600 NEDERLAND Grateful Meds 110 Snyder Street Nederland CO, 80466 (303) 258-7703 New Age Wellness 625 Main St. Longmont, CO 80501 (720) 381-2581 Pain Management of Colorado 12018 W. Jewell Ave. Lakewood, CO 80228 (303) 423-7246 Colorado Medical Marijuana LLC 2 W. Dry Creek Cir. Littleton, CO 80120 (303) 625-4012 Marry Janes 4229 W Eisenhower Blvd., Ste B2 Loveland, CO 80537 NEDICATE, LLC 150 N. Jefferson St., Ste B-3 Nederland, CO 80466 (303) 258-7141 "Is your listing here? For new listings or corrections please contact us at: info@dailybuds.com" brought to you by dailybuds.com 125 DISPENSARY Listing NedMeds (303) 258-7981 Inthebowl.LLC Pueblo, CO 81007 (330) 703-7500 Herbal Remedies 3200 W. 72nd Ave. Westminster, CO 80030 (303) 430-0420 Colorado's Absolute Alternative Denver, CO 80205 (720) 327-8572 Baked At a Mile High (720) 470-4441 bakedatamilehigh@gmail.com One Brown Mouse/ Cannabis Healing Arts 35 and 95 E. First St. Nederland, CO 80446 (303) 258-0633 Medimar Ministry 112 Colorado Ave. Pueblo, CO 81004 (719) 545-0100 Bennett Bail Bonds (303) 663-1010 The Nichol's Factory Westminster, CO (720) 422-5714 Dignity Group LLC Denver, CO 80218 (303) 238-4428 BioTrack THC (720) 432-5051 Tea Alchemy 98 Hwy 119 South, Ste 2 (303) 258-3561 Doctors Herbal Health Systems 1014 Eagleridge Blvd., Unit A Pueblo, CO 81008 (720) 279-2379 or (877) 304-HERB WHEAT RIDGE Dr. Green Genes Denver, CO 80202 (720) 329-3643 NORTHGLENN Cannabis Kindness Caregivers 4045 Wadsworth Blvd. #270 Wheat Ridge, CO 80033 (303) 431-4994 Bowl Mole Green Medical Referrals Clinic - Northglenn 10781 Washington St. Northglenn, CO 80233 (303) 495-5000 GeNEDics Medical Delivery Service Nederland, CO 80477 CannaPunch (303) 242-6643 sales@cannapunch.com PUEBLO WEST Marisol Therapeutics Wellness Center 177 Tiffany Dr. Pueblo West, CO 81007 (719) 547-4000 or (800) 584-MARI (6274) Clone Depot 3505 Kipling St. Wheat Ridge, CO 80033 (303) 547-2252 Herbal Delivery Services Denver, CO 80210 (303) 868-0242 CannLabs (720) 998-9454 The Green Solution 470 Malley Dr. Northglenn, CO 80233 (303) 990-9723 Mile High Relief Center Denver, CO (303) 886-7030 Catnips NatuRx 10107 W. 37th Pl. Wheat Ridge, CO 80033 (303) 420-PAIN (7246) Organic Solutions 356 S. McCulloch Blvd # 106 Pueblo West, CO 81007 (719) 547-5179 Cheeba Chews PAGOSA SPRINGS Good Earth Meds PO Box 1149 Pagosa Springs, CO 81147 (970) 731-2175 Mobile Dispensary LLC Denver, CO 80220 (303) 396-5710 Cool Jars (714) 602-2169 Rocky Mountain Herbal Health Center 434 S. Culloch Blvd. Pueblo West, CO 81007 (719) 562-0420 WINDSOR A New Dawn Wellness Clinic 520 � Main St. Windsor, CO 80550 (970) 599-6896 Nature's Medicine Pagosa Pagosa Springs, CO 81447 (970) 507-0148 PALISADE CQB K-9 (719) 494-0345 Colorado Alternative Health Care 125 Peach Ave., Unit B Palisade, CO 81526 (970) 424-5844 SALIDA Sublime Wellness Center Denver, CO 80203 (720) 382-0890 Dazys (303) 818-0083 Medical 420 7595 West Hwy 50 Sailda, CO 81201 (719) 214-9515 In Harmony Wellness 4630 Royal Vista Cir., Ste #12 Windsor, CO 80528 (970) 222-5555 Victory Gardens Grand Junction, CO 80501 (970) 314-5725 PALMER LAKE Delta 9 Tekhnologe (720) 327-2903 Mile High Holistics 626 Hwy 105 Palmer Lake, CO 80133 (719) 291-3335 SILVERTHORNE 191 Blue River Pkwy Silverthorne, CO 80497 (970) 468-7858 WOODLAND PARK Comfort Care Centers 1750 East Highway 24 Woodland Park, CO 80863 (719) 687-2221 High Country Healing Zen Cafe Denver, CO 80203 (720) 306-8339 Denver Mile Hydro 355 S. Harlan St. Lakewood, CO 80226 (303) 935-GROW (4769) Palmer Lake Wellness Center 850 Commercial Ln. Palmer Lake, CO 80133 (719) 488-9900 STEAMBOAT SPRINGS Aloha's Medical Marijuana Center 21600 US Hwy 40 Milner, CO 80487 (970) 875-0420 (970) 846-7490 DELIVERY SERVICES LAWYERS Rachel K. Gillette 801 Main St, Ste 210 Loouisville, CO 80027 (303) 665-0860 Dixie Elixirs (866) 928-1623 A1 Mobile Meds (MMJ) Commerce City, CO 80022 (720) 422-0503 PARKER A Kinder Way 10290 S Progress Way, Ste 204 Parker, CO 80134 (303) 325-5187 Alternative Health Center Littleton, CO 80165 (720) 227-5816 The Joffe Law Firm Doobtubes (510) 677-6053 or (303) 955-5190 Colorado Medical, LLC 11257 Tumbleweed Way Parker, CO 80134 (303) 588-0372 Rocky Mountain Remedies 2750 Downhill Plaza #205 Steamboat Springs, CO 80487 (970) 871-2768 Danyel S. Joffe & Sheri Gidan 1776 S. Jackson St., Ste 602 Denver, CO 80210 (303) 757-6572 Chronic Express Denver, CO 80224 (303) 656-7300 Dragon Chewer (213) 973-DRGN OTHER BUSINESSES 8 Rivers Restaurant 1550 Blake St. Denver, CO 80202 (303) 623-3422 Enlightened Platypus Insurance Green Point Insurance Group 11479 S. Pine Dr. Parker, CO 80134 (303) 841-8999 VICTOR Phantom Canyon Apothecary 415 Victor Ave. Victor, CO 80860 (719) 689-5560 ClearLabs Windsor, CO 80550 (720) 785-4788 EZ ATM (888)884-4ATM (4286) PUEBLO WESTMINSTER 9460 Federal Blvd. Westminster, CO 80260 (303) 427-0151 Colorado Cannabis Therapy, LLC Grand Junction, CO 81501 (970) 460-3017 420 Science Colorado Patient Coalition Anti-Aging and Wellness (970) 381-1621 tammyhiattmonaco Fantazmo Farmaceuticals South Denver Denver, CO 80219 (562) 209-0632 Grassland Greenhouse LLC Pueblo, CO 81004 (719) 671-8857 "Is your listing here? For new listings or corrections please contact us at: info@dailybuds.com" brought to you by dailybuds.com 126 DISPENSARY Listing Full Spectrum Labs fullspectrumlaboratories.com (720) 335-LABS OrganaLabs (720)-412-5194 Bio Track BioCare pList 61 420 Wellness p 48 A Cut Above p 32 Alive Herbal Medicine p 61 Altermeds LLC p 20 Alternative Wellness Center p 62 Altitude Organic p 26 & 27, 95 A Mile High p 13 At Home Remedies p 20 B Goods p 131 Ballpark Holistic p 13 & 70 of Advertisers Higher Ground p 59 Karmaceuticals p 20 Kindness Medical Cannabis Center Kush Brand Clothing p 113 Kushism p 2 Lakewood Patient Resource Center p 128 Mari-gro p 55 Maryjanes p 60 Medical Herbs of Fountain p 13 Mersa Tech p 70 Metro Cannabis p 73 Mile High Remedies, Inc p 13 MMD of Colorado p 76 MMJ America p 7 MMJ Daily Deals p 57 Natural Advantage MMJ Center p 76 Natural Remedies p 72 Natural Remedies MMJ p 15 Natures Best p 13 Nature's Kiss p 77 Naturx LLC p 60 Organa Labs p 129 Pain Management of CO p 33 Patient's Choice p 60 Post Modern Health p 62 Pure Medical Dispensary (backcover) Robb Corry p 91 Rocky Mountain Caregivers p 58 Rocky Mountain MMJ Dispensary p 84 Rocky Mountain Organic Medicine p 51 Rocky Mountain Wellness Center East p 13 Rocky Road Remedies p 89 Safer p 112 Sense of Healing p 28 Sensible CO p 82 Simply Pure p 37 Smithstonian p 13 Southwest Alternative Care p 29 Stone Mountain Wellness p 13 Summit Wellness p 72 Sweet Leaf p 72 Tender Healing Care p 22 The Giving Tree p 76 The Grasshopper Alt. Medicine p 83 The Green Earth Wellness p 70 The Hemp Center p 54 The Releaf Center p 36 Timberline Herbal Clinic & Wellness Center p 85 Today's Health Care (insert) Top Buds p 13 Urban Dispensary p 85 Ute Miracle Medicinals p 13 Global Transaction Solutions (800) 728-6597 ext. 1616 OTD Cycle Sports 7010 E. Colfax Ave. Denver, CO 80220 (303) 399-5447 3-D Denver Discreet Dispensary p 54 Greenfaith Ministry P.O. Box 024 Nunn, CO 80648 (307) 221-2180 Plant Medicine Expo HealthCare Provider Conference plantmedicineexpo.com (303) 991-6196 GrowBot.com (888) 391-4522 (949) 226-4468 RxHydro (304) 69Hydro (304) 694-9376 Back to the Garden Health & Wellness Center p 44 BC Inc. p 69 BioCare p 45 Bio Health Wellness p 91 Blown Glass p 79 Botica Del Sol p 13 Boulder County Caregivers p 54 Broadway Wellness p 4 BuddingHealth p 31 Canna Mart p 19 Cannabicare p 63 Cannabinoids MMJ p 41 Cannabis Kindness Caregivers p 54 Canna License p 29 Caregivers for Life p 44 Catnips p 43 Cheeba Chews p 50 Chef Herb p 102 City Park Dispensary p 13 Colorado Care Facility Inc. p 55 Colorado Cannabis Caregivers p 44 Colorado Dispensary Services p 54 Comfort Care Centers p 91 Delta 9 Tekhnologe p 38 & 39 DenCo p 71 Denver Canna Club p 12 & 13 Denver Patients Group p 16 & 17 Dixie Elixirs p 47 Doctors Orders Co Springs p 49 Doctors Orders Denver p 23 Doobtubes p 128 Doctors Orders Co. Springs" Emergency Room p 49 Enlightened Platypus p 36 Evergreen Apothecary p 129 Floobies p 13 Full Spectrum Labs (centerfold) Ganja Gourmet p 25 Good Meds p 36 Grassroots p p 48 Grass Roots Organica p 131 Green Cross Clinic p 9 Green Miracle Medicinals p 13 Green Mountain Care p 3 Green Point Insurance Group p 102 Hatch Wellness Center p 75 Hawaiian Herbal Health Center p 13 Herbal Connections LLC p 11 Herbal Remedies (insert) Herbs Medicinal p 13 High Tech Garden Supply 5275 Quebec St., Unit 105 Commerce City, CO 80022 (720) 222-0772 Safer Colorado Denver, CO 80204 (303) 861-0033 Installation Shoe Gallery 1955 Broadway Ave. Boulder, CO 80302 (303) 440-3820 Sensible Colorado P.O. Box 18768 Denver, CO 80218 (720) 890-4247 sensiblecolorado.org Joe's Salon & Barbers 2260 S. Quebec St., Unit 4 Denver, CO 80231 (303) 695-8004 Super Closet (877) GROW-SUPER (877) 476-9787 Keef Cola (303) 530-0382 Tastee Yummees P.O. Box 181457 Denver, CO 80205 (720) 937-1559 Lindsay's Boulder Deli 1148 Pearl St. Boulder, CO 80302 (303) 443-9032 The 420 Deal Mari-gro The Comfort Caf� 3945 Tennyson St. Denver, CO 80212 (303) 728-9251 marQaha medicated beverages MersaTech 8795 Ralston Rd., Ste 225 Arvada, CO 80002 (303) 955-2655 The Mad Hatter Coffee & Tea Co. P.O. Box 140266 Edgewater, CO 80214 (505) 690-1316 Mile High Mike (719) 646-2984 The Pure Gourmet (303) 501-3967 Mile Hydro 355 S. Harlan St. Lakewood, CO 80226 (303) 935-GROW Tingly Treats Denver, CO 80204 (720) 545-8322 MMAPR P. O. Box 40862 Denver, CO 80204 (303) 386-4001 VapeRX Votex Water Pipes MMJ Daily Deals "Is your listing here? For new listings or corrections please contact us at: info@dailybuds.com" brought to you by dailybuds.com 127 128 129 130 131 131 | http://issuu.com/kushdailybuds/docs/kushco_jan11 | CC-MAIN-2015-22 | refinedweb | 30,020 | 70.13 |
#include <hallo.h> * Peter Samuelson [Mon, Jan 08 2007, 03:26:21AM]: > > [Claus Fischer] > > 9. cdrecord's miserable state is well known > > > > Like the majority of other Linux users, I wonder when > > $ burn_my_iso_to_cd <iso-file> /dev/cdrom > > will work as expected. > > Hmmm. > > $ wodim filename.iso > > works for most situations. But: > > 1) You must be root, or a member of the 'cdrom' group. > > Use 'adduser yourusername cdrom' to define the group membership, > users allowed to talk directly to the CD drives. (Note also, > changes in group membership only take effect at login time.) > > 2) You have a /dev/cdrw symlink to your CD writer device. > > I'm not certain whether that symlink is set up automatically by > udev, as I don't use udev. If you want to use a different default > device than /dev/cdrw, see /etc/wodim.conf.. I think we have a problem here. As a workaround, I am going to add a bit code to parse /proc/sys/dev/cdrom/info and pickup the first CDR or DVDR capable device there (depending on the track size, IMO). Eduard. -- Demokratie: Zwei Wölfe und ein Schaf stimmen darüber ab, was gegessen wird. | https://lists.debian.org/debian-devel/2007/01/msg00288.html | CC-MAIN-2014-10 | refinedweb | 193 | 73.47 |
Archive for May, 2011
Object Export Options for EPUB & HTML
In an earlier post on Object Export Options, I had shown you how to specify Alt-text. In this post, we’ll see how Object Export options help us in the EPUB and HTML workflows. You can use the Object Export options to create different conversion settings on each object, with special attention spent on settings useful for different screens sizes and pixel densities (ppi).
Unlike alt text, which is supported in all three major export formats—PDF, EPUB, and HTML— the EPUB and HTML tab in the Object Export dialog box represents conversion and formatting options unique to EPUB and HTML.
The main purpose for this tab is to set image conversion properties on an object by object basis. This enables you to apply different degrees of quality on each individual object. If not specified, the global conversion options defined by EPUB or HTML export is used. To enable per object conversion settings, the “Custom Rasterization” box must first be checked.
Below is a description for the major features. (The rest are easy to understand).
Size
You can choose between fixed size or the new relative to page width size. Using Fixed results in an image with static height and width pixel dimensions based on the size of the object used in the InDesign document. “Relative to page width” setting sets a % value based on the width of the image relative to the InDesign page width. The % value enables the image to resize automatically based on the screen size of the device or the size of the browser window. “Relative” is recommended when producing EPUBs that are intended to be viewed on different devices.
Format
PNG is now supported, in addition to GIF and JPEG. PNG is a lossless format and also supports transparency. When PNG is selected, compression options are dimmed out.
Resolution (ppi)
While operating systems have standardized on either 72 or 96 ppi, mobile devices range from 132 ppi (iPad), to 172 ppi (Sony Reader), to over 300 ppi (iPhone 4). New in CS5.5 is the ability to specific a ppi value for each object selected. Values include 72, 96, 150 (average of all ebook devices today), and 300.
Digital Publishing Suite Tips
Learn.
Alt Text using Object Export Options | InDesign CS5.5
Object Export options consolidate two major functions when exporting images to EPUB/HTML and Tagged PDF.
- The first function is the requirement to add and persist “alternative text” to placed images and graphics, which i’ll talk about in this post.
- The second function is the ability to create different conversion settings on each object, with special attention spent on settings useful for different screens sizes and pixel densities (ppi).
The dialog box has been made modeless, so that you can select different individual frames while leaving the dialog active.
The Object Export Options can be applied to both graphic and text frames, as well as to groups. You can also apply conversion options to text frames, which is very useful when you want to control the quality of rasterization applied to text effects like Drop Shadows, and Bevel and Emboss, in the exported HTML and EPUB files.
Object Export Options > Alt Text
Alternative text (or alt text for short) is a brief text based description of the subject captured in the photograph or illustration. Adding alt text is a common practice when creating HTML web pages and is also supported in EPUB and Tagged PDF. You can find information on writing Alt text at. In InDesign CS5, do the following:
- Select an object and choose Object > Object Export Options > Alt Text.
- Choose one of the following
- From Structure: For legacy INDD files where users have already created all of the alt-text using the Structure Pane.
- From XMP (Title | Description | Headline ): Common XMP metadata fields used to capture some text about the image or graphic. If the XMP data is updated in another application like Adobe Bridge, updating the link in InDesign results in the alt text string being updated.
- From Other XMP: Only to be used by an XMP expert! Requires understanding the XMP path namespace and the array value. For example, the Bridge user interface supports IPTC Core, which contains a field titled “IPTC Subject Code”. If this was the field where Alt text string is stored, then in InDesign CS5.5, the value would have to be written as “Iptc4xmpCore:SubjectCode[1]”. For advertuous mortals, in Photoshop, you can view the full namespace in an image under File Info > Raw Data.
- Custom: Users can enter their own custom alt text string. Useful when there is no pre-existing data, or when the metadata text is lacking in quality.
Besides this there are a few otherways Alt Text can be created in InDesign:
- From Microsoft Word: When you import a Word document with graphic images that have Alt Text already created in Word, these are converted to native InDesign alt text. Currently, only the Windows version of Word supports this feature.
- When a text frame has Object Export Settings for EPUB/HTML conversion to a graphic format like PNG, GIF, or JPEG, any alt text value is appropriately be passed through.
InDesign CS5 Training | Print Production Guidelines
J | http://blogs.adobe.com/vikrant/2011/05/page/2/ | CC-MAIN-2014-42 | refinedweb | 879 | 61.97 |
At 13:04 2003-03-10 +0000, Michael Hudson wrote: >Armin Rigo <arigo at tunes.org> writes: > >> Hello'). > >So don't do that, run > >$ PYTHONPATH=. python pypy/interperter/interactive.py > >then. I can see this being tedious, though. > >interactive.py could remove the cwd from sys.path, but that also might >be surprising. > >>> > > >I actually realised this bit :-) It's just that if you set up your >paths the Right Way(tm) you shouldn't bump into this. > >Perhaps we should do a mass test -> pypytest (or something) renaming, >for simplicity. > >Cheers, >M. >(relative imports are evil, have I mentioned this lately?) I wonder if some kind of aliasing mechanism could solve this kind of problem. E.g., if about to raise an exception for non-existent file/directory, first check for a match to some rewrite patterns in an alias file. They could be simple prefix replacements for paths or more complex regex subs. Then, after rewriting, try the modified path, and only after that fails raise the exception. This would allow creation of platform-independent sys.path name prefixes like, e.g., r"/$$py_lib" and generating, e.g., r"D:\python22\lib" or the unix equivalent as the modified path prefix actually used. All python system files could thus exist under a tree of platform- independent alias prefixes, and pypy could define some for special purposes. There could even be a mechanism for pushing/popping a temporary prefix definition override. Appropriate naming conventions should avoid problems. (I have previously posted about virtualizing part of the file/directory namespace accessed via python's open/file built-ins, and gotten objections that this would not appear to i/o done via the os module or user-written extension modules. I didn't pursue it further, but I still think something platform-independent could be useful.) my .02USD Regards, Bengt Richter (Back from 2 wks on a sunny beach ;-) | https://mail.python.org/pipermail/pypy-dev/2003-March/000621.html | CC-MAIN-2017-30 | refinedweb | 318 | 58.89 |
log4j
log4j explain about log4j with example ?
Please visit the following link:
log4j Tutorials
log4j
;
In this tutorial you will learn about Log4J, which is one
of the most used Logging API...
In earlier example you have seen that we can define
properties of log4j... log4.xml or log4j.properties file. The external
properties file can be used
Log4j
Log4j what is the use of log4j in realtime?
Please visit the following link:
Log4j Tutorials
Log4J Architecture - Introduction to the Log4J architecture
will learn about the architectural component of Log4J.
Understanding....
There are two ways to configure the Log4J one is by using
properties file and other...
Log4J Architecture - Introduction to the Log4J architecture
hi - Log4J
hi Please give me the clear definition of Log4J and give me the example
java - Log4J
regarding "LOG4J" and where these are exactly used in realtime application... as posible. Hi Friend,
Log4j is a logging package to help..., an OutputStream, a java.io.Writer etc. Using Log4j it is possible to enable logging
log4j example
log4j example
This Example shows you how to create a log in a Servlet.
Description of the code:
Logger.getLogger(): Logger class is used for handling the majority
TTCCLayout in Log4j
TTCCLayout in Log4j
In this part of Log4j tutorial we are going to discuss
about... (output message)
For example in our output as "16 [main] INFO
log4j - Java Interview Questions
log4j plz send me example program, using log4j and tel me how to execute that program.(what are the requirements to execute that program
Console Appender in Log4j
Console Appender in Log4j
In this log4j console appender tutorial you will be introduced with ConsoleAppender
which is used in Log4j for appending output
Layouts in Log4j
Layouts in Log4j
... in Log4j:
PatternLayout
SimpleLayout
HTMLLayout
XMLLayout
TTCCLayout
Pattern Layout :
In our this section we are describing about most
general layout
A simple example of log4j for Servlet
A simple example of log4j for Servlet
This Example shows you how to create a log in a Servlet.
Description of the code:
Logger.getLogger(): Logger class is used
A simple example of log4j
A simple example of log4j
This Example shows you how to create a log in a program.
Description of the code:
Logger.getLogger
remove logging - Log4J
remove logging How to remove the logging from crystal reports only? log4j hss been used for logging purpose.Thanks and regards vishal hi friend,Crystal Reports XI is the definitive ?out of the box�
Hibernate Logging using Log4j
This section contains detail description of how you can use Log4j for logging configuration
spring rmi - Log4J
://
Thanks
Use of HTMLLayout in Log4j
to member fields of the logging event.
In this
example we have used FileAppender...
Use of HTMLLayout in Log4j
In this section we will describe you about HTMLLayout
class Q What is the real use for log4J? Hi Friend,
Please visit the following link:
Thanks
FileAppender in Log4j
FileAppender in Log4j
In previous section we have used ConsoleAppender
to append log events... throws
exception so we need to handle exception.
Here is the example code
Use of XMLLayout in Log4j
;log4j:event> tags or elements as
specified in the log4j.dtd.
In this example we...="1221114548421" level="INFO" thread="main">
<log4j:message><![CDATA[Example...
Use of XMLLayout in Log4j
WriterAppender in Log4j
WriterAppender in Log4j
... to a Writer or an OutputStream depending on the
user's choice.
In our example we have... is the example code for WriterAppenderExample
class:
WriterAppenderExample.java
java - Log4J
jsp - Log4J
Steps to log4j configuration in Struts2
Steps to log4j configuration in Struts2 Hi... any tel me each step for configure log4j in struts2 application.xml Example
log4j.xml Example
In earlier example you have seen that we can define
properties of log4j logging file with the help of log4j.properties. Properties
can also be defined
System Properties
Properties).
A Properties object can also be used to store your own program
properties...
// Description: Shows system properties. This must be an application...
Java: System Properties
From System Properties you can find
C++Tutorials
object is being used to represent something; you only care about the symbolic... benefit to download the source code for the example programs, then compile and execute each program as it is studied. The diligent student will modify the example
Tutorial, Java Tutorials
working example code. Code can be downloaded and used to learn the
technology...
log4j Tutorials
I18N Tutorials
iBatis...
Tutorials
Here we are providing many tutorials on Java related
RDF Tutorials
are going to generate our first RDF(
Resource Description File). This example... is
used to represents the information, so the particular RDF Example is written...
RDF Tutorials
Persistent Fields and Properties in Entity Classes
In this tutorial we will understand about the Persistent Fields and Properties in Entity Classes.
Here we will learn about the field types supported... is an
example:
Product prodObj = em.find(Product.class,2);
Read more JPA 2.1 tutorials
JSP cannot log to FileAppender - Log4J
Log4j.
At the moment, it logs fine to console when run from class main method... on log4j visit to :
Thanks
Java Swing Tutorials
description of Swing and then many
example are provided. Swing is mostly used...
In this section, you will learn about the input map.
Input map is used to perform...;
marks obtained in particular session. In this example we have used
Compression of rolled-over files using log4j
Compression of rolled-over files using log4j How can I use log4j to rollover the log files on hourly basis and compress the rolled over files
JSP Tutorials - Page2
. Here, you will learn more about the getHeader()
method like why is it used... XHTML
This JSP example describes how JSP tags for XML can be used.... In the example given below we have used HTML tag in jsp code to
embed flash file
Use of SimpleLayout in Log4j
Use of SimpleLayout in Log4j
In the previous example we have studied that there are
many....
Example of SimpleLayout log message is as
follows:
"Debug - and debug log
JSF Properties - Using Properties files in JSF
JSF Properties - Using Properties files in JSF
...) Tutorial -
JSF Tutorials. JSF Tutorials at Rose India covers
everything you need to know about JSF.
In this section you will learn how to use
More About Triggers
or
firing jobs. When we wish to schedule the jobs, then we can used the
properties...
More About Triggers
In this section we will try to provide the brief description
Any specific log4j statements for Hibernate Envers for showing the Envers related errors
Any specific log4j statements for Hibernate Envers for showing the Envers... are not coming in console.
We had the following statements in log4j...
Is there any specific log4j statements for Hibernate Envers
How to write properties file in java
.
Program Description:-
In this example we have created Properties class and we have...Write properties file in java
In this example we will discuss how to write property file in java. In this
example We have write data to properties file
How to read properties file in java
How to read properties file in java
Description of Example:-
In this example
we have used Properties class. This Properties class permit us to save data...= 8285279608
Example of Read properties file in java
import java.io.
XML Tutorials
programming. In these tutorials we have
developed example programs using... in a document
This Example shows you the Lists of nodes used in a DOM... properties from a XML file
This Example shows you how to Load
JSF - Java Server Faces Tutorials
illustrates you about the JSF navigation by providing
the example...JSF - Java Server Faces Tutorials
Complete Java Server Faces (JSF) Tutorial -
JSF Tutorials. JSF Tutorials at Rose India covers
RMI Tutorials
RMI Tutorials
In this section there are many tutorials on RMI, these tutorials... of a interface. RMI is
very promising technology from Java. It is used for accessing
Description of Database Table
is
not executed!".
Description of code:
DESCRIBE table_name;
This code is used... DiscriptionTable
See Description of Table Example!
Enter table name...
Description of Database Table
Properties file in Java
about the properties
file. The properties file is a simple text file. Properties... of the properties file through an example.
Read the Key-Value of Properties... of properties files in Java. This section
provides you an example for illustration how
Flex Tutorials
Check Box control example
The page provides illustration about...
In this example working of properties charCode
and keyCode... Flex Tutorials
description about jdk 1.5 features - Hibernate Interview Questions
description about jdk 1.5 features Hi.
Please can anyone provide the details description regarding the new features available in jdk 1.5.... ... tools are used with JDK for developing java based application & java applets. So
How to read properties file in Java?
is the data.properties the properties file used in this example:
name=Rose...Example program of reading a properties file and then printing the data... to value.
The java.util.Properties class is used to read the
properties file
Appending Strings - Java Tutorials
will compare the performance of these two ways by taking
example. We will take... the time taken. Given below the
timer Class:
/**
* Class used to measure...
of accurate size and after that we append. Given below the example:
public
JavaScript array properties
properties and few read-write properties that may be used sometimes whenever
they are required. All these properties are :
constructor
index
input
length...
JavaScript array properties
java tutorials
topics in detail..but not systematically.For example ..They are discussing about...java tutorials Hi,
Much appreciated response. i am looking for the links of java tutorials which describes both core and advanced java concepts
Properties file in Java
Properties file in Java
In this section, you will know about the properties
file. The properties file is a simple text file. Properties file contains keys
and values
Read the Key-Value of Properties Files in Java
Read the Key-Value of Properties Files in Java
... to read the
key-value of properties files in Java. This section
provides you an example for illustration how to read key and it's regarding
values from
Web Services Tutorials and Links
Web Services Tutorials and Links
Web
services Activity:
The World Wide Web is more and more used for application... properties and endpoint references, that normalize the information typically
Java read properties file
Java read properties file
In this section, you will learn how to read properties file.
Description of code:
There are different tools to access different files. Now to load the data
from the properties file, we have used Properties
SPEL-Wiring values from other bean properties
SpEL: Wiring value from other bean properties
Spring 3 provides powerful Expression Language which can be used to wire
values into beans properties.
Lets take an
example to demonstrate how SpEL is used to wire value from other bean
Velocity with External Properties
Velocity with External Properties
This Example shows you how to use Velocity with External Properties.
The method used in this example are
described below
Storing properties in XML file
Storing properties in XML file
This Example shows you how Store properties in a new
XML.... These are some of the methods used in code given below for storing
properties
Write Keys and Values to the Properties file in Java
to write or
store the keys and values in properties file list. The
OutputStream used... Write Keys and Values to the Properties file in
Java... how to write keys and values
in the properties files through the Java program
JSP Tutorials Resource - Useful Jsp Tutorials Links and Resources
JSP Tutorials
... page is based on data submitted by the
user. For example the results pages... frequently. For example, a
weather-report or news headlines page might build
Built In Properties
Built In Properties
This example illustrates how to access various system properties using
Ant. Ant provides access to all system properties as if they had been
Reference Class Properties
Reference Class Properties
This
Example shows you how
to use class properties in your velocity template.
Steps used in the code given below :
1
Chart & Graphs Tutorials in Java
Chart & Graphs Tutorials in Java
Best collection of Graphs and Charts Tutorials in Java. Our Chart and Graphs
tutorials will help learn everything you need
Technical description of Java Mail API
Technical description of Java Mail API
... java.util.Properties object to get information about mail server, username, password etc... default Session object which takes Properties and Authenticator objects as arguments
Get All Keys and Values of the Properties files in Java
file through an example. The example
is given bellow.
Description of program...
Get All Keys and Values of the Properties files in
Java... to get all keys and
values of the properties files in the Java. This section
Good tutorials for beginners in Java
in details about good tutorials for beginners in Java with example?
Thanks.
.../
Certainly it will help you in details about the beginners Java Tutorials...Good tutorials for beginners in Java Hi, I am beginners in Java
Struts2.2.1 hello world annotations Example using Properties
Struts2.2.1 hello world annotations Example using Properties
In this tutorial, We will discuss about hello world annotation
application using properties.
In this example we use the /result where we put the result
C Tutorials
;
C array sort example
For sorting an array, we have used... in the given example, we have used the library function strstr()
provided... C Tutorials
Pragmatic Programmer - Java Tutorials
as
programmers? When I was at school, I used to go spearfishing
about once a week... in the
days when I thought I was unfit, I used to do that in 20
seconds... be an example
of too much pragmatism and not enough theory
Java Write to properties file
data to properties file.
The properties file is basically used to store... to it in the form of key-value.
In the given example, we have created an object of Properties... is then store the Properties
object to the file.
Example:
import java.io.*;
import
Java and Dilbert - Java Tutorials
, South Africa,
at the newly formed Java User Group talking about Design
Patterns... "Cheaper by the Dozen"
by Frank Gilbreth, about a time-and-motion expert and his... their own laziness. For example,
he should look at the person who will rather
write
HTML5 Tutorials
HTML 5 Tutorials
In this section we have listed the tutorials of HTML 5... HTML5 tutorials. Here are some of the best HTML 5
tutorials:
HTML5 Tutorials
HTML 5 Introduction
Here you will learn
Ego Tripping with Webservices - Web Services Tutorials
have been getting from
you, urging me to continue, for example... to mention that Clark Updike sent me a similar
code snippet to what I used.... My total real
experience is about two weeks. However, after starting
Resetting ObjectOutputStream - Java Tutorials
information
about the courses that we offer. I don't mind getting lots...?
Let's look at an example. First we have class Person, which
is the class... it to the
correct number, not too big and not too little.
What About RMI
Check Properties
Check Properties
This example illustrates how to check properties using... properties;
the first two are used to define source directory and destination
Index | About-us | Contact Us
|
Advertisement |
Ask
Questions...;
C Tutorials |
Java
Tutorials |
PHP Tutorials |
Linux Tutorials |
WAP Tutorial
|
Struts
Tutorial |
Spring
Struts Tutorials
Struts Tutorials
Struts Tutorials - Jakarta Struts... is provided with the example code. Many advance topics like Tiles, Struts Validation... up Apache Struts to use multiple configuration files. You'll learn about
DBUnit Tutorials
with proper example.
The DbUnit is an extension of JUnit which is used data-driven
Implementing a Least-Recently-Used(LRU) Cache
Implementing a Least-Recently-Used(LRU) Cache
In this section, you will learn about the least-recently- used(LRU)
cache in Java. LRU
Cache helps you how to use | http://www.roseindia.net/tutorialhelp/comment/98624 | CC-MAIN-2014-35 | refinedweb | 2,646 | 56.96 |
I'm trying to use an existing js library (validate.js) in both the client and server.
I installed it using npm, and everything compiles for both server and client.
When using it in the server it works great, but when I execute it in the browser it throws an error.
The same file is used in both cases:
import validate = require("validate.js");
export function RequestValidator(data: any): any {
return (validate as any)(data, constraints, { allowEmpty: true });
}
validate
any
TS2349: Cannot invoke an expression whose type lacks a call signature.
.d.ts
declare module "validate.js" {
export interface ValidateJS {
(attributes: any, constraints: any, options?: any): any;
async(attributes: any, constraints: any, options?: any): Promise<any>;
single(value: any, constraints: any, options?: any): any;
}
export const validate: ValidateJS;
export default validate;
}
Uncaught TypeError: validate is not a function(…)
commonjs
"use strict";
const validate = require("validate.js");
...
system
System.register(["validate.js"], function(exports_1, context_1) {
"use strict";
var __moduleName = context_1 && context_1.id;
var validate;
...
return {
setters:[
function (validate_1) {
validate = validate_1;
}],
...
validate
validate: r
EMPTY_STRING_REGEXP: (...)
get EMPTY_STRING_REGEXP: function()
set EMPTY_STRING_REGEXP: function()
Promise: (...)
get Promise: function()
set Promise: function()
__useDefault: (...)
get __useDefault: function()
set __useDefault: function()
async: (...)
get async: function()
set async: function()
capitalize: (...)
get capitalize: function()
set capitalize: function()
cleanAttributes: (...)
get cleanAttributes: function()
set cleanAttributes: function()
...
When you compile with
"module": "system", node-compatible import
import validate = require("validate.js");
no longer works - the value you are getting for
validate is a module, not a function. This may be a bug in typescript, or it could be by design - I don't know. (update: from github comment addressed to JsonFreeman here, it looks like it's by design:
you get the module object with a set of properties including one named default).
There are several ways around this.
First, you can do easy conversion yourself - the function that you need is provided as
default property of the module, so this line will fix it:
validate = validate.default ? validate.default : validate;
Or, you can compile with
"module": "commonjs" even for the browser, so typescript will generate working code, and systemjs will detect format for your modules automatically.
Or finally, you could still compile with
"module": "system", but import
validate.js as it's intended in its typings:
import validate from 'validate.js';
That way you don't have to do any casts to
any, and typescript will generate necessary access for
default property, but the drawback is that it will not work at all in node when imported that way. | https://codedump.io/share/uZjkCtwN9Q1Z/1/importing-goes-wrong-with-systemjs | CC-MAIN-2016-50 | refinedweb | 417 | 50.33 |
Article 1: Introduction to the Arduino Hardware Platform
Article 3: Arduino-Based MIDI Expression Pedal.
After I initially discovered the Arduino platform, I immediately noticed a wide variety of components that can be connected to an Arduino - everything from inexpensive LEDs, to a moderately priced Ethernet shield, to an outrageous and over-the-top tank shield (which sells for nearly $200!). While shopping for my Arduino, I noticed that LCDs were fairly inexpensive so I purchased a $10 16x2 character LCD, and an $18 128x64 Graphic LCD.
Character LCDs and graphic LCDs are completely different devices and require different libraries and APIs to drive them. Fortunately, both devices are supported by the Arduino community. For my character LCD, I used the LiquidCrystal Library, and for my graphic LCD, I used the KS0108 Graphics LCD library.
Connecting a character LCD and programming it was a breeze and I didn't run into any problems. I simply followed the instructions and wiring diagram in the Arduino Character LCD Tutorial and everything worked as expected. After running the LCD_example sample sketch, I wrote a sketch to take advantage of my character LCD called HelloCodeProject:
HelloCodeProject
/*
HelloCodeProject.cpp, based off of LCD_example from
*/
#include <LiquidCrystal.h>
const int BACK_LIGHT = 13; // Pin 13 will control the backlight
//
g_lcd.setCursor(0, 0); // Set the cursor to the beginning
g_lcd.print("Hello,");
g_lcd.setCursor(0, 1); // Set the cursor to the next row
g_lcd.print("CodeProject");
}
void loop()
{
}
The second sketch I wrote was TickerTape, which simulates a ticker tape message scrolling across the display. Since TickerTape was the first sketch that I wrote which used an algorithm (the algorithm uses a single buffer for the message, an int to keep track of the starting point of the message to display, and takes into consideration when the message 'wraps' around the end of the buffer), I decided to code the algorithm in a native Win32 C/C++ application first. Since the Arduino has only limited debugging capabilities, I felt that coding the algorithm first in an environment which supports line level debugging and breakpoints would be faster than trying to debug within the Arduino using a bunch of Serial.println() statements. After I verified that the algorithm worked, I coded up the TickerTape sketch:
TickerTape
TickerTape
int
Serial.println()
/*
TickerTape.cpp, based off of LCD_example from
*/
#include <LiquidCrystal.h>
const int BACK_LIGHT = 13; // Pin 13 will control the backlight
const char* MESSAGE = "Example 2: Hello, CodeProject. ";
const int MESSAGE_LENGTH = 31;
const int DISPLAY_WIDTH = 16;
//
Serial.begin(9600);
}
void loop()
{
static int s_nPosition = 0;
int i;
if(s_nPosition < (MESSAGE_LENGTH - DISPLAY_WIDTH))
{
for(i=0; i<DISPLAY_WIDTH; i++)
{
g_lcd.setCursor(i, 0);
g_lcd.print(MESSAGE[s_nPosition + i]);
}
}
else
{
int nChars = MESSAGE_LENGTH - s_nPosition;
for(i=0; i<nChars; i++)
{
g_lcd.setCursor(i, 0);
g_lcd.print(MESSAGE[s_nPosition + i]);
}
for(i=0; i<(DISPLAY_WIDTH - nChars); i++)
{
g_lcd.setCursor(nChars + i, 0);
g_lcd.print(MESSAGE[i]);
}
}
s_nPosition++;
if(s_nPosition >= MESSAGE_LENGTH)
{
s_nPosition = 0;
}
delay(500);
}
My second attempt compressed the image by using run length encoding. Since much of the image's pixels repeat, I thought this might be an efficient way to reduce the data needed for the image. The format of each line was:
[1,0,-42], run1, run2, ..., runN, -1
The first int is a marker. 1 indicates the first run of the line is black, 0 indicates the first run of the line is white, and -42 indicates the array is finished. The second int indicates the run length and toggles back and forth between black and white until -1 is found, which indicates the end of the row. As an example, if there was:
1
0
-42
-1
1, 64, 32, 32, -1
...64 black pixels would be drawn, then 32 white pixels, then 32 black pixels.
This worked better as the sketch size was now small enough to upload to the Arduino, however the array was too large for the Arduino's stack. When I included the entire array, the Arduino crashed and it kept restarting itself over and over, and I found that if I included only 1/4 of the total array it worked fine, so the RLE approach didn't work either.().
GLCD.SetDot()
In addition to discovering that I needed to store the data in flash memory, I also learned that the beta version of the KS0108 library also included a new function, DrawBitmap(). The second version of CodeProjectLogo (just comment out the #define USE_SET_DOT statement) declares the array as:. | http://www.codeproject.com/Articles/38204/Interfacing-an-Arduino-with-LCDs?fid=1543898&df=10000&mpp=10&sort=Position&spc=None&select=3464158&tid=3961442 | CC-MAIN-2015-11 | refinedweb | 746 | 53 |
The Seeq SDK for Python
Project description
The seeq Python library is used to interface with Seeq Server ().
Execute
pip install seeq to make it available for import.
seeq.spy
The Seeq Spy module is a friendly set of functions that are optimized for use with Jupyter, Pandas and NumPy.
The Spy module is the best choice if you're trying to do any of the following:
- Pull data out of Seeq
- Import data in a programmatic way (when Seeq Workbench's CSV Import capability won't cut it)
- Calculate new data in Python and push it into Seeq
- Create an asset model
To start exploring the Spy module, execute the following lines of code in Jupyter:
from seeq import spy spy.docs.copy()
Your Jupyter folder will now contain a
Spy Documentation folder that has a Tutorial and Command Reference
notebook that will walk you through common activities.
For more advanced tasks, you may need to use the SDK module described below.
seeq.sdk
The Seeq SDK module is a set of Python bindings for the Seeq Server REST API. You can experiment with the REST API by selecting the API Reference menu item in the upper-right "hamburger" menu of Seeq Workbench.
Login is accomplished with the following pattern:
import seeq import getpass api_client = seeq.sdk.ApiClient('') # Change this to False if you're getting errors related to SSL seeq.sdk.Configuration().verify_ssl = True auth_api = seeq.sdk.AuthApi(api_client) auth_input = seeq.sdk.AuthInputV1() # Use raw_input() instead of input() if you're using Python 2 auth_input.username = input('Username:').rstrip().lower() auth_input.password = getpass.getpass() auth_input.auth_provider_class = "Auth" auth_input.auth_provider_id = "Seeq" auth_api.login(body=auth_input)
The
api_client object is then used as the argument to construct any API object you need, such as
seeq.sdk.ItemsApi. Each of the root endpoints that you see in the API Reference webpage corresponds
to a
seeq.sdk.XxxxxApi class.
In case you are looking for the Gencove package, it is available here:
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/seeq/0.0.58/ | CC-MAIN-2019-51 | refinedweb | 357 | 54.32 |
I've earlier described a simple way to deal with development specific config but the larger your system is the more likely it is that you will have multiple environments you want to run against; private development, latest build deployment, stress deployment, production deployment etc. Here is a good suggestion on how to solve that problem in a pretty convenient way.
The other day I was doing my usual double check locking when a co-worker pointed out that without a memory barrier the typical implementation is not necessarily safe. It is described here together with several options from a singleton perspective. Also my time on the Robotics team have made me almost religious about trying to avoid any kind of manual synchronization (i.e. when I use a synchronization object explicitly in my code). The problem I was trying to fix this time involved one method that runs for a relatively long time. During this time a property must not change. Easy enough to get around I thought and made the type of the property a struct (I could have made the class IClonable too, but since the object consist only of a number of value types a struct made sense to me.
Since assignment of structs are not atomic I needed something to make sure the struct being copied did not change at the same time resulting in a corrupt struct. Nor could I take a lock for the whole execution of this method since any thread needing to change this property would need to do so quickly without waiting for the long running method. I realized that I could use the Lazy<T> to not create my own lock and make the code look really nice. This is the essence of what I came up with:
1: public class RepetitiveLazyUsage
2: {
3: private Lazy<int> theValue =
4: new Lazy<int>(LazyThreadSafetyMode.ExecutionAndPublication);
5:
6: public int UseTheValue()
7: {
8: int valueToUse = theValue.Value;
9: // Relativly slow code here that depends on valueToUse.
10: return valueToUse;
11: }
12:
13: public void SetTheValue(int value)
14: {
15: theValue = new Lazy<int>(
16: () => value,
17: LazyThreadSafetyMode.ExecutionAndPublication);
18: }
19: }
As you can see this looks pretty nice I think and I'm using two features to make sure this code does not have any problems in a multi-threaded environment. First I use Lazy<T> in a thread-safe mode and that way I don't need an explicit lock. Second I use the fact that reference type assignments are atomic in .Net and hence replacing the instance of theValue in the SetNewValue method does not need a lock nor when I'm getting the value.
I guess this plugin for Visual Studio should be mandatory from now on...
If you're uploading (and I guess downloading) large blobs to Azure you might hit a timeout consistently because the ClientBlobClient have a timeout property. It defaults to 90 seconds which means that if Azure is your bottle neck (and throttling you) anything above 5400 MB will result in a timeout. In real life your connection up to azure is more likely your bottle neck (for example I experienced timeouts of files above approximately 200MB when uploading from home). So make sure you increase this timeout for large files to avoid these timeouts.
There is however another problem with this API in my opinion. As a server I think it makes sense to have a timeout preventing clients from taking up resources by just uploading data very slowly, but this is a client API so as long as I'm uploading data I'm technically fine I think. A much shorter timeout where it only fires if no data can be sent during that period makes more sense to me personally. In a perfect world there would actually be two timeouts; the idle timeout and an overall timeout. But the default (and what I think should be there if I had to choose one) would be the idle timeout. I don't think a client really wants to timeout if transfer is slow as long as there is progress. But again, that's just me...
I usually hand roll my own fake objects for my tests. They have always looked a lot like what Stubs generate. I just think that it's so cheap to create them that I don't even need Stubs. In this series I'll assume an interface that looks like this:
1: interface ITheInterface
2: {
3: void DoSomething(int x);
4: int ComputeSomething(int a, int b);
5: }
When I first started to hand roll my fakes it looked something like this:
6: private class FakeTheInterface : ITheInterface
7: {
8: public Action<int> DoSomethingHandler { get; set; }
9: public Func<int, int, int> ComputeSomethingHandler { get; set; }
10:
11: public void DoSomething(int x)
12: {
13: if (DoSomethingHandler == null)
14: {
15: Assert.Fail("Unexpected call to DoSomething");
16: }
17:
18: DoSomethingHandler(x);
19: }
20:
21: public int ComputeSomething(int a, int b)
22: {
23: if (ComputeSomethingHandler == null)
24: {
25: Assert.Fail("Unexpected call to ComputeSomething");
26: }
27:
28: return ComputeSomethingHandler(a, b);
29: }
30: }
Which gave you a test that looked something like this:
31: [TestMethod]
32: public void UsingFake1()
33: {
34: var thing = new FakeTheInterface();
35: thing.ComputeSomethingHandler = (a, b) => 42;
36: Assert.AreEqual(42, thing.ComputeSomething(0, 0));
37: }
After a while I realized that I could make the fake a little nicer by doing this:
38: private class FakeTheInterface : ITheInterface
39: {
40: public Action<int> DoSomethingHandler { get; set; }
41: public Func<int, int, int> ComputeSomethingHandler { get; set; }
42:
43: public void DoSomething(int x)
44: {
45: Assert.IsNotNull(DoSomethingHandler,
46: "Unexpected call to DoSomething");
47: DoSomethingHandler(x);
48: }
49:
50: public int ComputeSomething(int a, int b)
51: {
52: Assert.IsNotNull(ComputeSomethingHandler,
53: "Unexpected call to ComputeSomething");
54: return ComputeSomethingHandler(a, b);
55: }
56: }
But once in a while I came across an interface with a method that had a method like FooHandler. "FooHandlerHandler" is just very confusing. Recently I tried a different approach that looks like this:
57: private class FakeTheInterface : ITheInterface
58: {
59: private Action<int> doSomething =
60: x => Assert.Fail("Unexpected call to DoSomething({0}).", x);
61:
62: private Func<int, int, int> computeSomething =
63: (a, b) =>
64: {
65: Assert.Fail(
66: "Unexpected call to ComputeSomething({0}, {1}).",
67: a, b);
68: return 0;
69: };
70:
71: public FakeTheInterface(
72: Action<int> DoSomething = null,
73: Func<int, int, int> ComputeSomething = null)
74: {
75: doSomething = DoSomething ?? doSomething;
76: computeSomething = ComputeSomething ?? computeSomething;
77: }
78:
79: public void DoSomething(int x)
80: {
81: doSomething(x);
82: }
83:
84: public int ComputeSomething(int a, int b)
85: {
86: return computeSomething(a, b);
87: }
88: }
Note that I abuse the naming guidelines for arguments in order to make it consistent with the method name. A test using this fake looks like this:
89: [TestMethod]
90: public void UsingFake3()
91: {
92: var thing = new FakeTheInterface(
93: ComputeSomething: (a, b) => 42);
94: Assert.AreEqual(42, thing.ComputeSomething(0, 0));
95: }
So far I'm happy with this evolution. The only potential problem I see is if I need to replace the implementation half way through a test, but that can still be achieved by having a seperate variable in the test that I use and then change. So all in all it feels like this last evolution will be used (by me) for a while. Suggestions on improvements welcome!
I recently had to install some software that wouldn't run because I gave my SQL Server instance a descriptive name. There was no (or at least not easy) way to get it to use anything other than the default name "MSSQLSERVER". At the same time I removed my SQL Express which my Azure storage emulator used so I had to fix that one too... At least that is fairly easy: DSInit /sqlInstance:.
Over the holidays I've been starting to clean up a backlog of old RSS items I should read and one of them covered a way to deal with Azure configurationss and how they differ in development and production. While I've been using a similar approach to hide the fact if a configuration setting is read from the role configuration or web.config I've dealt with development vs production configuration in a different way. So far I've kept the configuration file that is part of the project as development specific and had separate configurations for production elsewhere. Naturally this makes me have to keep two files in sync when I add new settings and once in a while I need to use custom storage for my development like hitting a real azure storage account but not the production one.
What I like with the approach used in the link above is that you only need one file. Drawback is however that now my configuration file now needs a few extra development only settings that override the production ones. But compared to having two files I kind of like this idea. Worth a try I think and I'll let you know how it works for me once I've tried it.
As of 2012 I'm no longer working on the robotics team. I'm now working on the Xbox Live team.
Time): | http://blogs.msdn.com/b/cellfish/archive/2012/01.aspx | CC-MAIN-2014-42 | refinedweb | 1,548 | 58.62 |
Iron.io and Lumen
Lumen Iron Worker
What and why
A worker is a great way to run tasks as needed taking the load off your applications server and greatly speeding up the process of a task as you can run numerous workers at once.
A lot of this comes from and and their examples
Topics covered
- Creating a Lumen Worker
- Creating a statically linked binary in the worker
- Testing the worker locally with Docker
- Entering your docker environment
- Design patterns
Install Lumen
composer create-project laravel/lumen --prefer-dist
Add to composer.json
"iron-io/iron_mq": "~1.5", "iron-io/iron_worker": "~1.4"
So now it looks like
"require": { "laravel/lumen-framework": "5.0.*", "vlucas/phpdotenv": "~1.0", "iron-io/iron_mq": "~1.5", "iron-io/iron_worker": "~1.4" },
Install iron client
See their notes here
Install docker
On a mac they have great steps here for that
Environment settings
For Lumen we can simply use our typical .env file. For Iron you put your info in the iron.json file in the root of the app (make sure to add this to .gitignore)
The format is
{ "token": "foo", "project_id": "bar" }
The worker
Make a folder called workers at the root of your app
In there place your worker file. In this case
ExampleOneWorker. This is what gets called, as you will see soon, when the worker starts. This is what will receive the payload.
workers/ExampleOneWorker.php
Inside of this to start will be
<?php require_once __DIR__ . '/libs/bootstrap.php'; $payload = getPayload(true); fire($payload); function fire($payload) { try { $handler = new \App\ExampleOneHandler(); $handler->handle($payload); } catch(\Exception $e) { $message = sprintf("Error with worker %s", $e->getMessage()); echo $message; } }
For testing reasons and code clarity I do not like to put much code in here. I instantiate a handler class and pass in the payload.
The getPayload in the helper.php file, provided by an Iron.io example, will get the payload for us.
There is another folder to make in there called libs and for now it has this file
bootstrap.php and
helper.php [1] The helper is here
With the contents as seen below for bootstrap or visit to get the files.
<?php require __DIR__ . '/../../vendor/autoload.php'; $app = require_once __DIR__ . '/../../bootstrap/app.php'; if(!function_exists('getPayload')) require_once __DIR__ . '/helper.php'; use Illuminate\Encryption\Encrypter; $app->boot(); function decryptPayload($payload) { $crypt = new Encrypter(getenv('IRON_ENCRYPTION_KEY')); $payload = $crypt->decrypt($payload); return json_decode(json_encode($payload), FALSE); }
helper.php I placed a gist here
Also for this example we will need a
payload.json file in the root of our app. More on that shortly, for now put this into the file.
{ "foo": "bar" }
Finally our app folder has the
ExampleOneHandler.php file to handle the job.
<?php namespace App; class ExampleOneHandler { public function handle($payload) { echo "This is the Payload"; echo print_r($payload, 1); } }
We will do more shortly.
Here is the folder/file layout
Round 1 ExampleOneHandler
Lets now run this and see what happens.
Using docker we can run this locally
docker run --rm -v "$(pwd)":/worker -w /worker iron/images:php-5.6 sh -c "php /worker/workers/ExampleOneWorker.php -payload payload.json"
You just ran, what ideally will be, the exact worker you will run when you upload the code. It will take a moment on the first run. After that it will be super fast.
Here is my output
Uploading to Iron
Bundle
This is really easy to make a script for by just adding them to an upload_worker.sh file in the root of your app and running that as needed.
touch ExampleOneWorker.zip rm ExampleOneWorker.zip zip -r ExampleOneWorker.zip . -x *.git* iron worker upload --stack php-5.6 ExampleOneWorker.zip php workers/ExampleOneWorker.php
So we are touching the file so there are no errors if it is not there. Then we rm it And zip it ignoring .git to keep it slim and then we upload it with the worker and point to the directory to use.
Don't run it just yet
I add my iron.json file to the root of my app as noted above.
and I make the Project on the Iron HUD
And then I can run the
make_worker.sh I made above
You should end up with this output
Looking at the HUD (Iron WebUI)
Under Worker and tasks we see
So lets run it from the command line to see it work
iron worker queue --wait -payload-file payload.json ExampleOneWorker
The wait is pretty cool since we can get this output. This is key when doing master slave workers as well.
You get the same output as before. But it was run on the worker
Here is the HUD
Round 2 Lets do something real
So far the payload has not done much but lets use it in this next example.
As above we make and
ExampleTwoWorker.php
Make payload2.json file
{ "search_word": "batman" }
Then we use it to call our
ExampleTwoWorkerHandler
warning this is not an example on good php code
<?php namespace App;']; return file_get_contents($image); } } }
I test locally
docker run --rm -v "$(pwd)":/worker -w /worker iron/images:php-5.6 sh -c "php /worker/workers/ExampleTwoWorker.php -payload payload2.json" > output.png
But this time put the output into a file and we get
Making a custom binary
Before I get this to iron lets make it more useful since I will lose that output.png file on the worker. Some workers we have would convert that into a base64 blob and send that back in a callback.
One enter into docker like I noted above
Two run
apt-get update
Then run
apt-get install jp2a
Then make a folder called /worker/builds/
And in there follow these instructions replacing jp2a as needed.
Then make a folder called /worker/bin and copy jp2a from
/worker/builds/jp2a-1.0.6/src/jp2a to this bin folder.
You should be able to see that run now by ding /worker/bin/jp2a even run
apt-get remove jp2a to show it works as a standalone library [3]
Let's adjust our code
<?php /** * Created by PhpStorm. * User: alfrednutile * Date: 4/27/15 * Time: 9:02 PM */ namespace App; use Illuminate\Support\Facades\File;']; $path_to_worker = base_path('bin/'); exec("chmod +x {$path_to_worker}/jp2a"); exec("TERM=xterm {$path_to_worker}/bin/jp2a $image", $output); return implode("\n", $output); } } }
run locally and you might get some decent output or not :(
Make and upload the worker
Then I run
sh ./make_worker_two.php
touch ExampleTwoWorker.zip rm ExampleTwoWorker.zip zip -r ExampleTwoWorker.zip . -x *.git* iron worker upload --stack php-5.6 ExampleTwoWorker.zip php workers/ExampleTwoWorker.php
And run and wait
iron worker queue --wait -payload-file payload2.json ExampleTwoWorker
And if all goes well your console and the logs should show something like
Entering your docker environment
Easy
docker run -it -v "$(pwd)":/worker -w /worker iron/images:php-5.6 /bin/bash
Now you can test things in there, download packages etc.
MVC
Not sure if this really is correct but I tend to see the Worker file as my route file. The handler as the controller and other classes as needed, Service, Repository etc. This makes things more testable etc and better organize imo.
Connecting the Queue to the Worker
Numerous Environments
Waiting on bug report
But part of the process is to setup other projects at iron. For example if my worker is ExampleWorker then I would make ExampleWorker-dev. I would then switch to my git branch dev and do my changes. Once that is done I would make sure the token and key in my iron.json file matches that new project I made for dev and that is it.
The other way is slicker cause you do not need to change your iron.json each time but in the mean time this works fine.
Deploy from Codeship
Codeship will allow you to set custom deploy scripts or bash shells scrips basically.
In here I placed for the branch I wanted
curl -sSL -O chmod +x ironcli_linux touch iron.json echo "{" >> iron.json echo '"token": "bar",' >> iron.json echo '"project_id": "foo"' >> iron.json echo "}" >> iron.json zip -r PDF2PagesWorker.zip . ./ironcli_linux worker upload --stack php-5.6 PDF2PagesWorker.zip php workers/PDF2PagesWorker.php
You can easily then swap out the related project id and token for the environment you are uploading to eg development, staging etc.
Repo
another example Thumbnail Maker
[1] These seems to be a part of the iron worker for version 1 but not sure why not for 2 maybe there is a better pattern for this.
[2] I renamed it to ExampleOneLumen
[3] So far this is a 50/50 solution it did not work for pdf2svg but it did work for pdftk | https://alfrednutile.info/posts/143 | CC-MAIN-2018-47 | refinedweb | 1,468 | 67.04 |
On Mar 26, 2006, at 07:32:31, Arjan van de Ven wrote:> On Sun, 2006-03-26 at 06:54 -0500, Kyle Moffett wrote:>> Create initial kernel ABI header infrastructure>> it's nice that you picked this one;> for this you want an arch-generic/stddef32.h and stddef64.h>> and have arch-foo just only include the proper generic one..I plan to add a lot of other definitions to this file later on. For example different architectures have different notions of what a __kernel_ino_t is (unsigned int versus unsigned long). I may rename this file as types.h, but from looking through the code I figure I'll have enough general purpose declarations about "This architecture has blah" that a separate stddef.h file will be worth it.> (and... why do you prefix these with _KABI? that's a mistake imo. > Don't bother with that. Really. Either these need exporting to > userspace, but then either use __ as prefix or don't use a prefix. > But KABI.. No.)According to the various standards all symbols beginning with __ are reserved for "The Implementation", including the compiler, the standard library, the kernel, etc. In order to avoid clashing with any/all of those, I picked the __KABI_ and __kabi_ prefixes for uniqueness. In theory I could just use __, but there are problems with that too. For example, note how the current compiler.h files redefine __always_inline to mean something kinda different. The GCC manual says we should be able to write this:inline __attribute__((__always_inline)) int increment(int x){ return x+1;}Except when compiling the kernel headers turn that into this (which obviously doesn't compile):inline __attribute__((__attribute__((always_inline)))) int increment (int x){ return x+1;}As a result, I kinda want to stay away from anything that remotely looks like a conflicting namespace. Using such a unique namespace means we can also safely do this if necessary (Since you can't "typedef struct foo struct bar"):kabi/foo.h: struct __kabi_foo { int x; int y; };linux/foo.h: #define __kabi_foo foo #include <kabi/foo.h>drivers/foo/foo.h: #include <linux/foo.h> void func() { struct foo = { .x = 1, .y = 2 }; }Cheers,Kyle Moffett-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | http://lkml.org/lkml/2006/3/26/60 | CC-MAIN-2014-15 | refinedweb | 399 | 57.37 |
Summary
The next version of the Cat programming language, contains namespaces and type annotations.
The upcoming version of Cat, which is still undergoing testing, implements type annotations and namespaces.
The type annotation of Cat is based on the idea that every program, or subprogram, is a transition from one stack to another. For example the following is the type annotation for several of the atomic programs of Cat (the primitives if you prefer):
+ : (int int) -> (int) - : (int int) -> (int) and : (bool bool) -> (bool) > : (int int) -> (bool) pop : (any) -> () dup : (any) -> (any any)Some programs have side-effects, and these are annotated differently:
rnd : () ~> (int) write : (any) ~> (any)A program which has no side-effects, is called a function, whereas a program with side-effects is called a subroutine.
Cat now also supports scopes. A namespace is just a named scope. This addition makes it much easier to write non-trivial software using Cat.
Here is what some sample programs look like in the upcoming Cat:
define clear : (any*) ~> () { [pop] [is_stack_empty not] while } define simple_test : () -> (int) { 0 1 + } define nested_test : () -> (int) { define f : () -> (int) { 1 } define plus_one : (int) -> (int) { 1 + } f plus_one } namespace X { define f : (int) -> (int) { 2 + } } namespace Y { define f : (int) -> (int) { 3 + } } define main : () ~> () { "expect 1" write simple_test write clear "expect 2" write nested_test write clear "expect 3" write 1 X.f write clear "expect 4" write 1 Y.f write clear }
Have an opinion? Readers have already posted 2 comments about this weblog entry. Why not add yours?
If you'd like to be notified whenever Christopher Diggins adds a new entry to his weblog, subscribe to his RSS feed. | http://www.artima.com/weblogs/viewpost.jsp?thread=168661 | CC-MAIN-2017-22 | refinedweb | 274 | 59.23 |
Opened 7 years ago
Closed 7 years ago
Last modified 7 years ago
#14432 closed (fixed)
Tutorial import line missing (tiny correction)
Description
Hi, thanks a lot for the tutorial, it's great!
At the very end of Part 3, in the last highlighted code block, after the admin registration lines have been removed, the following line should remain:
from django.conf.urls.defaults import *
before the "urlpatterns = patterns(..." code block. Otherwise, the newly created polls/urls.py doesn't work correctly. Better yet, "*" could be replaced with "patterns".
Thanks again,
Anton
Attachments (1)
Change History (5)
comment:1 Changed 7 years ago by
Changed 7 years ago by
comment:2 Changed 7 years ago by
comment:3 Changed 7 years ago by
comment:4 Changed 7 years ago by
Note: See TracTickets for help on using tickets.
Technically, a few lines earlier the tutorial tells you to "Copy the file mysite/urls.py to polls/urls.py." If you followed along verbatim, you'd still have the import lines at the top of that file.
However, for the sake of completeness, showing the suggestion import line in that last code block won't hurt. | https://code.djangoproject.com/ticket/14432 | CC-MAIN-2017-26 | refinedweb | 195 | 63.49 |
This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project.
On Thu, Jan 31, 2002 at 09:32:11PM -0600, Loren James Rittle wrote: > > [...] Sometimes we may have to work around a buggy libc, but let's > > not pretend it's OK. > > To pretend it's OK was never my intention. This is exactly why I > posted the situation with as much context as possible. I meant no criticism. > >). I recognize that your intent is to make the library better, and to make it do something useful on more platforms, even buggy ones. I appreciate both your efforts and your explanations. > You are asking for the code to be made even more robust than it was > originally written or as a tweaked the detection tradeoff. This is > not a bad thing. You are asking for a fast binary decision: ... > to be turned into a three-way with more complex checks (pseudo): ... > (); Actually, what I would like *best* would be to eliminate the call to the C library strtod. However, users probably expect us to match their C library's handling of least-significant bits bug-for-bug-compatibly, which militates in favor of using the libc's strtod. (Sigh. Fortunately that argument doesn't apply to strtol.) I suppose my code would be more like int old_errno = errno; errno = 0; double result = ::strtod( [...] ); if (!errno) { if (old_errno) errno = old_errno; use_results(); } else { int new_errno = errno; errno = old_errno; #if defined(_GLIBCPP_STRTOD_WRONG_ERRNO) if (new_errno == _GLIBCPP_STRTOD_WRONG_ERRNO) #else if (new_errno == ERANGE) #endif handle_error(); else #if defined(_GLIBCPP_STRTOD_GRATUITOUS_ERRNO) if (_GLIBCPP_STRTOD_GRATUITOUS_ERRNO(new_errno)) #endif abort(); } Notes: 1. In general, the error case doesn't need to be optimal because it had better occur only rarely. We do want the (nearly dead) code to be smallish, though. 2. num_get doesn't have license to clobber errno, so we have to save and restore it. 3. On some platforms (e.g. some multi-threading environments) errno is a macro for a function call or worse (it may be something like #define errno (*__access_thread_local_storage(_ERRNO_OFFSET)) ) so you don't want to "mention" it too too frequently. > As people find aborts in the test cases (which hopefully cover all > corner cases), they will have to add configure tests that correctly > find the buggy libc behavior and set [various macros] appropriately. > > Is that, more or less, what you want to see? Exactly (more or less :-) Thank you for your scrupulous care. I apologize for posting unclearly, and for complaining more than coding. Nathan Myers ncm at cantrip dot org | http://gcc.gnu.org/ml/libstdc++/2002-01/msg00521.html | crawl-001 | refinedweb | 423 | 65.52 |
Ethan Furman <ethan at stoneleaf.us>: > On 08/31/2014 02:19 PM, Marko Rauhamaa wrote: >> The application will often want the EINTR return (exception) instead >> of having the function resume on its own. > > Examples? > > As an ignorant person in this area, I do not know why I would ever > want to have EINTR raised instead just getting the results of, say, my > read() call. Say you are writing data into a file and it takes a long time (because there is a lot of data or the medium is very slow or there is a hardware problem). You might have designed in a signaling scheme to address just this possibility. Then, the system call had better come out right away without trying to complete the full extent of the(). Marko | https://mail.python.org/pipermail/python-dev/2014-September/136103.html | CC-MAIN-2022-40 | refinedweb | 131 | 66.88 |
Angular 8 is the latest version of Google's Angular – one of the best JavaScript frameworks around. In this article, we'll run through what's special about Angular 8, and show you how to get started. First, a brief look back at what's happened with the framework so far.
Angular's introduction led to a paradigm shift in web development: while most libraries limited themselves to providing support to developers with relatively limited architectural impact, Angular's developer team went in the other direction. Their product forces you to use a specific architecture, with deviations ranging from difficult to commercially pointless. In fact, most Angular code runs through a relatively complex transpilation toolchain before it ever hits the browser.
Due to the immense success of Angular, both inside and outside of Google Inc, development has – by and large – stabilised. This means that breaking code changes are few, while the semi-annual upgrades are focused on adapting the framework to changes in the web browsing landscape.
In the case of Angular 8, for example, a new JavaScript compiler is deployed (albeit still experimentally). It optimises generated compatibility code to be significantly smaller and faster at the expense of older browsers. Furthermore, Web Worker support is integrated to increase Angular's processing capability. In short, there is a lot to see – so let us dive right in.
If you'd rather design a site without code, try one of these easy website builders. And to make things run even smoother, get your web hosting service right.
01. Run a version check
Angular's toolchain lives inside the NodeJS environment. As of this writing, Node.js 10.9 or better is needed – if you find yourself on an older version, visit the Node.js website and get an upgrade. The code below shows the version status on this machine.
tamhan@TAMHAN18:~$ node -v v12.4.0 tamhan@TAMHAN18:~$ npm -v 6.9.0
02. Install Angular
Angular's toolchain resides in a command line utility named ng. It can be installed via the well-known NPM.
tamhan@TAMHAN18:~$ sudo npm install -g @angular/cli tamhan@TAMHAN18:~$ ng version
Be careful to answer the question shown in the image below.
Getting version info out of the tool is quite difficult – not only is the syntax unique, but the output is also verbose (see image below).
03. Create a project skeleton
ng generates the Angular scaffolding for us. In the following steps, we want to add routing, and use Sass for CSS transpilation. Should the deployment fail for some reason, empty the working directory, and restart ng with superuser rights.
tamhan@TAMHAN18:~$ mkdir angularspace tamhan@TAMHAN18:~$ cd angularspace/ tamhan@TAMHAN18:~/angularspace$ ng new workertest
04. Harness differential loading
The new version of Angular optimises backward compatiblity code for reduced impact – a file called browserslist lets you decide which browsers are to be supported. Open browserslist and remove the word not in front of IE 9 to IE11.
. . . > 0.5% last 2 versions Firefox ESR not dead IE 9-11 # For IE 9-11 support, remove 'not'.
05. ... and see the results
Order a compile of the project, change into the distribution folder and purge unneeded map files.
tamhan@TAMHAN18:~/angularspace/workertest$ sudo ng build tamhan@TAMHAN18:~/angularspace/workertest/dist/workertest$ ls
Invoke tree to see the results – ng creates multiple versions of various code files (see image below).
06. Spawn a web worker
Web workers let JavaScript enter the last frontier of native applications: massively parallel processing of tasks. With Angular 8, a web worker can be created right from the comfort of the ng command line utility.
tamhan@TAMHAN18:~/angularspace/workertest$ sudo ng generate web-worker myworker CREATE tsconfig.worker.json (212 bytes) CREATE src/app/myworker.worker.ts (157 bytes) UPDATE tsconfig.app.json (236 bytes) UPDATE angular.json (3640 bytes)
07. Explore the code
ng's output is likely to look intimidating at first glance. Opening the file src/app/myworker.worker.ts in a code editor of choice reveals code which you should know well from the WebWorker specification. In principle, the worker receives messages and processes them as needed.
/// <reference lib="webworker" /> addEventListener('message', ({ data }) => { const response = `worker response to ${data}`; postMessage(response); });
08. Set up scaffolding
Angular applications consist of components. Firing off our web worker is best done inside the AppComponent, which is expanded to include a listener for the OnInit event. For now, it will emit status information only.
import { Component, OnInit } from '@angular/core'; @Component({ . . . }) export class AppComponent implements OnInit { title = 'workertest'; ngOnInit() { console.log("AppComponent: OnInit()"); } }
09. Don't worry about the lack of constructor
Experienced TypeScript developers ask themselves why our code does not use the constructor provided by the programming language. The reason for that is that ngOnInit is a lifecycle event which gets fired whenever an initialisation event takes place – this does not need to be correlated to class invocation.
10. Execute a small compile run
At this point in time, the program is ready to run. We will execute it from the server inside of ng, which can be invoked via the serve command. A neat aspect of this approach is that the program detects changes and recompiles the project on the fly.
tamhan@TAMHAN18:~/angularspace/workertest$ sudo ng serve
Take a look at the figure to see this in action in the image below.
11. ...and find the output
ng serve putputs the address of its local web server, which is usually. Open the web page and open the developer tools to see the status output. Keep in mind that console.log outputs data to the browser console and leaves the console of the NodeJS instance untouched.
12. Get to work
At this point in time, we create an instance of the worker and provide it with a message. Its results are then shown in the browser console.
if (typeof Worker !== 'undefined') { // Create a new const worker = new Worker('./myworker.worker', { type: 'module' }); worker.onmessage = ({ data }) => { console.log('page got message: $\ {data\}'); }; worker.postMessage('hello'); } else { console.log("No worker support"); }
13. Explore Ivy
Future versions of Angular will use a more advanced compiler, leading to even smaller views. While the product is not finished yet, an ivy-enabled skeleton can be spawned via ng new ivy-project – enable-ivy. Alternatively, change the compiler settings as shown in the snippet.
"angularCompilerOptions": { "enableIvy": true }
A word of warning: Ivy leads to amazing size reductions, but it is not free. The product has yet to stabilize, so using it in productive environments is not recommended.
14. Try modified ng processing
Angular's ng command line tool used child scripts internally for some time. Angular 8 ups the ante in that you can now, also, use this facility to run your own tasks as your application is assembled and compiled.
"architect": { "build": { "builder": "@angular-devkit/ build-angular:browser",
One neat application of ng scripts involves directly uploading applications to cloud services. The Git repository provides a useful script that uploads your work to a Firebase account.
15. Enjoy improved migration
Developers migrating away from Angular 1.x, also known as AngularJS, have had a fair share of issues getting the navigator to work right in 'combined' applications. The new Unified Location Service aims to make this process smoother.
16. Explore workspace control
Large projects benefit from the ability to change the workspace structure dynamically. This is done via the new Workspace API introduced in Angular 8.0 – the snippet accompanying this step provides a quick overview of the behaviour.
async function demonstrate() { const host = workspaces. createWorkspaceHost(new NodeJsSyncHost()); const workspace = await workspaces. readWorkspace('path/to/workspace/directory/', host); const project = workspace.projects. get('my-app'); const buildTarget = project.targets. get('build'); buildTarget.options.optimization = true; await workspaces.writeWorkspace(workspace, host); }
17. Accelerate the process
Building large JavaScript code bases gets tedious. Future versions of AngularJS will use Google's Bazel build system to accelerate the process – sadly, at time of writing it wasn't ready for primetime.
18. Avoid the walking dead
Even though Google takes extreme care not to break code, some features simply need to be removed as they are no longer needed. Check this depreciations list to learn more about features which should be avoided.
19. Look at the change log
As always, one article can never do justice to an entire release. Fortunately, this change log provides a detailed list of all the changes – just in case you ever feel like checking the pulse of a feature especially dear to you.
Got a lot of files ready for upload to your site? Back them up in the most reliable cloud storage.
This article was originally published in creative web design magazine Web Designer.
Read more: | https://www.creativebloq.com/how-to/angular-8 | CC-MAIN-2021-10 | refinedweb | 1,458 | 57.87 |
ISAPI Extension Architecture
This topic outlines architectural features of ISAPI extensions.
The ISAPI model, unlike other Web content development technologies, does not require a separate process for every request to an HTTP server. ISAPI uses threads to isolate work items in a process. Because IIS employs multiple threads to synchronize work, it makes more efficient use of system resources than the Common Gateway Interface (CGI) model or other models based on process isolation.
Each client request to an ISAPI extension initiates the creation of an EXTENSION_CONTROL_BLOCK data structure. Creating and maintaining a data structure is much easier and faster than initiating a new process. The EXTENSION_CONTROL_BLOCK structure and the extension usually run in the same process as IIS, therefore the server can process requests faster and accommodate a higher number of them.
As with other DLLs, Web server applications must be thread-safe. Because more than one client will be running the same function simultaneously, the code must follow safety procedures when modifying global or static variables.
ISAPI extensions can be kept thread-safe with the use of appropriate synchronization techniques, such as creating critical sections and semaphores. For additional information about writing thread-safe DLLs, see Synchronization Techniques in the Windows DDK.
IIS supports process isolation for ISAPI DLLs and scripts. It uses custom, high-speed methods to establish communication between the server process and the surrogate process that houses the DLLs, thus providing robustness with high performance. ISAPI extensions are implemented as DLLs that are loaded either into the IIS process or, if they are part of an out-of-process application, into a separate process. Like ASP and HTML pages, IIS uses the virtual location of the DLL file in the file system to map the ISAPI extension into the URL namespace served by IIS.
By default, IIS is configured to cache ISAPI extensions. When it receives a request that maps to an ISAPI extension, it first checks to see if the DLL is already loaded. If it is not, IIS loads it. Once this is done, the entire request is managed by the extension, with IIS acting as an efficient intermediary and helper.
ISAPI can also control an entire namespace. If you use an asterisk (*) as your extension, all requests to that namespace are handled by the specified ISAPI.
It is possible to isolate an Internet Server Application Programming Interface (ISAPI) extension by running it in a separate server process. To do this, choose the Run in separate memory space option in the Internet Information Services (IIS) user interface when creating an extension.
Some ISAPI extension DLLs might not be suitable for running in a separate process, for any of the following reasons:
File access can break if an instance of an ISAPI DLL in one process opens a file exclusively, locking it from being used by another instance of the same ISAPI DLL in another process.
Security access is different for an ISAPI running out-of-process than for an ISAPI running in-process.
Combining ISAPI extension functionality and filter functionality in a single DLL does not work if you mark the extension as running in a separate process because an instance of the ISAPI DLL cannot exist in both Inetinfo.exe (in-process for the filter), and DLLHost.dll (out-of-process for the extension). IIS versions 5.0 and above detect when the same ISAPI DLL is configured as an extension and a filter, and forces the extension to run in-process.
Performance can slow down because the ISAPI needs to perform cross-process communication between DLLHost.dll and Inetinfo.exe. Also, if the ISAPI DLL uses its own caching scheme, each instance of the DLL has its own cache, using up memory. Finally, if the ISAPI DLL has a long initialization period, that initialization has to happen for each instance of the DLL (for each process).
Unexpected behavior can occur if an ISAPI DLL is configured to run in a separate process because the different instances of that ISAPI will not know about each other.
For servers running in IIS 5.0 isolation mode, you can use the metabase property InProcessIsapiApps to specify a list of all ISAPI DLLs you want to run in the same server process as IIS. Requests for these DLLs are routed to the default application root in IIS (/LM/W3SVC). The default application root is marked in-process only, which guarantees that all such requests will run in-process. Setup will create the InProcessIsapiApps property and populate it with Ssinc.dll and Httpodbc.dll, which must run in the same process as IIS. For more information about IIS 5.0 isolation mode, see IIS Process Models for ISAPI below.
Starting with IIS 6.0, IIS supports two distinct application isolation modes: worker process isolation mode, and IIS 5.0 isolation mode. Worker process isolation mode can be configured to recycle processes. When applications are recycled, it is possible for session state to be lost. During an overlapped recycle, the occurrence of multi-instancing is also a possibility.
It is recommended that ISAPI applications persist any states externally, such as in an SQL database. If an ISAPI application’s state management code cannot be modified, IIS should be configured to run in a manner that prevents state loss. This includes disabling recycling functionality and the idle time-out of worker processes. Disabling these features is a better alternative than running IIS in IIS 5.0 isolation mode. Also, disabling pinging functionality is recommended if an ISAPI in the pool uses the HSE_REQ_REPORT_UNHEALTHY structure, as do ASP and ASP.NET, because the worker process is recycled after a ping is sent in.
Multi-instancing can cause problems for ISAPI extensions that are not specifically coded to handle this execution environment. For example, if an ISAPI application implements its own logging to a file, it must be able to handle the log file being accessed by threads outside of its own process, which might use different credentials or lock the file. If an application uses kernel objects such as events, it needs to take into account that these objects will be signaled by threads in other processes, and not necessarily within their own.
For more information, see IIS Process Recycling.
IIS 5.1 and earlier: Worker process isolation mode, process recycling, and application pools are not available.
ISAPI supports HTTP 1.1. Two areas in which this has potential impact on ISAPI extension and filter developers are support for pipelined requests and support for the transfer encoding header.
The transfer encoding header tells the client if a transformation has been applied to the body of the message being sent. For example, if the client supports HTTP 1.1, IIS can specify a transfer-encoding value of "chunked." When the chunked transformation is applied, the message is broken into chunks of varying sizes, each of which contains its own size indicator and optional footers for specifying header fields. Chunking is more efficient than non-chunked connections in terms of internal application processing, especially for streaming data sources and very large data sets. | http://msdn.microsoft.com/en-us/library/ms524581(v=vs.90).aspx | CC-MAIN-2014-52 | refinedweb | 1,181 | 54.63 |
I thought it’s a good idea went back to the basics of C++ and demonstrate some basics concepts of this programming language.
What’s C++?
C++ is a general purpose multi paradigm programming language. Also, C++ can be referred as a compiled language, this means that a typical C++ program is composed by a bunch of source files. Those files have to be processed by a compiler, producing objects file, which are combined by a linker yielding an executable program. The next figure demonstrates the process to create a executable file by C++ [1].
There two kind of entities in C++:
- Core language features, such as built in types, loops, conditional statements, basics keywords.
- Standard Library Components, such as STL containers, algorithms and I/O operations, basically everything you add using #include.
Practical Introduction.
For instance, the next snippet shows how to declare a function, use the standard output stream (cout) and the basic adoption of the main function- which triggers the executable program.
using namespace std; // return functionName(args){ //Function Body } //Create a square function int sqrt(int num){ return num*num; } //Another Function void printSqrt(int num){ //cout : Print at console. cout<<"The square of: "<<num<<" is"<<sqrt(num)<<endl; } int main(){ //Call local function printSqrt(5); //Nonzero return from main means a error. return 0; }
Type, Variables & Arithmetic
Some of the C++ fundamental types are: bool(true or false),char (character, for example, ‘a’,’f’, and ‘&’) ,int (1,5 or 1245), and double (floating point number, for example 3.1415). The size of the type vary among different machines and can be obtained by the sizeofoperator.
For example, the typical size of a char is 8-bits, and the size of the others variables are quoted in multiples of the size of a char; if a char is 8-bits, an int is 32-bits, and a doubleshall be 64-bits. The following image illustrate this concept [1]:
Universal Initializer
With the apear of C++11, the syntax choices for object initialization grow, but braced initialization is the most recommended usable initialization syntax. This type of initialization prevents narrowing conversions, and inadequate calls. The next snippet shows what I mean with this last point:
class Widget{ public: void operator()(int i){} }; int main(){ int a{1};//Ok int b(2);//Ok Widget w;//Build in type //... w(3); //Error, calling operator() instead cout<<a<<'\t'<<b<<'\t'; return 0; }
Points and Arrays
Pointers are the nightmare of inexpert C++ developers, because it’s easy to screw up everything with them. In my C++ programs, a huge percentage of errors are related with pointers . For example, check the following code, which explains a few operations, and at the end it accesses prohibited memory:
int main(){ int v[4]={1,2,3,4}; //Array int* p=v; //p->address of v //Loop array for(auto i=0;i<4;++i){ cout<<"Value of v["<<i<<"]"<< v[i]<<endl; } cout<<"Address of p: "<<&p<<endl; cout<<"Value of p: "<<p<<endl; cout<<"Contents of p: "<<*p<<endl; //Let's treat a pointer as array cout<<"Contents of p[3]: "<<*(p+3)<<endl; //Access violation cout<<"Contents of p[255]: "<<*(p+255)<<endl; }
It’s important to notice that the prefix unary * means “contents of”, and the prefix unary &means “address of”. Moreover, when we need to represent the notion of “no object available”, we should give the pointer the value nullptr.
References
[1] Stroustrup, B. (2000). The C++ programming language. Boston: Addison-Wesley. | http://gearstech.com.mx/blog/2019/03/01/c-variable-and-pointers/ | CC-MAIN-2021-31 | refinedweb | 584 | 56.08 |
Qemu and other emulators)
- is quite simply unavailable at a reasonable price (e.g. SGI MIPS systems or the Chinese MIPS-based systems).
It also briefly mentions User Mode Linux for x86 and the Hercules emulator for IBM zSeries mainframes despite the fact that these are not particularly relevant to Free Pascal, and Docker containerisation. It does not consider x86-on-x86 virtualisation systems such as VMware, Microsoft Hyper-V or Oracle VirtualBox, and only considers Linux KVM as a foundation technology for Qemu.
Contents
- 1 The Host System
- 2 Debian Guest using Qemu
- 3 Windows 2K Guest using Qemu
- 4 KVM on a Linux host
- 5 Common Qemu startup, ifup and ifdown scripts
- 6 Slackware x86 Guest using User Mode Linux
- 7 Debian 68k Guest using Aranym
- 8 Debian zSeries Guest using Hercules, without VM
- 9 Debian zSeries Guest using Hercules, with VM
- 10 MUSIC/SP using Sim/390 or Hercules
- 11 VM/370 using Hercules
- 12 IBM S/370, S/390, and the S/380 "hack"
- 13 IA-64 (Itanium, Merced, McKinley etc.)
- 14 Relative Performance
- 15 Graphical Access using VNC etc.
- 16 Accessing "Unusual" Devices
- 17 QEMU User Emulation Mode in Chrooted Environment
- 18 Docker images
- 19 Further Reading
- 20 See also
The Host System
In the current case, the host is a Compaq rack-mount server running at around 3GHz with several Gb RAM; be warned that performance will drop off drastically with a lower specification system.a
- Debian on ARM (little-endian, armel) using Qemu
- pye-dev-07b
- Debian on MIPS (little-endian, mipsel) using Qemu
- pye-dev-07c
- Slackware x86 13.37 using Qemu
- pye-dev-07d
- Slackware x86 13.37 using User Mode Linux
- pye-dev-07e
- Windows 2K using Qemu
- pye-dev-07f
- Debian on zSeries using the Hercules emulator
- pye-dev-07g
- Debian on 68k using the Aranym.
KVM on a Linux host
KVM (Kernel-based Virtual Machine) is an enabling API supported on more recent x86 and x86-64 (AMD64) systems, this replaces the older KQemu kernel module which is now deprecated by both Qemu and the kernel.
KVM is typically enabled by the host system BIOS. If not enabled by the BIOS it cannot be enabled by the kernel or by (a suitably privileged) application program, since the x86 architecture requires power to be cycled to change the state of this facility. The result of this is that KVM might be unavailable (no /dev/kvm device) even if it shown as supported by the CPU flags in /proc/cpuinfo./qemu ;; tun6) PROXY_ID=7 ;; tun7) PROXY_ID=8 ;;/qemu x86. Also see [1].
Debian 68k Guest using Aranym
# apt-get install aranym
I (MarkMLl) note this installs bridge-utils but instead am going to use my standard Qemu-style startup scripts, which themselves were originally based on sequences executed internally by UML; note that Hercules for zSeries (above) is the odd-one-out here since the guest uses SLIP for networking.
Referring to, download and check its signature. Unpack, noting a kernel vmlinuz-2.6.39-2-atari, a filesystem image dc11.img.xz and a configuration file aranym.config.
In aranym.config, change the ETH0 section to read:
Type = ptp HostIP = 192.168.22.1 AtariIP = 192.168.22.22 Netmask = 255.255.255.0
Change the startup script runaranym to read:
#!/bin/sh QEMU_GUEST_IP_ADDRESS=192.168.22.22 QEMU_GUEST_IP_GATEWAY=192.168.22.1 QEMU_HOST_GATEWAY=eth0 export QEMU_GUEST_IP_ADDRESS QEMU_GUEST_IP_GATEWAY QEMU_HOST_GATEWAY /etc/qemu-ifup tap7 cd "$(dirname "$0")" SDL_AUDIODRIVER=dummy; export SDL_AUDIODRIVER aranym-mmu -l -c aranym.config /etc/qemu-ifdown tap7
Uncompress the image file:
# unxz dc11.img.xz
Using xterm on a graphical login, run the startup script:
# runaranym
The result of that should be a console for the guest Debian system.
On the guest console, login as root with password root, and immediately change the password to something appropriate using the passwd command. Change the hostname in /etc/hostname and /etc/hosts, the IP address etc. in /etc/network/interfaces, and the nameserver in /etc/resolv.conf. Reboot and check that the network is operational by pinging from the guest to the site router (e.g. 192.168.1.1) and then pinging the guest (192.168.22.22) from any convenient system; if this doesn't work fix it before continuing.
Then as described on run the three commands:
# dpkg-reconfigure openssh-server # apt-get update # apt-get install popularity-contest
Finally edit /root/profile to remove the reminder to run the above. It should now be possible to login using SSH, and to continue to configure and use the system like any Debian guest.
Debian zSeries Guest using Hercules, without VM
Hercules is a commercial-grade emulator for IBM mainframes, it is available as a standard package for e.g. Debian and related Linux distributions. via a simulated CTC (Channel To Channel) device. Best not fooled with.
Debian zSeries Guest using Hercules, with VM
This combination is unlikely to work using freely-available software, since Linux requires at least an S/390 G3 system while the most recent IBM VM available is VM/370. It might be technically possible to run a non-free VM on Hercules, but at the time of writing it is at best unclear whether this could be done legally.
This means that it is not, for example, possible to run a VM host with simultaneous Linux and MUSIC/SP guests.
MUSIC/SP using Sim/390 or.
MUSIC/SP is a freely-available (but not open source) operating system which implements a subset of IBM's MVS API, i.e. is to some extent compatible with operating systems of the OS/360 lineage in particular OS/VS1, extended with some novel features including a filesystem with user-accessible directories. It does not provide an API compatible with Linux, and uses the EBCDIC character set.
Unlike other freely-available OS-compatible operating systems (see for example [2], [3], and the section below), MUSIC/SP provides TCP/IP networking. However this requires that the IUCV (Inter User Communication Vehicle) be provided by the underlying platform and that there is a suitable network support kernel for it to talk to: these are normally made available by IBM's VM operating system (VM/SP or VM/ESA, a sufficiently-recent version of which is not freely available).
Considering emulated environments, IUCV can be provided either by running a recent VM on top of Hercules, or by running the freely-available (but not open source) Sim/390 emulator on Windows. Hercules does not provide IUCV or a network support kernel directly (although as of early 2012 this might be being addressed), so while MUSIC/SP will run on Hercules it will not have TCP/IP-based networking facilities: use Sim/390 if you really need this.
Regrettably, the maintainer of MUSIC/SP and Sim/390 is no longer active, and while binaries and documentation remain available for download via [4] the sources are not.
VM/370 using.
VM/370, which is freely available as the "SixPack" (currently v1.2 as of early 2013), provides a hypervisor running on the "bare metal" which can host multiple single- or multitasking operating systems such as the (provided) Conversational Monitor System (CMS) or derivatives of DOS or OS (e.g. VSE or MVS).
The CMS interactive environment is a distant cousin to unix, and is probably usable by anybody who remembers MS-DOS or older operating systems; the "SixPack" includes extensive but not complete online help. There is no networking API exposed to user programs: code and data may be moved between the host computer (running Hercules) and guest sessions by mounting files simulating tape devices, by simulations of an 80-column card punch and reader, or by a simulation of a line printer.
In common with other IBM operating systems of the era, the documented API is in the form of macros for an IBM-compatible assembler; other types of interface are available including diag codes for communication between CMS and VM, and virtualised hardware access using Channel Control Words (CCWs). The services provided are roughly comparable with MS-DOS v1 or CP/M, i.e. there are separate sets of macros for handling different classes of peripherals: do not expect to open the terminal, card reader or printer as a file. Particularly if the S/380 hack (see below) is applied, the GCC compiler and standard libraries may be used but most software development and maintenance is done using assembler.
IBM S/370, S/390, and the S/380 "hack"
This section is an even sketchier outline.
As discussed elsewhere, the S/360 and S/370 architectures are limited to 24-bit addresses while the S/390 allows 31-bit addresses without, in general, breaking compatibility. There is an unofficial extension of the Hercules emulator (frowned upon by purists) that implements a non-standard "S/380" architecture and modifies the most-recent freely-available IBM mainframe operating systems (VSE nee DOS, MVS nee OS, and VM/370) to exploit this extension. Using this, there is sufficient available memory to run a large-scale compiler such as GCC natively on one of the classic IBM operating systems, with the important caveats that only one program can use this facility at a time (i.e. while a 31-bit GCC and a 24-bit make should work, two copies of the 31-bit GCC won't), and that in the case of VM one program means one program per computer rather than one program per virtualised system/login.
To make use of this, you need the Hercules-380 patch from [5], a classic operating system such as the VM/CMS "sixpack" from [6], and possibly the MECAFF enhancement for additional terminal sessions and the IND$FILE program. In practice, it is impossible to do any of this without joining Yahoo!, and subscribing to the Hercules-VM370, hercules-os380 and H390-VM groups.
IA-64 (Itanium, Merced, McKinley etc.)
Terminally sketchy.
The FPC port targeting IA-64 exists in absolutely minimal form, i.e. a few skeleton files and that's about it. Since the IA-64 architecture appears to be heading in the same direction as its predecessor the iAPX-432, it's highly questionable whether any more work will be done on this architecture. However, for completeness (and because this author wants to be able to inspect some Itanium binaries)...
While a few systems turn up on eBay etc., the asking price tends to be dictated by what the seller paid a few years ago rather than by what anybody expects to pay today. There is a simulator, written by HP, called "Ski", which is now open source. See [7], [8] plus the Sourceforge project repository.
(On a fairly complete development system running Debian) get the most recent Ski sources from Sourceforge. Run autogen.sh, installing e.g. autoconf, libtool and libelf-dev if necessary. Run make, install additional libraries such as gperf, bison, flex etc. as necessary. It might be necessary to edit syscall-linux.c to comment out the reference to asm/page.h which apparently isn't needed for very recent (Linux 3.x) kernels. On success get root and run make install.
Assuming the disc image is a file named /export/ia-64/sda (e.g. renamed from the downloaded sda-debian) then
bski bootloader vmlinux simscsi=/export/ia-64/sd
Note the dropped final letter, that parameter is used as a prefix rather than being the name of a file or directory.
As with a number of other emulators described on this page, this requires an X session for the console.
This writer (MarkMLl) can't get networking running, either using the instructions at [9] or using a tun/tap solution very roughly as suggested by [10]. It might be necessary to try a much older host kernel, i.e. late 2.4 or early 2.6, however this is not currently being pursued due to lack of relevance.
The kernel from HP is 2.4.20, the filesystem is described as "Debian Sid" but from the kernel version is probably somewhere between "Woody" (3.0) and "Sarge" (3.1). I don't know how easy it's going to be to get this up to scratch, at the very least a working network is going to be a prerequisite.-20 # 9:23:respawn:/usr/local/vnc/vncshim-4-24 # Example how to put a getty on a serial line (for a terminal) # #T0:23:respawn:/sbin/getty -L ttyS0 9600 vt100
Many systems limit the number of VNC servers that may be run per system (including localhost) as the side effect of a window manager XDMCP security precaution. If necessary, edit the window manager configuration file (gdm.conf, kdmrc etc.) to increase the number of sessions that each XDMCP client can request, e.g. in the case of gdm.conf:
[xdmcp] Enable=true DisplaysPerHost=6
Create /usr/local/vnc/vncshim-0-20=570$SUFFIX XDISPLAY=.
Accessing "Unusual" Devices
This section covers devices such as audio (i.e. used for playing .wav files) and MIDI, and USB devices which are not claimed by the host or guest kernel.
Sound/MIDI for Qemu
Don't expect this to be hi-fi quality, but it should be adequate for operator alerts etc.
Identifying the Host's Hardware
Obviously, a good starting point for this is using lspci and lsusb to determine what physical hardware is installed. The next stage is usually to consult dmesg output and to use lsmod to determine what subsystems the kernel has loaded hence what APIs are available. Graphical tools such as the KDE Info Centre can also be useful here, however identifying which API is most useful can be a bit of a black art.
The current writer (MarkMLl) has a Debian "Squeeze" host with USB-connected audio and MIDI devices, with the latter connected to a Yamaha tone generator. An alternative (and more conventional) configuration would be to have an internal soundcard, with both audio and MIDI output feeding a loudspeaker.
Knowing what hardware and subsystems are available, the remaining question is which subsystems Qemu can make use of. Running this command goes some way towards providing an answer:
$ qemu -audio-help |grep ^Name: Name: alsa Name: oss Name: sdl Name: esd Name: pa Name: none Name: wav
For reasons that are unclear (roughly translated: I'd like some help here) I've had most success with OSS, starting Qemu with a script that includes these lines:
export QEMU_AUDIO_DRV=oss export QEMU_OSS_DAC_DEV=/dev/dsp1 export QEMU_OSS_ADC_DEV=/dev/dsp1
Selecting the Guest's Soundcard
Assuming that the guest is to have both audio and MIDI, common sense would suggest that the guest operating system should see both audio and MIDI hardware implemented by Qemu. According to, this implies that Qemu should be recompiled with the capability of emulating an Adlib card, which for current versions of Qemu means running something like this:
make clean distclean ./configure --audio-card-list=ac97,es1370,sb16,adlib make
Irrespective of whether a custom Qemu is built or not, it's useful to check what devices it emulates:
$ qemu -soundhw ? Valid sound card names (comma separated): pcspk PC speaker sb16 Creative Sound Blaster 16 ac97 Intel 82801AA AC97 Audio es1370 ENSONIQ AudioPCI ES1370
For some guest operating systems (e.g. Windows NT 4), adding Adlib emulation to Qemu is sufficient to get MIDI working with the Sound Blaster, i.e.
$ qemu ... -soundhw sb16,adlib ...
In this case the guest uses only the standard Sound Blaster driver, possibly ignoring MPU-401 support.
For guest systems which don't benefit from this, there's little point adding something like an Adlib card if the guest operating system doesn't have a driver for it- which appears to be the case with Windows 2000. The current writer's preference is to select the ES13770, since as a straightforward PCI device it should be enumerated without difficulty by almost any guest operating system:
$ qemu ... -soundhw es1370 ...
Having started Qemu, it's possible to query the emulated devices:
(qemu) info pci ... Bus 0, device 4, function 0: Audio controller: PCI device 1274:5000 IRQ 11. BAR0: I/O at 0xc200 [0xc2ff]. id ""
The expected result of this is that the guest operating system would see audio hardware, but not MIDI. However in the case of Windows 2000 a MIDI device is emulated, so programs which use MIDI for e.g. operator alerts will work properly.
Anybody: need help here generalising this to other guest operating systems.
USB for Qemu
As an example, a Velleman K8055 board is plugged into the host. First use lsusb to gets its vid:pid identifiers:
$ lsusb Bus 001 Device 005: ID 10cf:5500 Velleman Components, Inc. 8055 Experiment Interface Board (address=0)
Now add these parameters to the Qemu command line:
qemu ... -usb -usbdevice host:10cf:5500 ...
On the guest, lsusb should show the device available for use.
The Velleman board is unusual in that it has ID jumpers which set the last digit, i.e. multiple devices appear as 10cf:5500 through 10cf:5503. In cases where this facility is not available, the bus and device number can be used:
qemu ... -usb -usbdevice host:1.5 ...
Obviously, there's a risk here that devices will move around between invocations.
USB for UML
There was supposedly a patch that mapped unused USB devices on the host into the guest. I'm not sure it made it into the standard release. libraries. To run a small program created for fe. ARM with no dependencies you can simply do a
$ qemu-arm program
When the program has more and more dependencies, it becomes increasingly difficult to put all dependencies in locations that can be found by qemu-arm without messing up your host system. The solution that will be developed spectacular-to:
The --static option makes that qemu-arm and qemu-sparc are linked statically. Since they will be running in the chroot environment later on, they must be build without dependencies. Some distributions provide already statically linked binaries. fe qemu-arm-static on debian. Although they are not the latest and greatest, using these packages makes it somewhat easier. Building QEMU from source is really just a matter of minutes.-back. Calculate worthwhile.
Docker images
Docker uses various APIs provided by the underlying operating system to isolate programs in separate namespaces.
The implication of this is that every Docker container has its own process space (i.e. processes in different containers might have the same numeric PID), network space (processes in different containers can have different routing, firewall rules and so on), filesystems (except that they share kernel files and a number of crucial libraries and configuration files) and so on.
Containerisation is distinct from virtualisation in that all containers which are running on the same host computer make use of the same host kernel, it is not possible for one container to use a different kernel (e.g. for development or deployment test purposes), and there is no absolute guarantee that a container taken over by malicious code cannot subvert the remainder of the system.
Note that there are pitfalls for the unwary. For example, having Docker installed and the Docker daemon running can break Qemu networking if this expects to have bridged access to the host interface rather than to be sitting on its own private subnet.
One thing that does work however, is running a 32-bit container on a 64-bit host, and in most cases the Linux kernel APIs are sufficiently backwards-compatible that the programs and libraries from an older release of a distro can run on a much more recent host. One of the cases which will be discussed here runs a container containing 32-bit Debian "Lenny" from 2008 on a host running Debian "Buster" from 2019, this combination allows e.g. compatibility with the GTK v1 widget set (no longer supported by more recent distreaux) to be tested.
Docker fundamentals
This is not the place for a full discussion of the care and feeding of Docker, but it's worth establishing a few basic concepts.
- Host: the physical computer running e.g. Qemu (with or without KVM) or Docker.
- Guest: on a virtualised system (Qemu etc.), a completely separate memory space with its own kernel.
- Container: on a Docker based system, a group of application programs sharing the host kernel but largely isolated from the rest of the host system using various namespace mechanisms.
Looking specifically at Docker:
- Dockerfile: a succinct description of what is to go into a Docker image.
- Image: the Docker analogue of a bootable filesystem, initially built with reference to a Dockerfile.
- Container: an image which has been run, whether or not it is currently running.
Once running, a container has an associated parameter list which specifies the main program together with any sockets etc. exposed to the host. Neither an image nor a container correspond to a single file on the host, but:
- A container may be stopped and committed to an image.
- An image may be saved to a single file, in which state it can be moved between hosts.
- An image may be run a set of parameters different from those of the container committed to it.
The result of this is that if something is installed in the context of a container (e.g. Apache is added from the operating system's main repository) the container should be stopped, committed to a new image and then run with an additional parameter specifying what sockets etc. should be exposed. There are ways around that, but that combination appears to be the easiest to manage.
The Docker community has a master repository, and the docker program on a host computer may search it to locate images. However in the context of those images there appears to be no "official" way of searching the tags that allow a specific version to be pulled, hence something "messy" and subject to change like
$ docker search debian $ wget -q -O - \ | sed -e 's/[][]//g' -e 's/"//g' -e 's/ //g' | tr '}' '\n' | awk -F: '{print $3}' | grep buster $ docker pull debian:buster
or alternatively (for the really hard to reach bits)
$ docker search vicamo $ wget -q -O - \ | sed -e 's/[][]//g' -e 's/"//g' -e 's/ //g' | tr '}' '\n' | awk -F: '{print $3}' ... buster-i386 buster-i386-sbuild buster-i386-slim ... $ docker pull vicamo/debian:buster-i386-slim
Generally speaking, if a developer on e.g. Github says that he has a special-purpose image, some combination of the above commands will allow the required version (e.g. in the above example tagged buster-i386-slim) to be pulled.
Neither a container nor an image is directly mountable e.g. by using losetup. It is possible to copy files to or from a container using the
docker container cp command, subject to the same precautions that one would take on a running unix system.
A 64-bit Docker container with basic utilities
This is a basic installation of a 64-bit Debian stable release ("Buster" at the time of writing) on the same host version. This is useful in a situation where a developer wants to experiment with libraries etc. that he does not normally have installed, it should probably not be trusted to quarantine untrusted code.
In the instructions below, commands on the host are run in the context of a normal user so are shown prefixed by a $ prompt, while commands run in the context of the container are run by root so shown prefixed by a # prompt.
$ docker pull debian:buster $ docker run --interactive --tty debian:buster # apt-get update # apt-get install dialog tasksel python3 # tasksel --new-install # dpkg-reconfigure locales
Using tasksel, remove desktop and print server and add SSH. The author favours installing net-tools so that the
netstat -an command can be used to check socket accessibility.
# apt-get install net-tools elvis-tiny
So far there is no provision for starting the SSH daemon. Add or edit /etc/rc.local to contain
#!/bin/sh rm -rf /tmp/* # For cases where a 32-bit container is running (by implication on a 64-bit host), make # sure that commands such as uname -a return consistent information to avoid breaking # the traditional ./configure && make operation. (file -L /bin/sh | grep -q 'ELF 32-bit LSB') if [ $? -eq 0 ] ; then LINUX32=linux32 fi cd /etc/init.d $LINUX32 ./ssh start # $LINUX32 ./openbsd-inetd start cd exec $LINUX32 /bin/bash
The ID of the current container is in the prompt. Exit it and commit to a new image, then start with a parameter to expose the SSH socket:
# exit $ docker container commit 4efab4804fb6 debian_minimal $ docker run -it -p 65022:22 \ --entrypoint=/etc/rc.local debian_minimal # groupadd -g 1000 markMLl # useradd -g markMLl -u 1000 -m -d /home/markMLl markMLl # passwd markMLl
Add newly created user name to sudo group (/etc/group and /etc/gshadow files). Note the container ID, if it is now stopped the parameters are preserved and it should be possible to start it in the background and connect as the above user via SSH.
$ docker container stop c5be7fd96866 $ docker container start c5be7fd96866 $ ssh localhost -p 65022
The above is a good starting point for a general-purpose containerised system. The section below will also show how to get a graphical program (e.g. the Lazarus IDE) running in a container.
A 32-bit Docker container with FPC and Lazarus
An understanding of the principles and commands described above is a prerequisite to continuing with this section.
Unlike Qemu, a container will run at the host computer's full speed. Unlike User Mode Linux (UML), a 32-bit container will run on a 64-bit host. Subject to security issues, this makes Docker the preferred choice when setting up a temporary 32-bit environment to e.g. investigate libraries which would not normally be installed on the host.
This section discusses installing a 32-bit Debian "Lenny" container, which allows a program to be checked with libraries etc. dating from roughly 2008.
Referring to
$ docker pull lpenz/debian-lenny-i386 $ docker image ls $ docker run --interactive --tty lpenz/debian-lenny-i386
Lenny authentication is broken and is not going to be fixed (short of forking the entire repository). In /etc/apt/apt.conf put
APT { // Options for apt-get Get { AllowUnauthenticated "true"; } }
which should allow tasksel etc. to work. Add contrib and non-free to /etc/apt/sources.list, then
# apt-get update # apt-get upgrade # tasksel --new-install # dpkg-reconfigure locales # apt-get install openssh-server sudo
Populate /etc/rc.local as discussed above, comment out the temporary setting from /etc/apt/apt.conf
Create group and user as discussed above, add user to sudo group.
Add the build-essential, gdb, subversion and zip packages as basic prequisites. Add libncurses5-dev, libncursesw5-dev and libgpm-dev to allow FPC to be rebuilt, plus possibly libpango1.0-dev for the FP IDE. Install FPC v2.2.4 binary for i386 and corresponding sources, the compiler should run and be able to recompile itself.
Install the libgtk1.2-dev package then Lazarus 0.9.24.1, don't attempt to run it yet. In the per-user shell startup file e.g. ~/.bashrc put the line
export DISPLAY=:0
then exit and commit the container as described above and restart with something like
$ docker run -it -p 65022:22 -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix \ --entrypoint=/etc/rc.local lenny_with_lazarus
Again, details as discussed in the section above.
It should now be possible to login to the container from the host over SSH, invoke e.g. /usr/local/share/lazarus/lazarus by name, and have the IDE appear on the host computer's screen.
Further Reading
- Debian on an emulated ARM machine
- Debian on an emulated MIPS(EL) machine
- QEMU/Windows XP on Wikibooks
- The User-mode Linux Kernel Home Page
- Installing Debian under Hercules | https://wiki.freepascal.org/Qemu_and_other_emulators | CC-MAIN-2020-45 | refinedweb | 4,666 | 53 |
tag:blogger.com,1999:blog-20099260471680590202018-03-06T12:01:17.528-08:00Dimitar Evgeniev Dimitrov - Software Development LogSoftware development problems that I encounter at work and their solutions - .NET, C#, C++ and its pitfalls, Java, VB, VB.NET Windows Installer MSI and moreDev Dimitrov Injection vs Encapsulation<br /><br />When trying to show coworkers of the advantages of Dependency Injection I often encounter the argument, that it hurts one of the major object-oriented principles – encapsulation.<br /><br />Let us consider the following basic example of class that has two dependencies, created in the constructor:<br /><br />public class SomeClass<br />{<br /> IWriter writer;<br /> ILogger logger;<br /><br /> public SomeClass()<br /> {<br /> this.writer = new StreamWriter();<br /> this.logger = new RegistryLogger();<br /> }<br />}<br /><br />Client code:<br /><br />SomeClass someClass = new SomeClass();<br />SomeClass has direct dependencies and can’t be used without them and is clearly not testable. Let us see a implementation with dependency injection:<br /><br /><br />public class SomeClass : ISomeClass<br />{<br /> IWriter writer;<br /> ILogger logger;<br /><br /> public SomeClass(IWriter writer, ILogger logger)<br /> {<br /> this.writer = writer;<br /> this.logger = logger;<br /> }<br />}<br /.<br /><br />However let us look at the client code and see if it actually breaks the encapsulation principle:<br /><br />ISomeClass someClass = Container.Resolve<ISomeClass>();<br />What we see here is that, the client code does not even know a damn thing about the class dependencies!<br />We just let the dependency injection container do the heavy lifting for us and don’t worry how to create all of the class dependencies.<br />In practice we actually do not even have the possibility to use the constructor!<br /.<br /><br /.<br /><br /.<br /><br />Happy dependent less programming!<img src="" height="1" width="1" alt=""/>Dev Dimitrov movedMy blog has moved to word press<br /><a href=" "></a><br /><br />Older posts can be found there.<img src="" height="1" width="1" alt=""/>Dev Dimitrov Builder vs .NET and C#Lately I was looking for a nice GUI front-end for MySql and also for Sqlite. MySql now comes with several GUI Tools, including MySql Query Browser which is supposed to do exactly what I need - except, that it doesn't.<br />Some features simply aren't just there - what about clicking in data cell to edit its value? Too advanced to be required from a application being developed for at least two years or even more? And I was thinking that Database GUI editors were all about editing data.<br /><br />Than I tried the open source <a href="">HediSql</a>. It is truly awesome! Given the enormous competition in the field, and official GUI from MySql itself these guyes just rock!<br />It gives you everything that you need; it is fast, responsible and nice to work with. Keep up the good work! What about Sqlite support?<br /><br />It was clear that it is native application - it just worked too smooth and had too small memory footprint to be developed with .NET.<br />A wanted to look at the source and to my surprise it wasn't in C++, it was <a href="">Delphi</a>!<br />That really got me thinking - some of the best software products, that I have used - <a href="">BSPlayer</a>, <a href="">Skype</a>, <a href="">HeidiSql</a>, <a href="">TestComplete</a>- just to name a few, aren't developed in a mainstream language like C++ or C#, but in Deplhi. I even was told, that the initial version of the eBay ISAPI was also developed in Delphi.<br />But there is more - a lot of bussines applications, that really make a lot of money for their developers are also developed also in Delphi.<br /><br /.<br /><br /. <br /><br />I know that there is no market for Delphi developers and I know that Java and .NET jobs are all around the place, but Delphi and C++ Builder are better for developing user software.<br /><br />So if you are looking for career opportunities - go .NET or J2EE. But if you want to develop a software product and sell it successfully - <a href="">Deplhi and C++ Builder</a> are the way to go.<br /><br />I wish I had the courage to use this platform way myself.<img src="" height="1" width="1" alt=""/>Dev Dimitrov considered harmful ( if you don't know how to use it)Ever heard of the <a href="">memccpy </a> function?<br /><br />It is very similar to memcpy, but it will stop copying if given delimiter character is encountered. So that's a nice feature - let's just use it instead of memcpy everywhere!<br /><br /. <br /.<br /><br />This brings up the question of the data representation, that you are using and how well you understand it, and also the quality of the interface definitions of common C functions and stuff like:<br /><pre><br />if(strcmp(str1, str2)) {<br /> /* <br /> if you are expecting to get here <br /> if str1 and str2 are the same, <br /> you are in for a surprise<br /> */<br />}<br /></pre><br /><br />Otherwise remember it also very important to always check the return values, that you are getting and event more important also: test, test, test<img src="" height="1" width="1" alt=""/>Dev Dimitrov, their representation and Base64Everyone knows what a GUID is - 128 bit number, that is supposed to be global unique and it is used for identifier for different types of objects, COM objects most notably.<br /><br />Usually they are formatted as hex string like {F7F052A2-8BC7-4b84-8330-228BCA8A6E19}. A tool for creating guids can is guuidgen.exe, there are also System.Guid class and <a href="">CoCreateGuid</a> API.<br />Sometimes GUIDs have to be formatted more compactly, for instance in the <a href="">IFC Specification</a> GUIDs have to be formatted as Base64, making them string with 22 characters length.<br />Sadly the base64 encoding is non-compatible with the .NET implementation, which makes it a hard task to convert System.Guid object to required format.<br />IFC Base64 is using 0-9A-Za-z_$ characters and .NET implementation is using something like A-Za-z0-9 for encoding table. <br />There is some sample C code on the ifc wiki site, so I went for the easy solution - make a dll and call it from .NET.<br />The problems with this approach kept comming one after another - mostly dll was not always found in Web scenario, due to deployment issues. But there are also other possible hurles - 64bit migration, deployment on Mono and so on.<br />So I needed a pure managed implementation of Base64 encoding for GUIDs.<br />Some googleing brought me to sample code, that I adjusted it to the spec and here is the solution:<br /><br /><pre><br /> public class Managed {<br /><br /> public static string GetId(Guid guid) {<br /> return ToBase64String(guid.ToByteArray());<br /> }<br /><br /> public static string GetId() {<br /> return ToBase64String(Guid.NewGuid().ToByteArray());<br /> }<br /><br /> public static readonly char[] base64Chars = new char[]<br /> { '0','1','2','3','4','5','6','7','8','9',<br /> 'A','B','C','D','E','F','G','H','I','J','K','L','M',<br /> 'N','O','P','Q','R','S','T','U','V','W','X','Y','Z',<br /> 'a','b','c','d','e','f','g','h','i','j','k','l','m',<br /> 'n','o','p','q','r','s','t','u','v','w','x','y','z',<br /> '_','$' };<br /><br /> public static string ToBase64String(byte[] value) {<br /> int numBlocks;<br /> int padBytes;<br /><br /> if ((value.Length % 3) == 0) {<br /> numBlocks = value.Length / 3;<br /> padBytes = 0;<br /> } else {<br /> numBlocks = 1 + (value.Length / 3);<br /> padBytes = 3 - (value.Length % 3);<br /> }<br /> if (padBytes < 0 || padBytes > 3)<br /> throw new ApplicationException("Fatal logic error in padding code");<br /><br /> <br /> byte[] newValue = new byte[numBlocks * 3];<br /> for (int i = 0; i < value.Length; ++i)<br /> newValue[i] = value[i];<br /><br /> byte[] resultBytes = new byte[numBlocks * 4];<br /> char[] resultChars = new char[numBlocks * 4];<br /><br /> for (int i = 0; i < numBlocks; i++) {<br /> resultBytes[i * 4 + 0] =<br /> (byte)((newValue[i * 3 + 0] & 0xFC) >> 2);<br /> resultBytes[i * 4 + 1] =<br /> (byte)((newValue[i * 3 + 0] & 0x03) << 4 |<br /> (newValue[i * 3 + 1] & 0xF0) >> 4);<br /> resultBytes[i * 4 + 2] =<br /> (byte)((newValue[i * 3 + 1] & 0x0F) << 2 |<br /> (newValue[i * 3 + 2] & 0xC0) >> 6);<br /> resultBytes[i * 4 + 3] =<br /> (byte)((newValue[i * 3 + 2] & 0x3F));<br /> }<br /><br /> for (int i = 0; i < numBlocks * 4; ++i)<br /> resultChars[i] = base64Chars[resultBytes[i]];<br /><br /> string s = new string(resultChars);<br /> return s.Substring(0, 22);<br /> } <br /> }<br /></pre><br /><br />So if you have to encode something as Base64 or deal with GUIDs with one way or another - this may be helpful to you.<br />Original code by <a href="">James McCaffrey</a><img src="" height="1" width="1" alt=""/>Dev Dimitrov World!<p class="MsoNormal"><span style="" lang="EN-GB">Hello everyone!<br /><br /.<br />I hope this will be a nice trip and I will be glad if this is helpful to someone.<o:p></o:p></span></p><img src="" height="1" width="1" alt=""/>Dev Dimitrov | http://feeds.feedburner.com/SoftwareDevelopmentLog | CC-MAIN-2018-13 | refinedweb | 1,552 | 52.8 |
Hi friends here I have written the code to generate the auto generated password
open the visual studio
select the new website
After that copy and paste the following code in aspx page
After that open the aspx.cs file before copy and paste the code declare the two variables globally
And you have to give the Microsoft. Visual basic reference to that particular project.
After that press F5 you will see the auto generated password for every button click
I think this code will help you.
9 comments :
Hello Suresh,
I tried with this code in my project and i got errors on this
1)objrandom.next are you missing assembly refrence
i already add visual basic what you said i did that
hi suresh i got an error in this code Error 1 The type or namespace name 'Interaction' does not exist in the namespace 'Microsoft.VisualBasic' (are you missing an assembly reference?)
hi suresh i got an error in this code Error 1 The type or namespace name 'Interaction' does not exist in the namespace 'Microsoft.VisualBasic' (are you missing an assembly reference?)
@ All You need to add a reference as suresh already mentioned.right click on ur website>add reference>under .net column select Microsoft. Visual basic.
Thats it.
hai sir , this code working well when we open website.. bt if we open project means its displayed the parse error like
"<%@ Application Codebehind="Global.asax.cs" Inherits="Generate_password.Global" Language="C#" %>"
how to clear it?? help me
Hi everyone,not to declare globally.
string strPassword = string.Empty;
Random objRandom = new Random();
Modify the generatePassword code like these it works.
private string generatePassword(int length)
{
string strPassword = string.Empty;
Random objRandom = new Random();
int randomNumber;
int i = 1;
string strTemp;
while (i <= length)
{
randomNumber = objRandom.Next(1, 30);
strTemp = (string)Microsoft.VisualBasic.Interaction.Choose(randomNumber, "B", "!", "D", "2", "F", "3", "$", "4", "a", "5", "j", "6", "K", "7", "L", "8", "m", "9", "N", "p", "@", "X", "Y", "#", "G", "H", "%", "R", "w");
strPassword += strTemp;
i++;
}
return strPassword;
}
From Sunny Vishwakarma
sunnyvishwakarma940@gmail.com
Thanks, you help me solve the password problem.
venkatesh maddireddy: For resoving the error you faced, you need to add Microsoft.VisualBasic dll in your references.
Its Working Dude....Thank you So Much........
its working very Easily......
Awesome.......
Thanks Again.... | http://www.aspdotnet-suresh.com/2010/04/generating-password-automatically-by.html | CC-MAIN-2014-42 | refinedweb | 383 | 58.69 |
Created on 2010-11-23 23:37 by terry.reedy, last changed 2012-03-17 13:16 by python-dev. This issue is now closed.
Add list.clear() method with obvious semantics.
Pro:
1. parallel to set/dict/defaultdict/deque.clear(),
usable in generic mutable collection function;
2. makes it easier to switch between list and other collection class;
3. current alternatives are not as obvious;
4. some people seem to expect it.
Anti:
1. unneeded; del l[:] or l[:]=[] do same already.
Guido: (python-ideas list, 'Set Syntax' thread, today)
"FWIW I'm fine with adding list.clear() to 3.3."
Guido's email is archived at:
Guido approved these both in a thread earlier this year.
The reasoning for copy() was the same as for clear(), some folks couldn't cope with:
b = a[:]
Objects/listobject.c has a static function named list_clear used internally. Is it possible to just expose this function as a clear() method?
One problem is that it has this strange comment in the end:
/* Never fails; the return value can be ignored.
Note that there is no guarantee that the list is actually empty
at this point, because XDECREF may have populated it again! */
However, looking at the code I'm not sure the list can be cleared any more than the function does, and it actually deallocates the ob_item field of the list.
Hi, I'm also looking at listobject.c also... if we want list.clear() to behave exactly like del list[], we may be able to just call list_ass_slice on the list. Similarly for list.copy which should behave like a=l[:]
>
> Hi, I'm also looking at listobject.c also... if we want list.clear() to
> behave exactly like del list[], we may be able to just call list_ass_slice
> on the list. Similarly for list.copy which should behave like a=l[:]
>
Note that when executed to do 'del lst[:]' (i.e. with argument v set to 0
and ilow/ihigh to the maximal range of the list), list_ass_slice will just
call list_clear anyway, which is a cue that this indeed is the right way to
do it, despite the strange comment I mentioned in my message above.
Yes, list_clear should be called, but no, it cannot be used directly because a method needs a PyObject* return value. So a wrapper method is needed that looks like listappend() does for list.append(). list_copy() will just look like list_slice() with the index fiddling removed.
That's good if it's so... can you explain why list_clear doesn't guarantee that the list is empty? Why would XDECREF populate the list? I don't quite understand it.
eli: are you writing a patch for this?
Attaching a patch for list.clear():
1. Implements a new function in Objects/listobject.c named listclear() (to be consistent with the other "method functions")
2. listclear() is registered in list_methods and just calls list_clear(), returning None
3. A documentation string is modeled after dict.clear(), but shaped a bit differently to follow the conventions of other list method docs.
If this look fine to the more experienced devs, things left to do are:
1. Add tests
2. Implement the .copy() method in a similar manner + tests for it
Some random observations:
1. The naming of functions/methods could be made more consistent. See, for example, list_reversed vs. listreverse.
2. The documentation style of list and dict methods is different for no apparent reason:
help({}.pop) gives:
pop(...)
D.pop(k[,d]) -> v, remove specified key and return the corresponding value.
If key is not found, d is returned if given, otherwise KeyError is raised
While help([].pop) gives:
pop(...)
L.pop([index]) -> item -- remove and return item at index (default last).
Raises IndexError if list is empty or index is out of range.
Note the '--' which separates the signature from description in the list version.
Was list.copy() also approved? After all, there are many ways to achieve the same even now:
1. L[:]
2. list(L)
3. import copy and then copy.copy
Especially re the last one: list.copy() can be deep or shallow, which one should it be?
Also, where is the *official* place to document list objects and their methods?
Yes, list.copy was also approved IIRC. And it should be a shallow copy, like all other copy methods on builtins.
This is really welcome. It makes Python even more readable.
If 'a' is a list object, a[:] is not so obvious at first to a newcomer, but
a.copy() is.
Also, a.clear() is so perfect and understandable. I wish you could decorate Python versions prior to 3.3 with this two new list methods.
Attaching a patch with the following:
1. list.copy() and list.clear() implemented in Objects/listobject.c, with appropriate doc strings (modeled after dict's docstrings)
2. Same methods implemented in collections.UserList
3. Tests added that exercise the methods in both list and UserList
Re the documentation, it's currently unclear where it should go. I asked on docs@python.org.
Hi Eli,
I think the right place is 4.6.4,
It starts with “List and bytearray objects support additional operations
that allow in-place modification of the object”.
For methods not supported by bytearray, you can use the fake footnote
(8) and edit its texte (“sort() is not supported by bytearray objects”).
Regards
Following Éric's suggestion, I'm attaching an updated patch with with the documentation in Doc/library/stdtypes.rst updated with the new methods.
There seems to be a slight Sphinx markup problem with this addition. I rewrote note (8) as:
:meth:`clear`, :meth:`copy` and :meth:`sort` are not supported by
:class:`bytearray` objects.
Unfortunately, :meth:`copy` is getting linked to the copy module.
Nothing will happen on this until 3.2 is done and the py3k branch starts with 3.3 submissions.
Eli, I learned this trick recently: :meth:`!copy`.
Éric - thanks, it works [attaching updated patch]. However, don't you think the core problem is a Sphinx bug we should report?
Raymond - this happens after final 3.2 release (on Feb 05 2011 if it's on schedule), right?
I'm troubled with one little letter:
"L.copy() -> list -- a shallow copy of L"); should be
"L.copy() -> list -- shallow copy of L"); without the letter 'a',
because other sentences also don't say "L.__sizeof__() -- *A* size of
L in memory, in bytes");
Please fix this.
Can you please help me find the definition of the copy() method of dict in
the Python sources? I want to see how that method is defined and compare the
definition to the one in Eli's patch.
Objects/dictobject.c
Boštjan,
"a shallow copy": I took this directly from the documentation of dicts, which says:
"D.copy() -> a shallow copy of D")
As I mentioned in an earlier message, the doc-strings of list and dict methods are inconsistent in more than one way, so I'm going to leave this decision to the committer. I'll be happy to help with fixes too.
Re your other question, in the Python source root, dictionaries are mostly implemented in Objects/dictobject.c - there's an array called mapp_methods that lists the functions used to implement relevant methods. For copy() it lists:
{"copy", (PyCFunction)dict_copy, METH_NOARGS,
So you need dict_copy. Note that it's just a wrapper (of another wrapper, by the way) bit it's a good place to start. Arm yourself with an editor or IDE with some code-searching capabilities.
mapp_methods ? Don't you mean map_methods ?
No, he means mapp_methods. Why don't you simply look at the file?
mapp_methods looks like a typo. you know -- mapp_...? isn't map_... correct?
No, and please do not clutter this issue with any perceived typo discussions.
Why mapp_methods, why not map_methods? Any reason for this?
1) Obviously because they’re mapping methods, not map methods.
2) Again, opening up the file and looking through it for some seconds or minutes would have allowed you to understand it.
3) Again, this is not the right place to discuss this.
4) Again, please do not send HTML email to this tracker.
eli, you should also add "New in version 3.3" to the doc of the tow new list methods.
> That's good if it's so... can you explain why list_clear doesn't
> guarantee that the list is empty? Why would XDECREF populate the list?
> I don't quite understand it.
Does this mean that durning the Py_DECREF progress the list may be populated again? It's not a problem. Here is the simplest example(with applying eli's patch):
class A(object):
def __init__(self, li):
self._li = li
def __del__(self):
self._li.append(self)
li = []
li.append(A(li))
li.clear()
print(li)
Updated patch with "versionadded" tag for the new methods
The patch looks great.
Please apply it.
Please modify the patch so that it can be applied to current py3k trunk cleanly. (Notice that Lib/collections.py has been changed to a package in #11085)..
Eli doesn't need to post a new patch. I'm sure he will fix any nits in his commit.
On Thu, Feb 24, 2011 at 05:26, Éric Araujo <report@bugs.python.org> wrote:
>
> Éric Araujo <merwok@netwok.org> added the comment:
>
>.
>
I will fix and commit over the weekend.
[Éric Araujo]
> +"L.clear() -> None -- remove all items from L");
> It looks like other methods that return None
> just omit the “-> type” part.
These kind of nitty comments really aren't helpful.
It consumes more time to talk about them than they're worth.
In this case, Eli was modeling after the docstring in dictobject.c:
PyDoc_STRVAR(clear__doc__,
"D.clear() -> None. Remove all items from D.");
Just because list.remove.__doc__ failed to consistently follow that convention doesn't make Eli's patch incorrect.
A slightly revised patch committed in revision 88554:
1. Fixed Éric's whitespace comment
2. Fixed a test in test_descrtut.py which was listing list's methods
3. Moved the change to collections.py onto Lib/collections/__init__.py
4. Added NEWS entry
Éric - as I mentioned earlier in this issue, I chose to leave the syntax of the docstring for the new methods similar to the same methods in dict (dict docstring look better and more internally consistent anyhow). I propose to move further discussion of this matter into a separate issue which will deal with the overall (in)consistency in the docstrings of list and dict.
Reading "clear and copy are not supported by bytearray": shouldn't they be? ("sort" probably really makes no sense on bytearrays.)
On Fri, Feb 25, 2011 at 10:11, Georg Brandl <report@bugs.python.org> wrote:
>
> Georg Brandl <georg@python.org> added the comment:
>
> Reading "clear and copy are not supported by bytearray": shouldn't they be?
Perhaps they should, and it's not a big deal to implement. But I'm not
100% clear on the policy of how such changes are approved. Should this
be discussed in the list?
Unless someone raises a controversial and non-trivial issue about adding clear() and copy() to bytearray, there is no need for a python-dev discussion on the subject. Just post a patch to the tracker.
Yes, it should be discussed on python-dev.
In any case, this issue can be closed.
I have installed Python 3.2 final on my Windows machine and I get an
exception when doing list.copy or list.clear in the interpreter. Why is that
so?
Because they got added *after* 3.2 was released?
Right, right. My bad. Can't wait for Python 3.3! ;)
Georg, what is the issue? Is there some reason that bytearrays should not be copied or cleared? Is there some reason to prefer the current:
dup = b[:] # copy
del b[:] # clear
No -- but the question is if copy() and clear() mightn't be added to the (mutable) sequence ABC if we make all builtin such sequences implement them.
The ABCs are subset of the methods for the concrete APIs. We've avoided the likes of copy() because it requires knowledge of the constructor's signature -- for example, MutableMapping does not cover copy().
It is okay for Eli to add MutableSequence.clear() because it can be implemented in terms of pop(), much like we do for MutableMapping.clear().
Eli, feel free to create a patch to add clear() and copy() to bytearray and to add clear() to MutableSequence. Assign the patch to me for review.
Attaching a patch adding copy() and clear() to bytearrays, with tests and doc.
I didn't add the methods to MutableSequence because I have a doubt about it - in particular which exception get raised by .pop if it's empty. Curiously, lists and bytearrays raise different exceptions in this case - IndexError and OverflowError, respectively.
The patch is fine. Do consider using assertIsNot() in the tests. Then go ahead and apply it.
The OverflowError in bytearray.pop() is a bug, please open a separate report for it and make a patch changing it to IndexError. Assign that bug report to me.
Go ahead and propose a patch for MutableSequence.clear() implemented with MutableSequence.pop() and catching an IndexError when empty.
Thanks for your efforts.
Hmm, shouldn't self.__class__(self) be a good default implementation of copy()?
I'd expect any sequence to support this way of creation from another sequence, even if it's inefficient.
> Hmm, shouldn't self.__class__(self) be a
> good default implementation of copy()?
>
> I'd expect any sequence to support this way
> of creation from another sequence, even if it's inefficient.
The copy() method isn't being proposed for MutableSequence because it presumes that we know something about the constructor's signature. For example, the constructor of array() needs the element storage type as an argument. We refuse the temptation to guess.
In the Set api, we had no choice because many set-methods necessarily create a new set. To handle the constructor signature problem, the creation step was factored-out into the from_iterable() method so that a user could override it if necessary.
Also copy() is handled oddly in the builtin types. To handle the constructor signature issue for subclasses, they ignore the subclass and return a instance of the base class. For example, the inherited copy() method on a subclass of list or dict will create an instance of list or dict, not of the subclass itself. Accordingly, subclasses that want instances of themselves have to override the inherited copy() method. They would have to do this anyway if they subclass contained any other data in the class dictionary that would need to be passed along to a copy.
In short, we're better-off not supplying copy() as part of the MutableSequence ABC.
Committed the bytearray methods in revision 88733.
Closing this issue. Will handle wrong exception and MutableSequence ABC method addition in separate issues.
New changeset 958a98bf924e by Eli Bendersky in branch 'default':
updated whatsnew/3.3.rst with the new methods added to list and bytearray (issue 10516) | http://bugs.python.org/issue10516 | CC-MAIN-2015-11 | refinedweb | 2,528 | 69.28 |
Prerender.
io
1.0.0.2
.NET Framework 4.0
Install-Package Prerender.io -Version 1.0.0.2
dotnet add package Prerender.io --version 1.0.0.2
<PackageReference Include="Prerender.io" Version="1.0.0.2" />
For projects that support PackageReference, copy this XML node into the project file to reference the package.
paket add Prerender.io --version 1.0.0.2
The NuGet Team does not provide support for this client. Please contact its maintainers for support.
#r "nuget: Prerender.io, 1.0.0.2"
#r directive can be used in F# Interactive, C# scripting and .NET Interactive. Copy this into the interactive tool or source code of the script to reference the package.
// Install Prerender.io as a Cake Addin #addin nuget:?package=Prerender.io&version=1.0.0.2 // Install Prerender.io as a Cake Tool #tool nuget:?package=Prerender.io&version=1.0.0.2
The NuGet Team does not provide support for this client. Please contact its maintainers for support.
Use this filter that prerenders a javascript-rendered page using an external service and returns the HTML to the search engine crawler for SEO. namespace. | https://www.nuget.org/packages/Prerender.io | CC-MAIN-2022-21 | refinedweb | 192 | 55.61 |
This project came about when I encountered a client that needed a fiber scheduler. The problem arose because of a need to convert non-preemptive “threads” from an existing programming domain to Windows. [This was part of my teaching, not part of a contract, and I presented this solution in class; therefore, I was not actually paid to write it, and it remains mine. Just so you know that this wasn’t proprietary work.]
The classes I designed were intended to illustrate a round-robin fiber scheduler. You can feel free to adapt these classes to any application domain you want, using any discipline you want. For example, you could create a priority-based scheduler if you needed one (more on this later).
This uses a subset of the capabilities of fibers to implement a solution. Fibers are a good deal more general than used in this solution, and I'll talk about some of these issues later.
A fiber is a non-preemptive multiplexing mechanism. In fact, fibers are more like an implementation of coroutines, a technology dating back to the 1960s. A fiber consists of a register set and a stack, so is much like a thread, but it is not a thread. A fiber has no identity unless it runs inside a thread. When a fiber is suspended, it transfers control to another fiber by explicitly selecting and naming (by supplying the address of) the target fiber. The operation that performs this is called SwitchToFiber. The fiber will continue to run in that thread until another SwitchToFiber is executed.
SwitchToFiber
A fiber, unlike a thread, never exits. If a fiber actually exits, by returning from its top-level fiber function, the thread that is executing it will implicitly call ExitThread and terminate. Note that this call on ExitThread means the thread does not itself exit via its top-level function, and consequently cleanup that must be done there will not be performed. Consequently, you should consider the actual exit of a thread to be a serious programming error.
ExitThread
The MSDN lists a set of fiber-related functions, but the documentation is incomplete, confusing, sometimes misleading, and in one case, I believe overtly wrong. This essay is going to attempt to clarify these issues, although I don't have enough information to resolve some of the ambiguities or fill in the holes (although I have passed this off to my MVP contact in the hopes he can get some answers for me).
The fiber functions are listed under the topic "Process and Thread functions [base]". Apparently, "fibers" is not considered worthy of mention in the title.
ConvertFiberToThread
ConvertThreadToFiber(Ex)
ConvertThreadToFiber
ConvertThreadToFiberEx
CreateFiber
PVOID
CreateFiberEx
DeleteFiber
CreateFiber(Ex)
FiberProc
typedef
typedef void (CALLBACK * LPFIBER_START_ROUTINE)(PVOID parameter)
FlsAlloc
FlsFree
FlsCallback
FlsGetValue
FlsSetValue
GetCurrentFiber
There are several steps involved in creating a running fiber environment.
First, some fibers have to be created. When a thread is created, it is implicitly scheduled to run. When a fiber is created, there is no mechanism that makes it spontaneously able to run. Instead, the fiber remains "suspended" until there is an explicit SwitchToFiber to transfer control from a currently-running fiber to one of the other fibers.
Since the threads are what are actually running, only a running thread can do a SwitchToFiber. But for this to work correctly, it must be from an existing fiber, to an existing fiber. A thread is not a fiber, so although it could create a bunch of fibers, it can't cause them to run.
This is solved by using either ConvertThreadToFiber or ConvertThreadToFiberEx calls. These calls create a "fiber work area" in the current thread, thus imbuing the thread with fiberness, or to say it in another way, "the thread has the fiber nature". Now, the underlying thread scheduler will schedule threads, including the thread which now has the fiber nature. Essentially, upon return from the ConvertThreadToFiber(Ex) call, you find yourself running in a fiber, which is running inside the original thread. This fiber is now free to do a SwitchToFiber to any other fiber.
At some point, when a fiber is no longer in use, you must call DeleteFiber on the fiber object. There are some restrictions on how this can be done, which will be discussed later. But ultimately, every fiber you create must be deleted.
When the ConvertThreadToFiber(Ex) calls are performed, they, too, return a fiber object reference. When you call SwitchToFiber to this fiber, you are "back in the thread", and you then call ConvertFiberToThread to release the fiber-specific resources that had been allocated to the thread.
That's basically it. It isn't everything, and what I've described here represents a subset of the full capabilities of the fiber universe, but I'll elaborate on these a bit later.
Speed. Simplicity. Elimination of the need for synchronization.
Fibers are lightweight. The operation to switch from one fiber to another is 32 instructions, involving no kernel call at all. On a 2GHz machine, this means that you are looking at a swap as fast as 8ns (worst case, if none of the data is in the cache, could be several tens of nanoseconds).
In case you're wondering where those numbers come from, 32 instructions would take, on a 2GHz machine, 16ns, except that the Pentium 4 is a "pipelined superscalar" architecture and can issue two integer instructions per clock cycle, coming up with the 8ns value. Instruction pipelining and pre-fetching tend to mask instruction fetch times. However, in the worst case when nothing is in the cache, the speed depends upon the pattern of cache hits and replacements, and therefore, is limited by the memory cycle speed, which is platform-specific.
This means that if you have a lot of little things to do, fibers may be a better choice than threads, because threading involves kernel calls and invocation of the scheduler. Fibers execute within the time slice of the thread in which they are running, so swapping among many short-computation fibers does not involve any potential blocking in the kernel. The elimination of the need for synchronization in many cases also eliminates the need for the scheduler to come into play.
The simplicity from fibers resembles the simplicity of threading in many ways: instead of having to write complex interlaced code that tries to do several things at once, you write simple code that does only one thing, and use a lot of different fibers to do the different things. This allows you to concentrate on doing one thing well, rather than lots of things in a somewhat complex, and potentially fragile, way.
In a preemptive multithreading environment, if two threads can access the same data, the data must be protected by some form of synchronization. This could be as simple as using one of the Interlocked... operations that lock at the memory-bus level, or it may require the use of a CRITICAL_SECTION or Mutex for synchronization. The downside of the latter two is that if access cannot be granted, the kernel is called to de-schedule the thread, which is queued up on the synchronization object for later execution when the synchronization object is released. The CRITICAL_SECTION is distinguished by the fact that if the synchronization object can be acquired (the most common situation in most cases), no kernel call is required, whereas the Mutex always requires a kernel call.
Interlocked...
CRITICAL_SECTION
Synchronization between threads is where threads rub together. Synchronization is friction. As in mechanical systems, friction generates heat and wastes energy. Perhaps the best solution is to avoid it.
Fibers are a way to avoid this. ::SwitchToFiber is a "positive handoff". Until a fiber switches control, it cannot be preempted by any other fiber that would be running in that thread. Thus, once a fiber starts running, provided the state it is modifying is shared only with other fibers that would execute within that thread, there is no need to synchronize the state at all. The synchronization is implicit in the fiber scheduling, which is explicitly under the control of the programmer.
::SwitchToFiber
This is not a perfect solution. A fiber can be scheduled to run in multiple threads; if the shared state could now be accessed by fibers running in separate threads, the fiberness is irrelevant; you have a multithread synchronization problem, and you must use a CRITICAL_SECTION or Mutex.
When you have fibers, you do not have concurrency. If a fiber blocks, such as on an I/O call, it is actually the thread running the fiber that blocks. No additional fibers can be scheduled to run in that thread, because the thread itself is blocked. You do not get any concurrency between fibers running in the same thread (although you can get concurrency between fibers running in other threads). But, if you don't have blocking calls, fibers are particularly useful for multiplexing simple compute-bound tasks.
The first class is a wrapper around the basic Fiber object:
class CFiber {
public: // constructors
CFiber(LPFIBER_START_ROUTINE rtn, SIZE_T stack = 0)
{ fiber = ::CreateFiber(stack, rtn, this); }
CFiber() { fiber = NULL; }
virtual ~CFiber() { ::DeleteFiber(fiber); }
public: // methods
BOOL Create(LPFIBER_START_ROUTINE rtn, SIZE_T stack = 0)
{
ASSERT(fiber == NULL);
fiber = ::CreateFiber(stack, rtn, this);
return fiber != NULL;
}
BOOL ConvertThreadToFiber() {
ASSERT(fiber == NULL);
fiber = ::ConvertThreadToFiber(this);
return fiber != NULL;
}
void Attach(LPVOID p) { ASSERT(fiber == NULL); fiber = p; }
LPVOID Detach() { LPVOID result = fiber;
fiber = NULL; return result;
}
LPVOID GetFiber() { return fiber; }
public: // methods
void run() { ASSERT(fiber != NULL); ::SwitchToFiber(fiber); }
protected: // data
LPVOID fiber;
};
CFiber
~CFiber
Create
Creates a new fiber in an existing CFiber object
Used to convert a thread to a fiber for purposes of fiber scheduling
Attach
Attaches an existing fiber to a CFiber object
Detach
GetFiber
run
There are several constructors. The key here is the assumption that the “fiber parameter” seen by the fiber will actually be the CFiber object, or a subclass derived from it.
CFiber::CFiber(LPFIBER_START_ROUTINE rtn, SIZE_T stack = 0);
This constructor will create a fiber that will execute the specified routine rtn. It has an optional stack size parameter which defaults to the process stack size.
rtn
The fiber routine. void CALLBACK rtn(LPVOID p)
void CALLBACK rtn(LPVOID p)
stack
The desired stack size; if omitted, 0 (use default stack size) is assumed.
See also the implementation of CFiber::CFiber.
CFiber::CFiber
CFiber::CFiber();
This constructor creates a CFiber object which is not connected to any fiber. This is typically used when the user wishes to have the fiber parameter be other than the CFiber object.
CFiber::~CFiber()
The destructor deletes the underlying fiber object.
Notes: The delete operation must be done carefully on a fiber because deleting the currently executing fiber will terminate the currently executing thread that the fiber is running in. A fiber must not delete itself.
delete
BOOL CFiber::Create(LPFIBER_START_ROUTINE rtn, SIZE_T stack = 0)
Given a CFiber object which is not associated with a fiber, this will create a new fiber and attach it to the CFiber object.
The following are equivalent:
(1)
CFiber * fiber = new CFiber(function);
(2)
CFiber * fiber = new CFiber;
LPVOID f = ::CreateFiber(0, function, fiber);
Fiber->Attach();
BOOL CFiber::ConvertThreadToFiber()
Converts the current thread to a schedulable (target of ::SwitchToFiber) fiber. The current CFiber object becomes the fiber parameter.
Result: TRUE if the underlying ::ConvertThreadToFiber call succeeded; FALSE if it failed (use GetLastError to determine what went wrong).
TRUE
::ConvertThreadToFiber
FALSE
GetLastError
Notes: After finishing with the fibers and having deleted them all, ::ConvertFiberToThread should be called to release the fiber-specific data structures which have been allocated to the thread.
::ConvertFiberToThread
void CFiber::Attach(LPVOID p)
This attaches a fiber to a CFiber object. The existing CFiber object must not already have a fiber attached.
p
A fiber reference
Notes: Unlike MFC, this does not maintain a map to see if a given fiber is bound to more than one CFiber object. It would be erroneous to have a program do so, but no additional checks are made.
When created via the constructors, the fiber parameter (returned from ::GetFiberData) is the CFiber object. If the application would like to have a different object associated as the fiber parameter, then the fiber can be created and attached.
::GetFiberData
LPVOID CFiber::Detach()
This breaks the association between the CFiber and the underlying fiber.
Result: The fiber that was formerly associated with the CFiber object.
Notes: The fiber retains the fiber parameter with which it was created; consequently, if a fiber is Detached from one CFiber object and Attached to another, and it has a fiber parameter which is the original CFiber object, this will now be an erroneous reference. Attach and Detach should not be used when the expected behavior is that the CFiber object is the fiber parameter.
LPVOID CFiber::GetFiber();
Result: The fiber associated with the CFiber object.
void CFiber::run();
Switches control to the fiber using ::SwitchToFiber.
The goal here is to have a “time-shared” set of fibers, where the fibers would simply yield control when appropriate and some other fiber would run. In the normal mode of operation, a fiber yields control by calling ::SwitchToFiber, specifying the next fiber to run. However, this requires actually knowing what the next fiber should be. In some applications, this is part of the specification. In the system I was writing, all I knew was that I wanted to yield control to some other fiber. The choice was to do a round-robin fiber scheduler, which implies a queue. This required a queue-entry class to represent the fibers in the queue. I derived this from the CFiber class.
class QE : public CFiber {
public: // constructor/destructor
QE(LPFIBER_START_ROUTINE rtn, SIZE_T stack = 0)
: CFiber(rtn, stack) { next = NULL; }
QE() { next = NULL; }
virtual ~QE() { }
public:
virtual void Display(LPCTSTR s) { }
public: // internal state
QE * next;
};
typedef QE * PQE;
QE
Constructors
~QE
Destructor
Display
Virtual method for debugging output
The link used to insert elements in the queue
QE::QE(LPFIBER_START_ROUTINE rtn, SIZE_T stack = 0);
This constructor will create a queue entry and a new fiber associated with that queue entry that will execute the specified routine rtn. It has an optional stack size parameter which defaults to the process stack size.
QE::QE();
This constructor will create a queue entry which is not associated with any fiber. The CFiber::Attach call can be used to attach a fiber. See the caveats about the fiber data.
CFiber::Attach
Notes: This code will not function properly if the ::GetFiberData call returns a value other than the QE reference.
virtual QE::~QE();
The destructor will delete the QE object and its associated fiber. See the caveats about deleting the running fiber. The delete operator should never be called on the QE representing the running fiber.
virtual void QE::Display(LPCTSTR s);
The intent of this is that subclasses of QE would implement this virtual method for purposes of debugging. This class does nothing in the implementation of this virtual method. While in principle it could have been a pure virtual method, there may be reasons to create raw QE objects, and this would not be possible if the method were a pure virtual method.
QE * QE::next;
This provides the queue structure managed by the Queue class.
Queue
Notes: C++ has a mechanism called friend which allows another class access to private variables. However, this is poorly-designed, because it means that the QE class has to know the name of the class that will use it. This reduces the generality of the classes; so rather than mark this as protected and require a known class name, I chose to make it public and thus allow other implementations of queues.
friend
protected
public
I wanted to implement a FIFO queue to do round-robin scheduling. I would normally use the MFC CList class, but this code had to work in a non-MFC environment.
CList
class Queue {
public:
Queue() { queueHead = queueTail = emptyFiber = killFiber = NULL; }
public:
void appendToQueue(PQE qe);
PQE removeFromQueue();
public:
void next();
void yield();
void kill();
public:
void SetEmptyFiber(PQE qe) { emptyFiber = qe; }
protected:
PQE queueHead;
PQE queueTail;
PQE emptyFiber;
protected: // used for fiber-kill logic
PQE killFiber;
PQE killTarget;
static void CALLBACK killer(LPVOID p);
};
This is intended to support the queuing of fibers. The yield method puts the current queue entry at the tail of the queue. The next method dispatches the fiber at the head of the queue. The kill method does a delete on the currently running element, but it does so very carefully by using a separate fiber for this purpose. It then initiates the next fiber in the queue.
yield
kill
This code implements simple FIFO queuing. Other classes can be created that implement more sophisticated kinds of queuing.
Queue management methods
Constructor
appendToQueue
Adds an element to the tail of the queue
removeFromQueue
Removes an element from the head of the queue
Scheduling methods
Dequeues the head of the queue and switches to that fiber; if the queue is empty, switches to the empty queue fiber
Schedules the current fiber to the end of the queue and runs the fiber at the head of the queue
Deletes the current fiber and schedules the next fiber
SetEmptyFiber
Specifies the fiber to be switched to if the queue is empty
Queue::Queue();
Constructs an empty queue.
void Queue::appendToQueue(PQE qe);
Places the QE pointed to by its parameter at the end of the queue.
PQE Queue::removeFromQueue();
Removes the queue element from the front of the queue.
Result: A pointer to the QE entry which has been dequeued, or NULL if the queue is empty.
NULL
void Queue::next();
Dequeues the queue element at the head of the queue and switches control to the fiber it represents. If there is no element in the queue, it will switch to the fiber represented by the “empty queue” element (see SetEmptyFiber).
void Queue::yield();
When a fiber wishes to yield control, it calls this method, and control is transferred to the next fiber in the queue. The current fiber is placed at the end of the queue. Execution will resume on the line following the yield call when the fiber is re-scheduled.
void Queue::kill();
When a fiber is done computing, it calls this method. Control will be transferred to the next fiber in the queue. The current fiber is not placed at the end of the queue, and therefore will no longer be scheduled.
Notes: To “suspend” a fiber so it can be “restarted”, a more elaborate mechanism will need to be constructed.
void Queue::SetEmptyFiber(PQE qe);
qe
A QE reference to the fiber to be scheduled if the queue is empty.
The queue is first populated by using appendToQueue to add a QE object to the queue. Once the fibers start running, they will normally execute in round-robin FIFO order. The current mechanism does not have a way to “suspend” a fiber; that generalization is left as an Exercise For The Reader.
To start the queue, the main thread must create a QE which represents the CFiber::ConvertThreadToFiber call. This is typically set as the default fiber to resume when the queue is empty, so the SetEmptyFiber method is used to establish this.
CFiber::ConvertThreadToFiber
The following sections illustrate the code in this example.
All of the methods of the CFiber class are defined in the header file. The CFiber class wraps the primitive fiber representation, which is an LPVOID:
LPVOID
protected: // data
LPVOID fiber;
CFiber(LPFIBER_START_ROUTINE rtn, SIZE_T stack = 0)
{
fiber = ::CreateFiber(stack, rtn, this);
}
Note that this constructor creates the fiber using ::CreateFiber, passing this as the fiber parameter.
::CreateFiber
this
CFiber() { fiber = NULL; }
This creates a CFiber object which is not bound to a particular fiber; the Attach method can be used later to bind a fiber to this CFiber object. Note that this has both power and risk: it allows the fiber parameter to be other than the CFiber object, but some of the later classes are not designed to work under these conditions.
virtual ~CFiber() { ::DeleteFiber(fiber); }
This merely deletes the fiber. However, this method cannot be executed by the running fiber on itself, or the thread which is running the fiber will be terminated.
BOOL Create(LPFIBER_START_ROUTINE rtn, SIZE_T stack = 0)
{
ASSERT(fiber == NULL);
if(fiber != NULL)
return FALSE;
fiber = ::CreateFiber(stack, rtn, this);
return fiber != NULL;
}
In debug mode, this will handle the ASSERT test to make sure this CFiber is not already bound to a fiber. In release mode, the ASSERT disappears, but the method will return FALSE if the fiber is already bound.
ASSERT
BOOL ConvertThreadToFiber() {
ASSERT(fiber == NULL);
if(fiber != NULL)
return FALSE;
fiber = ::ConvertThreadToFiber(this);
return fiber != NULL;
}
The thread which is going to start scheduling fibers needs to do a ConvertThreadToFiber. This invokes the API call to map the current thread to a fiber, passing the current CFiber object as the fiber parameter.
void Attach(LPVOID p) { ASSERT(fiber == NULL); fiber = p; }
This should only be called for CFiber objects which are not bound to fibers. It is nominally erroneous to attempt to bind a CFiber to another fiber without doing a Detach first.
LPVOID Detach() { ASSERT(fiber != NULL);
LPVOID result = fiber;
fiber = NULL;
return result;
}
This breaks the association between the CFiber and its underlying fiber, and returns the fiber reference. If the fiber has not been bound, the value returned is NULL. Nominally, it is erroneous to Detach if there is no association, and the ASSERT will cause a failure in debug mode if this situation arises. Note that there is no mechanism for changing the fiber parameter on an Attach of a formerly Detached fiber, and other subclasses described here will not function properly if the fiber parameter is not a reference to the CFiber object.
void run() { ASSERT(fiber != NULL);
::SwitchToFiber(fiber);
}
This switches the currently executing fiber to the fiber on which the run method is specified.
This class is derived from the CFiber class, and consequently inherits the constructor and other methods. The QE constructor is:
QE(LPFIBER_START_ROUTINE rtn, SIZE_T stack = 0)
: CFiber(rtn, stack) { next = NULL; }
so it is just the construction of a CFiber, with the addition of initializing the link used to maintain the queue.
The Display method is used to produce debugging output, but it is the responsibility of the subclass to implement the actual output.
The Queue class is essentially a singly-linked queue manager combined with the fiber-scheduling logic. The singly-linked queue manager is fairly straightforward. What I’ll show here are the fiber-scheduling functions.
Note that to simplify the coding, I chose to assume that the fiber parameter is the actual QE object (or an instance of a subclass of QE). This means that this code would not operate correctly if Attach/Detach had been used in a way that made the fiber parameter inconsistent, since there is no way to reset the fiber parameter (while there is a ::GetFiberData API, there is no corresponding ::SetFiberData!).
Attach/Detach
::SetFiberData
All this method does is queue the currently-running fiber onto the tail of the queue and activate the next fiber in the queue (using the next method).
void Queue::yield()
{
PQE qe = (PQE) (::GetFiberData());
qe->Display( _T("yield: returning to queue - ") );
appendToQueue(qe);
} // Queue::yield
This is called whenever a fiber is terminated, and must be deleted.
void Queue::kill ()
{#ifdef _DEBUG
PQE qe = (PQE) (::GetFiberData());
qe->Display(_T("kill"));
#endif
killTarget = qe;
if(killFiber == NULL)
{ /* create killer fiber */
killFiber = new QE();
LPVOID f = ::CreateFiber(0, killer, this);
killFiber->Attach(f);
} /* create killer fiber */
killFiber->run(); } // Queue::kill
The problem here is that a fiber cannot delete itself. If ::DeleteFiber is called on the currently running fiber, the entire thread in which the fiber is running will exit because this will call ExitThread. In order to kill the fiber, it switches control to a different fiber, the killFiber. This fiber is able to do a delete operation on the QE object (which, remember, is a subclass of CFiber, and may itself be an instance of some further subclass of QE), which is implemented by a call on ::DeleteFiber, because at that point, the fiber being deleted is not the running fiber.
::DeleteFiber
killFiber
Note that this appears to be inconsistent with the documentation of DeleteFiber, which states:
If the currently running fiber calls DeleteFiber, its thread calls ExitThread and terminates. However, if a currently running fiber is deleted by another fiber, the thread running the deleted fiber is likely to terminate abnormally because the fiber stack has been freed.
If the currently running fiber calls DeleteFiber, its thread calls ExitThread and terminates. However, if a currently running fiber is deleted by another fiber, the thread running the deleted fiber is likely to terminate abnormally because the fiber stack has been freed.
This appears to make no sense at all. Due to the nature of fibers, a “running fiber” cannot be deleted by another fiber (in the same thread), since at the time it is being deleted it is, by definition, not the “running” fiber! However, I suspect that if the paragraph is replaced by one which says (with my suggested changes italicised).
I am currently researching this with Microsoft. The current documentation paragraph, as written, would make it impossible to ever call DeleteFiber!
The question then arises: since fibers appear in many ways to be coroutines, how is the parameter passed to the killer? In this case, a member variable killTarget was added to the Queue class. Because we are working with non-preemptive fibers, and we are assuming that there is no concurrent access to the Queue object from another thread (this is a basic assumption in this code), there is no need to worry about any form of synchronization; furthermore, no fiber other than the one being switched to can get control, so there is no problem in the use of such a variable (this would be a fatal design error in a preemptive multithreaded system, but a perfectly valid approach in a non-preemptive fiber system).
killTarget
To avoid any need for explicit initialization or finalization, I create the kill fiber only if it does not already exist, and will show later where it is deleted. Why do it this way instead of in the constructor and destructor?
Mostly, it was an issue of lifetime. I felt that the fiber should not exist any longer than it needs to. There also seems to be some serious problems with determining exactly when it would be possible to delete the fiber, particularly since the destructor for the queue might run after the program has done a ::ConvertFiberToThread call to release the fiber resources, so the effects of doing a ::DeleteFiber under these conditions appear to be undefined.
This does lead to the question of when the kill fiber can be deleted. I chose to delete it when the queue goes empty. Note that this causes a reversion of scheduling to the “main fiber”, actually the empty-queue fiber. Since the queue is empty, there will be no more need to be able to kill fibers, and the killFiber fiber can be safely deleted. Note that the program may, as a consequence of having done all the processing, choose to create more queue entries; in this case, the killFiber is automatically re-created, “on demand”.
The killer fiber is quite simple; the key here is that since it is a separate fiber, it can now safely ::DeleteFiber the fiber which has now completed its operation.
void CALLBACK Queue::killer(LPVOID)
{
delete killTarget;
} // Queue::killer
The next method dequeues the next element from the queue and schedules it to run by calling ::SwitchToFiber on its fiber. If there are no more fibers queued up in the queue, it reverts to the fiber established by the SetEmptyFiber. Note that a failure to call SetEmptyFiber will be considered a fatal error.
void Queue::next()
{
PQE qe = (PQE)removeFromQueue();
if(qe != NULL)
{ /* got one */
qe->Display(_T("Dequeued QE"));
qe->run();
} /* got one */
else
{ /* all done */
delete killFiber;
killFiber = NULL;
TRACE( (_T("Queue empty: switch to EmptyFiber\n")) );
ASSERT(emptyFiber != NULL);
// Note that a failure to have set the empty fiber is a fatal
// error and is unrecoverable!
emptyFiber->run();
} /* all done */
} // Queue::next
Why did I use a QE structure for the killFiber and the emptyFiber? Since these are not actually scheduled, why not use a raw CFiber object for these instead?
emptyFiber
In the case of the emptyFiber, it is possible that someone might choose, instead of the approach I took, to create a priority-scheduled queue, and make the empty fiber be simply the lowest-priority fiber. I felt it would be easier to see how to do this if the emptyFiber appeared to be a schedulable fiber. Since the caller has to create the fiber, it also meant the underlying implementation could be changed without changing the user interface.
The choice for the killFiber is harder to justify; in this case, it became more an issue of consistency rather than one of functionality, particularly because this never escapes to the user interface level. However, doing this way allows me to decide to do “lazy killing”, in that I could consider scheduling the kill fiber as simply one more task to do, and could either insert it at the head of the queue or later in the queue. However, note that if I were to insert it at other than the first queue item, then the use of the global killTarget would no longer be viable. This is because the killTarget is explicitly set for each fiber to be killed, and if I could have more than one pending kill request, this is insufficient (in this case, I would probably create the fibers on demand, and use the fiber parameter which would have to be a QE object; I would have to create a class Killer : public QE to maintain the context I need).
class Killer : public QE
The goal of the sample program is to take a set of file names on the command line, and print out the files, one file per fiber. To make it appear as a concurrent program, the program is defined to print out only a few lines at a time, then the fiber is descheduled and another fiber runs. When end-of-file is reached, the fiber is no longer scheduled. Upon completion of the program, all the fibers are deleted.
To solve this, a class is derived from the QE class:
class CReaderFiber : public QE {
public: // constructors
CReaderFiber(LPCTSTR f, int c, Queue * q) : QE(reader) {
name = f;
count = c;
queue = q;
file = NULL;
}
virtual ~CReaderFiber() { if(file != NULL) fclose(file); }
public: // parameters
LPCTSTR name; // name of file
int count; // number of lines to write
Queue * queue; // the queue shared by all these fibers
public:
virtual void Display(LPCTSTR s);
public: // local state
FILE * file; // currently-opened file object
char buffer[MAX_LINE]; // local buffer
protected:
static void CALLBACK reader(LPVOID p);
};
This derived class holds all the problem-specific information, such as the number of lines to write during each execution of the fiber, the name of the file, the buffer holding the input, and so on. The fiber function itself is a static class member.
/* static */ void CALLBACK CReaderFiber::reader(LPVOID p)
{
CReaderFiber * rf = (CReaderFiber *)p;
TRACE( (_T("reader: called for %s\n"), rf->name) );
rf->file = _tfopen(rf->name, _T("ra"));
if(rf->file == NULL)
{ /* failed */
DWORD err = ::GetLastError();
reportError(err, rf->name);
TRACE( (_T("reader: could not open %s\n"), rf->name) );
rf->queue->kill();
return;
} /* failed */
Once the file is opened, we simply go into an infinite loop in the fiber, reading and printing lines. Note that this code assumes that a Unicode version of the program will be reading Unicode files and an ANSI version of the program will be reading ANSI (8-bit native code page) files. The generalization where either version can read or write either type of file is left as an Exercise For The Reader. Note that within the fiber loop, after printing the count number of lines, the fiber yields. Therefore another fiber will run.
count
while(TRUE)
{ /* fiber loop */
for(int i = 0; i < rf->count; i++)
{ /* read lines */
if(_fgetts(rf->buffer, MAX_LINE, rf->file) == NULL)
{ /* all done */
TRACE( (_T("reader: done with %s\n"), rf->name) );
rf->queue->killFiber();
ASSERT(FALSE);
} /* all done */
_tprintf(_T("%s"), rf->buffer);
} /* read lines */
TRACE( (_T("reader: yielding for %s after %d lines\n"),
rf->name, rf->count) );
rf->queue->yield();
} /* fiber loop */
} // reader
The command line has the syntax:
argv[0]
argv[1]
argv[2..argc-1]
int _tmain(int argc, TCHAR * argv[])
{
Queue queue;
if(argc < 3)
{ /* usage */
usage();
return 1;
} /* usage */
int lines = _ttoi(argv[1]);
if(lines <= 0)
{ /* failed */
_tprintf(_T("Illegal buffer count (%d)\n"), lines);
usage();
return 1;
} /* failed */
Once the boring details of the validation of the arguments is finished, the real work begins. First, an array is created to hold all the CFiber-derived objects. To simplify the code and eliminate the need to worry about all the offsets, I just created an array the same size as the argc array and ignore the first two entries in it.
argc
CReaderFiber ** fibers = new CReaderFiber*[argc];
In order to handle the “main fiber”, which will be this thread converted to a fiber, I will need a QE object to allow it to be scheduled. The ConvertThreadToFiber method will accomplish this.
PQE mainFiber = new QE();
if(!mainFiber->ConvertThreadToFiber())
{ /* failed conversion */
DWORD err = ::GetLastError();
reportError(err, _T("Converting thread to fiber"));
return 1;
} /* failed conversion */
Next, I scan the list of file names. For each file in the command line, I create a new CReaderFiber object and stick it in the array of fibers. As each instance of a fiber is created, I place it in the queue.
CReaderFiber
for(int i = 2; i < argc; i++)
{ /* create fibers */
fibers[i] = new CReaderFiber(argv[i], lines, &queue);
if(fibers[i] == NULL)
{ /* failed */
DWORD err = ::GetLastError();
reportError(err, _T("Creating fiber"));
return 1;
} /* failed */
queue.appendToQueue(fibers[i]);
} /* create fibers */
When all the threads have finished, control will return to this thread, the “main fiber”. To accomplish this, we have to set the mainFiber (the result of ConvertThreadToFiber) to send control back here when the queue is empty (all the contents of all the files have been printed).
mainFiber
queue.SetEmptyFiber(mainFiber);
At this point, we start scheduling the fibers. The fibers will now run until all the contents of the files have been printed.
queue.next();
When all the contents have been printed, the ::SwitchToFiber call will switch execution to this fiber, and the code below will execute.
TRACE( (_T("Finished\n")) );
delete mainFiber;
::ConvertFiberToThread();
return 0;
}
The command line arguments supplied were:
5 a.txt b.txt c.txt
where the contents of the files were of the form “File <filename> <line number>”. The output is shown below; the grouping-of-5 is based on the first parameter to the command line. Note the changes shown: file a.txt has 200 lines, b.txt has 250 lines, and c.txt has 300 lines. These were created by a little program called "datagen", by the following command lines:
datagen 200 "File A" > a.txt
datagen 250 "File B" > b.txt
datagen 300 "File C" > c.txt
The output:
File A 1
File A 2
File A 3
File A 4
File A 5
File B 1
File B 2
File B 3
File B 4
File B 5
File C 1
File C 2
File C 3
File C 4
File C 5
File A 6
File A 7
File A 8
File A 9
File A 10
File B 6
File B 7
File B 8
File B 9
File B 10
File C 6
File C 7
File C 8
File C 9
File C 10
File A 11
File A 12
File A 13
File A 14
File A 15
…
File C 191
File C 192
File C 193
File C 194
File C 195
File A 196
File A 197
File A 198
File A 199
File A 200
File B 196
File B 197
File B 198
File B 199
File B 200
File C 196
File C 197
File C 198
File C 199
File C 200
File B 201 <- Note that file A, with 200 lines is no longer in the mix after this point
File B 202
File B 203
File B 204
File B 205
File C 201
File C 202
File C 203
File C 204
File C 205
File B 206
File B 207
File B 208
File B 209
File B 210
File C 206
File C 207
File C 208
File C 209
File C 210
File B 211
File B 212
File B 213
File B 214
File B 215
…
File B 246
File B 247
File B 248
File B 249
File B 250
File C 246
File C 247
File C 248
File C 249
File C 250
File C 251 <- Note that file B, with 250 lines, is no longer in the mix after this point
File C 252
File C 253
File C 254
File C 255
File C 256
File C 257
File C 258
File C 259
File C 260
This code makes a number of simplifying assumptions which should not be seen as limiting in the general fiber world. For example:
These are both implementation choices. While it is possible to have a fiber be executed by one thread, and then another (::SwitchToFiber doesn’t seem to care which thread the fiber was created in), there are certain risks involved in doing so. If you assume your fibers have no synchronization issues because they are fibers, then suddenly start executing them in separate threads, you have to be exceedingly cautious that you have not introduced a synchronization problem, and furthermore, under maintenance, be careful that no future synchronization problem could arise.
Note that if you choose to run fibers in different threads (that is, a given fiber might be running in different threads; not the situation where multiple threads are each running fibers only within themselves), be sure you read about the /GT compiler option, “Support fiber-safe thread local storage”, before proceeding. This has an impact on the use of __declspec(thread) variables when used in fibers that might be later scheduled in different threads.
__declspec(thread)
My choice of using the ::GetFiberData to obtain the fiber data was a simplifying assumption, and need not be considered a definitive approach to how this is done. It simplified the code for one case, which is the case I was implementing.
As in any real program, there are a few things done that are not part of the original goal of the program, but need to be done for either functionality, elegance, convenience, or some other good reason. This program is no exception, although some of these components had been developed independently and simply used here.
The MFC ASSERT macro is well-designed. It merely reports a problem, but it does not abort the program. This is a valuable feature. The programmer can use the ASSERT macro to aid in debugging, but has the option, upon return, of doing “graceful recovery” and allowing the programmer to do anything from hand-tweaking values with the debugger to doing program changes and using edit-and-go to re-execute lines. Overall, an intelligent and elegant design.
The assert function of the C library, on the other hand, is poorly-designed, using what I call the “PDP-11 mindset”, referring to the earliest minicomputer that popularized the C language. The philosophy then was that programs were small, and it was OK to terminate a program on an internal error. In the world of GUI design, this is always a bad design decision; only the user is permitted to terminate the program.
assert
So, I needed an ASSERT macro for non-MFC applications. In addition, I wanted this code to easily fit inside an MFC app (note that my C++ classes in this article are all standalone, rather than being based on MFC classes like CObject).
CObject
The first trick here was the header file, assert.h:
#pragma once
#ifndef ASSERT
#ifdef _DEBUG
#define ASSERT(x) if(!(x))
\
fail(_T("Assertion Failure %s(%d)\n"), __FILE__, __LINE__)
#define VERIFY(x) ASSERT(x)
extern void fail(LPCTSTR fmt, ...);
#else
#define ASSERT(x)
#define VERIFY(x) x
#endif // _DEBUG
#endif // ASSERT
The fail method is straightforward:
fail
void fail(LPCTSTR fmt, ...)
{
va_list args;
va_start(args, fmt);
TCHAR buffer[1024];
// should be large enough
_vstprintf(buffer, fmt, args);
va_end(args);
OutputDebugString(buffer);
}
However, note how dangerous this is! What if the data were to exceed 1024 characters? The answer is simple: this would be a fatal buffer-overflow situation!
Because I am using only a filename (maximum MAX_PATH characters (nominally 260 characters)) and a line number, this is safe. But it is not safe in general!
MAX_PATH
Unfortunately, this had been done in Visual Studio 6, and there is no solution. But in Visual Studio .NET, after the numerous security holes caused by buffer overflows that Microsoft has been victimized by, the appropriate primitives were added. The important new calls needed here are _vscprintf/_vscwprintf, or as defined “bimodally” in tchar.h, _vsctprintf, which compiles as appropriate for ANSI or Unicode apps. These return the number of characters required to hold the formatted string (not including the NULL terminator). Therefore, a properly-written function for VS.NET would be as shown below:
_vscprintf/_vscwprintf
_vsctprintf
void fail(LPCTSTR fmt, ...)
{
va_list args;
va_start(args, fmt);
int n = _vsctprintf(fmt, args);
LPTSTR buffer = new TCHAR[n + 1];
_vsctprintf(buffer, fmt, args);
va_end(args);
OutputDebugString(buffer);
delete [] buffer;
}
In MFC, this would be easily accomplished by using the CString::FormatV method, which implements a similar algorithm to _vsctprintf, but the premise in this code is that we are not using MFC.
CString::FormatV
Note that this code does not pop up a message box or perform interaction with the user. But it does allow the program to continue. I have often just set a breakpoint at the end of this function if I need to have it stop; or you can create something more elaborate using DebugBreak.
DebugBreak
In the file debug.cpp, I have an implementation of simple output tracing. Note that it looks remarkably like the fail method described above, including the buffer-overflow bug (and the same solution applies if VS.NET is being used). However, we have an additional problem with TRACE that we did not have with ASSERT: a variable number of arguments.
TRACE
The macro system in C represents the state-of-the-art for macro systems of the 1950s, and is nothing as elaborate as a lot of programming languages have had. A good macro system is actually Turing-equivalent in that you can write, in the macros, programs that write your code. The C macro system is, well, to call it “primitive” is a bit unfair. In many cases, a stone ax was far more sophisticated. Nonetheless, it is what we have to work with, so we have to deal with it as best as we can.
The trick here is taken from the KdPrint macro of the Microsoft DDK. To get a variable number of arguments into a macro call, you have to wrap them in an extra set of parentheses, so syntactically, they look like a single argument to the macro. The definition of the TRACE macro, and its accompanying function, is therefore given as (from debug.h):
KdPrint
void CDECL trace(LPCTSTR fmt, ...);
#ifdef _DEBUG
#define TRACE(x) trace x
#else
#define TRACE(x)
#endif
A call on the macro might be done as:
TRACE( (_T("The result for 0x%p is %d (%s)\n"),
object, object->value, object->name) );
Note my tendency to use whitespace between the parameter parentheses of the TRACE macro and the parameter parentheses of the argument list.
Note also the use of %p to get an address to print. It is very important to develop this habit instead of using 0x%08x or some other non-portable method. This would not work correctly if the program were ported to Win64 (the format would have to be hand-edited to be 0x%16I64x; note the first argument after the % is the number 16, and the next sequence is the qualifier I64 to indicate the incoming argument is a 64-bit value, and finally the x to specify the desired formatting).
Generally, I use a much more general implementation wrapping ::FormatMessage, but for this simple example, a console-based app, with no GUI interface, and no use of MFC, a simpler implementation suffices.
::FormatMessage
In keeping with my philosophy of "no two places in my code ever issue the same error message", I allow the user to pass in a unique identification string. For this toy example, I have not bothered to provide for localization via the STRINGTABLE.
STRINGTABLE
void reportError(DWORD err, LPCTSTR reason)
{
LPTSTR msg;
if(FormatMessage(FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_FROM_SYSTEM,
NULL, // source of message table
err, // error code to format
0, // language ID: default for thread
(LPTSTR)&msg, // place to put pointer
0, // size will be computed
NULL) == 0) // arglist
{ /* error */
_tprintf(_T("Error %d (0x%08x)\n"), err);
return;
} /* error */
_tprintf(_T("Error %s\n%s\n"), reason, msg);
LocalFree(msg);
} // reportEr. | http://www.codeproject.com/script/Articles/View.aspx?aid=27410 | CC-MAIN-2014-41 | refinedweb | 7,607 | 58.11 |
KHTML
KHTMLPageCache Class ReferenceSingleton Object that handles a binary cache on top of the http cache management of kio. More...
#include <khtml_pagecache.h>
Detailed DescriptionSingleton Object that handles a binary cache on top of the http cache management of kio.
A limited number of HTML pages are stored in this cache. This cache is used for the history and operations like "view source". These operations always want to use the original document and don't want to fetch the data from the network again.
It operates completely independent from the kio_http cache.
Definition at line 41 of file khtml_pagecache.h.
Constructor & Destructor Documentation
Definition at line 135 of file khtml_pagecache.cpp.
Member Function Documentation
Add
data to the cache entry with id
id.
Definition at line 158 of file khtml_pagecache.cpp.
Cancel the entry.
Definition at line 174 of file khtml_pagecache.cpp.
Cancel sending data to
recvObj.
Definition at line 218 of file khtml_pagecache.cpp.
Create a new cache entry.
- Returns:
- a cache entry ID is returned.
Definition at line 143 of file khtml_pagecache.cpp.
Signal end of data for the cache entry with id
id.
After calling this the entry is marked complete
Definition at line 166 of file khtml_pagecache.cpp.
Fetch data for cache entry
id and send it to slot
recvSlot in the object
recvObj.
Definition at line 200 of file khtml_pagecache.cpp.
- Returns:
- true when the cache entry with id
isstill valid, and the complete data is available for reading
Definition at line 191 of file khtml_pagecache.cpp.
- Returns:
- true when the cache entry with id
isstill valid, and at least some of the data is available for reading (the complete data may not yet be loaded)
Definition at line 185 of file khtml_pagecache.cpp.
Save the data of cache entry
id to the datastream
str.
Definition at line 272 of file khtml_pagecache.cpp.
static "constructor".
- Returns:
- returns a pointer to the cache, if it exists. creates a new cache otherwise.
Definition at line 121 of file khtml_pagecache.cpp.
The documentation for this class was generated from the following files: | https://api.kde.org/3.5-api/kdelibs-apidocs/khtml/html/classKHTMLPageCache.html | CC-MAIN-2020-29 | refinedweb | 344 | 59.9 |
:orange_book: simple approach for javascript localization
:warning: This project was previously namedc-3po. Some of the talks, presentations, and documentation may reference it with both names.
import { t, ngettext, msgid } from 'ttag'
// formatted strings const name = 'Mike'; const helloMike = t
Hello ${name};
// plurals (works for en locale out of the box) const n = 5; const msg = ngettext(msgid
${ n } task left,
${ n } tasks left, n)
npm install --save ttag
You may also need to install ttag-cli for
pofiles manipulation.
ttag cli -
npm install --save-dev ttag-cli
This project is designed to work in pair with babel-plugin-ttag.
But you can also play with it without transpilation. | https://xscode.com/ttag-org/ttag | CC-MAIN-2021-21 | refinedweb | 109 | 59.23 |
Need some help understanding python solutions of leetcode 371. "Sum of Two Integers". I found is the most voted python solution, but I am having problem understand it.
class Solution(object): def getSum(self, a, b): """ :type a: int :type b: int :rtype: int """ MAX_INT = 0x7FFFFFFF MIN_INT = 0x80000000 MASK = 0x100000000 while b: a, b = (a ^ b) % MASK, ((a & b) << 1) % MASK return a if a <= MAX_INT else ~((a % MIN_INT) ^ MAX_INT)
Let's disregard the
MASK,
MAX_INT and
MIN_INT for a second.
Why does this black magic bitwise stuff work?
The reason why the calculation works is because
(a ^ b) is "summing" the bits of
a and
b. Recall that bitwise xor is
1 when the bits differ, and
0 when the bits are the same. For example (where D is decimal and B is binary), 20D == 10100B, and 9D = 1001B:
10100 1001 ----- 11101
and 11101B == 29D.
But, if you have a case with a carry, it doesn't work so well. For example, consider adding (bitwise xor) 20D and 20D.
10100 10100 ----- 00000
Oops. 20 + 20 certainly doesn't equal 0. Enter the
(a & b) << 1 term. This term represents the "carry" for each position. On the next iteration of the while loop, we add in the carry from the previous loop. So, if we go with the example we had before, we get:
# First iteration (a is 20, b is 20) 10100 ^ 10100 == 00000 # makes a 0 (10100 & 10100) << 1 == 101000 # makes b 40 # Second iteration: 000000 ^ 101000 == 101000 # Makes a 40 (000000 & 101000) << 1 == 0000000 # Makes b 0
Now
b is 0, we are done, so return
a. This algorithm works in general, not just for the specific cases I've outlined. Proof of correctness is left to the reader as an exercise ;)
What do the masks do?
All the masks are doing is ensuring that the value is an integer, because your code even has comments stating that
a,
b, and the return type are of type
int. Thus, since the maximum possible
int (32 bits) is 2147483647. So, if you add 2 to this value, like you did in your example, the
int overflows and you get a negative value. You have to force this in Python, because it doesn't respect this
int boundary that other strongly typed languages like Java and C++ have defined. Consider the following:
def get_sum(a, b): while b: a, b = (a ^ b), (a & b) << 1 return a
This is the version of
getSum without the masks.
print get_sum(2147483647, 2)
outputs
2147483649
while
print Solution().getSum(2147483647, 2)
outputs
-2147483647
due to the overflow.
The moral of the story is the implementation is correct if you define the
int type to only represent 32 bits.
Here is solution works in every case
cases - - - + + - + +
solution
python default int size is not 32bit, it is very large number, so to prevent overflow and stop running into infinite loop, we use 32bit mask to limit int size to 32bit (0xffffffff)
a,b=-1,-1 mask=0xffffffff while (b & mask): carry=a & b a=a^b b=carray <<1 print( (a&Mask) if b>0 else a)
User contributions licensed under CC BY-SA 3.0 | https://windows-hexerror.linestarve.com/q/so38557464-sum-of-two-integers-without-using-operator-in-python | CC-MAIN-2021-39 | refinedweb | 533 | 77.67 |
Main Solution - 572 ms
class Solution { int height(TreeNode root) { return root == null ? -1 : 1 + height(root.left); } public int countNodes(TreeNode root) { int h = height(root); return h < 0 ? 0 : height(root.right) == h-1 ? (1 << h) + countNodes(root.right) : (1 << h-1) + countNodes(root.left); } }
Explanation
The height of a tree can be found by just going left. Let a single node tree have height 0. Find the height
h of the whole tree. If the whole tree is empty, i.e., has height -1, there are 0 nodes.
Otherwise check whether the height of the right subtree is just one less than that of the whole tree, meaning left and right subtree have the same height.
- If yes, then the last node on the last tree row is in the right subtree and the left subtree is a full tree of height h-1. So we take the 2^h-1 nodes of the left subtree plus the 1 root node plus recursively the number of nodes in the right subtree.
- If no, then the last node on the last tree row is in the left subtree and the right subtree is a full tree of height h-2. So we take the 2^(h-1)-1 nodes of the right subtree plus the 1 root node plus recursively the number of nodes in the left subtree.
Since I halve the tree in every recursive step, I have O(log(n)) steps. Finding a height costs O(log(n)). So overall O(log(n)^2).
Iterative Version - 508 ms
Here's an iterative version as well, with the benefit that I don't recompute
h in every step.
class Solution { int height(TreeNode root) { return root == null ? -1 : 1 + height(root.left); } public int countNodes(TreeNode root) { int nodes = 0, h = height(root); while (root != null) { if (height(root.right) == h - 1) { nodes += 1 << h; root = root.right; } else { nodes += 1 << h-1; root = root.left; } h--; } return nodes; } }
A Different Solution - 544 ms
Here's one based on victorlee's C++ solution.
class Solution { public int countNodes(TreeNode root) { if (root == null) return 0; TreeNode left = root, right = root; int height = 0; while (right != null) { left = left.left; right = right.right; height++; } if (left == null) return (1 << height) - 1; return 1 + countNodes(root.left) + countNodes(root.right); } }
Note that that's basically this:
public int countNodes(TreeNode root) { if (root == null) return 0; return 1 + countNodes(root.left) + countNodes(root.right)
That would be O(n). But... the actual solution has a gigantic optimization. It first walks all the way left and right to determine the height and whether it's a full tree, meaning the last row is full. If so, then the answer is just 2^height-1. And since always at least one of the two recursive calls is such a full tree, at least one of the two calls immediately stops. Again we have runtime O(log(n)^2).
My code (check the left subtree is perfect or not), 76 ms even the time complexity is also o(height^2)
class Solution { public: int countNodes(TreeNode* root) { if(!root) return 0; int num=1; TreeNode *curR(root->left), *curL(root->left); while(curR) // curR is the rightmost edge, which has a height equal to or less than the leftmost edge { curL = curL->left; curR = curR->right; num = num<<1; } return num + ( (!curL)?countNodes(root->right):countNodes(root->left) ); } };
Hi @StefanPochmann thanks for your post.
I understand the runtime as O(log(n)^2) for your main solution. However when applying the Master theorem to the recursion (),
I got the time complexity O(log(n)), as the recurrence relation can be expressed as T(n) = 1 * T(n/ 2) + log(n).
Do you know why the master theorem leads to a different runtime here?
The master theorem gives you O(log(n)^2) (actually theta, but I can't type that on my phone :-). You got the recurrence relation right, don't know what mistake you made. Which of the three cases did you use?
I think according to the recurrence relation this is case 3.
Because O(1) < f(n) = O(log(n)) < O(n) so c is a number between 0 and 1,
a is 1 and b is 2, therefore c > logb(a) which maps to case 3.
Is it correct? If so runtime of case 3 is O(f(n)) which is O(log(n))
right I see, I had some misunderstanding on master theory before. Thanks for your explanation Stefan! :)
My Java code derived from your solution. Thanks for sharing!
public class Solution { public int countNodes(TreeNode root) { if(root == null) return 0; TreeNode left = root.right, right = root.right; int height = 0; while(right!= null){ left = left.left; right = right.right; height++; } if(left == null && right == null) return 1+(1 << height)-1 +countNodes(root.left); else return 1+(1 << height+1)-1 + countNodes(root.right); }
}
The while-loop only ends with right==null, so you don't need to check that afterwards.
Share my solution without repeating to calculate the subtree height again and again.
kind of caching the sub problem result.
class Solution { public: int countNodes(TreeNode* root) { return helper(root, -1, -1); } private: int helper(TreeNode* root, int leftH, int rightH){ if(!root) return 0; TreeNode* cur; int lH, rH; if(-1==leftH){ lH=0; cur=root; while(cur){ ++lH; cur=cur->left; } leftH=lH; } if(-1==rightH){ rH=0; cur=root; while(cur){ ++rH; cur=cur->right; } rightH=rH; } if(rightH==leftH){ return (1<<leftH)-1; } return 1+helper(root->left, leftH-1, -1)+helper(root->right, -1, rightH-1); } };
Brilliant code!
One minor issue. For empty tree, would it be even cleaner to use 0 instead of -1 as height? I tend to think any empty is a zero. I sound pretty dull. Do i? :-)
Here is Java code :
public class Solution { int height(TreeNode root) { return root == null? 0 : 1 + height(root.left); } public int countNodes(TreeNode root) { int h = height(root); if (h == 0) return 0; return height(root.right) == h-1? (1 << h-1) + countNodes(root.right) : (1 << h - 2) + countNodes(root.left); } }
I think they're equally clean. I just define the height to be the length of the longest root-to-leaf path (counting steps), which for the single-node tree is zero.
I don't remember, but I probably tried it your way as well and used mine because it allowed the rest of the code to be slightly shorter.
public class Solution { public int countNodes(TreeNode root) { return helper(root, findDepth(root)); } int helper(TreeNode root, int depth){ if(root == null) return 0; int rightDepth = findDepth(root.right); return helper(rightDepth == depth-1?root.right:root.left, depth-1)+(1<<rightDepth); } int findDepth (TreeNode root){ return root == null? 0:1+findDepth(root.left); } }
A little improvement by passing current depth into next recursion.
Excuse me a naive question, O(log(n)^2) is better than O(n)? Did I miss anything? The last solution is O(n), I don't see why it is slower than O(log(n)^2).
Why is the right subtree one height shorter than the left subtree? ie why is it h-2 for counting the right subtree instead of h-1? Shouldn't it be symmetrical?
I'm getting a Time Limit Exceeded with this code, which I believe is equivalent - what's wrong with this?
public class Solution { public int countNodes(TreeNode root) { if (root == null) { return 0; } int h = height(root); if (h == 0) { return 0; } int rightHeight = height(root.right); if (rightHeight+1 == h) { return (int) Math.pow(2, h) + countNodes(root.right); } else { return (int) Math.pow(2, h-1) + countNodes(root.left); } } private int height(TreeNode root) { if (root == null) { return 0; } return 1 + height(root.left); } }
Looks like your connection to LeetCode Discuss was lost, please wait while we try to reconnect. | https://discuss.leetcode.com/topic/15533/concise-java-solutions-o-log-n-2 | CC-MAIN-2017-51 | refinedweb | 1,328 | 76.62 |
Checking for file existence on potentially hanging NFS mountsPosted: | More posts about python
NFS mounts can be annoying as they tend to "hang" (for various reasons). Here I show how I replaced os.path.exists(..) with a better solution for potentially hanging paths.
The following code will "hang" if the NFS mount /mnt/nfs-shr is unresponsive:
if os.path.exists('/mnt/nfs-shr/file.txt'): print 'OK, file exists.' # .. do important stuff
I could not find any easy solution using the Python standard modules. But what about using a small timeout? In my case I just want to print "OK" if the file exists and can be read. Using the subprocess32 module with timeouts and a reasonable command (test) works.
The subprocess32 module is a backport of features found in subprocess of Python 3 to use on 2.x. One of the best features of subprocess32 is the timeout parameter for the Popen calls.
Installing the module as always:
sudo pip install subprocess32
The resulting code looks like:
from subprocess32 import check_call try: check_call(['test', '-f', '/mnt/nfs-shr/file.txt'], timeout=0.5) except: pass # ignore non-zero return code and timeout exceptions else: print 'OK, file exists.' # .. do important stuff
This worked good enough for me. | https://srcco.de/posts/checking-for-file-existence-on-potentially-hanging-nfs-mounts.html | CC-MAIN-2018-47 | refinedweb | 209 | 66.23 |
Azure Active Directory B2C: Build a .NET web API
With Azure Active Directory (Azure AD) B2C, you can secure a web API by using OAuth 2.0 access tokens. These tokens allow your client apps to authenticate to the API. This article shows you how to create a .NET MVC "to-do list" API that allows users of your client application to CRUD tasks. The web API is secured using Azure AD B2C and only allows authenticated users to manage their to-do list.
Create.
Note
The client application and web API must use the same Azure AD B2C directory.
Create a web API
Next, you need to create a web API app in your B2C directory. This gives Azure AD information that it needs to securely communicate with your app. To create an app, follow these instructions. Be sure to:
- Include a web app or web API in the application.
- Use the Redirect URI the web app. This is the default location of the web app client for this code sample.
- Copy the Application ID that is assigned to your app. You'll need it later.
- Enter an app identifier into App ID URI. Copy the full App ID URI. You'll need it later.
- Add permissions through the Published scopes menu.
Create your policies
In Azure AD B2C, every user experience is defined by a policy. You will need to create a policy to communicate with Azure AD B2C. We recommend using the combined sign-up/sign-in policy, as described in the policy reference article. When you create your policy, be sure to:
- Choose Display name and other sign-up attributes in your policy.
- Choose Display name and Object ID claims as application claims for every policy. You can choose other claims as well.
- Copy the Name of each policy after you create it. You'll need the policy name have successfully created the policy, you're ready to build your app. an MVC web application that the user interacts with.
TaskService is the app's back-end web API that stores each user's to-do list. This article will only discuss the
TaskService application. To learn how to build
TaskWebApp using Azure AD B2C, see our .NET web app tutorial.
Update the Azure AD B2C configuration
Our sample is configured to use the policies and client ID of our demo tenant. If you would like to use your own tenant, you will need to do the following:
api:ApiIdentifierwith your "App ID URI"
Secure the API
When you have a client that calls your API, you can secure your API (e.g
TaskService) by using OAuth 2.0 bearer tokens. This ensures that each request to your API will only be valid if the request has a bearer token. Your API can accept and validate bearer tokens by using Microsoft's Open Web Interface for .NET (OWIN) library.
Install OWIN
Begin by installing the OWIN OAuth authentication pipeline by using the Visual Studio Package Manager Console.
PM> Install-Package Microsoft.Owin.Security.OAuth -ProjectName TaskService PM> Install-Package Microsoft.Owin.Security.Jwt -ProjectName TaskService PM> Install-Package Microsoft.Owin.Host.SystemWeb -ProjectName TaskService
This will install the OWIN middleware that will accept and validate bearer tokens.
Add an OWIN startup class
Add an OWIN startup class to the API called
Startup.cs. Right-click on the project, select Add and New Item, and then search for OWIN. The OWIN middleware will invoke the
Configuration(…) method when your app starts.
In our sample, we changed the class declaration to
public partial class Startup and implemented the other part of the class in
App_Start\Startup.Auth.cs. Inside the
Configuration method, we added a call to
ConfigureAuth, which is defined in
Startup.Auth.cs. After the modifications,
Startup.cs looks like the following:
// Startup.cs public partial class Startup { // The OWIN middleware will invoke this method when the app starts public void Configuration(IAppBuilder app) { // ConfigureAuth defined in other part of the class ConfigureAuth(app); } }
Configure OAuth 2.0 authentication
Open the file
App_Start\Startup.Auth.cs, and implement the
ConfigureAuth(...) method. For example, it could look like the following:
// App_Start\Startup.Auth.cs & signing keys from the OpenIDConnect metadata endpoint AccessTokenFormat = new JwtFormat(tvps, new OpenIdConnectCachingSecurityTokenProvider(String.Format(AadInstance, Tenant, DefaultPolicy))) }); } }
Secure the task controller
After the app is configured to use OAuth 2.0 authentication, you can secure your web API by adding an
[Authorize] tag to the task controller. This is the controller where all to-do list manipulation takes place, so you should secure the entire controller at the class level. You can also add the
[Authorize] tag to individual actions for more fine-grained control.
// Controllers\TasksController.cs [Authorize] public class TasksController : ApiController { ... }
Get user information from the token
TasksController stores tasks in a database where each task has an associated user who "owns" the task. The owner is identified by the user's object ID. (This is why you needed to add the object ID as an application claim in all of your policies.)
// Controllers\TasksController.cs public IEnumerable<Models.Task> Get() { string owner = ClaimsPrincipal.Current.FindFirst("").Value; IEnumerable<Models.Task> userTasks = db.Tasks.Where(t => t.owner == owner); return userTasks; }
Validate the permissions in the token
A common requirement for web APIs is to validate the "scopes" present in the token. This ensures that the user has consented to the permissions required to access the to-do list service.
public IEnumerable<Models.Task> Get() { if (ClaimsPrincipal.Current.FindFirst("").Value != "read") { throw new HttpResponseException(new HttpResponseMessage { StatusCode = HttpStatusCode.Unauthorized, ReasonPhrase = "The Scope claim does not contain 'read' or scope claim not found" }); } ... }
Run the sample app
Finally, build and run both
TaskWebApp and
TaskService. Create some tasks on the user's to-do list and notice how they are persisted in the API even after you stop and restart the client.
Edit your policies
After you have secured an API by using Azure AD B2C, you can experiment with your Sign-in/Sign-up policy and view the effects (or lack thereof) on the API. You can manipulate the application claims in the policies and change the user information that is available in the web API. Any claims that you add will be available to your .NET MVC web API in the
ClaimsPrincipal object, as described earlier in this article. | https://docs.microsoft.com/en-us/azure/active-directory-b2c/active-directory-b2c-devquickstarts-api-dotnet | CC-MAIN-2018-13 | refinedweb | 1,059 | 58.99 |
Hello,
I have created a GUI using the Netbeans GUI Builder and I have a JTextArea that I want to output to, but because I need to use threads I am doing a lot of processing in an external thread. I am having issues outputting data from the thread class to the JTextArea in my main GUI Class. Here is the most important code:
// In my MainGui class I have a public JTextArea public javax.swing.JTextArea console; console = new javax.swing.JTextArea(); //I also have a button on this GUI private void jButton2ActionPerformed(java.awt.event.ActionEvent evt) { Thread connect = new Thread(new ConnectThread()); connect.start(); //ConnectThread is my external Thread Class } //In ConnectThread //It is thread so it implements Runnable public class ConnectThread implements Runnable{ MainGui maingui = new MainGui(); public void run(){ maingui.console.append("test"); //Doesn't output to the text area! } }
Any help would be appreciated | https://www.daniweb.com/programming/software-development/threads/455822/access-a-jtextarea-from-another-class | CC-MAIN-2018-30 | refinedweb | 150 | 64.61 |
Get the highlights in your inbox every week.
Optimize your Python code with C | Opensource.com
Optimize your Python code with C
Cython creates C modules that speed up Python code execution, important for complex applications where an interpreted language isn't efficient.
Subscribe now
Cython is a compiler for the Python programming language meant to optimize performance and form an extended Cython programming language. As an extension of Python, Cython is also a superset of the Python language, and it supports calling C functions and declaring C types on variables and class attributes. This makes it easy to wrap external C libraries, embed C into existing applications, or write C extensions for Python in syntax as easy as Python itself.
Cython is commonly used to create C modules that speed up Python code execution. This is important in complex applications where an interpreted language isn't efficient.
Install Cython
You can install Cython on Linux, BSD, Windows, or macOS using Python:
Once installed, it's ready to use.Once installed, it's ready to use.
$ python -m pip install Cython
Transform Python into C
A good way to start with Cython is with a simple "hello world" application. It's not the best demonstration of Cython's advantages, but it shows what happens when you're using Cython.
First, create this simple Python script in a file called
hello.pyx (the
.pyx extension isn't magical and it could technically be anything, but it's Cython's default extension):
print("hello world")
Next, create a Python setup script. A
setup.py file is like Python's version of a makefile, and Cython can use it to process your Python code:
from setuptools import setup
from Cython.Build import cythonize
setup(
ext_modules = cythonize("hello.pyx")
) source (54,000 compared to 20 bytes). Then again, Python is required to run a single Python script, so there's a lot of code propping up that single-line
hello.pyx file.
To use the C code version of your Python "hello world" script, open a Python prompt and import the new
hello module you created:
>>> import hello
hello world
Integrate C code into Python
A good generic test of computational power is calculating prime numbers. A prime number is a positive number greater than 1 that produces a positive integer only when divided by 1 or itself. It's simple in theory, but as numbers get larger, the calculation requirements also increase. In pure Python, it can be done in under 10 lines of code:
import sys
number = int(sys.argv[1])
if not number <= 1:
for i in range(2, number):
if (number % i) == 0:
print("Not prime")
break
else:
print("Integer must be greater than 1")
This script is silent upon success and returns a message if the number is not prime:
$ ./prime.py 3
$ ./prime.py 4
Not prime.
Converting this to Cython requires a little work, partly to make the code appropriate for use as a library and partly for performance.
Scripts and libraries
Many users learn Python as a scripting language: you tell Python the steps you want it to perform, and it does the work. As you learn more about Python (and open source programming in general), you learn that much of the most powerful code out there is in the libraries that other applications can harness. The less specific your code is, the more likely it can be repurposed by a programmer (you included) for other applications. It can be a little more work to decouple computation from workflow, but in the end, it's usually worth the effort.
In the case of this simple prime number calculator, converting it to Cython begins with a setup script:
from setuptools import setup
from Cython.Build import cythonize
setup(
ext_modules = cythonize("prime.py")
)
Transform your script into C:
$ python setup.py build_ext --inplace
Everything appears to be working well so far, but when you attempt to import and use your new module, you get an error:
>>> import prime
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "prime.py", line 2, in init prime
number = sys.argv[1]
IndexError: list index out of range
The problem is that a Python script expects to be run from a terminal, where arguments (in this case, an integer to test as a prime number) are common. You need to modify your script so that it can be used as a library instead.
Write a library
Libraries don't use system arguments and instead accept arguments from other code. Instead of using
sys.argv to bring in user input, make your code a function that accepts an argument called
number (or
num or whatever variable name you prefer):
def calculate(number):
if not number <= 1:
for i in range(2, number):
if (number % i) == 0:
print("Not prime")
break
else:
print("Integer must be greater than 1")
This admittedly makes your script somewhat difficult to test because when you run the code in Python, the
calculate function is never executed. However, Python programmers have devised a common, if not intuitive, workaround for this problem. When the Python interpreter executes a Python script, there's a special variable called
__name__ that gets set to
__main__, but when it's imported as a module,
__name__ is set to the module's name. By leveraging this, you can write a library that is both a Python module and a valid Python script:
import sys
def calculate(number):
if not number <= 1:
for i in range(2, number):
if (number % i) == 0:
print("Not prime")
break
else:
print("Integer must be greater than 1")
if __name__ == "__main__":
number = sys.argv[1]
calculate( int(number) )
Now you can run the code as a command:
$ python ./prime.py 4
Not a prime
And you can convert it to Cython for use as a module:
>>> import prime
>>> prime.calculate(4)
Not prime
C Python
Converting code from pure Python to C with Cython can be useful. This article demonstrates how to do that part, yet there are Cython features to help you optimize your code before conversion, options to analyze your code to find when Cython interacts with C, and much more. If you're using Python, but you're looking to enhance your code with C code or further your understanding of how libraries provide better extensibility than scripts, or if you're just curious about how Python and C can work together, then start experimenting with Cython.
1 Comment, Register or Log in to post a comment.
I'm basically not been decide between
<a href="">react vs angular</a>
In parallel, I'm working on python as well. | https://opensource.com/article/21/4/cython | CC-MAIN-2022-05 | refinedweb | 1,115 | 58.21 |
Share your code.
node-find-files
node-find-filesnode-find-files
This is a quick utility I wrote for recursively searching a directory structure and finding files and directories that match a particular spec.
What's Different About itWhat's Different About it
Similar projects that I was able to find processed the whole directory tree and then handed a set of results back at the end. This module inherits from EventEmitter so it will begin streaming results as soon as the first one is found.
My initial use case was to find files modified since a particular date, but you can also pass a filter function to return files that match any criteria you can find on the fs.stat object in node.
Usage:
var FindFiles = require("node-find-files"); var d = new Date() d.setDate(d.getDate() - 1); var finder = new FindFiles({ rootFolder : "/Users", fileModifiedDate : d }); finder.on("match", function(strPath, stat) { console.log(strPath + " - " + stat.mtime); }) finder.on("complete", function() { console.log("Finished") }) finder.on("patherror", function(err, strPath) { console.log("Error for Path " + strPath + " " + err) // Note that an error in accessing a particular file does not stop the whole show }) finder.on("error", function(err) { console.log("Global Error " + err); }) finder.startSearch();
OK but give me more PowerOK but give me more Power
You can set up the finder object with any filter function you like
// Alternate Usage to achieve the same goal, but you can use any of the properties of the fs.stat object or the path to do your filtering var finder = new FindFiles({ rootFolder : "/Users", filterFunction : function (path, stat) { return (stat.mtime > d) ? true : false; } }); | https://www.npmjs.com/package/node-find-files | CC-MAIN-2018-30 | refinedweb | 273 | 56.55 |
Searching Part 3
Senior Editor, TheScripts.com
I want to explain this part before I explain the
&grab_data and &process_record subs.... as it is a little more complicated. Just a note, this sub procedure just sorts the data as the user defines it to be sorted. It isn't important, and is a feature that can be left out of your database... just you will have to edit the code as needed.
&search_sorter is sent all the data that was pushed into the array
@search_results. It also returns the result into
@search_results... so this part can be skipped as long as you remove it from
sub search_sorter {
my (@results) = @_;
This part reserves
@results as a my variable, limiting it to sub search_sorter only, and fills it with the array we sent it when calling the sub procedure.
my(@rec);
my (%temp_rec,$eval_code);
These bits just reserve the variables strictly into
&search_sorter as well.
$stop = @db_fields;
Saves in the variable
$stop the amount of records in
@db_fields;
foreach $result (@results){
Like all of our foreach loops, this one is re-iterating over the contents of
@results.... the array we sent to it.
(@rec) = &grab_data($result);
Here is that pesky
&grab_data sub again. Like I mentioned above, this will be explained later.
$eval_code ='$temp_rec{$rec[0]} = { $db_key => "$rec[0]", ';
Now here is something kind of new, though it is similar to the code we had to create in
$findit for the regular expression search. Anyways, this code is just starting to make a new hash..... the contents of his hash will be all of the parsed fields, and their respective values. The key to access this hash will be its ID number in the array.
for($i=1;$i<$stop;$i++){
$eval_code .= "\$db_fields[$i] => \"\$rec[$i]\",\n";
}
This bit just starts at the array index 1 (the second one really, as we have the first one already saved as a key). It will just go through all the fields in
@db_fields, and create keys accordingly for this little hash we are creating.
$eval_code .= '};';
This just finishes off the
$eval_code variable by ending it with a proper ending for a hash;
eval $eval_code;
Here it is again. That eval function. How is it being used here? Same way as above, its just inserting this perl code we just made into the program. Therefore, the hash we just made is now considered part of the perl script.
}
$sort_field = $form{'sort_field'};
What field are we sorting by? Well, the
&search_form specifies with radio buttons a bunch of options, all with the key name of each field as their value. This just saves the value into
$sort_field for readability.
@results=();
This empties the @results array, as we have all of its contents in a bunch of hashes anyways.
foreach $field (sort {lc($a->{$sort_field}) cmp lc($b->{$sort_field})} values %temp_rec){
The actual sorting takes place here. Let me explain this bit by bit....
foreach $field # Each value being re-iterated in
# the hash %temp_rec is now referred to as $field
(sort # Just as it looks. This is part of the sort function
{
lc($a->{$sort_field}) # We don't want capitalization
# getting in the way. This compares items as lowercase form....
# just as the lc() function works.
# $a->{$sort_field} sorts it in descending order by the
# $sort_field
cmp # compares to the next
# value..... alphabetically
lc($b->{$sort_field}) # Same as the one above,
# except by $b, which means it is working in
# descending order
} # Closes the sort parameters
values %temp_rec # Shows what to sort by....
# the values of %temp_rec, which eval just made
) # Ends the foreach parameters.
Well, now that perl knows how to sort everything, time to get to work
$new_record = "";
This just resets
$new_record to a null string;
for($i=0;$i<$stop;$i++){
For loop again.... time to rebuild the
@results array with the proper sorted data
$field->{$db_fields[$i]} =~ s/\Q$delimeter\E/~~/og;
$field->{$db_fields[$i]} =~ s/\n/``/g;
$new_record .= "$field->{$db_fields[$i]}\|";
Now We have to make sure all of our fields are encoded again so it looks right. After that, the $new_record variables adds onto itself the values of the fields.... all in proper order. Basically it is just making the data appear again as it does in the database file
}
chop $new_record;
There is an extra delimeter here.... which we are immediately ridding ourselves of.
push @results, $new_record;
Pushes into the array
@results the new values of
$new_record;
}
return (@results);
returns the values of
@results
}
And here is the end of sub search_sorter. I hope you enjoyed your stay! Ya, I'm horrible at jokes.
return (@search_results); }
Here is the end of sub search. The values of
@search_results are returned to whatever called upon
&search. | http://bytes.com/serversidescripting/perl/tutorials/asimpledatabaseprogram/page6.html | crawl-002 | refinedweb | 786 | 73.37 |
i have managed to write a program that reads an arbitrary number of integers less than or equal to 20 and finds the sum and average. i am now trying to write a bubble sort to order the integers from least to greatest but i am having problems. this is the code i have so far.
if it works its suppose to do this.
3 2 1 *
Entered integer 0: was 3
Entered integer 1: was 2
Entered integer 2: was 1
The sum was 6
the average is: 2.000000
--------------
The sorted list is 0: 1
The sorted list is 1: 2
The sorted list is 2: 3
#include <stdio.h> #define SIZE 20 int main(void) { int count = 0; int array[SIZE]; int sum = 0; int i = count; while (scanf("%d", &array[count]) != 0 ){ printf("Entered Integer number %d: was %d\n", count, array[count]); sum += array[count]; count++; } printf("The sum was %d\n", sum); float average = (float)sum / count; printf("The average is: %f\n", average); int this, next, temp; for ( this = 0; this < count ; this ++){ for ( next = this + 1; next < count ; next ++ ){ if ( array[this] > array[next] ){ temp = array[this] ; array[this] = array[next] ; array[next] = temp ; } } } printf("------------\n"); for(count = 0; count < i; count++);{ printf("%d\n", array[count]); } return 0; }
it instead does this
3 2 1 *
Entered integer 0: 3
Entered integer 1: 2
Entered integer 2: 1
The sum was 6
The average is: 2.00000
-----------------
1
please help | https://www.daniweb.com/programming/software-development/threads/316109/bubble-sort-with-an-array | CC-MAIN-2017-17 | refinedweb | 248 | 63.43 |
Internal rate of return has been supported by many scholars as it is easy to evaluate the project using this technique. But IRR has also been criticized on the grounds that it is a makers percentage that contains the implicit assumption that returns are invested at a rate equal to the IRR. There has been much discussion about this point. It may not be possible for a firm to reinvestment intermediate cash flows at a rate of return equal to the project’s internal rate of return. The analysts favouring the use of IRR but concerned about the impact of the reinvestment debate have provided a modified device, also consistent with NPV, when circumvents any reinvestment worries. This is called a modified internal rate of return (MIRR) or the terminal rate of return. Homework help and assignment help section at Transtutors.com provides clear concept of all topics related with finance.
Under MIRR, we aim at converting Non-Conventional Cash Flows into Conventional Cash Flows, thereby eliminating the problem of Multiple IRR. There were several limitations attached with the concept of the Conventional IRR. The MIRR addresses some of these deficiencies e.g., it eliminates multiple IRR rates; it addresses the reinvestment rate issue and produces results which are consistent with the Net Present Value Method. At Transtutors.com we provide complete doubt removal assistance at our homework help and assignment help section.
Under this method of decision making cash flows associated with the project are compounded to the terminal period i.e. end of the life of the project. This compounding is done using an appropriate discount rate, generally the cost of capital is used to compound the cash flows. Thus all the cash flows are accumulated at the end of the life. The MIRR is computed by using the terminal value compounded and the outflow of the project. MIRR is the rate which when discounted at point zero gives the outflow of the projectOur teams of tutors are available to provide homework help and assignment help on any topic related to finance.
The Reinvestment Assumption: The Net Present Value technique assumes that all cash flows can be reinvested at the discount rate used for calculating the NPV. This is logical assumption since the use of NPV technique implies that all projects which provide a higher return than the discounting factor are accepted. In contrast, IRR technique assumes that all cash flows are reinvested at the projects IRR. This assumption means that projects with heavy cash flows in the early years will be favoured by this method vis-à-vis projects which have got heavy cash flows in the later years. Hence reinvestment assumption of internal rate of return is not realistic.
Thus we can see that the Modified Internal Rate of Return is a better technique of evaluating the profitability of a project as it assumes that the cash flows are reinvested at the same initial cost. If you will work out the same problem using both the techniques of Internal Rate of Return and Modified Internal Rate of Return you will sometimes find that both the methods are giving contrasting results, while NPV method gives a positive result, MIRR gives a negative result, therefore NPV method may not be relied upon, when one has to evaluate the profitability of a project. Modified internal rate of return technique also serves as a useful technique of appraising long term projects and performances of real state investments. Our tutors at Transtutors.com are expert in finance from years and years, we provide excellent finance homework and assignment help.
Attach Files | http://www.transtutors.com/homework-help/corporate-finance/capital-budgeting/modified-internal-return-rate/ | CC-MAIN-2017-47 | refinedweb | 600 | 59.23 |
Xamarin ListView control is a list-like interface used to render a set of data items in a vertical or horizontal orientation with visual representation of linear or grid structure. It supports all the essential features such as swiping, template selectors, horizontal and vertical orientations, pull-to-refresh, load more, reordering items, autofitting items, and more. The control also supports sorting, grouping, and filtering with optimizations for working with large amounts of data.
Data binding works out of the box for the most popular data sources such as Lists, ObservableCollection and much more. The ListView has built-in support to load data from data sources and supports sorting, grouping, and filtering out of the box.
The Xamarin.Forms ListView provides the best possible performance on the Xamarin platform with an optimized reuse strategy, smooth scrolling experience, and virtualization, even when loading large data sets.
The Xamarin.Forms ListView supports two different layouts: linear and grid. The linear layout arranges items in a single column, whereas the grid layout arranges items in a predefined number of columns. Both layouts are supported in a horizontal list view as well.
Host any view or control to customize the ListView items using data templates. The control supports customizing each item by dynamic selection of the UI using a data template selector.
Easily configure a horizontal ListView to load items in a horizontal orientation based on your business requirements.
Associate swipe views with custom actions. Swipe views are displayed by swiping from left to right or right to left over an item.
Reorder items by dragging them either with a long press or from the drag indicator view. Xamarin ListView supports customizing item appearances while dragging.
Refresh the data source at runtime by performing a pull-to-refresh action.
Display the ListView items in an accordion view. Each item can be expanded or stretched to reveal the content associated with that item. No items, exactly one item, or more than one item can be expanded at a time depending on the configuration.
Easy and flexible way to use all the necessary properties and commands of Xamarin.Forms ListView in the MVVM approach.. Pull-to-refresh and load more are also supported seamlessly in the MVVM pattern.
Sort data in ascending or descending order in programmatically and XAML as well. Custom sorting logic is also supported.
Group items with easy-to-use APIs and use custom grouping logic. ListView also supports expanding and collapsing groups, and freezing group headers.
Set predicates to easily filter items by searching data and view data as needed.
Automatically update the UI when adding new items and deleting items in the underlying collection. Update the sorting and grouping when changing business objects.
Display a header view at the top of the control and customize the header UI. The Xamarin.Forms ListView also supports freezing a header or making it scrollable.
Freeze a footer at the bottom of the control or make it scrollable. Customize the footer by adding any view such as an image, text, and more.
Dynamically change the size of items to enhance their readability.
Xamarin.Forms ListView items can be paged using the data pager control, which supports interactively manipulating data.
Specify the required space between items in the ListView for an elegant look and feel.
Customize the size of the header, footer, group header, and items in a ListView. It’s also possible to autofit them based on their content.
Customize ListView items with rounded corners to match the native user experience.
Apply alternating row styling to the ListView items based on specific conditions for better data readability.
Customize the appearance of ListView items to show drop shadow effects using frames. The ListView also supports applying the built-in Xamarin.Forms effects to an item.
Apply styling for each item in the ListView based on different conditions. This allows you to apply styling to particular views for an item or an entire row based on the property values of the business object.
Apply default and custom animations to the ListView items when they appear in the view, when scrolling, when navigating from one page to another page, or when interacting with an item.
ListView supports changing the flow direction of text from right to left in vertical and horizontal orientations.
All static text within the ListView can be localized to any desired language.
Automatically adjust the list height at runtime when the content of a list view item is changed.
Easily get started with Xamarin ListView using a few simple lines of C# code, as demonstrated below. Also explore our Xamarin Listview Example that shows you how to render the Xamarin listview component.
<?xml version="1.0" encoding="utf-8" ?> <ContentPage xmlns="" xmlns: <syncfusion:SfListView x: </ContentPage>
using Syncfusion.ListView.XForms; using Xamarin.Forms; namespace GettingStarted { public class App : Application { SfListView listView; public App() { listView = new SfListView(); MainPage = new ContentPage { Content = listView }; } } }
We do not sell the Xamarin ListView separately. It is only available for purchase as part of the Syncfusion Xamarin suite, which contains over 150 Xamarin components, including the ListView. ListView, are not sold individually, only as a single package. However, we have competitively priced the product so it only costs a little bit more than what some other vendors charge for their ListView. | https://www.syncfusion.com/xamarin-ui-controls/xamarin-listview?utm_medium=listing&utm_source=vs_marketplace&utm_campaign=xamarin-image-editor-trial-vs_marketplace | CC-MAIN-2021-39 | refinedweb | 879 | 56.55 |
Up to [cvs.NetBSD.org] / src / lib / libc / gen
Request diff between arbitrary revisions
Default branch: MAIN
Revision 1.9.44.1 / (download) - annotate - [select for diffs], Tue Oct 30 18:58:45 2012 UTC (3 years ago) by yamt
Branch: yamt-pagecache
CVS Tags: yamt-pagecache-tag8
Changes since 1.9: +3 -5 lines
Diff to previous 1.9 (colored) next main 1.10 (colored)
sync with head
Revision 1.10 / (download) - annotate - [select for diffs], Mon Jun 25 22:32:43 2012 UTC (3 years,, agc-symver-base, agc-symver, HEAD
Changes since 1.9: +3 -5 lines
Diff to previous 1.9 (colored)
Update old-style definitions to ANSI, remove a couple of register definitions along the way. Fixed gcc 4.1 build (thank you vax)
Revision 1.9 / (download) - annotate - [select for diffs], Tue Nov 29 13:30:49 2005 UTC (9.8: +3 -6 lines
Diff to previous 1.8 (colored)
cleanup casts and KNF.
Revision 1.8 / (download) - annotate - [select for diffs], Thu Aug 7 16:42:47 2003 UTC (12 years,.7 / (download) - annotate - [select for diffs], Tue Mar 4 19:44:09 2003 UTC (12 years, 8 months ago) by nathanw
Branch: MAIN
Changes since 1.6: +2 -7 lines
Diff to previous 1.6 (colored)
Don't acquire __environ_lock around exec*() calls; nothing requires that these calls be thread-safe with respect to the environment, and it causes serious problems for threaded applications which call vfork() and exec*() (including indirectly, via popen() or system()). Acquire and release __environ_lock in the parent in popen() and system() to play safe and provide the child with a stable environment. __environ_lock should also have an atfork() handler; still under development.
Revision 1.6 / (download) - annotate - [select for diffs], Sat Jan 18 11:23:53 2003 UTC (12 years, 10 months ago) by thorpej
Branch: MAIN
Changes since 1.5: +3 -3 lines
Diff to previous 1.5 (colored)
Merge the nathanw_sa branch.
Revision 1.5.6.1 / (download) - annotate - [select for diffs], Wed Aug 8 16:27:43 2001 UTC (14 years, 3 months ago) by nathanw
Branch: nathanw_sa
CVS Tags: nathanw_sa_end
Changes since 1.5: +3 -3 lines
Diff to previous 1.5 (colored) next main 1.6 (colored)
_REENT -> _REENTRANT
Revision 1.5 / (download) - annotate - [select for diffs], Sat Jan 22 22:19:09 2000 UTC (15 years,_before_merge, nathanw_sa_base, minoura-xpg4dl-base, minoura-xpg4dl, fvdl_fs64_base
Branch point for: nathanw_sa
Changes since 1.4: +3 -3 lines
Diff to previous 1.4 (colored)
Delint. Remove trailing ; from uses of __weak_alias(). The macro inserts this if needed.
Revision 1.4 / (download) - annotate - [select for diffs], Fri Sep 11 21:03:18 1998 UTC (17 years, 2.3: +12 -3 lines
Diff to previous 1.3 (colored)
Add a multiple-reader/single-writer lock to protect environ.
Revision 1.3 / (download) - annotate - [select for diffs], Mon Jul 21 14:06:56 1997 UTC (18 years,: +7 -2 lines
Diff to previous 1.2 .2 / (download) - annotate - [select for diffs], Sun Jul 13 19:45:52 1997 UTC (18 years, 4 months ago) by christos
Branch: MAIN
Changes since 1.1: +3 -2 lines
Diff to previous 1.1 (colored)
Fix RCSID's
Revision 1.1.2.1 / (download) - annotate - [select for diffs], Thu Sep 19 20:02:30 1996 UTC (19 years, 2 months ago) by jtc
Branch: ivory_soap2
Changes since 1.1: +7 -2 lines
Diff to previous 1.1 (colored) next main 1.2 (colored)
snapshot namespace cleanup: gen
Revision 1.1 / (download) - annotate - [select for diffs], Wed Jul 3 21:41:53 1996 UTC (19 years, 4 months ago) by jtc
Branch: MAIN
CVS Tags: nsswitch
Branch point for: ivory_soap2. | http://cvsweb.netbsd.org/bsdweb.cgi/src/lib/libc/gen/execv.c | CC-MAIN-2015-48 | refinedweb | 618 | 76.22 |
![if !IE]> <![endif]>
Programs using Tuples
Program 1: Write a program to swap two values using tuple assignment
a = int(input("Enter value of A: "))
b = int(input("Enter value of B: "))
print("Value of A = ", a, "\n Value of B = ", b)
(a, b) = (b, a)
print("Value of A = ", a, "\n Value of B = ", b)
Output:
Enter value of A: 54
Enter value of B: 38
Value of A = 54
Value of B = 38
Value of A = 38
Value of B = 54
Program 2: Write a program using a function that returns the area and circumference of a circle whose radius is passed as an argument.two values using tuple assignment
pi = 3.14
def Circle(r):
return (pi*r*r, 2*pi*r)
radius = float(input("Enter the Radius: "))
(area, circum) = Circle(radius)
print ("Area of the circle = ", area)
print ("Circumference of the circle = ", circum)
Output:
Enter the Radius: 5
Area of the circle = 78.5
Circumference of the circle = 31.400000000000002
Program 3: Write a program that has a list of positive and negative numbers. Create a new tuple that has only positive numbers from the list
Numbers = (5, -8, 6, 8, -4, 3, 1)
Positive = ( )
for i in Numbers:
if i > 0:
Positive += (i, )
print("Positive Numbers: ", Positive)
Output:
Positive Numbers: (5, 6, 8, 3, 1)
Related Topics
Copyright © 2018-2023 BrainKart.com; All Rights Reserved. Developed by Therithal info, Chennai. | https://www.brainkart.com/article/Programs-using-Tuples_37266/ | CC-MAIN-2022-40 | refinedweb | 235 | 52.73 |
I have a multiprocessing job where I'm queuing read only numpy arrays, as part of a producer consumer pipeline.
Currently they're being pickled, because this is the default behaviour of
multiprocessing.Queue
from multiprocessing import Process, Queue
import numpy as np
class __EndToken(object):
pass
def parrallel_pipeline(buffer_size=50):
def parrallel_pipeline_with_args(f):
def consumer(xs, q):
for x in xs:
q.put(x)
q.put(__EndToken())
def parallel_generator(f_xs):
q = Queue(buffer_size)
consumer_process = Process(target=consumer,args=(f_xs,q,))
consumer_process.start()
while True:
x = q.get()
if isinstance(x, __EndToken):
break
yield x
def f_wrapper(xs):
return parallel_generator(f(xs))
return f_wrapper
return parrallel_pipeline_with_args
@parrallel_pipeline(3)
def f(xs):
for x in xs:
yield x + 1.0
@parrallel_pipeline(3)
def g(xs):
for x in xs:
yield x * 3
@parrallel_pipeline(3)
def h(xs):
for x in xs:
yield x * x
def xs():
for i in range(1000):
yield np.random.uniform(0,1,(500,2000))
if __name__ == "__main__":
rs = f(g(h(xs())))
for r in rs:
print r
Since you're using numpy, you can take advantage of the fact that the global interpreter lock is released during numpy computations. This means you can do parallel processing with standard threads and shared memory, instead of multiprocessing and inter-process communication. Here's a version of your code, tweaked to use threading.Thread and Queue.Queue instead of multiprocessing.Process and multiprocessing.Queue. This passes a numpy ndarray via a queue without pickling it. On my computer, this runs about 3 times faster than your code. (However, it's only about 20% faster than the serial version of your code. I have suggested some other approaches further down.)
from threading import Thread from Queue import Queue import numpy as np class __EndToken(object): pass def parallel_pipeline(buffer_size=50): def parallel_pipeline_with_args(f): def consumer(xs, q): for x in xs: q.put(x) q.put(__EndToken()) def parallel_generator(f_xs): q = Queue(buffer_size) consumer_process = Thread(target=consumer,args=(f_xs,q,)) consumer_process.start() while True: x = q.get() if isinstance(x, __EndToken): break h(xs): for x in xs: yield x * x def xs(): for i in range(1000): yield np.random.uniform(0,1,(500,2000)) rs = f(g(h(xs()))) %time print sum(r.sum() for r in rs) # 12.2s
Another option, closer to what you requested, would be to continue using the multiprocessing package, but pass data between processes using arrays stored in shared memory. The code below can do that. Before spawning subprocesses, it creates a pool of numpy arrays backed by shared memory. The workers retrieve a free array, copy their results into it, then put the id of the array (not the array itself) onto the queue. This is much faster than pushing the whole array onto the queue, since it avoids pickling the arrays. This has similar performance to the threaded version above (about 10% slower), and may scale better if the global interpreter lock is an issue (i.e., you run a lot of python code in the functions). However, this is more complex than your version or the threaded version shown above.
from multiprocessing import Process, Queue, Array import numpy as np # how many shared-memory arrays will be needed? n_arrays = 12 # find the size and data type for the arrays # note: every @parallel_pipeline function must accept and yield arrays of this size template = np.random.uniform(0,1,(500,2000)) dtype = template.dtype shape = template.shape byte_count = len(template.data) del template # make a pool of numpy arrays, each backed by shared memory, # and a queue to keep track of which ones are free array_pool = [None] * n_arrays avail_arrays = Queue(n_arrays) for i in range(n_arrays): buf = Array('c', byte_count, lock=False) array_pool[i] = np.frombuffer(buf, dtype=dtype).reshape(shape) avail_arrays.put(i) class __EndToken(object): pass # note: the function result is copied into shared memory in consumer() # and copied out of shared memory in parallel_generator(). # The consumer() copy could be avoided if f(x) directly assigned the # computation result to a shared-memory array (using code like consumer()). # The parallel_generator() copy could be avoided if the user of the value # always put the array id back into the avail_arrays queue. def parallel_pipeline(buffer_size=50): def parallel_pipeline_with_args(f): def consumer(xs, q): for x in xs: # get the ID of an available shared-memory array id = avail_arrays.get() # copy x to the shared-memory array array_pool[id][:] = x # put the array's id (not the whole array) onto the queue q.put(id) q.put(__EndToken()) def parallel_generator(f_xs): q = Queue(buffer_size) consumer_process = Process(target=consumer,args=(f_xs,q,)) consumer_process.start() while True: # get the id of the array holding the next value id = q.get() if isinstance(id, __EndToken): break # copy the array x = array_pool[id].copy() # put the shared-memory array back into the pool avail_arrays.put(id) s(xs): for x in xs: yield x.sum() def xs(): for i in range(1000): yield np.random.uniform(0,1,(500,2000)) print "multiprocessing with shared-memory arrays:" %time print sum(r.sum() for r in f(g(h(xs())))) # 14.0s
The code above is only about 20% faster than a single-threaded version (12.2s vs. 14.8s for the serial version shown below). That is because each function is run in a single thread or process, and most of the work is done by xs(). The execution time for the example above is nearly the same as if you just ran
%time print sum(1 for x in xs()).
If your real project has many more intermediate functions and/or they are more complex than the ones you showed, then the workload may be distributed better among processors, and this may not be a problem. However, if your workload really does resemble the code you provided, then you may want to refactor your code to allocate one sample to each thread instead of one function to each thread. That would look like the code below (both threading and multiprocessing versions are shown):
import multiprocessing import threading, Queue import numpy as np def f(x): return x + 1.0 def g(x): return x * 3 def h(x): return x * x def final(i): return f(g(h(x(i)))) def final_sum(i): return f(g(h(x(i)))).sum() def x(i): # produce sample number i return np.random.uniform(0, 1, (500, 2000)) def rs_serial(func, n): for i in range(n): yield func(i) def rs_parallel_threaded(func, n): todo = range(n) q = Queue.Queue(2*n_workers) def worker(): while True: try: # the global interpreter lock ensures only one thread does this at a time i = todo.pop() q.put(func(i)) except IndexError: # none left to do q.put(None) break threads = [] for j in range(n_workers): t = threading.Thread(target=worker) t.daemon=False threads.append(t) # in case it's needed later t.start() while True: x = q.get() if x is None: break else: yield x def rs_parallel_mp(func, n): pool = multiprocessing.Pool(n_workers) return pool.imap_unordered(func, range(n)) n_workers = 4 n_samples = 1000 print "serial:" # 14.8s %time print sum(r.sum() for r in rs_serial(final, n_samples)) print "threaded:" # 10.1s %time print sum(r.sum() for r in rs_parallel_threaded(final, n_samples)) print "mp return arrays:" # 19.6s %time print sum(r.sum() for r in rs_parallel_mp(final, n_samples)) print "mp return results:" # 8.4s %time print sum(r_sum for r_sum in rs_parallel_mp(final_sum, n_samples))
The threaded version of this code is only slightly faster than the first example I gave, and only about 30% faster than the serial version. That's not as much of a speedup as I would have expected; maybe Python is still getting partly bogged down by the GIL?
The multiprocessing version performs significantly faster than your original multiprocessing code, primarily because all the functions get chained together in a single process, rather than queueing (and pickling) intermediate results. However, it is still slower than the serial version because all the result arrays have to get pickled (in the worker process) and unpickled (in the main process) before being returned by imap_unordered. However, if you can arrange it so that your pipeline returns aggregate results instead of the complete arrays, then you can avoid the pickling overhead, and the multiprocessing version is fastest: about 43% faster than the serial version.
OK, now for the sake of completeness, here's a version of the second example that uses multiprocessing with your original generator functions instead of the finer-scale functions shown above. This uses some tricks to spread the samples among multiple processes, which may make it unsuitable for many workflows. But using generators does seem to be slightly faster than using the finer-scale functions, and this method can get you up to a 54% speedup vs. the serial version shown above. However, that is only available if you don't need to return the full arrays from the worker functions.
import multiprocessing, itertools, math import numpy as np def f(xs): for x in xs: yield x + 1.0 def g(xs): for x in xs: yield x * 3 def h(xs): for x in xs: yield x * x def xs(): for i in range(1000): yield np.random.uniform(0,1,(500,2000)) def final(): return f(g(h(xs()))) def final_sum(): for x in f(g(h(xs()))): yield x.sum() def get_chunk(args): """Retrieve n values (n=args[1]) from a generator function (f=args[0]) and return them as a list. This runs in a worker process and does all the computation.""" return list(itertools.islice(args[0](), args[1])) def parallelize(gen_func, max_items, n_workers=4, chunk_size=50): """Pull up to max_items items from several copies of gen_func, in small groups in parallel processes. chunk_size should be big enough to improve efficiency (one copy of gen_func will be run for each chunk) but small enough to avoid exhausting memory (each worker will keep chunk_size items in memory).""" pool = multiprocessing.Pool(n_workers) # how many chunks will be needed to yield at least max_items items? n_chunks = int(math.ceil(float(max_items)/float(chunk_size))) # generate a suitable series of arguments for get_chunk() args_list = itertools.repeat((gen_func, chunk_size), n_chunks) # chunk_gen will yield a series of chunks (lists of results) from the generator function, # totaling n_chunks * chunk_size items (which is >= max_items) chunk_gen = pool.imap_unordered(get_chunk, args_list) # parallel_gen flattens the chunks, and yields individual items parallel_gen = itertools.chain.from_iterable(chunk_gen) # limit the output to max_items items return itertools.islice(parallel_gen, max_items) # in this case, the parallel version is slower than a single process, probably # due to overhead of gathering numpy arrays in imap_unordered (via pickle?) print "serial, return arrays:" # 15.3s %time print sum(r.sum() for r in final()) print "parallel, return arrays:" # 24.2s %time print sum(r.sum() for r in parallelize(final, max_items=1000)) # in this case, the parallel version is more than twice as fast as the single-thread version print "serial, return result:" # 15.1s %time print sum(r for r in final_sum()) print "parallel, return result:" # 6.8s %time print sum(r for r in parallelize(final_sum, max_items=1000)) | https://codedump.io/share/9Jpt6sMEwbbg/1/fast-queue-of-read-only-numpy-arrays | CC-MAIN-2017-13 | refinedweb | 1,863 | 56.15 |
C# is a simple programming language that is aimed at those who wish to develop applications based on Microsoft’s .NET platform. What makes C# simple and easy to learn is the fact that the language is a direct descendant of Java, while also carrying a lot of C and C++ family traits. As such, anyone with exposure to these programming languages will be able to connect instantly with C# courses. C# was developed by Microsoft as part of its .Net initiative and has since gained ECMA and ISO certifications. It is a general purpose object oriented programing language that conforms to Common Language Infrastructure – a key aspect of .Net technology that allows an application to be written in any of the several commonly used programming languages for use on any operating system while requiring a common run-time program rather than a specific one for their execution.
In today’s tutorial, we are going to discuss the Char type in C#. We assume at least beginners level understanding of C#. If you’re completely new to it, do go over and first check out this introductory course to C#.
What is a CHAR Type?
Char represents a character value type and holds a single Unicode character value. It is 2 bytes in size. This is a built-in value type in C#. What this means is that the Char type is integral to the C# programming language and is not one that has been defined by the user. Also, Char is a value type since it actually stores the value in the memory that has been allocated on the stack. This is unlike reference type where the stack actually contains the reference or address of the variable while the object itself resides in the heap. To learn more about character data type and how it’s used, you can check out this course on C#.
Example 1 : How to Assign and Print a Char Variable
Here is a simple program to show you how char type can be used:
using System;
class Program
{
static void Main()
{
//Store character ‘a’ in variable temp_var
char temp_var = ‘a’;
//Store unicode 64 ie A, converted to a char type, in variable temp_var char temp_var = (char)64;
//Print it out on the screen Console.WriteLine(A);
}
}
The program is perhaps the simplest that it can be, but it does point out the major elements you need to understand.
The first two statements inside Main() show how Char variables are defined and assigned values. Here temp_var is the name of the variable and it’s defined to be of type char. What this means is that temp_var will only be able to hold a single character value, which is assigned within single quotes. While the first statement assigns the value ‘a’ directly to the variable temp_var, the second one assigns the Unicode character corresponding to the uppercase letter A.
Example 2 : How to Assign and Print a Char Variable
Let’s take a look at another program to further clarify the char data type:
using System;
class Program
{
static void Main()
{
// Assigning an actual character with the single quotes char value = ‘a’;
// Print the actual value of the char variable Console.WriteLine(value);
// Print the unicode integer value corresponding to 'a Console.WriteLine((int)value);
// Print the Console.WriteLine(value.GetType());
}
}
Output:
a
97
System.Char
This is similar to the first program we wrote. Like before, we assigned the character ‘a’ to the variable ‘value’. After that, we have printed it out in different ways. Note that the second WriteLine prints ’97’ – which is the Unicode value of ‘a’. This just goes to show how char variables are stored internally. The third WriteLine shows you can actually check what type your variable really is. The GetType is a method that points out the most derived type of the object, which in this case is Systems.Char.
Example 3: Arrays With Char Types
Though we showed you two simple examples above, it is extremely rare to use the char data type in C# the way it has been shown. Instead, it is used more common to use character arrays to store a string data. An array is just a cohesive set of data elements. So a character array, is a set of characters, stored together, in a single variable. Individual members of the character array are accessed via an index. Let’s understand this better with an example.
using System; class Program { static void Main() { // Different ways to assign values to a character array // Method 1 char[] array1 = { 'a', 'b', 'c' }; // Method 2 char[] array2 = new char[] { 'l', 'm', 'n' }; // Method 3 char[] array3 = new char[4]; array3[0] = 'x'; array3[1] = 'y'; array3[2] = 'z'; array3[3] = 'a';
// Let's print these arrays Console.WriteLine(array1); Console.WriteLine(array2); Console.WriteLine(array3); } }
Output:
a b c
l m n
x y z a
In this example, we show you three different ways of assigning values to character arrays. The first one, array1 is a direct assignment. You can just list the characters (enclosed in single quotes) within curly braces, using a comma to separate each of them. This is the most common approach when you have short pre-defined string that you want to store in the character array.
In the second method, we take a more cautious approach, and we use “new char[]” to specify that the characters following it are part of a character array. This is a good programming practice, though not mandatory.
The third method is most commonly used in well structured, larger programs. We first define array3 to be a new character array, with space for 4 characters. Then, at a later point in the program we actually assign the values. This is usually separated by a few other lines of code in between. We’ve just made it simpler to show you.
While working with arrays of type char, note that the array index always starts from 0. To learn more about arrays and how to use them, you can take this intermediate course on C#.
Example 4: Converting String Data Type to Char Array Type
C# also has a string data type. Strings are pretty similar conceptually to character arrays. In some situations you may need to convert from one to the other. This course can show you how to handle strings in C#. In the meanwhile, here is a program that helps you convert string data type to char array type:
using System; class Program { static void Main() { // Assigns string value to the string variable a. string a = "C SHARP"; // Convert string to array usingToCharArrayfunction. char[] array = value.ToCharArray(); // UsingLoop to traverse through array. for (int i = 0; i<array.Length; i++) { // Get character from array. char letter = array[i]; // Print each letter on screen Console.Write("Letter: "); Console.WriteLine(letter); } } }
Output: C
S
H
A
R
P
We hope these examples have given you a good idea about how to use character variables in C#. In addition, there are several methods that can be used on the char data type within the .Net framework (to learn more about C# and the .Net framework, you can take this course). These include but not limited to:
char.IsControl
char.IsDigit
char.IsLower
char.IsSeparator
char.IsWhiteSpace
char.ToLower
ToChar | https://blog.udemy.com/c-sharp-char/ | CC-MAIN-2017-17 | refinedweb | 1,222 | 63.19 |
.
// A shoe is just many decks of cards, usually 6 in Las Vegas
public class Shoe).
Strategy
Player
Hand
public class Player
{
private Strategy plyrStrategy;
private Hand[] hands;
_
A hand is just an array of Card objects:
Card.
Shoe
for( int k=0; k<2; k++ )
{
foreach( Player player in players )
{
player.GetHands()[0].Add( shoe.Next() );
}
dealer.Hand.Add(shoe.Next() );
}:
CurrentHand[0]
IList
ArrayList: would like to write and then model your objects to allow it..
Dealer).
Deck.()<br />
{<br />
if( m_strCurrentAudioFile != "" )<br />
PlaySound( Application.StartupPath + @"\sounds\" + m_strCurrentAudioFile, 0, 0 );<br />
<br />
m_strCurrentAudioFile = ""; <br />
<br />
if( audioThread != null )<br />
audioThread.Abort();<br />
}
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://www.codeproject.com/Articles/3121/Blackjack-a-real-world-OOD-example?PageFlow=FixedWidth | CC-MAIN-2017-30 | refinedweb | 135 | 58.69 |
# Disposable pattern (Disposable Design Principle) pt.1
[](https://github.com/sidristij/dotnetbook)
Disposable pattern (Disposable Design Principle)
================================================
I guess almost any programmer who uses .NET will now say this pattern is a piece of cake. That it is the best-known pattern used on the platform. However, even the simplest and well-known problem domain will have secret areas which you have never looked at. So, let’s describe the whole thing from the beginning for the first-timers and all the rest (so that each of you could remember the basics). Don’t skip these paragraphs — I am watching you!
If I ask what is IDisposable, you will surely say that it is
```
public interface IDisposable
{
void Dispose();
}
```
What is the purpose of the interface? I mean, why do we need to clear up memory at all if we have a smart Garbage Collector that clears the memory instead of us, so we even don’t have to think about it. However, there are some small details.
> This chapter was translated from Russian jointly by author and by [professional translators](https://github.com/bartov-e). You can help us with translation from Russian or English into any other language, primarily into Chinese or German.
>
>
>
> Also, if you want thank us, the best way you can do that is to give us a star on github or to fork repository [ github/sidristij/dotnetbook](https://github.com/sidristij/dotnetbook).
>
>
There is a misconception that `IDisposable` serves to release unmanaged resources. This is only partially true and to understand it, you just need to remember the examples of unmanaged resources. Is `File` class an unmanaged resource? No. Maybe `DbContext` is an unmanaged resource? No, again. An unmanaged resource is something that doesn’t belong to .NET type system. Something the platform didn’t create, something that exists out of its scope. A simple example is an opened file handle in an operating system. A handle is a number that uniquely identifies a file opened – no, not by you – by an operating system. That is, all control structures (e.g. the position of a file in a file system, file fragments in case of fragmentation and other service information, the numbers of a cylinder, a head or a sector of an HDD) are inside an OS but not .NET platform. The only unmanaged resource that is passed to .NET platform is IntPtr number. This number is wrapped by FileSafeHandle, which is in its turn wrapped by the File class. It means the File class is not an unmanaged resource on its own, but uses an additional layer in the form of IntPtr to include an unmanaged resource — the handle of an opened file. How do you read that file? Using a set of methods in WinAPI or Linux OS.
Synchronization primitives in multithreaded or multiprocessor programs are the second example of unmanaged resources. Here belong data arrays that are passed through P/Invoke and also mutexes or semaphores.
> Note that OS doesn’t simply pass the handle of an unmanaged resource to an application. It also saves that handle in the table of handles opened by the process. Thus, OS can correctly close the resources after the application termination. This ensures the resources will be closed anyway after you exit the application. However, the running time of an application can be different which can cause long resource locking.
Ok. Now we covered unmanaged resources. Why do we need to use IDisposable in these cases? Because .NET Framework has no idea what’s going on outside its territory. If you open a file using OS API, .NET will know nothing about it. If you allocate a memory range for your own needs (for example using VirtualAlloc), .NET will also know nothing. If it doesn’t know, it will not release the memory occupied by a VirtualAlloc call. Or, it will not close a file opened directly via an OS API call. These can cause different and unexpected consequences. You can get OutOfMemory if you allocate too much memory without releasing it (e.g. just by setting a pointer to null). Or, if you open a file on a file share through OS without closing it, you will lock the file on that file share for a long time. The file share example is especially good as the lock will remain on the IIS side even after you close a connection with a server. You don’t have rights to release the lock and you will have to ask administrators to perform `iisreset` or to close resource manually using special software.
This problem on a remote server can become a complex task to solve.
All these cases need a universal and familiar *protocol for interaction* between a type system and a programmer. It should clearly identify the types that require forced closing. The IDisposable interface serves exactly this purpose. It functions the following way: if a type contains the implementation of the IDisposable interface, you must call Dispose() after you finish work with an instance of that type.
So, there are two standard ways to call it. Usually you create an entity instance to use it quickly within one method or within the lifetime of the entity instance.
The first way is to wrap an instance into `using(...){ ... }`. It means you instruct to destroy an object after the using-related block is over, i.e. to call Dispose(). The second way is to destroy the object, when its lifetime is over, with a reference to the object we want to release. But .NET has nothing but a finalization method that implies automatic destruction of an object, right? However, finalization is not suitable at all as we don’t know when it will be called. Meanwhile, we need to release an object at a certain time, for example just after we finish work with an opened file. That is why we also need to implement IDisposable and call Dispose to release all resources we owned. Thus, we follow the *protocol*, and it is very important. Because if somebody follows it, all the participants should do the same to avoid problems.
Different ways to implement IDisposable
---------------------------------------
Let’s look at the implementations of IDisposable from simple to complicated. The first and the simplest is to use IDisposable as it is:
```
public class ResourceHolder : IDisposable
{
DisposableResource _anotherResource = new DisposableResource();
public void Dispose()
{
_anotherResource.Dispose();
}
}
```
Here, we create an instance of a resource that is further released by Dispose(). The only thing that makes this implementation inconsistent is that you still can work with the instance after its destruction by `Dispose()`:
```
public class ResourceHolder : IDisposable
{
private DisposableResource _anotherResource = new DisposableResource();
private bool _disposed;
public void Dispose()
{
if(_disposed) return;
_anotherResource.Dispose();
_disposed = true;
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private void CheckDisposed()
{
if(_disposed) {
throw new ObjectDisposedException();
}
}
}
```
CheckDisposed() must be called as a first expression in all public methods of a class. The obtained `ResourceHolder` class structure looks good to destroy an unmanaged resource, which is `DisposableResource`. However, this structure is not suitable for a wrapped-in unmanaged resource. Let’s look at the example with an unmanaged resource.
```
public class FileWrapper : IDisposable
{
IntPtr _handle;
public FileWrapper(string name)
{
_handle = CreateFile(name, 0, 0, 0, 0, 0, IntPtr.Zero);
}
public void Dispose()
{
CloseHandle(_handle);
}
[DllImport("kernel32.dll", EntryPoint = "CreateFile", SetLastError = true)]
private static extern IntPtr CreateFile(String lpFileName,
UInt32 dwDesiredAccess, UInt32 dwShareMode,
IntPtr lpSecurityAttributes, UInt32 dwCreationDisposition,
UInt32 dwFlagsAndAttributes,
IntPtr hTemplateFile);
[DllImport("kernel32.dll", SetLastError=true)]
private static extern bool CloseHandle(IntPtr hObject);
}
```
What is the difference in the behavior of the last two examples? The first one describes the interaction of two managed resources. This means that if a program works correctly, the resource will be released anyway. Since `DisposableResource` is managed, .NET CLR knows about it and will release the memory from it if its behaviour is incorrect. Note that I consciously don’t assume what `DisposableResource` type encapsulates. There can be any kind of logic and structure. It can contain both managed and unmanaged resources. *This shouldn't concern us at all*. Nobody asks us to decompile third party’s libraries each time and see whether they use managed or unmanaged resources. And if *our type* uses an unmanaged resource, we cannot be unaware of this. We do this in `FileWrapper` class. So, what happens in this case? If we use unmanaged resources, we have two scenarios. The first one is when everything is OK and Dispose is called. The second one is when something goes wrong and Dispose failed.
Let’s say straight away why this may go wrong:
* If we use `using(obj) { ... }`, an exception may appear in an inner block of code. This exception is caught by `finally` block, which we cannot see (this is syntactic sugar of C#). This block calls Dispose implicitly. However, there are cases when this doesn’t happen. For example, neither `catch` nor `finally` catch `StackOverflowException`. You should always remember this. Because if some thread becomes recursive and `StackOverflowException` occurs at some point, .NET will forget about the resources that it used but not released. It doesn’t know how to release unmanaged resources. They will stay in memory until OS releases them, i.e. when you exit a program, or even some time after the termination of an application.
* If we call Dispose() from another Dispose(). Again, we may happen to fail to get to it. This is not the case of an absent-minded app developer, who forgot to call Dispose(). It is the question of exceptions. However, these are not only the exceptions that crash a thread of an application. Here we talk about all exceptions that will prevent an algorithm from calling an external Dispose() that will call our Dispose().
All these cases will create suspended unmanaged resources. That is because Garbage Collector doesn’t know it should collect them. All it can do upon next check is to discover that the last reference to an object graph with our `FileWrapper` type is lost. In this case, the memory will be reallocated for objects with references. How can we prevent it?
We must implement the finalizer of an object. The 'finalizer' is named this way on purpose. It is not a destructor as it may seem because of similar ways to call finalizers in C# and destructors in C++. The difference is that a finalizer will be called *anyway*, contrary to a destructor (as well as `Dispose()`). A finalizer is called when Garbage Collection is initiated (now it is enough to know this, but things are a bit more complicated). It is used for a guaranteed release of resources if *something goes wrong*. We *must* implement a finalizer to release unmanaged resources. Again, because a finalizer is called when GC is initiated, we don’t know when this happens in general.
Let’s expand our code:
```
public class FileWrapper : IDisposable
{
IntPtr _handle;
public FileWrapper(string name)
{
_handle = CreateFile(name, 0, 0, 0, 0, 0, IntPtr.Zero);
}
public void Dispose()
{
InternalDispose();
GC.SuppressFinalize(this);
}
private void InternalDispose()
{
CloseHandle(_handle);
}
~FileWrapper()
{
InternalDispose();
}
/// other methods
}
```
We enhanced the example with the knowledge about the finalization process and secured the application against losing resource information if Dispose() is not called. We also called GC.SuppressFinalize to disable the finalization of the instance of the type if Dispose() is successfully called. There is no need to release the same resource twice, right? Thus, we also reduce the finalization queue by letting go a random region of code that is likely to run with finalization in parallel, some time later. Now, let’s enhance the example even more.
```
public class FileWrapper : IDisposable
{
IntPtr _handle;
bool _disposed;
public FileWrapper(string name)
{
_handle = CreateFile(name, 0, 0, 0, 0, 0, IntPtr.Zero);
}
public void Dispose()
{
if(_disposed) return;
_disposed = true;
InternalDispose();
GC.SuppressFinalize(this);
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private void CheckDisposed()
{
if(_disposed) {
throw new ObjectDisposedException();
}
}
private void InternalDispose()
{
CloseHandle(_handle);
}
~FileWrapper()
{
InternalDispose();
}
/// other methods
}
```
Now our example of a type that encapsulates an unmanaged resource looks complete. Unfortunately, the second `Dispose()` is in fact a standard of the platform and we allow to call it. Note that people often allow the second call of `Dispose()` to avoid problems with a calling code and this is wrong. However, a user of your library who looks at MS documentation may not think so and will allow multiple calls of Dispose(). Calling other public methods will destroy the integrity of an object anyway. If we destroyed the object, we cannot work with it anymore. This means we must call `CheckDisposed` at the beginning of each public method.
However, this code contains a severe problem that prevents it from working as we intended. If we remember how garbage collection works, we will notice one feature. When collecting garbage, GC *primarily* finalizes everything inherited directly from *Object*. Next it deals with objects that implement *CriticalFinalizerObject*. This becomes a problem as both classes that we designed inherit Object. We don’t know in which order they will come to the “last mile”. However, a higher-level object can use its finalizer to finalize an object with an unmanaged resource. Although, this doesn’t sound like a great idea. The order of finalization would be very helpful here. To set it, the lower-level type with an encapsulated unmanaged resource must be inherited from `CriticalFinalizerObject`.
The second reason is more profound. Imagine that you dared to write an application that doesn’t take much care of memory. It allocates memory in huge quantities, without cashing and other subtleties. One day this application will crash with OutOfMemoryException. When it occurs, code runs specifically. It cannot allocate anything, since it will lead to a repeated exception, even if the first one is caught. This doesn’t mean we shouldn’t create new instances of objects. Even a simple method call can throw this exception, e.g. that of finalization. I remind you that methods are compiled when you call them for the first time. This is usual behavior. How can we prevent this problem? Quite easily. If your object is inherited from *CriticalFinalizerObject*, then *all* methods of this type will be compiled straight away upon loading it in memory. Moreover, if you mark methods with *[PrePrepareMethod]* attribute, they will be also pre-compiled and will be secure to call in a low resource situation.
Why is that important? Why spend too much effort on those that pass away? Because unmanaged resources can be suspended in a system for long. Even after you restart a computer. If a user opens a file from a file share in your application, the former will be locked by a remote host and released on the timeout or when you release a resource by closing the file. If your application crashes when the file is opened, it won't be released even after reboot. You will have to wait long until the remote host releases it. Also, you shouldn’t allow exceptions in finalizers. This leads to an accelerated crash of the CLR and of an application as you cannot wrap the call of a finalizer in *try… catch*. I mean, when you try to release a resource, you must be sure it can be released. The last but not less important fact: if the CLR unloads a domain abnormally, the finalizers of types, derived from *CriticalFinalizerObject* will be also called, unlike those inherited directly from *Object*.
> This charper translated from Russian as from language of author by [professional translators](https://github.com/bartov-e). You can help us with creating translated version of this text to any other language including Chinese or German using Russian and English versions of text as source.
>
>
>
> Also, if you want to say «thank you», the best way you can choose is giving us a star on github or forking repository [ https://github.com/sidristij/dotnetbook](https://github.com/sidristij/dotnetbook)
>
> | https://habr.com/ru/post/443958/ | null | null | 2,704 | 57.37 |
Why it matters how you log in python
Logging in python is something that won’t matter to you until it matters. If you’ve ever seen a message like one of these, I’m here to explain why it’s a bad idea to ignore them:
. Here is an example of where you might hit a snag that you can avoid:
import logging
logger = logging.getLogger()def bad_call_with_problem():
try:
logger.error('haha %d' % None)
except:
print("Raised an error.")
raisedef good_call_with_problem():
try:
logger.error('haha %d', None)
except:
print("This will never be raised.")
raise
Non-Lazy Logging
The difference is subtle but to Python important. If you’ve read the documentation about python’s logging module you may have seen that logging in python is lazy — this is only the case if you’re actually using the logging call correctly.
In the first example, there is one line of code that is actually doing two things, and the first action raises an assertion, which prevents logging from being lazy (i.e. only evaluating things passed to the logging call if it needs to be emitted):
logger.error('haha %d' % None)
This is equivalent to:
message = 'haha %d' % None
logger.error(message)
This is NOT a contrived example — it’s literally an error we were troubleshooting in production today that caused a worker to restart. In the first line, we are trying to define a message, but there is a TypeError while we’re defining that message. The habit of building your logging message within the logging call is common for new loggers. Hell — it’s common for me, and I know the pitfalls! The upside is that logging handles it well *if you do it right*.
Lazy Logging
The second example in our logging functions above is actually using logging’s “lazy” logging to evaluate — but it has an added benefit: Logging that’s called correctly won’t actually raise an assertion if there’s a problem building your logging message. There’s one obscure line in the logging documentation that describes this:
The message is actually a format string, which may contain the standard string substitution syntax of %s, %d, %f, and so on. The rest of their arguments is a list of objects that correspond with the substitution fields in the message.
So putting two and two together, you get this sucker:
logger.error('haha %d', None)
What the logging module will do in this case is log the error that occurred when trying to log the message. That means you get the messages in your logs, and you don’t break your production application via logging.
Trust Your Linter
If you’re seeing those linting errors — don’t ignore them! They’re not outdated and they aren’t useless — they will prevent you from making your logging call accidentally break your application! | https://medium.com/swlh/why-it-matters-how-you-log-in-python-1a1085851205?source=post_internal_links---------5---------------------------- | CC-MAIN-2020-50 | refinedweb | 477 | 62.07 |
Multiple" which is what is run when invoking
/usr/bin/python. On Fedora 21 RHEL:
Note that the use of
%{!? [...]} does allow this to work without the check for rhel versions but putting the conditional in documents when we can remove the entire stanza from the spec file.
In Fedora,/
Using installing python modules we include several different types of files.
- *.
Source files
Source files (*.py) must be included in the same packages as the byte-compiled versions of them.__/* %global with_python3 1 %else %{!?__python2: Arch: noarch BuildRequires: python2-devel %if 0%{?with_python3} BuildRequires: python3-devel %endif # if with_python3
When we build the python3 module in addition to the python.
%if 0%{?with_python3} cp -a python2 python3 find python3 -name '*.py' | xargs sed -i '1s|^#!python|#!%{__python3}|' %endif # with_python3 find python2 -name '*.py' | xargs sed -i '1s|^#!python|#!%{__python2}|'.
%build_sitelib} to
%{python3_sitelib}. Since we chose to install the python2 version of
%{_bindir}/easy_install earlier we need to include that file in the python2 package rather than the python3 subpackage.
The problem)
Guidelines rpms.1. 32_sitelib} or %{python2. | https://www.fedoraproject.org/w/index.php?title=Packaging:Python&direction=next&oldid=409010 | CC-MAIN-2022-27 | refinedweb | 178 | 78.65 |
Everybody knows that the minute you put your email address into the href attribute of an <A> tag with a mailto: prefix, the spam harvesters are going to scrape it and it will increase the amount of absolutely ludicrous spam that (hopefully at least) ends up in your Junk Mail folder.
There are a number of ways to make a "mailto:" link continue to work but to provide obfuscated glop to the spambot harvesting crawlers that is completely useless to them, since they examine the html of the page, not what you "see" in the rendered page.
One way to do this is to convert everything into it's HTML Entity representation. Browsers are happy with this, but since spambots cannot see anything in the HTML they have scraped from the page that Regex-es into what could be an email address, they typically miss it.
There are a number of javascript examples that will do this, but up until today I have not seen any ASP.NET controls that perform this useful action of "entity-izing" the email link.
So, I decided to have a little fun and write a control, and put it out into the public domain for .NET developers. Now when authoring a custom control to perform some functionality, the first thing we should ask ourselves as developers is whether there is already a control that we can derive our class from in order to save time with existing needed functionality that's already present.
And of course in this case, the answer is yes - we have the ASP.NET Hyperlink control; all we really need to do is provide a feature where if the NavigateUrl contains "mailto:" we take over from there.
So this control derives from Hyperlink, and all I needed to do to put it together was to override the control's Render method, and provide my own private method to obfuscate the content that follows so that it works as a mailto: link in the page but is useless to spambot harvesters and the horses-ass punk script kiddies that run them. I don't know about you, but when the last scumbag email spammer is in jail, I will sleep well at night.
Here's the code:
using System;
using System.ComponentModel;
using System.Web.UI;
using System.Web.UI.WebControls;
[assembly : TagPrefix("PAB.WebControls", "pab")]
namespace PAB.WebControls
{
[DefaultProperty("Text")]
[ToolboxData("<{0}:EmailLink runat=server></{0}:EmailLink>")]
public class EmailLink : HyperLink
{
private string HtmlObfuscate(string text)
{
string tempHtmlObfuscate = null;
int i = 0;
int acode = 0;
string repl = "";
tempHtmlObfuscate = text;
for (i = tempHtmlObfuscate.Length; i >= 1; i--)
{
acode = Convert.ToInt32(tempHtmlObfuscate[i - 1]);
if (acode == 32)
{
repl = " ";
}
else if (acode == 34)
{
repl = "\"";
}
else if (acode == 38)
{
repl = "&";
}
else if (acode == 60)
{
repl = "<";
}
else if (acode == 62)
{
repl = ">";
}
else if (acode >= 32 && acode <= 127)
{
repl = "&#" + Convert.ToString(acode) + ";";
}
else
{
repl = "&#" + Convert.ToString(acode) + ";";
}
if (repl.Length > 0)
{
tempHtmlObfuscate = tempHtmlObfuscate.Substring(0, i - 1) +
repl + tempHtmlObfuscate.Substring(i);
repl = "";
}
}
return tempHtmlObfuscate;
}
protected override void Render(HtmlTextWriter writer)
{
HyperLink link = this;
writer.Write("<a");
if (!string.IsNullOrEmpty(link.NavigateUrl))
{
if (link.NavigateUrl.StartsWith("~"))
writer.WriteAttribute("href", link.ResolveClientUrl(link.NavigateUrl));
else if (link.NavigateUrl.StartsWith("mailto:"))
{
link.NavigateUrl = HtmlObfuscate(link.NavigateUrl);
writer.WriteAttribute("href", link.NavigateUrl);
}
else
{
writer.WriteAttribute("href", link.NavigateUrl);
}
}
if (!string.IsNullOrEmpty(link.CssClass))
writer.WriteAttribute("class", link.CssClass);
if (!string.IsNullOrEmpty(link.Target))
writer.WriteAttribute("target", link.Target);
foreach (string key in link.Attributes.Keys)
writer.WriteAttribute(key, link.Attributes[key]);
writer.Write(">");
RenderContents(writer);
writer.Write("</a>");
}
protected override void RenderContents(HtmlTextWriter output)
{
output.Write(Text);
}
}
}
This is super handy, but I'm running into a snag that is probably a silly mistake on my part. When the user clicks on the link, it opens another browser window with the URL as well as the email client. The email part is working perfectly...but how do I stop it from opening a new browser window as well?
My code...I'm sure I'm missing something silly...thanks...
<
Your registration confirmation failed. Go back to your email and attempt the link again, or click
</
so it will always open in it's own desktop window outside of the browser.
However, have you tried the target="_self" attribute on the emailLink control definition?
Thanks for the quick response...!
Ok, with a little more testing, here's some more info and a little more detail...
It opens a second browser window with the proper mailto URL. That window then provides an error that it cannot find the page. But simultaneously, it brings up the email client as expected in it's own window. It just leaves this single browser window open with the standard error on it that it can't find the URL (same standard error you would get if you just type garbage into the URL box in the browser).
Here's the weird part. If I disable Protected Mode (in IE7 under Vista) and then re-enable it, it works just fine! Seems the "extra window" is occurring when Vista brings up the user control dialog that asks permission to open the email client. That seems to be the problem somehow. Very strange.
I've tried different targets. The only difference is that instead of creating a new window with the errant URL message, it changes the original window to have the mailto URL and the errant URL message, but still opens the email client. Actually, that seems to make sense I guess. I'm just curious why the errant message.
One other detail...if I paste the exact same URL into a fresh browser window, it works great...no errant URL message, it just fires up the email client as expected.
I think it must have to do with how the user account control on Vista is interacting with IE7 and the email client...
Message Board
Articles
Submit Article
Add Blog To Blog Directory
.NET Developer Registry
Software Downloads
Feedback
Win a free License of CryptoLicensing! | http://eggheadcafe.com/tutorials/aspnet/9817ba6f-ba00-4523-9097-f0235cdcb480/spambot-killer-aspnet-ma.aspx | crawl-002 | refinedweb | 999 | 56.15 |
> I. I do not know why it's not working. However, I can offer an alternative to try to see if you can reproduce the same error. # Configure __mxl_shell to either 'sh' or 'cmd'. # Configure __mxl_echo to the shell echo command. # This function appends a single line of text to a file. # $1 - The file. # $2 - The text. ifeq (sh,$(__mxl_shell)) fwrite=$(shell ($(__mxl_echo) "$2") >> $1) else ifeq (cmd,$(__mxl_shell)) fwrite=$(shell ($(__mxl_echo)/$2) >> $1) else fwrite=$(call _mxl_shell_not_supported) endif endif There was a reason why I needed the extra parentheses to run the echo command in a subshell, but it's been so long since I've written it that I don't remember the exact reason (I think it's related to your problem though). I'm not sure if this will solve your problem, but I have not noticed any missing info from using $(fwrite). Feel free to give it a try. $(foreach item,$(very_long_list),$(call fwrite,foobar.txt,$(item))) Best regards, John Dill
<<winmail.dat>> | http://lists.gnu.org/archive/html/help-make/2011-02/msg00044.html | CC-MAIN-2015-35 | refinedweb | 169 | 75.61 |
architecture
twitter
Reddit
Topics
No topic found
Content Filter
Articles
Videos
Blogs
News
Complexity Level
Beginner
Intermediate
Advanced
Refine by Author
[Clear]
Vidya Vrat Agarwal (6)
Stephen Simon (5)
Abhishek Mishra (5)
Ajay Yadav (4)
Gaurav Kumar (4)
C# Corner Live (3)
Sanjay Mrinal Kumar Kodangi (2)
Bassam Alugili (2)
Sandeep Kumar (2)
Bhagvan Kommadi (2)
Rohit Gupta (2)
Nilanjan Dutta (2)
Amit Tyagi (2)
Sukanya Mandal (2)
Munib Butt (2)
Prashant Kumbhar (2)
Nagaraj M (2)
S.Ravi Kumar (2)
Bilal Shahzad (2)
Murali Krishna (1)
Mike Gold (1)
Ankit Kanojia (1)
Abhishek Jain (1)
Prashant Nimbare (1)
Jay Krishna Reddy (1)
Ravi Shukla (1)
ksasikumar (1)
Ashish Bhatnagar (1)
Gaurav (1)
Mahesh Babu (1)
Akash Bhimani (1)
Sumesh Sukumaran (1)
Mahesh Chand (1)
Abhijit Patil (1)
Miguel Teheran (1)
Varun Setia (1)
Lalit Bansal (1)
Allen O'neill (1)
Divya Sharma (1)
Shrimant Telgave (1)
Sandeep Singh Shekhawat (1)
Rajendr Singh Dodiya (1)
Venkatesh Kumar (1)
Shervin Cyril (1)
Sormita Chakraborty (1)
Tushar Dikshit (1)
Sean Franklin (1)
Omar Bangash (1)
Gowtham K (1)
Jitan Gupta (1)
Mushtaq M A (1)
Abhishek Dubey (1)
Sumantro Mukherjee (1)
Satheesh Palani (1)
Surjeet Yadav (1)
Afzaal Ahmad Zeeshan (1)
Govind Khandelwal (1)
Suthish Nair (1)
Abhishek Arora (1)
Amol Khairnar (1)
Nirav Gandhi (1)
C# Curator (1)
Matthew Cochran (1)
Naveen Sharma (1)
Paul Abraham (1)
Omar Rodriguez (1)
Related resources for architecture
No resource found
Three-Tier Architecture In ASP.NET With Example
1/15/2021 5:43:08 AM.
Layer is reusable portion of a code. In three tier architecture we are creating three layers and reusing the code as per our requirement.
Remote Sensing and Remote Control over the Internet with GP-3 Board
1/14/2021 11:16:49 AM.
In this article we will revisit the GP-3 board (which we have employed in a few other hardware projects on C# Corner) and use the GP-3 to measure temperature in a remote location.
Keynote - Azure Conference 2020
1/7/2021 5:39:25 AM.
This is the keynote session by Vishwas Lele on the first day of Azure Virtual Conference.
Docker Architecture - Environment - Advantages
1/7/2021 1:09:36 AM.
Docker is an internal or integral part of DevOps. With its fantastic architecture design tool, we can achieve major issues or exceptional use of Docker as a virtual machine.
Data Architecture Guide - AMA Ep. 26
1/6/2021 7:01:03 AM.
AMA on Data Architecture Guide ft. Buck Woody
Microservices Architecture
1/4/2021 7:07:52 AM.
In this session of Software Architecture Virtual Conference, you'll learn about Microservices Architecture
Cloud Applications Architecture
1/4/2021 7:07:37 AM.
In this session of Software Architecture Virtual Conference, you'll learn about Cloud Applications Architecture
Application/Web Architecture
1/4/2021 7:07:26 AM.
In this session of Software Architecture Virtual Conference, you'll learn about Application / Web Architecture
Remoting Technology: Distributed Computing
12/31/2020 10:20:29 AM.
This article shows how cross-process and cross-machine interaction of applications are developed with the .NET Framework work.
Realizing Continuous Integration With Cruise Control.Net (CC.Net)
12/22/2020 6:16:12 AM.
Cruise Control is a free and open source build scheduler implemented using the .Net Framework.
Using Web Services in ASP.Net
12/22/2020 4:55:00 AM.
This article does not go into the inner details of web services, but you will get enough information to start creating and consuming simple ASP.Net web services.
What, Why and How: SOA With Microsoft
12/14/2020 7:43:33 AM.
In this article you will learn in detail the what, why and how to use SOA with Microsoft.
💠 Clean Architecture End To End In .NET 5
12/9/2020 10:18:20 PM.
In this article, we are going to learn about the entire architecture setup in ASP.NET Core 5.0.
Are You Cloud Native?
12/8/2020 8:31:30 AM.
In this article, you will learn what cloud native is and how cloud native architecture is something you want to learn.
Using Mediator In Web API's For CQRS Pattern
12/8/2020 7:46:22 AM.
In this article, you will learn how to achievee the clean architecture by using MediatR nuget in WebAPI's.
NHibernate in Details: Part 2
12/8/2020 5:37:45 AM.
In this article you will learn Hibernate with .NET or another programming language.
.NET Remoting
12/7/2020 5:03:38 AM.
.NET Remoting provides an infrastructure for distributed objects. It exposes full object semantics of .NET to remote processes using plumbing that is both flexible and extensible.
Clean Architecture And CQRS Pattern
11/28/2020 12:18:44 PM.
In this article, you will learn about Design Patterns for achieving clean code architecture.
J2EE Application Architecture With Messaging
11/12/2020 8:12:49 AM.
In this article, you will learn about a J2EE Application Architecture with Messaging.
Modern Architecture Shop - Autoscaler
11/10/2020 8:39:13 AM.
Modern Architecture Shop is a clean-lightweight .NET and scalable application. Keep your eye on the Road Map (watch it on GitHub). The next version will contain a minimal feature set so that the user
Creating Information Architecture For HSEQ Document Management
11/6/2020 8:18:13 PM.
I am working as an HSEQ System Analyst, and one of my tasks includes creating a Document Management system.My aim is to create a system using out of the box features as maximum as possible.
ADO.NET Overview
11/4/2020 8:09:07 AM.
In this article we examine the connected layer and learn about the significant role of data providers that are essentially concrete implementations of several namespaces, interfaces and base classes.
Overview of ADO.NET Architecture
11/4/2020 7:42:34 AM.
In this article you will learn about the architecture of ADO.NET including the Connection, Command, Data Reader, DataAdapter and DataTable classes.
Connection Oriented Architecture In ADO.NET
11/4/2020 7:28:49 AM.
In this article I am writing about connection oriented architecture in ADO.NET.
How to Retrieve Images from Database (In Layer Architecture)
11/4/2020 7:21:39 AM.
Here you will learn how to Retrieve Images from a Database (In a Layer Architecture).
Connectionless Architecture In ADO.NET
11/3/2020 7:24:49 AM.
In this article I am writing about connectionless architecture in ADO .NET.
Database Programming With ADO.NET
11/3/2020 1:25:27 AM.
This article defines database programming With ADO.NET. Learn ADO.NET Architecture, Connected Architecture, Disconnected Architecture, Connection Object, Data Reader Object, Data Adapter Object, DataS
ADO.NET From Windows DNA's Perspective
11/2/2020 10:22:05 AM.
Windows DNA is a framework to build multi-tier, high performance, scalable distributed applications over the network. This article takes a Windows DNA perspective and compares how ADO.NET fits in Wind
ADO.NET Disconnected Architecture
11/1/2020 7:33:32 AM.
This article explains the Disconnected Architecture in ADO.NET.).
E - Learning Platform Architecture
10/9/2020 3:31:03 PM.
In this article, you will learn about the architecture of an E-Learning platform.
Data Architecture - Database Or Data Warehouse Or Data Lake
9/25/2020 8:50:42 AM.
This article explains the concept of Database, Data Warehouse and Data Lake and provides guidance on selecting one of them for your data requirements.
Intel Architecture And Devices
9/24/2020 8:47:24 AM.
In this article, we will be discussing about CPU, GPU, VPU, FPGA, etc.
Using LINQ with C#
9/21/2020 9:31:49 AM.
LINQ introduces a standard, unified, easy-to-learn approach for querying and modifying data. In this article, you'll learn basics of LINQ and how to use LINQ in C#.
Modern Architecture Shop (Clean Architecture And Microservices)
8/23/2020 2:59:09 PM.
Modern Architecture Shop is a clean, lightweight .NET microservices application, showcasing the use of Dapr to build microservices-based applications.
Microsoft Azure Well-Architected Framework
8/19/2020 7:59:43 AM.
This article walks you through the principles and tenets of a well-architected framework that you should follow while building a Microsoft Azure workload.
MVVM Architecture
8/11/2020 6:43:57 AM.
In this article, you will learn about MVVM Architecture.
5 Reasons Why Multi-Tenant Architecture Is Best For SaaS Application Development
8/7/2020 8:11:33 AM.
In this article, you will learn about 5 Reasons Why Multi-Tenant Architecture is Best for SaaS Application Development.
IT Transformation Strategy
7/31/2020 7:44:48 AM.
In this article, you wi.ll learn about an IT Transformation Strategy
Versioning REST APIs
7/16/2020 5:54:17 AM.
In this article, you will learn about Versioning REST APIs.
Code Quality/Performance & Application Architecture
7/12/2020 2:32:39 AM.
Ask Me Anything Show ft. David McCarter
How To Scroll And View Millions Of Records
7/7/2020 6:44:06 AM.
Concepts for handling large volumes of data visually - mobile and web.
Getting Started With Kubernetes - Part Two
7/1/2020 7:24:07 AM.
In this article, we will be learning about the architecture of Kubernetes.
A Complete Python TensorFlow Tutorial
6/22/2020 5:12:43 AM.
This is the eighth tutorial in the series. In this tutorial, we will be studying about Tensorflow and its functionalities. TensorFlow is a free and open-source software library for dataflow and differ
Fetch Data From Database and Show in Tabular Format in Codeigniter
6/16/2020 12:40:06 AM.
In this article, I will show how to fetch data from a database and show it in a view of a tabular format in codeigniter.
The 5 Most Important Stakeholders In An Architect's Day At Work
6/7/2020 7:42:54 AM.
This article explains the most important stakeholders in a typical day at work of a Software Architect.
Demystifying Serverless - MVP Show ft. Zeeshan Ep. 2
5/29/2020 2:01:52 AM.
Watch the second episode of C# Corner MVP show and learn about serverless computing.
Understanding MVC 5 Project Architecture
5/26/2020 12:46:13 AM.
In this video, we will understand MVC 5 project architecture and this video help you to learn MVC 5 tutorial for beginners in .net c#
Onion Architecture In ASP.NET Core MVC
5/6/2020 2:56:26 AM.
In this article, you will learn about Onion Architecture in ASP.NET Core MVC.
ASP.NET Core 3.1 Web API and Swagger
5/4/2020 4:04:09 AM.
This post shows creating a Web API in ASP.NET Core and the use of Swagger. Swagger is a GUI interface to communicate with a Web API.
Container Components in Angular
5/1/2020 6:50:41 PM.
In this article, you will learn how to create Container Components in Angular.
Let's Develop an Angular Application - Basic Architecture of an Angular Application
4/29/2020 6:36:53 AM.
In this article, you will learn about the basic architecture of an Angular Application.
CAP (a.ka. Brewer's) Theorem a Key player in Distributed System Design
4/23/2020 2:33:29 AM.
One of the key challenges in system design and software architecture is to choose a trade-off. Today's customer-obsessed software enterprises are becoming even more cognizant by putting customer e
Internet of Things (IoT) - Part 2 (Building Blocks & Architecture)
4/22/2020 12:45:42 AM.
This article explains the IoT services and the layers in the IoT.
Internet of Things (IoT) - Part 4 (Network Protocols and Architecture)
4/22/2020 12:22:26 AM.
In this article you will learn Network Protocols and Architecture in the Internet of Things (IoT).
GraphQL Integration Microservice/Monolith Architecture
4/18/2020 7:44:41 PM.
In this article, you will learn about GraphQL Integration Microservice/Monolith Architecture.
Building a .NET Desktop Application
4/16/2020 3:39:12 PM.
In my previous two articles, we first looked at the four common types of .NET applications. We briefly discussed these four types of applications. In the second article, we looked at the design and ar
Design and Architecture of the .NET Application
4/15/2020 3:51:30 PM.
Before we delve into the technical details of any application and start to put together the technical design and architecture, we first need to understand what the application will do, what type of in
IoT Solutions - Architecture/Design And Business Aspects
4/15/2020 4:51:05 AM.
IoT is the abbreviated term meaning "Internet of Things". By definition Internet of Things is: the inter-networking of physical devices, vehicles (also referred to as "connected devices
Azure IoT - Part Two
4/14/2020 9:28:31 AM.
In this article we will learn overview of Azure IoT architecture. In next article we will see more about Azure IoT Hub.
Reactive System Design Principles
4/7/2020 12:37:28 PM.
Framework Design - The Template Method Pattern
4/1/2020 7:51:19 AM.
In this article, I'll teach you how to conceptualize and implement this wonderful design pattern. It's used frequently in application frameworks, so the final product will likely appear famili
Connecting Devices Using Client and Server Architecture in Android: Part 2
3/27/2020 2:11:55 AM.
This article explains how to manage Bluetooth connections and their profiles..
Connecting Devices Using Client and Server Architecture in Android: Part 1
3/27/2020 2:05:38 AM.
This article illustrates how the sharing mechanism works using Bluetooth technology via client and server architecture.
Jetpack Architecture Component - Navigation In Android
3/25/2020 10:13:35 AM.
In this article you will learn about Navigation in Android.
Azure Kubernetes Service Architecture
3/23/2020 9:31:56 AM.
This article walks us through the internal architecture of Azure Kubernetes Service and gives a clear understanding of the Control Plane and the Node
Xamarin Project Architecture
3/22/2020 3:28:07 PM.
This article is about the architecture of Xamarin Environment in Visual Studio on Mac.
MVVM Architecture with LiveData Android
3/15/2020 12:45:34 PM.
In this article, you will learn about MVVM Architecture with LiveData Android.
Azure Serverless Architecture 🧐
3/15/2020 8:06:23 AM.
In this session, we will explore – Evolution of Serverless Architecture, its tangible benefits, most popular serverless services from Microsoft Azure, scenarios to when to these services.
Plugin Architecture using MEF Framework
3/9/2020 3:56:11 PM.
In this article, we will see how we can design a plugin solution using MEF.
Simple Plugin Architecture Using Reflection With WPF Projects
3/2/2020 1:15:29 PM.
In this article we will see simple plugin architecture using reflection.
Implementing Onion Architecture In ASP.NET Core 3.0
2/24/2020 8:39:42 AM.
From this article you will learn how to Implement Onion Architecture in ASP.NET Core 3.0
Project Setup With Mono Repo - Angular Architecture
2/19/2020 9:06:07 AM.
In this article, you will learn about project setup with Mono Repo - Angular Architecture.
Room Architecture In Android
2/19/2020 12:57:56 AM.
In Google I/O 2017, Google announced about Room Architecture for Android. This Architecture is used to maintain the State of Android Application when the orientation changes. As well as google announc
Introduction About Android
2/17/2020 2:51:19 AM.
This article is a basic introduction of Android programing.
Azure Architecture Styles
1/28/2020 4:12:55 AM.
This article describes different Architecture Styles that can be leveraged while designing cloud based solutions in Azure
Extending Futuristic Architecture To Blazor WASM
1/24/2020 11:03:52 PM.
In this article, you will learn how to extend Futuristic Architecture to Blazor WASM.
Getting Started With VPC (Virtual Private Cloud) - Part Three
1/14/2020 11:34:56 PM.
In this article of the series of articles around VPC, we are going to learn VPC architecture.
Architecture of Windows 10
1/3/2020 3:12:11 AM.
In this article, I will try to differentiate Windows 10 from the prior releases of it.
Future Ready Blazor Application Architecture
12/20/2019 9:01:21 AM.
In this article, you will learn about future ready blazor application architecture.
Getting Started With MicroServices
12/10/2019 1:38:15 PM.
In this article, you will learn about MicroServices.
HTML5 WebSockets Introduction: Part 1
12/10/2019 2:39:07 AM.
As here I am writing article about HTML5 WebSockets, let’s concentrate on it, on HTML5 Server Sent Events I will be writing another article.
Architectural Design Goals - Availability
10/9/2019 9:13:52 PM.
While designing the architecture of an application, we need to keep different design goals in mind. In this article, we'll learn about an important design goal, i.e., "Availability". The
Programming in Java Using the MVC Architecture
9/29/2019 8:24:05 AM.
This article is about the MVC framework in Java application development, from desktop applications for basic programs to enterprise solutions written in Java.
Demystifying Java Internals
9/27/2019 4:24:08 PM.
This article introduces you to the history of Java, its effect on the WWW and the underlying Java architecture.
Data Architecture - Choosing The Right Database
9/23/2019 11:54:01 PM.
This article describes basic design considerations and addresses necessary architectural concerns while choosing the right database for the Enterprise.
Introduction to Service Oriented Architecture
9/17/2019 3:41:05 AM.
In this article, we will learn the basics of the Service Oriented Architecture (SOA).
WCF Architecture
9/17/2019 12:46:09 AM.
This article helps to explain the architecture of WCF and the components that make WCF what it is.
A Beginner's Tutorial For Understanding WCF
9/12/2019 3:38:51 AM.
This article provides a beginner's tutorial of the Windows Communication Foundation (WCF).
Microsoft Teams Desktop Client - Part Two
9/8/2019 5:52:10 AM.
In this article, you will dive deep into the functionality of Microsoft Teams Desktop Client.
Architecture Design Goals - Scalability
8/20/2019 10:17:30 AM.
In this article, we'll learn what is Scalability and what are the important points we need to consider while designing an architecture of a scalable application.
Understanding A Proven Approach For Azure Web Application Architecture
7/18/2019 10:22:19 AM.
In this article, you will learn about a proven approach for Azure Web Application Architecture..
What is ADO.NET?
6/10/2019 8:29:37 PM.
ADO.NET stands for ActiveX Data Object. This article explains - what is ADO.NET, Architecture of ADO.NET, advantages of ADO.NET and components of ADO.NET including DataSet, DataReader, DataAdapter, an
MicroService Architecture
5/15/2019 1:28:13 PM.
In this article, you will learn about microservice architecture.
Solutions And Architecture
4/23/2019 11:58:49 PM.
This article provides a ground level understanding of the profiles of a solutioner and an architect.
Building a 3-Tier Application Using ASP.Net
4/9/2019 2:28:59 AM.
This article explains 3-Tier Architecture and the implementation in an ASP.NET example.
RESTful WebAPI With Onion Architecture
4/2/2019 9:38:25 AM.
In this article, you will learn about RESTful WebAPI with Onion Architecture. | https://www.c-sharpcorner.com/topics/architecture | CC-MAIN-2021-04 | refinedweb | 3,269 | 50.94 |
?
- Use Ophelia as a WSGI application
- Ophelia defines an application class compliant with the WSGI standard, PEP 333: ophelia.wsgi.Application. You can either try it by running Ophelia’s own wsgiref-based HTTP server or run it by any WSGI server you might care to use.
-.
What kind of sites is Ophelia good for? reflect in the file system organization of your documents; how templates combine don.
How Ophelia works
Template files
For each request, Ophelia looks for a number of template files. It takes one file named “__init__” from each directory on the path from the site root to the page, and a final one for the page itself. The request is served by Ophelia if that final template is found.
When building the page, the page’s template is evaluated and its content stored in what is called the inner slot. Then each template on the way back from the page to the root is evaluated in turn and may include the current content of the inner slot. The result is stored in the inner slot after each step.
The result of processing the root template is served as the page.
Python scripts
Each template file may start with a Python script. In that case, the script is separated from the template by the first occurrence of an “<?xml?>” tag on a line of its own (except for whitespace left or right). If the template file contains only a Python script but not actually a template, put “<?xml?>” in its last line.
Python scripts are executed in order while traversing from the site root to the page. They are run in the same namespace of variables that is later used as the evaluation context of the templates. Variables that are set by a Python script may be used and modified by any scripts run later, as well as by TALES expressions used in the templates.
The namespace is initialized by Ophelia with a single variable, __request__, that references the request object. Thus, scripts have access to request details and traversal internals. In addition to setting variables, scripts may also import modules, define functions, access the file system, and generally do anything a Python program can do..
How Ophelia behaves
URL canonicalization and redirection
If Ophelia encounters a URL that corresponds to a directory it behaves similarly to Apache in its default configuration: If the URL doesn’t end with a slash, it will redirect the browser to add the slash. If the slash is there, it will try to find a template named index.html by default, and render it as the directory “index”.
Depending on configuration, explicit requests for directory index pages may be redirected to bare directory URLs without the final path segment. This would turn <> into <>.
Additionally, Ophelia canonicalizes URLs containing path segments “.” and “..” according to RFC 3986 on generic URI syntax, and removes empty path segments which are not at the end of the path. If the URL is changed by these rules, Ophelia redirects the browser accordingly.):
- 27 downloads in the last day
- 94 downloads in the last week
- 539 | https://pypi.python.org/pypi/ophelia/0.4.1 | CC-MAIN-2015-18 | refinedweb | 518 | 63.19 |
You've probably already encountered a key referred to as the leader key. The leader key is essentially a namespace for user or plugin defined shortcuts. Within a second of pressing the leader key, any key that's pressed will be in from that namespace.
The default leader key is a backslash \, but it's not the most comfortable binding. There are a few alternative keys that are popular in the community, with the comma (,) being the most popular. To rebind the leader key, set the following in your .vimrc file:
" Map the leader key to a comma. let mapleader = ','
You'll want to define your leader key closer to the top of .vimrc, as the newly defined leader key will only apply to mappings defined after its definition.
When you rebind ... | https://www.oreilly.com/library/view/mastering-vim/9781789341096/71190eda-401f-4f58-84ae-413edbef298c.xhtml | CC-MAIN-2019-43 | refinedweb | 132 | 66.94 |
Home >>Java Programs >Java Program to print the smallest element in an array
In this example, we will create a java program to find out the smallest element present in the array. This can be done by defining a variable min which initially will hold the value of the first element. Then we will loop through the array by comparing the value of min with elements of the array. If any of the element's value is less than min then store the value of the element in min.
Algorithm
Program:
public class Main { public static void main(String[] args) { int [] arr = new int [] {35, 11, 17, 53, 61, 43, 94, 51, 32, 87}; int min = arr[0]; for (int i = 0; i < arr.length; i++) { if(arr[i] <min) min = arr[i]; } System.out.println("Smallest element present in given array: " + min); } }
Output
Smallest element present in given array: 11 | https://www.phptpoint.com/java-program-to-print-the-smallest-element-in-an-array/ | CC-MAIN-2021-39 | refinedweb | 151 | 59.53 |
As you’ve seen, FileInfo is a class that acts like a dictionary. To explore this further, let’s look at the UserDict class in the UserDict module, which is the ancestor of our FileInfo class. This is nothing special; the class is written in Python and stored in a .py file, just like our code. In particular, it’s stored in the lib directory in your Python installation.
Historical note. In versions of Python prior to 2.2, you could not directly subclass built-in datatypes like strings, lists, and dictionaries. To compensate for this, Python comes with wrapper classes that mimic the behavior of these built-in datatypes: UserString, UserList, and UserDict. Using a combination of normal and special methods, the UserDict class does an excellent imitation of a dictionary, but it’s just a class like any other, so you can subclass it to provide custom dictionary-like classes like FileInfo. In Python 2.2 and later, you could rewrite this chapter’s example so that FileInfo inherited directly from dict instead of UserDict. However, you should still read about how UserDict works, in case you need to implement this kind of wrapper object yourself, or in case you need to support versions of Python prior to 2.2.
class UserDict:
def __init__(self, dict=None):def __init__(self, dict=None):
self.data = {}self.data = {}
if dict is not None: self.update(dict)if dict is not None: self.update(dict)
def clear(self): self.data.clear()
def copy(self):def copy(self):
if self.__class__ is UserDict:if self.__class__ is UserDict:
return UserDict(self.data) import copyreturn UserDict(self.data) import copy
return copy.copy(self) def keys(self): return self.data.keys()return copy.copy(self) def keys(self): return self.data.keys()
def items(self): return self.data.items() def values(self): return self.data.values()def items(self): return self.data.items() def values(self): return self.data.values() | http://www.faqs.org/docs/diveintopython/fileinfo_userdict.html | CC-MAIN-2016-30 | refinedweb | 327 | 61.43 |
Build times
I'm sitting here reading JOS when I perhaps ought to be working because I'm running a build.
It's a build on maybe a medium-sized project (approx 200Kloc). It takes around maybe 45 minutes for a full build, the last time we timed it.
Is this normal? Does everyone really wait this long for things to happen? I know this is part of the rational for nightly builds, but seriously....
I suppose this is just a bitch-post, but gah, at home I have 15Kloc Java projects that compile in 3 seconds, and here you do a build before you leave for lunch. Blech!
Mike Swieton
Monday, May 19, 2003
>It's a build on maybe a medium-sized project
>(approx 200Kloc).
VC++?
Where are you workin', Mike? :-)
I would think you could use modularization and incremental builds to avoid having to do full rebuilds. You'd only do full rebuilds when you need a version to pass to testing - a "daily build"
45 minutes for 200 KLOC for VC++ sounds about right, for version 6. I dunno about .net ...
Matt H.
Monday, May 19, 2003
Matt: I'm over at Atomic Object, in Eastown. I need a full build now because I'm not working on code but on the build system and need to see what comes out of it broken.
It's just annoying, because my home box seems faster (at least if I scale the amount of code linearly to match, though I'm not sure if build times are linear...)
I hate QNX 8-}
Mike Swieton
Monday, May 19, 2003
Are the sources local, or are they on a File Server aka Repository.
Do you use precompiled header files
Here th. Ere (e-Very Where)
Monday, May 19, 2003
My system build takes about three hours. I wish it took an 45 Minutes. I am reworking the present build so it will not take as long (Per project management). I build both a Java component and a C++ component. Both components are around 2MLOC (The Java Component is Larger). It takes the Java part around 45 Minutes and two hours and 15 minutes for the C++ component. So java and C++ appear (under MS VSC++) are not linear.
A Software Build Guy
Monday, May 19, 2003
Boy, I don't miss C++ build times. :)
Our C# multi-project solution is ~2k files, ~100kloc. Full rebuild time is under 10 seconds.
Brad Wilson (dotnetguy.techieswithcats.com)
Monday, May 19, 2003
I have ~40Mb source in 4000 files, and it takes less than 10 minutes on a _really_ old machine, (Visual C++ 6), to compile the whole thing. What's different? (could be it that this is made of many small sub-projects with relatively few interdependencies?)
If you have a lot of RAM, try this, (I used to)
1. Make a huge RAM drive
2. At start of build script, copy entire source tree to the RAM drive
3. Build it on the RAM drive
4. Copy the results back to the output directory on the real drive
S. Tanna! It's a Microsoft Visual C++ addin. The idea is that you get lots of people to install it and then it distributes the build work between each machine. It seems to work using some kind of virtual machine system, so you don't have to have the same configuration on each machine - in fact it says that you don't even have to have Visual Studio installed on all of the boxes. We had a few issues at first because the dependencies weren't quite set up properly in our workspaces but it is working excellently now - a clean build used to take just under an hour (on a 2.4GHz Pentium), but my last build took exactly 3 minutes and 12 seconds - needless to say, I love it! On the downside, it's pretty expensive, but I was surprised how easy it was to convince the bosses when I compared it to man hour costs.
R1ch
Monday, May 19, 2003
Oops - I guess an URL would help...
Big C++ project in VC 6 a while back, full build - 9 hours.
punter
Monday, May 19, 2003
>> 45 minutes for 200 KLOC for VC++ sounds about right, for version 6. I dunno about .net ... <<
Is this a joke?? I just did a full clean build of a ~250 KLOC project with VC++ 6 on a relatively slow machine in 8 minutes. And there have been no attempts to optimize build times for this project.
SomeBody
Monday, May 19, 2003
Our current build process takes around 2 hours (build machine is a bit elderly), but on my laptop (1.7Ghz 256Mb) it takes around an hour or so.
The final output is around 400Mb of executable, source code is 55Mb raw, but this is shared multiple times to give around 550Mb of compilable code.
Currently most of this is in VB and we do use VB groups to speed things up (saves on compiler load time) Since no-one else uses the build machine it's never been a big problem.
Peter Ibbotson
Monday, May 19, 2003
My current MSVC6 project is about 200,000 lines of code - takes about 18 minutes to build (both debug an release)...
Spaghetti Rustler
Monday, May 19, 2003
As an earlier poster suggested, are you using pre-compiled headers? If not, you should be. You should have the includes organised to assist the include optimisation.
Also, these massive builds are ridiculous. You should componetise the architecture so that no no build is longer than 5 minutes.
echidna
Monday, May 19, 2003
That should be componentise.
Its a while since I used QNX, but it isn't really comparable to a Visual C++ type build.
Plus, as Mike Swieton said, he's testing a full system build, that means everything gets built from scratch.
Simon Lucy
Monday, May 19, 2003
I was really suss about Incredibuild as well until we tried it.
It rocks! It even sped up build times on a single machine (5-10%).
Now we just have to get funding... :-(
Taka
Monday, May 19, 2003
It could be worse, I remember the good old days when a compile could take a day to get done, depending on the mainframe load.
John McQuilling
Monday, May 19, 2003
Have you looked at a copy of "Large Scale C++ Program Design" by Lakos? It has a lot of tips on reducing build times. (e.g. reducing dependencies, eliminating redundant file reads, etc.)
Devil's Advocate! "
I have seen sooooo many spam emails start off this way....cant we keep these morons out of the forum?
FullNameRequired
Tuesday, May 20, 2003
>>> I have seen sooooo many spam emails start off this way....cant we keep these morons out of the forum?
How about disallowing anonymous posts :-)
Sometimes it actually does happen that someone finds a product that is really good and would like to share that information with others. It makes a nice change from slagging Microsoft products :-)
Taka
Tuesday, May 20, 2003
Hey, I'm sorry - next time I won't bother.
R1ch
Tuesday, May 20, 2003
ye gods.....spammer tag teams...
"Sometimes it actually does happen that someone finds a product that is really good and would like to share that information with others"
I have no doubt. and yet, in all the time Ive had friends and strangers genuinely say nice things about software, they have *never* felt the need to say things like 'it really works!' (note the exclamation mark).
or..."at first I was doubtful but then it turned out to be really great and solved all my problems."
come on..if you must spam to the lists like this like this at least learn some new lines..
I don't know why I'm bothering with this, but it does offend me to when you call me a spammer (moron, I can accept ;). I'm actually a lead dev for a company that develops a product that has been slagged off a lot lately on this forum. I've kept out of the discussion as I have no objection to people slagging off our products - I'm sure that they have good reasons, and unless there's something specific that I can help with, I'm not going to join in the discussion - I don't want to offer a biased opinion. Funnily enough, I also thought about posting something about Incredibuild to the 'software that rocks my world'-type thread, but I didn't because I thought someone would probably get on their high horse about spamming. Don't get me wrong - I hate spamming too, but that doesn't mean that people shouldn't try to help others out by pointing out good stuff they've found. If I don't beleive them then that's fine, but I wouldn't reply and slag them off - just maybe they're telling the truth. If you want proof of who I am then just click on the mail link below and I'll mail you back from my work address, but I suspect that it's more likely that you'd rather assume I'm a liar.
Hi Rich,
"I'm actually a lead dev for a company that develops a product that has been slagged off a lot lately on this forum"
??? ...and that would be...?
<g> Im really bad at mystery meat competitions...
"I hate spamming too, but that doesn't mean that people shouldn't try to help others out by pointing out good stuff they've found"
I agree. But there *is* a difference between such posts and posts that are spam.
Your post was (deliberately I have no doubt) spam.
I have an email client that uses the bayesian thingie to separate spam out from the normal emails.
Just out of interest I emailed myself your post.
<g> proves nothing whatsoever of course but it was a pleasant way to spend a few minutes.
"If you want proof of who I am then just click on the mail link below and I'll mail you back from my work address, but I suspect that it's more likely that you'd rather assume I'm a liar"
_much_ more likely.
The problem we have here of course is that I *do* believe you are a liar. Which means that providing you any personal information whatsoever about myself is a stupid thing to do.
You could easily break the deadlock though, if your product is a good one and you feel no shame about standing behind it then by all means post the url to it, and maybe just a little info about where you fit in :)
OpenMosix. If your compiling on Linux etc, it makes a big difference (if you have a large pool of otherwise idle machines, e.g. workmates).
I am in no way associated with OpenMosix. Just saw it was a bit cheaper than incredulousbuild(tm)
Nice
Tuesday, May 20, 2003
Fair enough (although I never normally like to publish identifying info on the web either). I work for Crystal Decisions - the makers of Crystal Reports. I don't actually work on Crystal Reports though - I work on the Crystal Analysis line of products ( ). I used to work on the server side of their zero client - a DHTML interface that allows OLAP analysis with nothing more than a web browser installed, but now I'm working on a new product that isn't available yet.
BTW, at the risk of being flamed for plugging again, is a great way of being able to make up disposable email addresses on the spot so that you don't have to worry about who you give them to.
Hi Rich,
and thus I am proved a pratt ;)
Please accept my sincere apologies for the totally unfounded attack on your integrity.
spamgourmet does look pretty good too <g> all I have to do now is overcome my reluctance to give them my email address....maybe I could use a second throwaway address..
No problem - I admit that my message did seem spam like, but I reckon that I've probably gained back at least an hour a day, so I do tend to rave about it. ;-)
I work on a 75K line application. It is very heavily templated C++, using a lot of STL, and a lot of non-templated STL.
We find it takes ~ 45 minutes to compile on Visual Studio.Net, and a bit longer (65 minutes) on GCC 3.2
I think that the compile time depends on exactly how hard you work the compiler - there is another group here we interact with a lot - their 35K line c++ app (which is mainly straight C) does a full recompile in about 2 minutes!
regards,
treefrog
treefrog
Tuesday, May 20, 2003
Just checked. My current Delphi project of 236K lines of code fully builds in about 9 seconds. Not that I do a full build often. I generally hit the Run command, which compiles changes only, links and runs the application in less than 5 seconds.
I don't envy you C++ guys for compilation times.
Jan Derk
Tuesday, May 20, 2003
Another vote for Incredibuild - what a fantastic bit of kit!
I would also strongly back that recommendation for Large Scale C++ Design, we were able to half the time of our 300kloc editor from 16 mins to 7 and a half with a little re-organisation and the use of external header guards.
Mr Jack
Tuesday, May 20, 2003
One way to improve the build times on VC++
1. Disable .bsc files creation from Project Settings. Uncheck 'Generate Browse Info' from C++ Tab and 'Build Browse Info' from 'Browser Info' tab. In my experience this typicaly reduces the build time by 50 %
2. Also try disabling the PreCompiled Headers. In one of my projects disabling 'precompiled' headers actually improved the build times.
3. Check your includes. If there are many include files or recursive includes compilation time unneccessarily increase. Even if you are using header file guard (#ifndef/#define/#endif) the compiler still has to read the entire file to reach #endif and discard the file.
Nitin Bhide
Tuesday, May 20, 2003
I'm with Jan - I love Delphi compile times. I work on a set of smaller apps and I turn on "show compiler progress" because without the feedback it's so fast that I worry I might have missed the button or something :).
Ryan Eibling
Tuesday, May 20, 2003
This is obvious stuff, but FWIW:
1. Have file guards on all your headers
2. List them in the same order in each C++ file, or better yet pull most of them from a common include file
3. Use pre-compiled headers
4. Avoid multiple inheritance, templating, excessive forward references
echidna
Tuesday, May 20, 2003
How odd, I was just speaking to a fellow developer (form another company) about build times. I forget the exact number, but his build was taking around 8 hours to complete.
Want to guess what he told me solved it? Incredibuild. I'm not sure what flavor Kool-Aid they send with their product, but he was a walking advertisement for them.
Marc
Tuesday, May 20, 2003
Dear Full Name Required,
Why are you apologizing. Why have you given up on your belief that the giuy is a liar. Because he's given you a link to a web page. If I said I really worked for Bush and gave you link to the White House web page would you believe me?
The guy is obviously a spammer, and is simply using other aliases to create the tag spamming effect.
Stephen Jones
Tuesday, May 20, 2003
People should stop being assholes when others are trying to help, or people are going to stop helping.
Funny, virtually this exact same thread transpired a few weeks ago on a mailing list I was on. It's rather shocking to see how radically different the reaction was.
Brad Wilson (dotnetguy.techieswithcats.com)
Tuesday, May 20, 2003
Hi Stephen,
" The guy is obviously a spammer, and is simply using other aliases to create the tag spamming effect. "
<shrug> you may be right.
I am persuaded that he is not a deliberate spammer.
Rich genuinely works at crystaldecisions.com or at least can send mail from an address there, and I can't seem to find any link between him and xoreax.com which is registered in Israel, so I must accept that he is not a spammer; he merely gives a perfect imitiation of one :)
Possibly we are all receiving so much spam in our inboxes that we will end up talking just like spammers do.
Even if "R1ch" did have some relationship with Incredibuild other than that of enthusiastic user (and I don't believe he does), what is wrong with him posting an on topic reply to a problem with build times?
It wasn't junk mail, in that he responded personally to a request for information.
His reply was on topic, and backed up by other correspondents with apparently similar experiences.
Not that it is ever likely to happen in this forum, but if someone posted a message for information about electronic banking systems or hybrid rule based expert systems tools I would reply in a similar manner about our own software.
But on the topic, I'm astonished at the long build times for reported from VC6. We use this as our development platform for Windows-based delivery. I just ran a build of our largest component, which has just under 350,000 lines of code, and it compiled in less than four minutes. That was on an old battery-powered notebook with a slow hard disk, 128M RAM and WindowsME, and a 500Mhz processor.
Could the difference be entirely due to templates and stuff, which we don't use?
HeWhoMustBeConfused
Tuesday, May 20, 2003
"Could the difference be entirely due to templates and stuff, which we don't use?"
In general, yes. Rebuilding a project that relies on STL types will bring the fastest system to its knees. The other big timesink is the Windows header file collection, although precompiled headers can speed this up.
Chris Nahr
Wednesday, May 21, 2003
I'm working under Linux, and I've found ccache to be very helpful for our system - . We obviously don't have the option of pre-compiled headers (which is too bad, as our code tends to include every header in the system!), and we have a massively irritating class heirarchy (including a lot of wierd templating, massive intertwining of classes, and darn near everything descending from a class named Object. Don't ask why - it makes my head hurt.). I used ccache to take us from a >2 hour full rebuild time down to about 25 minutes. I also used it (with a shared cache) to reduce rebuild times by sharing the cache (sort of a cheap version of what ClearCase could do). I don't know if there is any sane way to integrate ccache with VC++ (I suspect it would be hard, but perhaps possible), but as it's free, it might be worth looking at.
Michael Kohne
Wednesday, May 21, 2003
The code I'm working with uses things like templates and STL and includes Windows headers fairly frequently. I'm not seeing exhorbant build times (8 minutes, ~250k lines, relatively slow machine).
I'm curious if there might be a correlation between modularizing and build times? The code I'm working with is broken into quite a few DLLs.
Are those of you with ridiculously slow build times running virus scanners? If you are, you might want to try temporarily disabling any active scan features while building and see if it makes a difference. I've found that virus scanners can increase build times greatly (since they seem to grab onto files whenever they are saved or loaded).
SomeBody
Wednesday, May 21, 2003
Just to toss more weight on the pile, we purchased a bunch of Incredibuild licenses around a year ago and I consider it some of the best money we could've ever spent. Fantastic investment.
Jeremy Statz
Friday, May 23, 2003
I dropped'em like a bad habit, -- don't put build stuff on my resume any longer.
Steve
Friday, May 23, 2003
Recent Topics
Fog Creek Home | http://discuss.fogcreek.com/joelonsoftware2/default.asp?cmd=show&ixPost=44960&ixReplies=45 | CC-MAIN-2017-13 | refinedweb | 3,421 | 79.09 |
GL.UniformPosted Saturday, 28 May, 2011 - 02:05 by Radar in
Hi all,
i am trying to implement my own math in OpenTK.
But i have a problem with the GL.Uniform when i try to send a vector or a matrix.
I am not sure how to make that. I thought i did it right but GetError() returns Invalid Op.
I try to send my Matrix like this:
unsafe { GL.UniformMatrix4(this.uniLoc(uniformLocation), 1, false, (float*)mat44); }
I overloaded two operators of my Matrix44 struct (i adapted that from the Vector4.cs):
unsafe public static explicit operator float*(Matrix44 v) { fixed (float* first = v.fields) //fields is array return first; } unsafe public static explicit operator IntPtr(Matrix44 v) { fixed (float* first = v.fields) return (IntPtr)(first); }
And the struct starts like this:
[StructLayout(LayoutKind.Sequential)] public struct Matrix44 { [MarshalAs(UnmanagedType.R4, SizeConst=16)] public float[] fields; // = new float[16]; ... ...
The matrix contains data. So it is not a problem of a uninitialized field-variable... well i guess.
I am still learning c# so if this is a noobish error i am sorry for that, but i don't understand why it does not work. I guess it must be my code.
Thank you very much in advance!!!
Re: GL.Uniform
I use this: | http://www.opentk.com/node/2511 | CC-MAIN-2014-42 | refinedweb | 215 | 69.68 |
A palindrome is a word, phrase, number or other sequence of units that can be read the same way in either direction. Here we are going to check whether the number entered by the user is palindrome or not. You can check it in various ways but the simplest method is by reversing the digits of the original number and check to see if the reversed number is equal to the original.
Here is the code:
import java.util.*; public class NumberPalindrome { public static void main(String[] args) { try { Scanner input = new Scanner(System.in); System.out.print("Enter number: "); int num = input.nextInt(); int n = num; int reversedNumber = 0; for (int i = 0; i <= num; i++) { int r = num % 10; num = num / 10; reversedNumber = reversedNumber * 10 + r; i = 0; } if (n == reversedNumber) { System.out.print("Number is palindrome!"); } else { System.out.println("Number is not palindrome!"); } } catch (Exception e) { System.out.println(e); } } }
Output:
Advertisements
Posted on: January | http://www.roseindia.net/tutorial/java/core/checkNumberPalindrome.html | CC-MAIN-2016-30 | refinedweb | 159 | 58.38 |
Simply using this does not work
Object a = new Object(); Object b = a; // <-- this creates a reference, not a copyFortunately Java has built in methods for creating a perfect copy of your object's current state. Back in the old days clone() did not work so well, however as of Java 1.7 I have not had any issues with it.
.
Class A
.
public class Class_A { Class_B origional; Class_B clone; public Class_A(){ origional = new Class_B(); // <--- Create a new object from Class B clone = origional.clone(); // <--- Call the class method to produce a clone clone.s = "World !"; // <--- set the clone's string to something else System.out.println(origional.getString() + " " + clone.getString() ); } }.
Class B
.
public class Class_B implements Cloneable{ // <--- Class must implement Cloneable !!! int x; String s; Boolean b; public Class_B(){ x = 1; s = "Hello"; // <--- Pay attention to this verable b = true; } public String getString(){return s;} // <--- Method returns the string verable public Class_B clone() { // <--- Method must have a try block !!! try { return (Class_B) super.clone(); // <--- Returns a clone of it's self } catch (Exception e) { System.out.println("\n ===Error=== \n" + e + "\n" ); // <--- Always output errors return null;} } } | http://www.gamedev.net/blog/1816/entry-2259361-send-in-the-java-clones/ | CC-MAIN-2016-50 | refinedweb | 188 | 64.1 |
Header for the RDM Cursor APIs. More...
#include "rdmtypes.h"
#include "rdmrowidtypes.h"
Go to the source code of this file.
Header for the RDM Cursor APIs.
Definition in file rdmcursorapi.h.
Read data from a blob column.
This function reads contents of a blob column from the current row of cursor.
The read will start from the offset position and will attempt to read up to bytesIn bytes. If there are not bytesIn bytes left to read and the bytesOut parameter is NULL then the function will return the eBLOBBADSIZE error code.
If bytesOut is not NULL then the number of bytes actually read into value will be put into that parameter.
If value is NULL and bytesOut is not NULL then bytesOut will have the remaining number of bytes to be read from the offset position. | https://docs.raima.com/rdm/14_1/rdmcursorapi_8h.html | CC-MAIN-2019-09 | refinedweb | 138 | 75.91 |
On Mon, Apr 23, 2012 at 08:00:12AM -0400, Jeff Layton wrote:> On Sun, 22 Apr 2012 07:40:57 +0200> Miklos Szeredi <miklos@szeredi.hu> wrote:> > > On Fri, Apr 20, 2012 at 11:13 PM, Jeff Layton <jlayton@redhat.com> wrote:> > > On Fri, 20 Apr 2012 15:37:26 -0500> > > Mal> > > that's repeatedly renaming a new file on top of another one. The file> > > is never missing from the namespace of the server, but you could still> > > end up getting an ESTALE.> > >> > > That would break other atomicity guarantees in an even worse way, IMO...> > > > For directory operations ESTALE *is* equivalent to ENOENT if already> > retrying with LOOKUP_REVAL. Think about it. Atomic replacement by> > another directory with rename(2) is not an excuse here actually.> > Local filesystems too can end up with IS_DEAD directory after lookup> > in that case.> > > > Doesn't that violate POSIX? rename(2) is supposed to be atomic, and I> can't see where there's any exception for that for directories.Hm, but that only allows atomic replacement of the last component of apath.Suppose you're looking up a path, you've so far reached intermediatedirectory . | http://lkml.org/lkml/2012/4/23/228 | CC-MAIN-2015-14 | refinedweb | 193 | 66.54 |
SUPPORT FOR THE .NET DLR
One of the most versatile aspects of Umbraco is its support for additional languages through the .NET Dynamic Language Runtime (DLR) text. Out of the box, you can create macros that are based on IronPython and IronRuby, simply by using the backoffice interface. This is a great alternative to XSLT and .NET driven macros for those who prefer working with a different language.
This section simply provides you with syntactical examples of how to render the content using these alternate languages. The author suggests the following online resources for further reading about the DLR languages mentioned in the following sections.
-: This is an IronRuby open source project.
-: This is an IronPython open source project.
For those of you who prefer working with IronPython or IronRuby, the examples in Listings 5-7, 5-8, and 5-9 should provide you with a good introduction on how to tap into the Umbraco content using one of these languages.
IronPython
Here are some examples of what you can do using IronPython as your language of choice in your Umbraco macros. The options are virtually endless, just like when you work with .NET.
Listing Pages from Current Page
In Listing 5-7, you can see how few lines of code it takes to generate an unordered list of pages using IronPython.
LISTING 5-7: PagesFromCurentNode.py
from umbraco.presentation.nodeFactory import Node from umbraco import library #list all of the subpages ...
Get Umbraco User's Guide now with the O’Reilly learning platform.
O’Reilly members experience live online training, plus books, videos, and digital content from nearly 200 publishers. | https://www.oreilly.com/library/view/umbraco-users-guide/9780470560822/chap5-sec13.html | CC-MAIN-2022-40 | refinedweb | 271 | 57.57 |
After two base proxy classes are added, we need to make WorkerMessagingProxy class derive from these two base classes.
Created attachment 27375 [details]
Proposed Patch
This patch is to make WorkerMessagingProxy derive from two base proxy classes introduced in issue 23776. The next patch is to change to use different proxy pointers.
ChangeLog:
WorkerMessaingProxy sp
These header files seem to be missing from the patch:
#include "WorkerContextProxyBase.h"
#include "WorkerObjectProxyBase.h"
I see that have the header files in another patch. I'd recommend setting the "depends on" field above to make this more clear.
This looks good to me (just needs the typo fixed in the change log).
Created attachment 27415 [details]
Proposed Patch
Comment on attachment 27375 [details]
Proposed Patch
new patch obsoletes previous one.
It would be nice to fix the typo:
ChangeLog:
WorkerMessaingProxy sp
Created attachment 27442 [details]
Proposed Patch
All fixed. Thanks.
Comment on attachment 27415 [details]
Proposed Patch
New patch makes this one obsolete.
Looks good to me.
Comment on attachment 27442 [details]
Proposed Patch
r=me. I think that to validate this change, you need to also change the type of Worker::m_messagingProxy though.
// Only use these methods on the worker object thread.
- void terminate();
bool askedToTerminate() const { return m_askedToTerminate; }
There's only one method left here, so the comment needs to be adjusted.
Committed revision 40781. | https://bugs.webkit.org/show_bug.cgi?id=23777 | CC-MAIN-2019-43 | refinedweb | 224 | 58.48 |
Introduction to Tkinter Text characters, marks, embedded windows or images.
Syntax:
w = text (master, option,..)
where the master is the parent window, and option is the options list that can be used for the widget, and options are the pairs of key-value that have commas as a separator.
Tkinter Text Widget
The most commonly used list of options for the widget are:
- bg: This option represents the text widget’s background color, and this is the default value.
- bd: This represents the border width surrounding the text widget. Its default value is two pixels.
- cursor: When the mouse is placed on the text widget, a cursor appears, and that is this cursor.
- font: This represents the font of the text which is present in the text widget.
- fg: This represents the color of the text present in the text widget. The color can be changed for regions that are tagged. This is just a default option.
- height: This represents the text widget’s height in lines, and its measure is based on the font size.
- highlightcolor: This represents the focus highlight color when the focus is in the text widget.
- highlightthickness: This represents the focus highlight thickness. The default value for highlight thickness is one. The display of the focus light can be suppressed by setting highlight thickness as zero.
- relief: This provides the text widget’s three-dimensional appearance by option. The default value for relief is SUNKEN.
- selectbackground: This represents the background color to be used while displaying the text that is selected.
- width: This represents the width of the character present in the text widget, and its measure is based on the font size.
- xscrollcommand: The set() method of the horizontal scrollbar is set to xscroll command option, which makes the text widget to scroll in the horizontal direction.
- yscrollcommand: The set() method of the vertical scrollbar is set to xscroll command option, which makes the text widget to scroll in the vertical direction.
- exportselection: The text contained in the text widget is selected and is made a selection in the window manager by exporting it to the window manager.
- highlightbackground: The focus highlight’s color when the focus is not in the text widget.
- insertbackground: The insertion cursor’s color. The default value for this option is black.
- insertborderwidth: The three-D border’s size surrounding the insertion cursor. The default value for this option is zero.
- insertofftime: During the insertion cursor’s blink cycle, the number of milliseconds it is off is inserted off time. The blinking can be suppressed by setting this choice to zero. The default value for this option is three hundred.
- insertontime: During the insertion cursor’s blink cycle, the number of milliseconds is on insert on time. The blinking can be suppressed by setting this option to zero. The default value for this option is six hundred.
- insertwidth: This option represents the insertion cursor’s width. The default value for this option is two pixels.
- padx: Internally, padding is done to the left and right of the text area, and this option represents the size of this padding. One pixel is the default value for this option.
- pady: Internally, padding is done above and below of the text area, and this option represents the size of this padding. The default value for this option is one pixel.
- selectborderwidth: This option represents the border width around the text that is selected.
- spacing1: There is excess space that is vertical, which is assigned above each line of text, and this option represents the amount of that extra vertical space. If there is a line wrap, there is the addition of space before the first line only. The default value for this option is zero.
- spacing2: There is excess space that is vertical, which is assigned between displayed lines of text, and this option represents the amount of that extra vertical space. The default value for this option is zero.
- spacing3: There is excess space that is vertical, which is assigned below each line of text, and this option represents the amount of that extra vertical space. If there is a line wrap, there is an addition of space after the last line only. The default value for this option is zero.
- state: Keyboard and mouse events get a response from text widgets, and this response is available when the state is set to the value NORMAL. There is no response if the state value is set to DISABLED, and the contents cannot be modified programmatically.
- tabs: This option controls the position of the text based on the tab characters.
- wrap: If the lines to be displayed are too wide, then they are controlled by this option. If this option is set to WRAP, then the line is broken after the last word fits. If this option is set to CHAR, then the line is broken at any character.
Methods of Tkinter Text
There are several methods that can be implemented on text objects, they are:
- delete(startindex, [,endindex]): The indices in the range (startindex, [,endindex]) are deleted using this option. The specific character having the first index is deleted if the second argument is omitted.
- get(startindex, [,endindex]): The range of text or specific character is returned.
- index(index): Based on the given index, which is passed as an argument, the index’s absolute value is returned.
- insert(index, [,string]…): The strings passed as the second argument are inserted at the position specified by the index passed as the first argument.
- see(index): The text located at the position specified by the index argument is visible; this method returns true.
Text widgets support two helper structures. They are:
- marks: In each text, we use marks if we want to highlight positions between two characters.
- index(mark): The mark specifies the line, column that must be returned.
- mark_gravity(mark,[,gravity]): The gravity of the mark specified as the first argument is returned. The gravity for the mark specified as the first argument is set if the second argument is specified.
- mark_names(): All of the text widget’s marks is returned.
- mark_set(mark,index): The specified mark as the first argument is informed about the new position.
- mark_unset(mark): The specified mark as the first argument is removed from the text widget.
- Tags: The text regions are associated with tags given by tags, which makes the modification of the display settings of the text areas easier. Event callbacks can be bind to specific areas of text using tags.
- tag_add(tagname, startindex[,endindex]…): The start index’s position is tagged using this method or a range of positions defined by the start index and end index are tagged using this method.
- tag_config: The properties of the tag are configured using this option like justify, tabs, underline, etc.
- tag_delete(tagname): A given tag can be deleted and removed using this method.
- tag_remove(tagname,[startindex[.endindex]]..): A given tag is removed from the area where it is present without deletion of the definition of the actual tag after application of this method.
Example
Python program uses Tkinter text to display text using insert and tag methods and then search the text highlighted in red color when found.
Code:
from tkinter import * root1 = Tk( ) fram1 = Frame(root1) Label(fram1,text='Please enter the text to search:').pack(side=LEFT) edit1 = Entry(fram1) edit1.pack(side=LEFT, fill=BOTH, expand=1) edit1.focus_set( ) button = Button(fram1, text='Search') button.pack(side=RIGHT) fram1.pack(side=TOP) text1 = Text(root1) text1.insert('1.0', '''India is a beautiful nation ''') text1.pack(side=BOTTOM) def find( ): text1.tag_remove('found', '1.0', END) search1 = edit1.get( ) if search1: id = '1.0' while 1: id = text1.search(search1, id, nocase=1, stopindex=END) if not id: break lastid = '%s+%dc' % (id, len(search1)) text1.tag_add('found', id, lastid) id = lastid text1.tag_config('found', foreground='red') edit1.focus_set( ) button.config(command=find) root1.mainloop( )
Output:
Recommended Articles
This is a guide to Tkinter Text. Here we discuss a brief overview of Tkinter Text Widget, Methods, and its Examples along with its Code Implementation. You can also go through our other suggested articles to learn more – | https://www.educba.com/tkinter-text/?source=leftnav | CC-MAIN-2022-27 | refinedweb | 1,362 | 57.37 |
Using Vue as an Angular alternative for Ionic: The Directives
We keep going on with the Ionic Vue fever!
This time we will see how we can we use Directives in an Ionic Vue application.
You will need to install the Ionic Vue stack from this previous tutorial.
Directives allows us to modify the DOM. They are generally attached to an Element and change its properties.
In this tutorial, we will recreate Angular's ngIf and ngShow Directives.
You might have already seen this great example in the Angular documentation and we will do something similar with Vue.
Custom Show Directive
Let's start by attaching our CustomShow Directive in the index.html file:
<div id="app"> <div v- Custom Show </div> </div>
A Vue Directive follows the kebab case naming convention complemented by a "v-" prefix.
Here we have a Vue Directive attached to a <div> Element which receives the boolean value false.
At this point, you should receive an error saying that the <custom-show> Directive is not registered and that's normal.
We will now create the custom-show.ts file:
export default { bind: function(el, binding, vnode) { const display = binding.value ? "block" : "none"; el.style.display = display; } };
We will export a new object which has a bind field.
We can use here a function which has three arguments:
- el : The DOM element used (our <div>)
- binding: Some information on the data we passed to the Vue Directive (in our case the boolean value false)
- vnode: The Vue virtual node which contains information like the children, parent and much more information like:
Our aim here is displaying the <div> if the boolean value true is passed to this Directive.
For this example, we will retrieve the information from the binding's value property.
The ngShow or v-show only hide the DOM Element. We will do the same by modifying the style of the el argument, setting it to 'block' when it should be displayed and 'none' when it shouldn't.
That's it, we just created the equivalent of the Angular ngShow and Vue v-show Directives!
The last piece now, adding it to the Vue root instance in the main.ts file:
import Vue from "vue"; import CustomShow from "./app/custom-show"; var app = new Vue({ el: "#app", directives: { CustomShow } });
Ionic Vue Directives can be global too, just like this:
Vue.directive("CustomShow", { bind: function(el, binding, vnode) { const display = binding.value ? "block" : "none"; el.style.display = display; } });
I personally prefer local Directives, but in some cases a global Directive might be useful.
Let's see another way for properties propagation.
This time we will pass an object in the index.html file:
<div id="app"> <div v- Custom Show </div> </div>
We spice up the custom-show.ts file:
export default { bind: function(el, binding, vnode) { const display = binding.value.display ? "block" : "none"; const delay = binding.value.delay; setTimeout(() => (el.style.display = display), delay); } };
The binding's value property is now an object which has the display and delay properties.
This time, we will use the setTimeout method and make the DOM modification after the delay we received.
Here is the new result:
In Angular's terms, this is an Attribute Directives because:
Let's move to a Structural Directive with the Custom If example.
Custom If Directive
Just like before, we start in the index.html file:
<div id="app"> <div v-custom-if:4000 : Custom If </div> </div>
The properties are propagated differently.
The display value is passed as an Element attribute and the delay is passed as a Vue Directive argument.
We will also change the way we declare our Ionic Vue Directive in a new custom-if.ts file:
import { Directive } from "vue-ts-decorate"; @Directive({ name: "customIf", local: true }) export default class CustomIf { bind(el, binding, vnode) { const display = vnode.data.attrs.display; const delay = binding.arg; if (display === false) { setTimeout(() => vnode.elm.remove(), delay); } } }
In a previous Ionic Vue Component tutorial, we used the vue-class-component library.
That's an awesome library, however, it only focuses on the Component Decorator.
This time, we will use the vue-ts-decorate library which has more Decorators so don't forget to:
npm i vue-ts-decorate --save
Ok, back to our Directive.
The Directive Decorator only needs the name of the directive. I've set the local value to true in order to make the Ionic Vue Directive local.
Just like an Angular Directive, we have the traditional export default ...
The bind function is declared with its el, binding and vnode arguments.
The display value is located in the vnode.data.attrs.display property.
binding is used once again, however, instead of looking for the information in the value property, we head to the arg property.
Both of those values are stocked in a display and delay const.
We then check if we should display the <div> Element.
Just like before we use a setTimeout, but this time the process is different.
The Angular ngIf and Vue v-if don’t hide the content by modifying the Element’s style.
They destroy and recreate the DOM Element, that’s why they are called Structural Directives.
We will be simple here and only use the vnode’s elm property’s remove method to destroy our Element.
We don’t forget to add the Directive to the Ionic Vue root Instance in the main.ts file:
import Vue from "vue"; import CustomIfModule from "./app/custom-if"; const CustomIf = CustomIfModule["custom-if"]; var app = new Vue({ el: "#app", directives: { CustomIf } });
The vue-ts-decorate library has its own format, and returns the Vue Directive in the custom-if property (our Directive name) so we need to do some gymnastic here.
Here is the final result:
Conclusion
Vue has two types of Directives: Attribute (ex: v-show) and Structural (ex: v-if).
Angular has one more: the Components which are Directives with templates.
There are many ways to pass information to an Ionic Vue Directive, it can be through binding’s value, arg or vnode.
As soon as we use TypeScript libraries like vue-class-component or vue-ts-decorate, Vue and Angular Directives become very similar, however, they both have their own LifeCycle and Hooks where the work happens, but that’s for another tutorial. | https://www.javascripttuts.com/using-vue-as-an-angular-alternative-for-ionic-the-directives/ | CC-MAIN-2020-50 | refinedweb | 1,053 | 56.25 |
When I use eclipse to open an existing PyDev project, I find an error message like Unresolved import: smtplib. This error means the python3 built-in library smtplib can not be found and imported in python source code now. This error is because the python interpreter is not configured correctly, now I will tell you how to fix it.
1. How To Change Python Interpreter Correctly In Eclipse PyDev Project.
- Open eclipse which has installed the PyDev plugin ( How To Run Python In Eclipse With PyDev ).
- Click Eclipse —> Preferences… menu item at eclipse left top menu in macOS. If you use Windows OS, you should click Window —> Preferences menu item.
- Click PyDev —> Interpreters —> Python Interpreter menu item on the left panel. If you can not see a python interpreter configured on the right panel, then just add it. If there has one python interpreter added, maybe this interpreter is not correct, you should change it.
- Click Choose from list ( for macOS) or New —> Choose from list ( for Windows ) button on the right panel, then select the python interpreter which you need in the popup dialog, generally we choose the newest python version.
- Click the OK button, then select all the libraries that will be added in the PYTHONPATH system variable.
- Click the OK button again, now you can see the python interpreter has been added. Select the python interpreter and click the Libraries tab at the right panel bottom, you can see the libraries list that you selected.
- Now click Apply and Close button to close the python interpreter configuration panel to apply your changes.
2. How To Make Python Interpreter Change Take Effect.
- To make the python interpreter change take effect, you need to follow the below steps, otherwise, the Unresolved Import error still exists.
- Right-click the PyDev project, then click Delete menu item in the popup menu list.
- Click the OK button in the popup dialog to delete the project from eclipse. Please do not check Delete project contents on disk checkbox.
- Click eclipse menu item File —> Open Projects from File Systems…
- Click the Directory… button and browse the existing PyDev project saved directory.
- Click the Finish button to complete the PyDev project import. Now you can see the error has disappeared.
- If you want to add third party python libraries into the eclipse PyDev project, you can read the article How To Add Python Module Library In Eclipse PyDev
3. Question & Answer.
3.1 Unresolved import issue when import a python module from .py file.
- I have a python file /user/jerry/work/util/MySQLHelper.py, and this python file contains some utility functions which can operate the MySQL database. I want to invoke those functions in another python project, so I use the python sys module sys.path.append function to add the MySQLHelper module saved directory path to the new python project like below.
import sys sys.path.append("/user/jerry/work/util")
When I import the MySQLHelper module with the command import MySQLHelper in the eclipse Pydev project, it throws the error Unresolved Import Issues. My OS is Ubuntu, I think this error is not related to the OS. Can anyone tell me how to fix this error? Thanks a lot.
- If you use eclipse and Pydev, you can select your Pydev project in the eclipse left side Project Explorer panel, then click the Project —> Properties menu item on the top menu toolbar.
- Click the PyDev – PYTHONPATH item on the project properties window left side.
- Click the External Libraries tab on the window right side.
- Click the Add source folder button at the bottom of the External Libraries tab and select your python file saved folder ( in your case the folder should be /user/jerry/work/util).
- But first, you should add a __init__.py file in the source folder, then you can import the MySQLHelper module in your eclipse Pydev project. | https://www.dev2qa.com/how-to-fix-unresolved-import-issues-in-eclipse-pydev-project/ | CC-MAIN-2022-27 | refinedweb | 649 | 72.46 |
i was wondering if anyone could help me out with how to get a winsock control onto a VB form? i have visual studio .NET and windows XP. thanks. (its for tcp not udp just in case that matters)
Stay away from my friends, they\'re smooth operators lookin for a way in.
I gave this some thought, but sorry I can't help. Just in case you can't get any answers here, there is a alt.winsock on usenet
I hate this place, nothing works here, I\'ve been here for 7 years, the medication does\'nt work...
well ok. i finally gave up and called M$ myself and the guy told me that i had to code my own windows socket? does anybody know how to do this or should i call back also the code for the server and client are different could anyone help me put it together? thanks
I hope this source code helps.
ok, do you know how to add components in Visual Basic? you should be able to right-click on the tools-bar then click on "Components"... You should then see a list of registered .OCX files on your computer. Look for the "Microsoft Winsock Control" or something like it, then enable and add the component to your form, then your winsock code should work- but make sure the new winsock component is the same name as in your code...
yeah, I\'m gonna need that by friday...
thats for 6.0 im using .NET. thanks for the help though
sorry, i haven't play'd with the .net yet...
HOW TO: Programmatically Add Controls to Windows Forms at Run Time by Using Visual Basic .NET
The information in this article applies to:
* Microsoft Visual Basic .NET (2002)
* Microsoft .NET Framework SDK 1.0
This article was previously published under Q308433
For a Microsoft Visual C# .NET version of this article, see 319266.
For a Microsoft Visual Basic 6.0 version of this article, see 190670.
IN THIS TASK
* SUMMARY
*
o Requirements
o Create a Windows Forms Application
o Customize Form and Control Properties
o Add Controls to the Form
o Verify that it Works
* REFERENCES
SUMMARY
This step-by-step article demonstrates how to programmatically add and configure a few, commonly used controls within a Windows application. Event handling has been omitted from the sample code.
The Microsoft .NET Framework Software Development Kit (SDK) provides many visual controls that you can use to build a Windows Forms application. You can add and configure controls at design time in Microsoft Visual Studio .NET, or you can add and configure controls programmatically at run time.
back to the top
Requirements
This article assumes that you are familiar with the following topics:
* Visual Basic syntax
* Visual Studio .NET environment
* Purpose of common Visual Basic controls
back to the top
Create a Windows Forms Application
1. Start Visual Studio .NET, and create a new Visual Basic Windows Application project named WinControls. Form1 is added to the project by default.
2. Double-click Form1 to create and view the Form1_Load event procedure.
3. In the first line of Form1.vb, add a reference to the color namespace before the definition of the Form1 class as follows:
Imports System.Drawing.Color
4. Add private instance variables to the Form1 class to work with common Windows controls. The Form1 class starts as follows:
Imports System.Drawing.Color
Public Class Form1
Inherits System.Windows.Forms.Form
'Controls
Private txtBox As New TextBox()
Private btnAdd As New Button()
Private lstBox As New ListBox()
Private chkBox As New CheckBox()
Private lblCount As New Label()
back to the top
Customize Form and Control Properties
Tip: You can use the With command to perform a series of statements on a specified object without requalifying the object's name.
1. Locate to the Form1_Load event procedure, and add the following code to the procedure to customize the appearance of the Form control:
'Set up the form.
With Me
.MaximizeBox = False
.MinimizeBox = False
.BackColor = White
.ForeColor = Black
.Size = New System.Drawing.Size(155, 265)
.Text = "Run-time Controls"
.FormBorderStyle = FormBorderStyle.FixedDialog
.StartPosition = FormStartPosition.CenterScreen
End With
2. Add the following code to the Form1_Load event procedure to customize the appearance of the Button control:
'Format controls. Note: Controls inherit color from parent form.
With Me.btnAdd
.BackColor = Gray
.Text = "Add"
.Location = New System.Drawing.Point(90, 25)
.Size() = New System.Drawing.Size(50, 25)
End With
3. Add the following code to customize the appearance of the TextBox control:
With Me.txtBox
.Text = "Text"
.Location = New System.Drawing.Point(10, 25)
.Size() = New System.Drawing.Size(70, 20)
End With
4. Add the following code to customize the appearance of the ListBox control:
With Me.lstBox
.Items.Add("One")
.Items.Add("Two")
.Items.Add("Three")
.Items.Add("Four")
.Sorted = True
.Location = New System.Drawing.Point(10, 55)
.Size() = New System.Drawing.Size(130, 95)
End With
5. Add the following code to customize the appearance of the CheckBox control:
With Me.chkBox
.Text = "Disable"
.Location = New System.Drawing.Point(15, 190)
.Size() = New System.Drawing.Size(110, 30)
End With
6. Add the following code to customize the appearance of the Label control:
With Me.lblCount
.Text = lstBox.Items.Count & " items"
.Location = New System.Drawing.Point(55, 160)
.Size() = New System.Drawing.Size(65, 15)
End With
back to the top
Add Controls to the Form
1. Add the following code to add each object to the Controls array of the form:
'Add controls to the form.
With Me.Controls
.Add(btnAdd)
.Add(txtBox)
.Add(lstBox)
.Add(chkBox)
.Add(lblCount)
End With
2. Save the project.
back to the top
Verify that it Works
To verify that the sample works, click Start on the Debug menu. Note that although the form and the controls appear, they currently do nothing because you have not written any event handlers.
back to the top
REFERENCES
For more information about using controls programmatically, see the Windows Applications topic in the Visual Basic section of the Visual Studio .NET Online Help documentation.
back to the top
Last Reviewed: 1/30/2003
Keywords: kbHOWTOmaster KB308433 kbAudDeveloper
Forum Rules | http://www.antionline.com/showthread.php?245971-winsock-question | CC-MAIN-2016-50 | refinedweb | 1,028 | 59.09 |
None
I found this blog on how to do it.
Here is the basic idea..;
using Microsoft.SharePoint.WebControls;
using System.Drawing;
public partial class _Default : System.Web.UI.Page
{
private PeopleEditor objEditor;
protected void Page_Load(object sender, EventArgs e)
{
objEditor = new PeopleEditor();
objEditor.AutoPostBack = true;
objEditor.PlaceButtonsUnderEntityEditor = true;
objEditor.ID = "pplEditor";
objEditor.AllowEmpty = false;
objEditor.SelectionSet = "User,SecGroup,SPGroup";
objEditor.MultiSelect = false;
Panel1.Controls.Add(objEditor);
}
private string GetAccountName()
{
string strAccountName = String.Empty;
for (int i = 0; i < objEditor.ResolvedEntities.Count; i++)
{
PickerEntity objEntity = (PickerEntity)objEditor.ResolvedEntities[i];
SPUserInfo objInfo = new SPUserInfo();
objInfo.LoginName = objEntity.Key;
strAccountName = objInfo.LoginName;
// to return a sharepoint people group formatted string use...
// strAccountName = objEntity.EntityData["SPUserID"].ToString() + ";#" + objEntity.DisplayText.ToString();
}
return strAccountName;
}
protected void Button1_Click(object sender, EventArgs e)
{
string strAccountName = GetAccountName();
Label1.Text = "account name: " + strAccountName;
}
}
Or via HTML:
For this code to work you must publish it to a SharePoint website. You cannot run it from the Visual Studio debugger.
Here is the MSDN class reference.
olyvia
Monday, July 07, 2008 5:33 AM
I have a problem with page postbacks and the PeopleEditor. I have created the following:
- a standard web user control with a multiview with several views in it. Some linkbuttons are used for navigating between the views
- one of the views has the People Editor displayed. I build this up via code
- this user control is hosted within the Smart Part (v1.3) in a publishing MOSS site
Problem: if a user adds a group or user in the PeopleEditor, the value will not be retained after 2 postbacks. So, if users start to navigate between the views, they PeopleEditor will be empty after two postbacks. I tried different combinations with the Viewstate and autopostback attributes, but no joy.
Can you please suggest a solution?
Steve
Tuesday, July 22, 2008 10:33 AM
Hi,
I'm having trouble with the above piece of code. I'm actually inserting the value provided in the people picker control to a list field which is of type "People or Group".
I'm getting error : Invalid data has been used to update the list item. The field you are trying to update may be read only.
Regards,
Steve
srikanth
Monday, August 25, 2008 5:24 PM
Hi Kevin,
I'm using pplEditor in one of my form which is meant for editing a list item retrieved from MOSS list. I want the pplEditor to pre-populate with value(user name).
I'm having tough time in getting this work. Can you please let me know how to acheive the same.
regards,
srikanth
Anoosh Rahman
Monday, December 29, 2008 8:49 AM
Thanks a lot ...
Nagendra
Monday, February 02, 2009 12:13 AM
Hi Srikanth,
You can bind using pplEditor.Commaseparatedvalues=value;
pplEditor.Validate();
This will showup data in pplEditor.
Sakin Shetty
Thursday, February 05, 2009 9:53 AM
Do any one know how to insert in to the People or group field through code behind
dharyl
Thursday, February 12, 2009 5:37 PM
Hi,
I got problem with my web mail form. I used peoplepicker for To: CC: and Bcc: fields. i set multivalue to TRUE. In IE7 the fields were fine, but in IE6 the the peoplepicker textboxes expanded its size. it became 3x the height of the ordinary textbox. Do you have any solution for this?
Thanks,
Dharyl
hadi mehrpooya
Sunday, February 22, 2009 9:17 AM
Hey Thanks a lot Kevin it took me about three hours to find your web site ;)this post really helped me cheers
Adil
Tuesday, March 24, 2009 8:13 AM12. all patched with search server express 20083. search and query bieng served by server3also not worthy here is that due to budget constraint and stuff, we dint had load balancing so we are doing context switchin with4 app on server1 and 4 app on server2..each of this app is turned off in IIS in other server and each app has own IP and DNS.so the NIC of each server has 4 IPsbackup restore happened like a charm. the expected context swithcing happens like a charm. but now people picker goes in infinite "query bieng processed" and never fetches anythingthis problem has become a red flag as this the 4th day and i could find nothing that worked.
Kevin
Tuesday, March 24, 2009 9:17 AM
Adil... I have never worked with such a scenario. Sorry I can't help at this time. If I come across it, I'll post it.
sharepoint4u.wordpress.com
Tuesday, April 21, 2009 9:19 AM
Pingback from sharepoint4u.wordpress.com
How to update a people and group field via code? « SNR Sharepoint Blog | http://mysharepointblog.com/post/2007/07/How-to-use-the-PeoplePicker-in-SharePoint.aspx | crawl-002 | refinedweb | 779 | 59.4 |
The world of KIO metadata - checking the HTTP response from a server
Recently, had one problem: how could I check the HTTP response?
I knew already that the various ioslaves can store metadata, consisting of key-value pairs which are specific on the slave used. Normally you can get the whole map by accessing the metaData function of the job you have used, in the slot connected from the result signal. For some reason, however, in PyKDE4 calling metaData() triggers an assert in SIP, which ends in a crash (at least in my application; I stil need to debug further). KIO jobs have also the queryMetaData function, which returns the value of the key you have queried. Unfortunately, there was no way I could find the name. DESIGN.metadata (link is for the branch version). After checking with webSVN, that was exactly the thing I was looking for! It lists all the keys for the metadata, indicating also to which ioslave they begin. After that, the solution was easy.
Of course I’m not leaving you hanging there and now I’ll show you how, in PyKDE4, you can quickly check for the server response:
[python]
from PyKDE4.kio import KIO
from PyQt4.QtCore import SIGNAL
[…]
class my_widget(QWidget):
[…])
[/python]
This snippet does a few things. Firstly, it gets the specified URL, using KIO.get (KIO.stat doesn’t set the required metadata). Notice that the call is not wrapped in the new-style PyQt API because result (KJob *) isn’t wrapped like that (there’s a bug open for that). In any case, the signal passes to the connecting slot (slot_result) where we first check if there’s an error (perhaps the address didn’t exist?) and then we use queryMetaData(“responsecode”) to get the actual response code.
If you want to do error checking basing on the result, bear in mind that KIO operates asynchronously, so you should use a signal to tell your application that the result is what it expected or not.
I wonder if this should be documented in Techbase…
Luca Beltrame KDE · LINUX
KDE Linux python | https://www.dennogumi.org/2010/02/the-world-of-kio-metadata-checking-the-http-response-from-a-server/ | CC-MAIN-2016-50 | refinedweb | 352 | 71.34 |
Jason Haley is one of the most active amongst the authors of Add-ins for .NET Reflector. He has provided four Add-ins; AssemblyCollection.Sort, Enums, CodeShortcut and OpenZip, and he’s currently at work on a .NET disassembler codenamed ‘Debris’. However, it’s for his help in explaining the add-in architecture, Presentations at Code Camps, providing resources on his website, and for his VSI Add-in Starter Kit for Visual Studio (the Reflector Add-in Starter Kit (C#)) that he is probably best known to .NET Reflector users. Together with his add-ins, which are provided in source, Jason’s site provides an excellent place to start writing an Add-in (Peli’s ‘Reflector Add-In Tutorial’, and other useful resources, are linked to). Jason also maintains pages on the related subjects of Code Obfuscation and ‘Reversing’, with links to a number of important resources.
Jason is a Senior Software engineer, MCSD .NET, who has been using NET since 2001. As well as his work on disassemblers, and Reflector, he also maintains one of the best Link Blogs around, based on a cornucopia of RSS feeds.
We caught up with Jason after the Seattle Code Camp and asked him a few questions about NET Reflector and his work on Add-ins…
How did you first come across Reflector? Why did you first find it useful?
When I started learning .NET I was determined to figure out how it all worked, and started using ILDasm sometime during the .NET beta 2 period. I honestly don’t remember where I first heard about Reflector, but when I started using it, I pretty much stopped using ILDasm. The way the Lutz hyperlinked all the items of an assembly to let you navigate the code like a web site was the killer app for me.
You’ve become better known for explaining how to extend Reflector than for the Addin extensions you’ve written. Is this because your addins are primarily designed as examples to illustrate how to write add-ins?
The addins I’ve written are mostly ‘fixes’ or small extensions to Reflector and not generally useful to most people – mainly due to their inspiration coming from a need I had at the time. The inspiration behind the addin articles was to show people that it really isn’t that hard to do, in the hopes that more people would write addins. I’m planning to write a few more articles when I finish up my Debris addin (on implementing languages – something I haven’t done yet).
Of all the Add-ins, which do you see as being the ten most useful for someone, a .NET Developer, who is new to .NET Reflector?
- TestDriven.net (to make Reflector easier to get to from VS.Net)
- Any browser or Loader fits their current needs, for example: SQL2005Browser and SilverlightLoader
- CodeMetrics for getting a better sense of the state of your code
- DependencyStructureMatrix is neat to see what you code depends on
- Doubler to help get started on unit tests
- CodeSearch for a nice searching option
- Xmi4DotNet is nice to diagram assemblies in UML tools
- PowerShellLanguage for people getting familiar with PowerShell
- ReflectionEmitLanguage is good for people looking into the System.Reflection.Emit namespace
Of all the Add-ins, excluding yours, which do you see as being the five most ingenious?
I think I would have to put all of Peli’s addins at the top of that list: Graph, CodeMetrics, Review, Pex and ReflectionEmitLanguage.
How would you see an Add-in manager modeled after Firefox encouraging software developers to add new functionality?
I like the way FireFox gives you the ability to easily connect to their addin site and locate new addins. I think for Reflector it would allow users to find updated addins faster and give a central location to search for additional addins, whether on codeplex or not. In general, I figure the more visible the addins are to the end user the more likely that end user will use them and later on decide to write one themselves … though that’s just my theory.
Should Reflector have a conventional installer (with maybe the option to install it into Visual Studio) , or is it better the way it is?
No, I don’t see a lot of value added with an installer – actually it would probably cause more problems than it would solve.
Is Reflector now complete, in the sense that all necessary extensions can be made via the Addin architecture?
No, there are several areas that aren’t available to extend or at least I haven’t found a way to extend yet. The biggest area seems to be the Analyzer.
What sort of extensions could be built if the Analyser had a defined, extendible, architecture?
Hmm good question, because if there really was one that I could think of, it could be built on the existing addin architecture though it would require more work.
You are still very interested in the whole subject of NET disassemblers. Is there still a place for ILASM and ILDASM for the average .NET Developer, or can .NET Reflector do it all?
I would say for the ‘average’ developer these days, Reflector can do it all. These days it seems there is only a small percentage of developers that have ever used ildasm and even a smaller percentage that have used ilasm.
The Debris add-in for Reflector is fascinating because it will allow you to browse a lot of the metadata that you can’t get at with .NET Reflector. What do you see as being its main use?
Debris is aimed at a pretty small audience (seems to be a common theme with most of my addins) so I doubt there will be too many people who will use it. It is really geared towards the ILDasm user who also uses Reflector to view all parts of an assembly .
Is Obfuscation a necessary evil for ISVs to protect their software from corporate giants and hackers or it is just 10 minutes per release build wasted?
Unfortunately I think it is a necessary evil, without it the bar for reverse engineering is way too low … especially for a small ISV who really depends on some time in the market before their competitors catch up. However there are some design tradeoffs that have to be made when the final product is obfuscated, which also has to be taken into account.
Is there anything we can do to get more people involved in writing add-ins?
Maybe borrow something from the Google or Mono camp on how they get people to develop Addins or code for them. Or maybe some sort of a contest, but I’m not sure how that would work.
What, besides Debris, would be a ‘killer Add-in’ for someone to write?
A managed assembly (and file) diff utility that is as usable and useful as Scooter Software’s Beyond Compare – that would be sweet! | https://www.red-gate.com/simple-talk/dotnet/.net-tools/encouraging-.net-reflector-add-ins/ | CC-MAIN-2018-05 | refinedweb | 1,165 | 60.24 |
#include <Puma/Builder.h>
Syntax tree builder base class. Implements the basic infrastructure for building CTree based syntax trees.
Tree builders are used in the syntax analysis to create the nodes of the syntax tree according to the accepted grammar (see class Syntax). A syntax tree shall be destroyed using the tree builder that has created it by calling its method Builder::destroy(CTree*) with the root node of the syntax tree as its argument.
The builder is organized as a multi-level stack. If a grammar rule is parsed then a new stack level is created. The sub-trees of the syntax tree representing the parsed grammer rule are pushed on this level of the stack. If the grammar rule is parsed successfully then these sub-trees are used to build the syntax tree representing the parsed grammar rule (and thus the corresponding source code). The current stack level is then discarded and the created syntax tree is pushed on the stack of the previous level (which is now the top level of the stack). If the grammar rule could not be parsed successfully then the current stack level is discarded and all the sub-trees pushed on it are destroyed.
Constructor.
Add all nodes of the given container to the given list node.
Destroy the top tree node of the builder stack.
Reimplemented from Puma::PtrStack< CTree >.
Reimplemented in Puma::CCBuilder.
Destroy the given syntax tree recursively.
Destroy the given syntax tree node.
Child nodes are not destroyed.
Get the collected errors.
Print the collected error messages on the given error output stream.
Discard the saved state.
Get the n-th node from the builder stack.
Get the current token count.
Add all nodes on the builder stack to the given list node.
Get the current number of nodes on the builder stack.
Restore the saved state.
Save the current state.
Reset the token counter.
The error collector object. | http://puma.aspectc.org/manual/html/classPuma_1_1Builder.html | CC-MAIN-2020-16 | refinedweb | 321 | 77.13 |
It is apparent that support for networking is inherent to the Linux kernel. One could also see Linux as one of the most 'safest and secure' Networking Operating system presently available in the market. Internally Linux kernel implements the TCP/IP protocol stack . It is possible to divide the networking code into parts - one which implements the actual protocols (the /usr/linux/net/ipv4 directory) and the other which implements device driver various network hardware.(/usr/src/linux/drivers/net ).
The kernel code for TCP/IP is written in such a way that it is very simple to "slide in" drivers for many kind of real (or virtual) communication channels without bothering too much about the functioning of the network and transport layer code. It just requires a module in a standard manner, connecting the card hardware to actual software interface. The hardware part consists of an Ethernet card in case of LAN or a modem in internet.
Now a days a lot of Networking cards are available in the market, one of them is RTL8139 PCI ethernet card. RTL8139 cards are plug and play kind of devices, connected to the cpu through PCI bus scheme. PCI stands for Peripheral Component Interconnect, it's a complete set of specifications defining how different parts of computer interact with others. PCI architecture was designed as a replacement to earlier ISA standards because of its promising features like speed of data transfer, independent nature, simplification in adding and removing a device etc.
Another important way is by manually detecting and configuring a network card, for which ifconfig command is used. A typical output of ifconfig command without any arguments is shown below (it could vary system to system depending upon the configuration).
eth0 Link encap:Ethernet HWaddr 00:80:48:12:FE:B2 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:10 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:0 (0.0 b) TX bytes:600 (600.0 b) Interrupt:11 Base address:0x7000)
It shows that I have a running interface for eth0 and lo, which corresponds to ethernet card and loopback interface respectively. The loopback is completely software based and used as an dummy interface to the network. The eth0 is the default name given to real hardware interface for realtek 8139 network card. The listing also tells about its hardware (HWaddr), internet (inet addr), Broadcast(Bcast), Mask (Mask) addresses with some other statistical information on data transfer that include Maximum data unit that can be transferred (MTU), no. of received (RX) packets, no. of transmitted packets (TX), collisions etc. The ifconfig command can also be used to bring up the interface if it is not detected at boot time. This could also be associated with an IP address as given below.
ifconfig eht0 192.9.200.1 upThis brings up the ethernet card to listen to an IP address 192.9.200.1, a class-C client. At the same time ifoconfig can also be used to bring down an activated interface. This is as given below.
ifconfig eth0 downThe same is applicable to loopback interface. That is these are quite possible.
ifconfig lo 192.9.200.1 up ifconfig lo down'ifconfig' supports plenty of options that may be discovered through reference to man pages.
Another command that needs reference is netstat, It prints out network connections, routing tables, interface statistics, masquerade connections, and multicast memberships. An exhaustive list of options may be found in man pages.
Kernel as usual provides concise but efficient data structures and functions to perform elegant programming, even understandable to a moderate programmer, and the interface provided is completely independent of higher protocol suit. For an quick overview of the kernel data structures, functions, the interactions between driver and upper layer of protocol stack, we first attempt to develop a hardware independent driver. Once we get a big picture we can dig into the real platform.
Whenever a module is loaded into kernel memory, it requests the resources needed for its functioning like I/O ports, IRQ etc. Similarly when a network driver registers itself; it inserts a data structure for each newly detected interface into a global list of network devices.
Each interface is defined by a struct net_device item. The declaration of device rtl8139 could done as follows
struct net_device rtl8139 = {init: rtl8139_init};
The struct net_device structure is defined in include file linux/net_device.h .The code above initializes only a single field 'init' that carries the initialization functions. Whenever we register a device the kernel calls this init function, which initializes the hardware and fills up struct net_device item. The struct net_device is huge and handles all the functions related to operations of the hardware. Let us look upon some revelent ones.
name : The first field that needs explanation is the 'name' field, which holds the name of the interface (the string identifying the interface). Obviously it is the string "rtl8139" in our case.
int (*open) (struct net_device *dev) : This method opens the interface whenever ifconfig activates it. The open method should register any system resource it needs.
int (*stop) (struct net_device *dev) : This method closes or stops the interface (like when brought down by ifconfig).
int (*hard_start_xmit) (struct sk_buff *skb, struct net_device *dev) : This method initiates the transmission through the device 'dev'. The data is contained in the socket buffer structure skb. The structure skb is defined later.
struct net_device * (*get_status) (struct net_device *dev): Whenever a application needs to get statistics for the interface, this method is called. This happens, for example, when ifconfig or netstat -i is run.
void *priv :The driver writer owns this pointer and can use it at will. The utility of this member will be persuaded at a later stage. There exist a lot more methods to be explained but before that let us look at some working code demonstration of a dummy driver built upon the discussion above. This code would make the interactions between these elements crystal clear.
Code Listing 1
#define MODULE #define __KERNEL__ #include < linux/module.h > #include < linux/config.h > #include < linux/netdevice.h > int rtl8139_open (struct net_device *dev) { printk("rtl8139_open called\n"); netif_start_queue (dev); return 0; } int rtl8139_release (struct net_device *dev) { printk ("rtl8139_release called\n"); netif_stop_queue(dev); return 0; } static int rtl8139_xmit (struct sk_buff *skb, struct net_device *dev) { printk ("dummy xmit function called....\n"); dev_kfree_skb(skb); return 0; } int rtl8139_init (struct net_device *dev) { dev->open = rtl8139_open; dev->stop = rtl8139_release; dev->hard_start_xmit = rtl8139_xmit; printk ("8139 device initialized\n"); return 0; } struct net_device rtl8139 = {init: rtl8139_init}; int rtl8139_init_module (void) { int result; strcpy (rtl8139.name, "rtl8139"); if ((result = register_netdev (&rtl8139))) { printk ("rtl8139: Error %d initializing card rtl8139 card",result); return result; } return 0; } void rtl8139_cleanup (void) { printk ("<0> Cleaning Up the Module\n"); unregister_netdev (&rtl8139); return; } module_init (rtl8139_init_module); module_exit (rtl8139_cleanup);This typical module defines its entry point at rtl8139_init_module function. The method defines a net_device, names it to be "rtl8139" and register this device into kernel. Another important function rtl8139_init inserts the dummy functions rtl8139_open, rtl8139_stop, rtl8139_xmit to net_device structure. Although dummy functions, they perform a little task, whenever the rtl8139 interface is activated. When the rtl8139_open is called - then this routine announces the readiness of the driver to accept data by calling netif_start_queue. Similarly it gets stopped by calling netif_stop_queue.
Let us compile the above program and play with it. A command line invocation of 'cc' like below is sufficient to compile our file rtl8139.c
[root@localhost modules]# cc -I/usr/src/linux-2.4/include/ -Wall -c rtl8139.c
Let us check our dummy network driver. The following output was obtained on my system. We can use lsmod for checking the existing loaded modules. A output of lsmod is also shown.
(NB: You should be a super user in order to insert or delete a module.)
[root@localhost modules]# insmod rtl8139.o
Warning: loading test.o will taint the kernel: no license
See for information about tainted modules
Module test loaded, with warnings
[root@localhost modules]# lsmod
Module Size Used by Tainted: P
rtl8139 2336 0 (unused)
mousedev 5492 1 (autoclean)
input 5856 0 (autoclean) [mousedev]
i810 67300 6
agpgart 47776 7 (autoclean)
autofs 13268 0 (autoclean) (unused)
[root@localhost modules]# ifconfig rtl8139 192.9.200.1 up
[root@localhost modules]# ifconfig)
rtl8139 Link encap:AMPR NET/ROM HWaddr
inet addr:192.9.200.1 Mask:255.255.255.0
UP RUNNING MTU:0 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:600 (600.0 b)
Now You have been acquainted with writing a dummy driver, Let us move on to
a real driver interface for rtl8139.
Though Network interface has been built up, but still it is not possible for us to probe and initialize the card. This is only possible until we check for a PCI interface and a PCI device available. Thus it becomes necessary that we have a close look upon the PCI and PCI functions available.
As I have described earlier that the PCI hardware is a complete protocol that determines the way each components interaction with the other. Each PCI device is identified by a bus number, a device number and a function number. The PCI specification permits a system to hold upon 256 buses, with each buses having a capacity to hold 32 multiboard devices.
The PC firmware initializes PCI hardware at system boot, mapping each devices I/O region to a different address, which is accessible from PCI configuration space, which consist of 256 bytes for each device. Three of the PCI registers identify a device: vendorID, deviceID, class. Sometimes Subsystem vendorID and Subsystem deviceID are also used. Let us see them in detail.
A complete list of PCI devices on ones linux box could be seen through command lspci.
Based on the above information we can perform the detection of the rtl8139 could done in the rtl8139_init function itself, a modified version will look like
Code Listing 2
#include < linux/pci.h > static int rtl8139_probe (struct net_device *dev, struct pci_dev *pdev) { int ret; unsigned char pci_rev; if (! pci_present ()) { printk ("No pci device present\n"); return -ENODEV; } else printk ("<0> pci device were found\n"); pdev = pci_find_device (PCI_VENDOR_ID_REALTEK, PCI_DEVICE_ID_REALTEK_8139, pdev); if (pdev) printk ("probed for rtl 8139 \n"); else printk ("Rtl8193 card not present\n"); pci_read_config_byte (pdev, PCI_REVISION_ID, &pci_rev); if (ret = pci_enable_device (pdev)) { printk ("Error enabling the device\n"); return ret; } if (pdev->irq < 2) { printk ("Invalid irq number\n"); ret = -EIO; } else { printk ("Irq Obtained is %d",pdev->irq); dev->irq = pdev->irq; } return 0; } int rtl8139_init (struct net_device *dev) { int ret; struct pci_dev *pdev = NULL; if ((ret = rtl8139_probe (dev, pdev)) != 0) return ret; dev->open = rtl8139_open; dev->stop = rtl8139_release; dev->hard_start_xmit = rtl8139_xmit; printk ("My device initialized\n"); return 0; }
As you can see a probe funtion is called through rtl8139_init function. A detailed analysis of the probe functions shows that it has been passed pointers of kind struct net_device and struct pci_dev. The struct pci_dev holds the pci interface and other holds the network interface respectively, which has been mentioned earlier.
The function pci_present checks for a valid pci support available. It returns a value '0' on Success. Thereafter a probe of RTL8139 is initiated through the pci_find_device function. It accepts the vendor_ID, device_ID and the 'pdev' structure as argument. On an error-free return i.e. when RTL8139 is present, it sends the pdev structure filled. The constants PCI_VENDOR_ID_REALTEK, PCI_DEVICE_ID_REALTEK_8139 defines the vendorID and device_ID of the realtek card. These are defined in linux/pci.h.
pci_read_config_byte/word/dword are functions read byte/word/dword memory locations from the configuration space respectively. A call to pci_enable function to enable pci device for rtl8139, which also helps in registering its interrupt number to the interface. Hence if everything goes safe and error-free, your rtl_8139 has been detected and assigned an interrupt number.
In the next section we would see how to detect the hardware address of rtl8139 and start communication.
Author has just completed B.Tech from Govt. Engg. College Thrissur. | http://www.tldp.org/LDP/LG/issue93/bhaskaran.html | CC-MAIN-2016-26 | refinedweb | 2,033 | 55.03 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.