Wikiversity:Vision/2009
Educational goals[edit]
Pedagogy[edit]
Action required. Cormaggio has suggested a number of related initiatives (e.g. Developing Wikiversity through action research) in the area of sharpening Wikiversity's concept of learning. --McCormack 07:36, 22 March 2008 (UTC)
- Workspaces for this include: Wikiversity learning model (and related reading group and discussion group), and Building successful learning communities on Wikiversity.
PLEs[edit]
- Develop examples of Userpages as Personal learning environment (PLEs) -- Jtneill - Talk 11:27, 22 March 2008 (UTC)
- See also: Making Wikiversity a personal learning space (and my PLE :-)) ----Erkan Yilmaz uses the Wikiversity:Chat (try) 12:39, 22 March 2008 (UTC)
Ideas welcome! --McCormack 07:20, 1 April 2008 (UTC)
Educational levels[edit]
- Creating more educational content for younger students, and developing a structure and community centered around said content, see Pre-tertiary portal for ongoing discussion. --Luai lashire 15:42, 22 March 2008 (UTC)
Working on it! I'm creating a portal system for this right now :-) --McCormack 15:43, 22 March 2008 (UTC)
- Excellent. :) --Luai lashire 14:21, 23 March 2008 (UTC)
Identifying quality content[edit]
- A great problem with "Wikiversity 2007" was that quality content was so hard to find that it appeared to be non-existent. As one bureaucrat phrased it in early 2008, the actual practice of content creation on Wikiversity was "to create a page and forget about it" - or more to the point, "create a stub or less-than-half-developed page and forget about it". During Spring 2008, there have been ongoing efforts to identify quality content and make this more accessible to Wikiversity participants, so that a feeling emerges that Wikiversity is doing something worthwhile. Usually, valuable content has been "rediscovered" during the process of other reorganisational projects. At the current point in time, several dozen projects accounting for perhaps 25% of Wikiversity's total content have been tracked down and identified as valuable content in this way. This should greatly increase the community's confidence in content creation. --McCormack 07:19, 29 June 2008 (UTC)
- The above comment seems to ignore the reality of how wikis function and grow. Many stubs are requests for further content development. The attitude that stubby and developing pages are not something worthwhile is toxic and not the wiki way. Wikiversity welcomes all good faith contributions....well, some of us do. --JWSchmidt 04:38, 1 September 2008 (UTC)
- Wikiversity:Colloquium/archives/May 2008#Minimal requirements, ----Erkan Yilmaz uses the Wikiversity:Chat (try) 20:33, 1 September 2008 (UTC)
- I find McCormack's comment helpful—I'm thinking about how to focus on productive content in the projects I'm involved with. The Jade Knight 09:40, 3 September 2008 (UTC)
- I think the more that learning resources interlink, the more unified Wikiversity will seem. That is one of the great strengths of Wikipedia: its articles not only talk about their own subject, but also easily direct readers to related material. Something similar to link surfing. So if a learning resource builds on another skill, hopefully you can link to another learning resource on Wikiversity that teaches that skill. --Devourer09 02:44, 5 September 2008 (UTC)
- I think there is still confusion between wikiversity and wikibooks, particularly when textbooks are developed on wikibooks - there should be an easy way to identify them and link them to wikiversity content. I am pretty sure there are people developing textbooks on Wikibooks who don't know much about Wikiversity, and there are people who develop textbook-like stuff here.--Piotrus 17:52, 18 November 2008 (UTC)
- Totally agree. The Jade Knight (d'viser) 21:30, 18 November 2008 (UTC)
Wikiversity:Featured[edit]
- There has been tremendous progress with identifying quality content using Wikiversity:Featured. This in turn has been reflected on the main page. Raising the profile of certain projects in this way has considerably motivated some of the content producers currently working on these projects. --McCormack 07:26, 29 June 2008 (UTC)
Wikiversity:Random[edit]
- The random link(s) in the left-hand sidebar (Mediawiki:sidebar) were a point of discussion and trial-and-error reform by a number of participants in Spring 2008. The original problem was that a purely random page threw up mainly subpages and stubs, which created a bad impression of Wikiversity content. Ideally what was needed was a random link which only chose from the homepages of well-developed projects. However the Mediawiki software did not provide a good way to do this. Eventually the suggestions list at Wikiversity:Featured was used to produce a list of content (not just featured content, but drawing on slightly-lower-quality or not-yet-featured content as well) from which a random page was chosen on every click. This works better than the equivalent kludge at Wikibooks, which has a browser-specific scripting problem. --McCormack 07:26, 29 June 2008 (UTC)
More learning projects and ones with feedback[edit]
I think we need more learning projects. I saw a pseudo-namespace "Course:" run by a user that consisted of keeping track of how well users did on various resources and quizzes. I think it's a good idea, and we need to have something that allows someone to keep tabs on the grades, categorize them, and weight them mathematically to produce a grade.--Ipatrol 21:01, 7 November 2008 (UTC)
Matching Efforts to Resource Demand[edit]
Finding courses without content is nobody's favorite pastime, yet there seems to be a great deal of this to do, even once the quality content has been identified. So, what then is the best way to guide developers' hands towards those resources which are most in need of development? Obviously random pages are not going to arouse feelings of grandeur and raise morale. Instead, I'd like to see a synthesis made here of the statistics available and the proposed "Wikiversity:Activity_bars" (maybe via bot?) to better guide our collaborators to where they're needed. Those pages with the most hits will get looked at, and if the right people look, they will be developed. - Gustable 05:07, 15 December 2008 (UTC)
- Not a bad idea, but most users help with what they're interested in—this is where Departments need to step in and be helpful, herding people to where the most promising content is. We definitely need a way to handle these demoralizing empty pages in a way everyone can appreciate, however. The Jade Knight (d'viser) 23:18, 15 December 2008 (UTC)
Usability goals[edit]
Working on it! See main page learning project. --McCormack 07:42, 22 March 2008 (UTC)
Help:Contents[edit]
Done. User:Terra (and others) developed and activated a new help:contents page in March and April 2008. --McCormack 06:00, 27 April 2008 (UTC)
Faculty structure[edit]
- Wikiversity seems to have originally attempted to use a university metaphor to structure itself, and then attempted to run away from or disguise this metaphor, without ever fully completing the erasure, and without replacing the metaphor with anything else. I propose we embrace a new multi-metaphor structural concept, including both a pre-tertiary (K-12) framework and a university framework. As regards the university framework, the list of faculties (which partially morphed into a characterless list of portals) is sadly lop-sided, incoherent and incomplete. I have started a series of proposals for reform of the faculty structure on Category talk:University faculties. --McCormack 07:42, 22 March 2008 (UTC)
- Agree, the name "-versity" unfortunately creates an association mainly with a university. But WV is actually so much more. We could also link the pre-tertiary portal etc. in the navigation bar so that it is seen more quickly? ----Erkan Yilmaz uses the Wikiversity:Chat (try) 08:41, 22 March 2008 (UTC)
Please vote, discuss or express an opinion. There is currently an active thread about namespaces on the Colloquium: see Colloquium:Topic subpages & research page location. --McCormack 05:56, 27 April 2008 (UTC)
- above link archived, ----Erkan Yilmaz uses the Wikiversity:Chat (try) 08:51, 16 August 2008 (UTC)
- See Wikiversity:Portal reform for latest developments. --McCormack 20:57, 12 May 2008 (UTC)
- The university metaphor may be quite useful at the tertiary level. For example, I envision future versions of Wikiversity as being host to more actual research (on the order of the Bloom Clock Project). My own userpage is a "laboratory" run and staffed by my avatars. Although the avatars are fictional characters, the research the laboratory performs is real. Perhaps Wikiversity could make an effort to reach out to academics and encourage them to use the space for their research projects and courses? AFriedman 03:46, 17 December 2008 (UTC)
Wikiversity:Browse[edit]
- Remi started this initiative a year ago, and it perhaps needs revisiting: new concept for "browse" page. --McCormack 09:21, 22 March 2008 (UTC)
Please vote, discuss or express an opinion. I've updated the new concept for "browse" page. Effectively the principle for this page is that it should be the most comprehensive guide to content on WV which we can produce. For friendly access to content, we should prefer the major portals. But we also need something powerfully comprehensive, and which will also impress researchers and press reps who want to know the full range of what's really on WV as quickly as possible. --McCormack 14:55, 24 March 2008 (UTC)
After some years we wish Wikiversity to grow up and have great scientific, specialized content, beginning with what are called continuing education courses with credited certificates. We also wish the wiki to have scientific standing and a real presence in scientific literature all over the world. I also wish that the people working on the wiki would create a prize for the people with the best scientific research; this may make the wiki unique for such research over any other site, and make it helpful and important for researchers, "giving reality to its science within scientific communication". --Ibrahim kh. rashad 05:40, 10 October 2008 (UTC)
Free culture studies[edit]
- I'm not yet familiar with the faculty structure, so this may already exist in some form or another, but I would like to propose a faculty/department (whatever) of free culture studies. This would include courses such as Composing free and open online educational resources, reading groups such as Free Culture (book), and research groups (such as User:Cormaggio's phd work), plus, I'm sure, many other, perhaps scattered, learning projects. Such a faculty would, I'd suggest, be in keeping with the overall WV mission, could become a honeypot for people being innovative in this area, and eventually a flagship program. Let me know initial thoughts and, if promising, how you would suggest it proceed. I'm really only familiar with bottom-up design around here. -- Jtneill - Talk 03:41, 25 March 2008 (UTC)
- There is a lot of material spread around WV on this topic. I'd suggest (if it does not yet exist as a department/topic) that material should be drawn together at the department/topic level, and then categorised under (e.g.) a faculty of Wikimedia Studies, the faculty of Educational Science (where it already mostly is anyway) and the Non-formal Education Portal. With the new portal system I am developing, subsuming something into a particular portal would require no more than correctly categorising it. You should take a good look at Wikiversity:Browse/Concept. --McCormack 05:26, 25 March 2008 (UTC)
Done Your suggestions now make sense. Scary. So, I've created Free culture and we can go from there. -- Jtneill - Talk 03:26, 9 May 2008 (UTC)
Navigation[edit]
Done Navigating by resource level: introducing portals and categories for: pre-school education, primary education, secondary education, tertiary education, non-formal education. --McCormack 09:22, 22 March 2008 (UTC)
Working on it! Navigating by resource type: discussions and debates, reading groups, lesson plans, lecture notes, audio/video lectures, textbooks, articles, quizzes and games, courses, research projects, personal learning environments, (...) --McCormack 09:26, 22 March 2008 (UTC)
Please vote, discuss or express an opinion. See Template:Gateways. --McCormack 18:53, 22 March 2008 (UTC)
Ideas welcome! More ideas: identifying and dynamically listing resources on the basis of number of edits, recent activity, size (both bytes and number of subpages) - as a way of getting to the active, worthwhile content buried somewhere in them there hills. This is technically possible, but may require some coding (see the sketch after this list). --McCormack 06:10, 23 March 2008 (UTC)
Working on it! Navigating by user type: portals for (1) teachers/educators, (2) learners/students, (3) researchers, (4) maintenance/admin/development. --McCormack 11:11, 24 March 2008 (UTC)
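A minimal sketch of the kind of dynamic listing meant above, assuming the DynamicPageList ("intersection") extension used on some Wikimedia wikis such as Wikinews were enabled here - it is not currently installed, and the category name and count are placeholders only:
 <DynamicPageList>
 category=Featured resources
 count=10
 ordermethod=lastedit
 order=descending
 </DynamicPageList>
This would list the ten most recently edited pages in the (hypothetical) category, covering the "recent activity" criterion; listing by size or number of edits would still need custom coding or a bot.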
WYSIWYG[edit]
- Obtain a clear indication of strategic intent from the WM Foundation with regard to WYSIWYG editing. I.e., is it a waste of time to dream of such an innovation (and therefore best to help users learn MW syntax markup), or is it functionality worth lobbying for and investing energy in testing? -- Jtneill - Talk 11:53, 22 March 2008 (UTC)
- I think both are needed. For an average user, only basic understanding of wiki markup is needed to perform the edits they want. Some people, however, may struggle even with that, especially younger users (like first graders or younger), who we are trying to attract. So a WYSIWYG set up would definitely be useful for basic things. Then we have users who would like to do more complicated things with wiki markup, but don't know how. Currently there is no way to learn more complicated wiki manipulation (like making templates, etc) unless you get a more experienced editor to teach you. So something like a resource to teach complex wiki markup would be useful; but if we could simplify the more complicated things enough to bring them within the grasp of the average user, we'd see a lot more getting done on WV. (sorry if that doesn't make sense, I'm having trouble translating thoughts to language today) --Luai lashire 16:02, 22 March 2008 (UTC)
- I would be surprised if there were not a WYSIWYG MediaWiki extension. Perhaps if there is a quality one that would be useful, it could be something that could be enabled through preferences. --Remi 00:27, 16 May 2008 (UTC)
- See: strategy:Proposal:WYSIWYG default editor --fasten 13:31, 4 November 2009 (UTC)
Usability priorities[edit]
- Do you think learning manual wiki-markup is the largest obstacle to classes using Wikiversity? I'd estimate that creating an account and logging in is, in terms of time and probability of error, just as great an obstacle as wiki-markup, and once students actually find a nice table of symbols (tutorials!), they get going with editing quite fast. Advanced editing mostly involves things like templates and parsers, which no WYSIWYG editor could ever cope with anyway. What we should probably do is map out a list of things that classes need to go through, and estimate where the bottlenecks are. Quick and dirty list follows. --McCormack 12:06, 22 March 2008 (UTC)
- Creating an account and signing in (quite difficult)
- Note: This step can be skipped because anonymous editing is possible. -- Jtneill - Talk 12:28, 22 March 2008 (UTC)
- I don't think this is difficult at all- anyone with even a modicum of internet experience has most likely created an account somewhere before, and as far as account creation goes, wikiversity's sign-up is very quick and simple. The one thing I can think of that could cause a problem is locating the place to click in order to get an account, which could be remedied by placing a link to it in the navigation bar at left. --Luai lashire 15:56, 22 March 2008 (UTC)
- Introduction to basic wiki markup (moderate difficulty; easier if a good reference source is available)
- This took me quite a while to get the hang of- I would say it's quite difficult unless you have a really good quality reference source available, which Wikiversity currently does not. It has a few resources that come close, but not close enough, and I had to look at several of them AND go to Wikipedia to look for more before I could even begin to figure out how to edit. --Luai lashire 15:56, 22 March 2008 (UTC)
- Dealing with edit conflicts (occurs more frequently if a class are collaboratively authoring)
- Content policy issues, such as basic civility and not being tempted by amateur vandalism once the new environment is discovered.
- Using media (really difficult)
- I would rate this at most moderately difficult; students I work with who are fairly novice seem quite comfortable with (and enjoy) embedding hosted videos in their blogs, etc. -- Jtneill - Talk 12:28, 22 March 2008 (UTC)
- Well, perhaps the really difficult bit is when it involves legally using media - i.e. attributing authorship and getting permissions right. --McCormack 12:37, 22 March 2008 (UTC)
- Agree, people don't know much about copyright. :-( ----Erkan Yilmaz uses the Wikiversity:Chat (try) 12:42, 22 March 2008 (UTC)
- But I think they learn pretty fast by making a mistake and having an experienced user remove inappropriate content, and explain on their talk page. -- Jtneill - Talk 12:52, 22 March 2008 (UTC)
- You are remarkably optimistic about the supply of experienced users at WV to do things like categorisation and media permissions tagging ;-) But perhaps we shall have them one day. --McCormack 13:12, 22 March 2008 (UTC)
- Well, I'm thinking it's not so much optimism as realism around the observation that categorising and fixing naive or vandalising stuff by others seems to pretty much be only performed by a small proportion of experienced users. (Perhaps if users weren't so focused on working out syntax every time they edited, they might get around more often to categorising?) -- Jtneill - Talk 13:31, 22 March 2008 (UTC)
- Linking pages into the grand scheme of things (using categories, portals and the like; difficult)
- I would rate this very difficult; it's taken me a long while to get to grips with namespaces, categories, etc. esp. on a new MW site. I think this is ultimately a task for more experienced users. -- Jtneill - Talk 12:28, 22 March 2008 (UTC)
- Interesting. I would rate this as easy. While I admit it's often hard to figure out precisely which categories to put a page in, categories were relatively easy for me to adapt to, and after a bit of portal-hunting I often found places I thought were right to link to. If we introduce new users to categories as the Wikiversity version of tags, I think most people understand their use pretty quickly. Namespaces, however, are confusing- but that I think is because the namespace system is pretty messed up, as we are discussing above. Perhaps if we 1) fix the namespaces, and 2) create a page introducing new users to categorization and organizational structure on WV, we can address any issues people currently have with such things? --Luai lashire 15:56, 22 March 2008 (UTC)
- Colour coding
- Colour code the different sections of the code. This will make rummaging through the code faster. I understand that having this function will probably take up much processing power, so perhaps it can be introduced as an optional feature for logged in users? --Jestermeister 07:25, 1 January 2009 (UTC)
Homework templates[edit]
- There needs to be a way for students to upload their assignments more easily. This could be templates or even a bot. Using the current upload process along with manual wiki-markup for displaying the homework is currently too time-consuming. Robert Elliott 15:54, 24 March 2008 (UTC)
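A minimal sketch of what such a template might look like; Template:Homework and its parameters are hypothetical and untested, not an existing Wikiversity template:
 <!-- hypothetical Template:Homework -->
 {| class="wikitable"
 ! Student !! Assignment !! Submitted file
 |-
 | {{{student|}}} || {{{assignment|}}} || [[:File:{{{file|}}}]]
 |}
A student would then only need to upload their file and paste a single line onto the course page, e.g. {{Homework|student=Jane Example|assignment=Week 3 exercise|file=Week3-jane.pdf}} (all names here are illustrative).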
Welcome template[edit]
Action required. Update the welcome template. --McCormack 08:43, 24 March 2008 (UTC)
Sidebar[edit]
Please vote, discuss or express an opinion. Discussion has been quite active at MediaWiki talk:Sidebar in April 2008. --McCormack 05:58, 27 April 2008 (UTC)
- MediaWiki:Sidebar is the page which defines the appearance of the left hand sidebar. Its content perhaps leaves something to be desired: there are two links each for randomness, helpfulness and communitarianism. Previous discussion (from September 2006) is at MediaWiki talk:Sidebar. We've created loads of useful content since it was created, so it's time we revised it to reflect today's Wikiversity. What do people think would be more helpful in terms of content? --McCormack 12:09, 24 March 2008 (UTC)
- A "create an account" link in the left hand sidebar would be useful to new users who may be confused about how to get involved in Wikiversity. --Luai lashire 22:50, 25 March 2008 (UTC)
Favourites[edit]
- Is there any MW functionality which would allow users to bookmark favourite pages? -- Jtneill - Talk 07:15, 25 March 2008 (UTC)
- The watchlist. Many experienced users use their user space to create lists. --McCormack 07:31, 25 March 2008 (UTC)
- Had a quick search; doesn't sound like there's any extension for this, but did find this extension request for flexible watchlists. -- Jtneill - Talk 07:44, 25 March 2008 (UTC)
- I don't quite understand why the "watch" tab doesn't fulfill this function? --McCormack 08:22, 25 March 2008 (UTC)
- I guess the issues I have are:
- The order is always different, based on what has been most recently edited. Some favourite pages may not have been edited for a long time, and not even be listed even though they are favourites.
- Maybe I 'watch' too much stuff, e.g., on WP I don't use my watchlist much any more, but I do sometimes look through out of interest to see if anything leaps out.
- In Confluence, favourites are separate from watches
- Maybe I should be using Special:Watchlist/edit more, since that seems to be a more stable list of 'favourites'. Are there other ways to get this list? -- Jtneill - Talk 10:09, 25 March 2008 (UTC)
- You might also want to visit the test wiki, where new features are taken for a drive. For example, see testwiki:Wikipedia:Gadget/Watchlist which shows that it is possible to customize the watchlist using a gadget. If the feature is of use it could be installed here. This example is a bit simple, but could be of use to some people. More generally you might find it interesting to visit there and see what people are working on for new code. --mikeu talk 23:13, 4 November 2008 (UTC)
Tagging resources by completion status[edit]
Done. 1st step: create some completion status project boxes and a help page. --McCormack 09:30, 4 May 2008 (UTC)
- Where can these be found? --Luai lashire 22:16, 5 May 2008 (UTC)
- See Help:project boxes. --McCormack 07:13, 29 June 2008 (UTC)
Working on it! 2nd step: mark all 7000+ pages on WV with completion status. --McCormack 09:30, 4 May 2008 (UTC)
Simplify Namespaces[edit]
- There was a huge discussion of namespace reform here which, due to its size, has been transferred to its own page at Namespace reform. Please feel welcome to contribute to the discussion. The results of the discussion, when consensus has been achieved, will be announced below. --McCormack 08:53, 18 May 2008 (UTC)
Technology goals[edit]
- A number of users, including especially jtneill, are pressing for a bolder and more innovative approach to issues such as media embedding. --McCormack 07:44, 22 March 2008 (UTC)
- How about therefore making a prio list for 2008 of the Wikiversity:Technical needs ? ----Erkan Yilmaz uses the Wikiversity:Chat (try) 09:04, 22 March 2008 (UTC)
- Let's see if we can't get these into a SoC application, if they are high priority. Historybuff 06:29, 2 April 2008 (UTC)
Strategies[edit]
- Make direct comparison with functions enabled by extensions implemented by WikiEducator, with strategic decisions about whether Wikiversity wishes to pursue any of these which are not currently implemented. -- Jtneill - Talk 11:22, 22 March 2008 (UTC)
- Hi. Don't forget to sign your comments ;-) Actually, I'd suggest that a better initial comparison is with the other Wikimedia projects, because this also gives a better guide as to what the Wikimedia devs are prepared to allow. If they've done something elsewhere, they can't refuse us! --McCormack 10:39, 22 March 2008 (UTC)
- Those WM project comparisons are fine - but from a consumer (educator's) POV, I am making direct comparison between the functionality of WV and WE in determining where I will host materials. At the moment, WE offers greater functionality in some areas, although overall I prefer WV. -- Jtneill - Talk 11:22, 22 March 2008 (UTC)
- Have you seen ? --McCormack 11:57, 22 March 2008 (UTC)
- Well, I think the idea of Wikiversity enrichment via extensions is nice. But we usually don't know what we want, or rather, we usually don't know what we can get.--Juan 17:43, 27 April 2008 (UTC)
- Strategically, closer collaboration with WikiEducator could benefit both initiatives. I am constantly torn between which platform to use and always have to make a decision - later wishing I had some of the features available on the other one. Collaboration could occur at multiple levels - shared templates, gadgets, etc., federated search, shared discussions even at the strategic level considering the shared implied vision of libre knowledge and education for all. Ktucker 23:21, 2 December 2008 (UTC)
WV vs. other WMF wiki projects[edit]
- Is there a neat table somewhere of what extensions are installed for each of the main WMF wiki projects?
- Otherwise, here are links to their extensions:
Wikiversity vs. WikiEducator[edit]
- James, what is it that makes you prefer Wikiversity more? ----Erkan Yilmaz uses the Wikiversity:Chat (try) 12:43, 22 March 2008 (UTC)
- In order of priority, at the moment, the reasons for me are:
- The broader WM Foundation family of projects - e.g., esp. sister projects with Wikimedia Commons, Wikipedia, and Wikibooks
- More responsive and larger set of 'custodial'-type experts.
- Cleaner, simpler interface.
- The main downside at the moment for me in using WV compared to WE is that WE are a little bit more advanced towards providing multimedia embedding. (BTW - It kind of amuses me that we're talking about MediaWiki software here!). -- Jtneill - Talk 13:19, 22 March 2008 (UTC)
MW interwiki default linking to WV?[edit]
I was surprised to notice that interwiki linking to Wikiversity is NOT a MediaWiki default!? See mw:Help:Interwiki_linking#Default. How can we get this changed? -- Jtneill - Talk 01:36, 30 March 2008 (UTC)
- I'm not sure what this means. I do believe we're on the Interwiki map, and I also think that all WMF projects use the same map. This may have changed in the last while, but that was the situation when I last tried to use interwiki linking. Historybuff 06:31, 2 April 2008 (UTC)
RSS functionality[edit]
One of the current weaknesses of Wikiversity is that the feedback loop of participant activity (e.g., their blog reflections) can't easily be completed within WV. Implementation of an RSS parsing extension would allow custom feeds to be created and shown. WikiEducator have implemented this. I'm guessing this might be an unfortunately scary concept for WM to buy into (e.g., potential for undesirable and spam material to be shown, and related legal issues, etc.) But it needs to be negotiated, otherwise it is going to be difficult for Wikiversity to become a dynamic learning environment. Perhaps we can start gently with WMF by suggesting that only feeds from within WV would be allowed. This would at least allow the use of PLEs on user pages, with an aggregated feed shown on a course page. Thoughts? -- Jtneill - Talk 12:31, 23 March 2008 (UTC)
- I think extensions can be added by community consensus, and if there are things that would help with this, then I can't see why we can't have them. The RSS feed thing did work at one point, but it might be suffering from Bitrot of some sort. We could devise a custom PLE structure as an extension, depending on the functionality we wanted to achieve. Historybuff 06:34, 2 April 2008 (UTC)
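For illustration only: if an RSS extension such as mw:Extension:RSS were enabled (it is not installed here, and the exact tag name and parameters vary between versions), and if it were restricted to feeds from within Wikiversity as suggested above, the markup on a course page might look roughly like this, using a feed MediaWiki already produces:
 <rss max="5">https://en.wikiversity.org/w/index.php?title=Special:RecentChanges&feed=rss</rss>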
Multimedia extensions[edit]
(feel free to adjust, add, move, and add rationale/info/argument)
- Slideshare embedding - be able to embed a (flash?) slideshare presentation
- Youtube/Google Video embedding - be able to embed a flash video
- Google Calendar embedding
- Google Map embedding
- There were added comments on the Sandbox Server to the extension requests, see here, ----Erkan Yilmaz uses the Wikiversity:Chat (try) 12:45, 22 March 2008 (UTC)
- James, just a question: you know that videos can be used here or on Commons? See e.g. commons:Category:Wikiproject videos - depending on copyright, videos can be converted to that format and also used here, ----Erkan Yilmaz uses the Wikiversity:Chat (try) 13:40, 22 March 2008 (UTC)
- Ahh, ok, interesting. I have started looking at this. I like the focus on .ogg. Do you know of any embedded .ogg files being used on wikiversity? Is there a category? -- Jtneill - Talk 12:59, 24 March 2008 (UTC)
- I don't know if there is a category, but feel free to create Category:Video resources and add all .OGG video files to it (see the markup sketch below this list). --McCormack 13:01, 24 March 2008 (UTC)
- e.g. Stanford Open Source Lab. More on: Category:Video media - btw: on commons, see commons:Category:Video, commons:Category:Wikiproject videos - you can use them at any wikimedia project. So, if you would think of uploading ogg videos, please place them there. ----Erkan Yilmaz uses the Wikiversity:Chat (try) 13:05, 24 March 2008 (UTC)
- Huh, nice ideas, but I should say that people will have a problem creating this multimedia. Like me and all of us. JWS made some embedded ogg videos for en.wv. But he has an Apple machine. We poor people with Windows as an OS haven't, e.g., found a way to make a screencasting video in Ogg Theora format :(.--Juan 17:55, 27 April 2008 (UTC)
- FreeMind embedding, for organizing ideas, knowledge acquisition and even links to wiki or non-wiki pages. Could that be used/useful? --Esenabre 14:23, 29 August 2008 (UTC)
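Following up on the embedded .ogg question above: embedding an uploaded Theora video and tagging the hosting page uses ordinary file and category markup - the file name below is illustrative, and Category:Video resources would still need to be created as suggested:
 [[File:Example-lecture.ogv|thumb|300px|Recorded lecture, part 1]]
 [[Category:Video resources]]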
MPEG 3 movie files[edit]
- Why does Wikiversity not allow MPG4 files? This would be much easier for some people than OGG VIDEO. Robert Elliott 17:51, 27 March 2008 (UTC)
- I think the MPG4 file format might be covered by patents, or somehow considered non-free (perhaps it's using MP3 audio, which is covered by patents?). I don't know the filesizes for MPG4 vs other files, but it might be possible to accommodate these on the Sandbox server, if the admins are willing to host there? Historybuff 06:38, 2 April 2008 (UTC)
Wikification tools[edit]
Create something like Wikiversity:Wikification (plus some user-friendlier titles), with information about various ways to wikify content in different formats, e.g.,
- Send2Wiki
- html import
Academia[edit]
Create a file storage system for lectures in PDF format. Also create libraries/academia or a group for these files, similar to School, where people can read previous lectures and course material. --Tushant 16:41, 2 November 2008 (UTC)
Supporting Open Document Content Types[edit]
Wikiversity should support the uploading of Open Document file types. The use of these file types should be encouraged. -- wmciver UTC (or GMT/Zulu)-time used: Sunday, November 2, 2008 at 20:56:11
- OOo 2.0 files can't be uploaded at the moment due to suspicious code that can be embedded in the file. Crochet.david 21:40, 2 November 2008 (UTC)
- Have you tried OOo 3.0? It is the current revision. The Jade Knight (d'viser) 11:19, 3 November 2008 (UTC)
- No, but only the developers can enable new upload formats. In Bugzilla, we can read: "The main issue is that OASIS files can contain malicious content. Letting these be uploaded without validation would be undesirable, and as of yet (AFAIK) there is no OASIS validation interface for MediaWiki." Crochet.david 20:05, 3 November 2008 (UTC)
Publicity goals[edit]
Fliers[edit]
Action required. Develop a suite of open format fliers and posters about Wikiversity. -- Jtneill - Talk 09:26, 22 March 2008 (UTC)
- Suggested action: start Wikiversity:Publicity and link it into other pages. --McCormack 10:11, 22 March 2008 (UTC)
Statistics[edit]
Action required. Update and make user-friendly and informative: Wikiversity:Statistics. -- Jtneill - Talk 23:54, 24 March 2008 (UTC)
- Create a workshop curriculum for a f2f 1 to 2 hour hands-on introduction to Wikiversity. Much of the needed material probably exists, but may not yet be packaged as a specific learning resource? -- Jtneill - Talk 07:54, 25 March 2008 (UTC)
Guided tours[edit]
- Wikiversity:Guided tour/Lesson needs a page added about talk pages. -- Jtneill - Talk 11:12, 27 March 2008 (UTC)
Action required. If I remember correctly, you tried doing this, and now the tour seems to be broken. :( --McCormack 07:16, 1 April 2008 (UTC)
- Update: Help:Guides is now getting along well, and will have more tours added. --McCormack 07:09, 29 June 2008 (UTC)
Sister projects interview[edit]
Ideas welcome! Many people have been working on User:OhanaUnited/Sister Projects Interview. The results of this interview should probably be moved to a subpage of Wikiversity:Vision 2009, because the interview is perhaps the best current community-produced statement about what Wikiversity is and where it is going. --McCormack 06:05, 27 April 2008 (UTC)
Done. Many thanks to the many people who worked on this. The interview has now been published and this project is over. --McCormack 07:09, 29 June 2008 (UTC)
Wikiversity outreach[edit]
- I like the name Wikiversity outreach and want to change the meaning of this project a little. It could become a project with the aim to find new users for Wikiversity and to organize activities that can attract people. We could make films (for YouTube for instance), cartoons, games and material of an academic level. We could organize fairs and discussions on several topics with the aim to get people interested in a wide range of topics.
- Secondly, I would like to put this project prominently on the front page of Wikiversity, so there is a bigger chance that people will join it.
- The main aim of these two measures is to get a more active institution, which is able to reach out and won't stay passive.--Daanschr 08:20, 19 December 2008 (UTC)
Community goals[edit]
Awards[edit]
Working on it! Increase retention of participants and recognition of activity, especially by newcomers. See Wikiversity:Participants for a start. It could become "normal" to recognize newcomers when they achieve "very active" status for the first time, and when they reach 1000 edits, for example. Superficial, but better than complete silence. --McCormack 07:30, 22 March 2008 (UTC)
- Annual awards - Create peer- (and participant-) reviewed annual awards, e.g., for most innovative new learning resources within major categories. 2008 could provide the inaugural awards. Get this promoted so it's up there with the edublog awards. Among other things, this could also help build prestige for teachers needing to justify time spent on contributing to Wikiversity to their employers. -- Jtneill - Talk 13:13, 22 March 2008 (UTC)
- Agree. Being cautious, I'd suggest doing this within the existing barnstar system (but perhaps with some new and better barnstar templates). One can both overdo and underdo recognition, and at the moment we're definitely on the "under" side! --McCormack 06:07, 23 March 2008 (UTC)
- What about a different image for an annual award maybe a "diploma-in-hand" statuette? And maybe awards for projects and schools, not just users? Go raibh mile maith agaibh 22:47, 23 June 2008 (UTC)
Staffing/Functionaries[edit]
Done. Ensure that we have a full complement of present and active bureaucrats. --McCormack 08:59, 22 March 2008 (UTC)
- Increase the number of educators involved with maintenance, administration and development tasks. --McCormack 08:59, 22 March 2008 (UTC)
- How can this be determined positively? The Jade Knight 13:02, 3 September 2008 (UTC)
Please vote, discuss or express an opinion. I think the mission statement says we need an academic review board to think about how research proposals should work. This should have been created over a year ago. --McCormack 05:35, 24 March 2008 (UTC)
- I had a quick scan / text search of Wikiversity:Mission and Wikiversity:Wikiversity_project_proposal but couldn't find reference to this? -- Jtneill - Talk 12:43, 24 March 2008 (UTC)
- Actually it says "guidelines will be developed during the beta phase of Wikiversity's development on hosting and fostering research based in part on existing resources in Wikiversity and other Wikimedia projects", and the beta phase ended in February 2007 without any such guidelines being developed (or discussed, I think!). It says "guidelines", not a "board". However I suspect that if we did develop guidelines, it would be extremely difficult to pin down rules, so the guidelines would end up creating a board of some kind which assessed research issues on a case-by-case basis, developing a body of precedent from which firmer guidelines could be produced. --McCormack 12:57, 24 March 2008 (UTC)
- We did do a discussion of Original Research, and set out the terms of reference and what was and wasn't allowable under our Original Research policy. That said, I'm against formalizing things too early in the lifecycle of the community. While we may need a board to help arbitrate these things in the future, the WV community has to grow a great deal before we get to that need. Historybuff 06:43, 2 April 2008 (UTC)
- The approved WMF proposal required that we develop research guidelines during the beta phase. The discussion on research was held at betawikiversity: since this applies to all wikiversity languages. See betawikiversity:Wikiversity:Research/En and betawikiversity:Wikiversity:Research guidelines/En. There are also a few pages here such as Wikiversity:Original research and Wikiversity:Review board. --mikeu talk 13:30, 2 April 2008 (UTC)
- Those links take you back to the mythical Wikimedia Special Projects Committee, which was supposed to oversee the process, but which in fact became inactive about 3 months after Wikiversity started. In other words, this is a torch which needs picking up or leaving where it lies. --McCormack 14:06, 2 April 2008 (UTC)
- Thanks for the further info, Mike. I think we need to link to [1] carefully, otherwise this last known pronouncement/discussion might get forgotten. --McCormack 14:36, 2 April 2008 (UTC)
- Well, I think we can create a peer review board, but until there is something to be peer reviewed, it will be hard for it to perform its function.--Juan 18:04, 27 April 2008 (UTC)
Community Portal[edit]
Working on it! Luai lashire is adopting this one. Helpers welcome. --McCormack 07:07, 1 April 2008 (UTC)
- Get the Community Portal up and running again; currently it's almost completely stagnant, and it's not set up in such a way as to be easily editable by users with less knowledge of wiki markup. Frankly, it's daunting, and it's old, and doesn't do a good job of representing the community or alerting users to new developments. --Luai lashire 16:08, 22 March 2008 (UTC)
- I had a look at this and agree, but the trick will be persuading a Wikiversitarian to "adopt" these pages and maintain them as a long term project. I already have enough under my wing - but perhaps someone else could volunteer? --McCormack 06:07, 23 March 2008 (UTC)
- Well, that would certainly help- especially for an initial clean-up phase- but it's my hope that we can get the Community Portal to the point where just about anyone who knows any wiki markup at all would be able to contribute a little. That way we could use it to make course-related announcements (like the ones up right now about the Bloom Clock or the French course), delete alerts that are too old, etc, and just continuously tweak things until they're right. Currently it's far too confusing for a new user to edit it at all. Cleaning it up should be our first priority I guess, but after that it needs to be made more usable. I don't know how to do that, myself. Perhaps if we start a Community Portal Learning Project modeled on the main page learning project, we could attract more interest and get some of this done. --Luai lashire 14:30, 23 March 2008 (UTC)
- The main page learning project attracted minimal interest - a few readers, but no real participants. I still ended up doing everything myself (but I can just about manage that). While I support your ideas for a Community Portal Learning Project, we must be realistic - great wiki efforts at the moment will mostly be individual efforts - only the combination of our efforts across different parts of the site will really be communal. If you are willing to adopt the Community Portal, you can count on myself and others to provide you with the knowledge, advice and support, but at the beginning, it's probably going to be an individual effort. Start by creating Community Portal/2.0, copy in the previous code, and then experiment at your leisure and as time allows! --McCormack 05:33, 24 March 2008 (UTC)
- I'll do my best, but I have very little free time until school ends, so work will be minimal and haphazard until then. --Luai lashire 22:44, 25 March 2008 (UTC)
- Community Portal Layout 0.5 is supposed to be an updated version of the previous one; it does need a little tweaking - like the alignment with the text, though I need assistance with that. I tried to fix the alignment problem on my own but I'm struggling with it - I hope that this version will be in use once finished. Dark Mage 17:49, 17 August 2008 (UTC)
Wikibooks' attempts to improve its own Community Portal might provide some inspiration: June 2007, October 2007, and May 2008. --darklama 22:09, 17 August 2008 (UTC)
Participants lists[edit]
Done. New concept for Wikiversity:Participants oriented towards our needs. --McCormack 05:39, 24 March 2008 (UTC)
Please vote, discuss or express an opinion. Participants lists: it's been suggested before, but I'll add the suggestion to this page as well. The participants lists on portals, departments and most learning projects are a failed wikiversity idea and all these lists should be moved to the talk pages of their respective pages in the course of time. Currently I can't find where we previously discussed this, but the general problem is that all over Wikiversity, we have these "sign yourself up" lists where people signed themselves up years ago and rarely did anything, or did for a while and then moved on. The course participants lists from previous years are about the most useless piece of junk one can have, and they really make new visitors feel that nothing's happening or that the courses are somehow in the possession of much older users. Thoughts, anyone? --McCormack 05:33, 24 March 2008 (UTC)
- For my 3rd post I would say that my second was to add my name to something that I would wish to lurk on (and sometimes add to), and to note that the presence of a name that I could follow and relate to was a useful feature... my previous entry on this was about her off-wiki slideshow. Learning is about people.--Paulmartin42 15:03, 17 May 2008 (UTC)
- Perhaps these lists could be moved to /Participant subpages. This could then provide a kind of archival record of people who engaged at some point with each learning project. -- Jtneill - Talk 09:59, 24 March 2008 (UTC)
- That's also a good idea, but don't forget that talk pages also serve as long term repositories. The main difference between talk page and subpage will become apparent if we manage to get subpage navigation activated - then the subpages would become much more visible - which might not be a good thing for these participants lists. The idea is to keep them, but to keep them out of the way. --McCormack 10:35, 24 March 2008 (UTC)
- I actually just had an idea for these lists in the projects I'm involved in—I was thinking of relabeling the lists to be more like a contact group or whatnot, and then creating an actual project community, with, say, community requirements of one sort or another. I'm thinking along the lines of having meetings to attend, but this could be simply a matter of making sure that those on the list are actively working on the project (by posting on the talk page, if nothing else). I haven't worked out the details yet (just got the idea tonight), but the general idea is to a) develop community, b) have listing criteria be objective, c) increase productivity. Thoughts? The Jade Knight 13:08, 3 September 2008 (UTC)
Policy and administration goals[edit]
Vision statement[edit]
- Is there one? -- Jtneill - Talk 11:25, 22 March 2008 (UTC)
- Feel free to write one ;-) --McCormack 11:49, 22 March 2008 (UTC)
- Would there be something to start with e.g., from the original proposals? -- Jtneill - Talk 13:37, 22 March 2008 (UTC)
- Found this; needs TLC: Wikiversity:Vision. -- Jtneill - Talk 14:25, 22 March 2008 (UTC)
- I think I found that too a long time ago, and then forgot about it. I think it was one of those pages which mikeu talks about - the ones where we create 'em and forget 'em. --McCormack 15:42, 22 March 2008 (UTC)
- I've tidied it up a bit more now - please contribute to Wikiversity:Vision. -- Jtneill - Talk 03:04, 23 March 2008 (UTC)
Ideas welcome! --McCormack 07:12, 1 April 2008 (UTC)
- We already have the mission statement. What do you want of a vision statement? Have you in mind some medium-term (three- to five-year?) goals? Hillgentleman|Talk 22:30, 5 May 2008 (UTC)
- Please add comment to this effect to Wikiversity:Vision#No vision statement. -- Jtneill - Talk 00:35, 6 May 2008 (UTC)
- So is this finished now? Can we have a status update? --McCormack 07:04, 29 June 2008 (UTC)
Policy review[edit]
Working on it! A number of people, and especially mikeu, have supported the idea of a wide ranging policy review and policy completion, particularly as policy has gaping holes in it. See Wikiversity:Policy. --McCormack 14:11, 27 March 2008 (UTC)
- One of my main concerns is that it is not clear which policy pages are simply proposed, and which ones were actually adopted as official. If I am confused about this, there is no way that a new editor would have a clue. We also have to decide about fair use images. Do we adopt an EDP or do we delete all non-free images? (as required by wikimedia:Resolution:Licensing policy) --mikeu talk 22:44, 27 March 2008 (UTC)
- My own impression of policy is that it is a garden full of really overgrown jungle weeds. Rather than "tender loving care", it needs "rodent kill" and a couple of heart transplants. I added in a link to the actual voting, which, while it shows what was officially decided, was also a poorly supported and rushed process which did not reach consensus in the majority of cases. The example of a soundly defeated policy (i.e. Ignore All Rules) being subsequently listed/tagged as "proposed" rather than "rejected" should suffice. Policy needs a recipe consisting of a bunch of bureaucrats, a lot of good judgment, and many cups of boldness. Just keep us informed about what you do ;-) --McCormack 05:36, 28 March 2008 (UTC)
Religious content policy[edit]
Working on it! There has been discussion of this for some weeks now. Opensourcejunkie has agreed to attempt to write a first draft to get us started. --McCormack 14:11, 27 March 2008 (UTC)
- There is a working draft at Wikiversity:Draft policy on religious content, but there have been no major edits since April. The page needs review. --mikeu talk 13:05, 21 October 2008 (UTC)
Blocking[edit]
Wikiversity:Blocking warrants discussion and possible revision in light of concerns raised particularly during September-October 2008 around appropriate usage and conduct on Wikiversity and the use/practice of blocking in dealing with such issues. -- Jtneill - Talk - c 12:01, 17 October 2008 (UTC)
Working on it! There is ongoing, active discussion about several related issues and policy development (e.g., Wikiversity:Respect people), although as yet active review of the blocking policy is not underway. -- Jtneill - Talk - c 12:01, 17 October 2008 (UTC)
Permissions[edit]
Create tools, guides, materials for helping WV users to seek and gain free access to previously restricted copyright teaching materials. In the beginning, this might simply be a WMF-approved proforma WV letter requesting that a copyright holder give permission for free usage of the content (e.g., an image, a presentation, a handout, etc.) as per WMF licensing requirements. -- Jtneill - Talk 01:43, 25 March 2008 (UTC)
Image Use & Copyright Policy[edit]
I've noticed that Wikiversity doesn't have any Image Use Policy, including a policy on copying materials from copyrighted websites - this is a major problem and has caused problems on Wikipedia, which I don't want to see happening here on Wikiversity. Could something be set up so that users (or new users) could abide by an Image Use Policy, including a policy which doesn't allow users to copy copyrighted material which isn't their own work? If the owners of the copyrighted work have granted permission for it to be placed on this site, then by all means - but in my view we need a policy on this. Dark Mage 17:56, 17 August 2008 (UTC)
Allowing contributors to protect their content[edit]
I decided to use Wikiversity because I did not want my content locked up in our university's Blackboard system. I want to contribute it for others to use. This does not mean that I want others to change it. I remain uncomfortable with the lack of protections for my content. The notion that I have to ask people not to change the content I have developed for my course seems backward. wmciver UTC (or GMT/Zulu)-time used: Sunday, November 2, 2008 at 20:48:28
- Is this about being able to rely on the page being as you last left it to show to students? I think the closest that Wikiversity might get is to allow particular revisions of a page to be marked as stable so that teachers, instructors and other people who need a reliable version of content can count on people seeing the stable version of the page if they're not registered or haven't changed their preferences. Would this address what you're trying to achieve?
- Wikiversity is also about being open to collaboration, working together to improve content for everyone's benefit, and Wikiversity's licensing policy reflects this goal. Wikiversity is unlikely to provide the ability to lock down or restrict who can contribute to which page, because this would go against the goal of being open to collaboration by anyone. Right now the most reliable way to ensure that students see a particular version of a page is to link to that particular revision by including the revision id of the page as part of the URL (see the example below). --darklama 22:19, 2 November 2008 (UTC)
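As a concrete illustration of the permalink approach darklama describes (the page title and revision number below are made up): every revision in a page's history has an oldid, and a link of the form
 [https://en.wikiversity.org/w/index.php?title=Example_course_page&oldid=123456 stable version used in class]
always shows that exact revision, regardless of later edits. The number can be read off the "Permanent link" entry in the sidebar toolbox or from the page history.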
- Wikiversity is not simply a "content hosting platform"; if you place content here, you may want to expect, or even desire, for it to be edited by others. If you simply want to place content to be read by your students, you're free to create your own wiki using the MediaWiki software. The Jade Knight (d'viser) 11:18, 3 November 2008 (UTC)
I do use MediaWiki in my own work and have used it on my own machines at times. This is not an appropriate response, however, to someone who is volunteering to contribute to this project. Organizational policies at my place of work and at the university where I am an adjunct professor present barriers to hosting a public MediaWiki for me. In the spirit of cooperation, I thought I would contribute to this project. I already participate in the Global Text Project and the Appropedia.org. Wikiversity on the surface seemed the best fit. I appreciate the potentials for collaboration, but if I understand Wikiversity's policies, the model is wrong or at best incomplete. If others feel that changes are warranted in a course designer's content, a comment and revision process should be used as in academic journals and conferences. Content developers will be reluctant to contribute here if they can't rely on the stability of their own content. Let author(s) know in a non-disruptive fashion what changes are suggested. What if the author disagrees though? What if they are qualified to disagree? Should we still mandate or allow unilateral changes? The version process above would mitigate some of these flaws, but it is too cumbersome. Those who truly support the idea of Wikiversity should not be responding with technical and policy responses. They should be building an environment that encourages contributions. I have a Ph.D in computer science. I am sure that I can figure out the tedious details of templates and revision links to satisfy the current Wikiversity policies here. But why should it be difficult or tedious? This type of compliance still would not address some of the anti-user-centred approaches to this whole project. What do you want the priorities to be? Content or technically-oriented policies that are not user-friendly? All of you might want to have a look at Donald Norman's books, which I use in my course. wmciver UTC (or GMT/Zulu)-time used: Monday, November 3, 2008 at 12:44:46
- Well, the Wikimedia wikis host collaborations, using the wiki technology so that anyone can contribute (IOW, "anyone can edit"). You really don't need to use a wiki if you're only interested in posting something up for public viewing and use. We really can't host content that isn't open for others to edit, because the whole idea is to host collaborative creations. --SB_Johnny talk 14:07, 3 November 2008 (UTC)
- This is a misunderstanding of my concerns. I value and support collaboration in my work. See my course. A number of modes of collaboration are possible. My concern has to do with modes of collaboration that are suitable for content that is being used in a class while it is being developed. My concern is also about developing an inclusive and flexible environment that allows for these different modes. The current approach seems to fail in these respects. I hope that I am wrong. wmciver Monday, November 3, 2008 at 18:18:36 UTC .
- The ability to mark revisions as stable that I mentioned before seems to me like it would allow material to be used in classes reliably while still being developed collaboratively. Perhaps you could clarify which modes of collaboration you're referring to that you wish to prevent and which you wish to allow? --darklama 19:11, 3 November 2008 (UTC)
- Hi wmciver! Wikiversity and other Wikimedia wikis do indeed have "open content" as a standard. It appears you want collaboration but also to be able to fine-tune the security of pages and what users can do. There is a well known hosted wiki solution out there that accomplishes that, and it is PBwiki; see the academic link. If you like MediaWiki, which Wikiversity uses, I suggest seeing if your institution will host one, which you can configure to require registration to edit pages, unlike Wikiversity, which accepts edits from any IP without registration. Dzonatas 17:33, 4 November 2008 (UTC)
For wmciver’s case: Wikiversity’s concept must be different from Wikipedia’s. In Wikipedia most of the articles are written by amateurs who have paraphrased printed sources in traditional disciplines, e.g. the Humanities. When wiki authorities are bored by others’ meaningless editing or vandalism they lock the article and nobody can edit it. In Wikipedia, as the experts are few, the author of the article, who has not really written something original but has actually paraphrased from a printed encyclopedia, is not really interested in whether somebody else will edit his paraphrase. Also, if your knowledge about the subject is limited, yes, you really appreciate editing, adding etc. by others.
HOWEVER, Wikiversity should not be the same. Two environments must be created: a free one and a protected one. An expert in a field, who tries to open his thoughts and his material to others apart from his class at the university, cannot tolerate silly editing and cannot be checking every day whether others have moved a sentence from here to there or have added something irrelevant or different to the concept of the specific course.
If Wikiversity is looking for high quality material, it must create a LOCKED/PROTECTED environment for those who are the experts and want to teach and develop a course in their own way. Parallel development of free/open or other protected/locked courses (forking) is of course welcome and is part of Wikiversity. A real expert in a field of course wants comments and opinions etc., but not anybody touching ORIGINAL material he/she has given great thought to and spent a lot of time preparing for his students or even the greater public. Otherwise, NO high quality material will ever exist here, as is the case with a great part of Wikipedia (what is not paraphrased from the experts in print).
Let me add my own experience here. I am an expert on Hegel (German philosopher) which means PhD dis., articles, book chapters etc. I wanted to develop a course for Hegel, about Hegel, and Hegel’s ideas and take it out of the Univ. Blackboard system. I also wanted to bring my students here, so that they can discuss the content etc BUT NOT change the material I wanted to give them. The course was about Hegel NOT a comparison of Hegel with others. During the process, every so often silly intruders made “corrections” to the material although my intention about content stability was stated explicitly on the first course page. After a few weeks editing of others had become so annoying that I spent more time trying to find what others had changed rather than proceed with the material. I used the stability feature but as material changed overnight my students saw the new versions of others who had used the same feature for their content in the same course. I RAN AWAY FROM WIKIVERSITY (yes I know I am shouting).
Allow locked material for Wikiversity and you may be surprised to find out that a lot of young profs are willing to offer their expertise here but they do not like amateurs editing. Of course, you might say, this is not the right place for you. I agree and that is why I have decided to keep my material to my Univ. Blackboard. However, I am still an advocate of open content for others to learn. So long friends! Ariosto 15:50, 26 November 2008 (UTC)
- Ariosto: Please do not leave. Feel free to create and use page protection templates at Wikiversity. There are valid reasons for protecting some Wikiversity content fro editing so go ahead an make pages that only you can edit. Just realize that other people can make copies of those pages and edit those copies. --JWSchmidt 16:02, 26 November 2008 (UTC)
- Thank you for the suggestion. I did check the link. However, editing is again left to the good will of the Wikiversity users with just a window warning. As I have become very skeptical I would be more comfortable with no “edit this page” tab on protected content if I desire so and custodians only ability to edit/remove pages if I do not follow the code of ethics. I do understand though that at present this is a huge step and against the original wiki philosophy. I’ll wait and see. Ariosto 17:39, 27 November 2008 (UTC)
- Such protection would also prevent you from being able to edit the page as well. I imagine that wouldn't be helpful in the long run since than you would not be able to make corrections and expand upon the content either. --dark
lama 19:12, 27 November 2008 (UTC)
- The most logical thing for content that is not intended to be wiki content is simply to host it elsewhere on a private website. The Jade Knight (d'viser) 12:30, 28 November 2008 (UTC)
- Alternatively you could host what you wish to have more control over on a private website, and have a project on Wikiversity that makes use of or refers to material on the private website. Another option would be to maintain a stable/static version of content hosted here on a private website, making use only of changes that you like, as you are free to use, copy, modify and redistribute content from Wikiversity under the terms of the GFDL. If you use any media or images other licenses or terms of use might apply though, but they should also allow you to use, copy, modify and redistribute them. A third option would be to link to specific revisions that you like from your user page or somewhere within your userspace. All three options would allow for some form of collaboration to exist while providing some level of reliability without requiring pages be protected from editing. --dark
lama 13:23, 28 November 2008 (UTC)
- Ariosto: "against the original wiki philosophy" <-- Wiki is a tool that makes possible collaborative webpage editing. There is nothing that says every Wikiversity webpage must be edited by anyone at any time. Wikiversity is very much a place for experimentation and discovery of new ways to use wiki technology to support learning. I agree that page protection templates are not optimal, but they are a tool that can be used right now while we make plans for a better solution. For example, we could potentially modify the software and have a namespace where the creators of pages can control who is able to edit a page and limit editing of a page to a select group of trusted collaborators. Please do not let all of the naysayers drive you away from Wikiversity. The software that Wikiversity uses was developed for Wikipedia and does not conform to the needs of educators. Wikiversity has several projects (example: Topic:Sandbox Server 0.5) for exploring ways to find software solutions that will empower educators and help them participate efficiently at at Wikiversity. Here at Wikiversity we are not locked into doing things "the Wikipedia way", or even the "Wikibooks way". Please do not be intimidated by anyone who argues with you and against your perfectly reasonable desire put limits on the editing of some Wikiversity pages. This website was created for you, not people whose vision of Wikiversity does not get past some lame and crippled reflection of Wikipedia. --JWSchmidt 15:28, 2 December 2008 (UTC)
- Indeed. Honestly I doubt you'd have problems with people insisting on editing when the template requests that they don't. Keep in mind that even the custodians are just contributors with tools, not professionals, and any "random browser" passing through might be able to provide feedback or volunteer some copy-editing. We don't have the wiki software set up to create multiple usergroups for varying levels of protection (and probably never will), but I think an "assumption of good faith" (and manners) will not lead to disappointment. --SB_Johnny talk 15:38, 2 December 2008 (UTC)
- I think SB_Johnny is right on the mark, here: the software currently doesn't allow for this kind of selective page protection, and probably won't anytime soon. If you need the content 100% protected, you need to host it elsewhere. Otherwise, just ask people (using notes or a template) to respect your wishes, and you'll find that the vast majority of individuals will, I think. The Jade Knight (d'viser) 18:26, 2 December 2008 (UTC)
People seem to be thinking about FlaggedRevs, I oppose as wikiversity is a free learning resource with constructive dialoge between students and teachers. The only thing I could imagine is installing FlaggedRevs and calling the reviewer and edior classes one class called "Professer." This could be granted upon request. Their namespace would go from "User" to "Professer" and they would be granted all the editor and reviewer permissions as well as the protect right. All the courses they wanted soul ownership of would go in a user subpage. Until the software is changed we could create a social construct that they are not to flag or protect pages outside of that space with the penalty for violation being "title stripping" (removing professor status) or blocking.--Ipatrol 03:00, 21 December 2008 (UTC)
Strategic goals[edit]
Wikipedia[edit]
Develop stronger links with, and active use of, materials on Wikipedia; see What shall we do with WP?. -- Jtneill - Talk 11:49, 24 June 2008 (UTC)
Offering learning Courses[edit]
What i believe is to have such goals which welcomes every one to get in touch with wikievristy as offering various cources and certification and then get hired by wikipedia himself for some specific tasks.-Azamishaque 11:43, 4 November 2008 (UTC)
Creating a learning environment[edit]
I do not know how viable this is, but I would strongly suggest creating some form of learning environment, where the real-life classroom is mimicked. I believe this would achieve several goals: First, it encourages older users, who are used to more traditional methods of learning, to get involved. I also believe that it establishes more credibility with such users. Second, it adds a more social element, in which there could be participation, attendance, examinations, projects and assignments. Students can interact with each other and informal learning can initiate through interaction with instructors and lecturers. Third, I believe that a real and honest assessment needs to be established. True, there are no certifications granted at the end of the course, but I need to know how much have I learned and how much have I improved. Fourth, perhaps several "students" have not been to a real university, and I think it could be exciting for them to "enroll" in classes, and select several courses and electives to complete a "degree". I believe that we need to foster a true learning environment.
To cite myself as an example. I wanted to "register" for the Spanish courses (my mother tongues are Arabic and English), and I did read the introduction and covered chapter one. Had I had any "classmates" I would have continued the course. I would feel obligated to finish the course. Otherwise, I could simply purchase a book, or an audiobook, or register in real-life classes to learn Spanish and learn properly. What makes Wikiversity so special that I should use it?
This could be Wikiversity's competitive advantage. I am not bossing around or barking orders, or playing mr. know it all, but I am just saying that these things would do help as a person, use Wikiversity more often.Chagfeh 16:16, 28 November 2008 (UTC)
- I think there's some merit in what you have to say. The Spanish course is a good example of a problematic course at Wikiversity—lots of content, but very little user-participation focused content, and, frankly, the Wikibook is more effective at conveying the information it presents. Other learning projects, while not at all traditional, are still heavily participatory, and are still effective in that regard. However, more traditional formats, I think, can also be helpful. The biggest problem with them is that Wikiversity is really having trouble reaching critical mass in certain areas right now—certainly this is true in the School of History, where efforts to establish more traditional "classroom-style" courses have largely failed. The two projects there which are currently the most successful includes one where participants are helping to write a textbook, and another which is currently designed as a simple focused learning experience (but which is similar to a textbook in certain regards). The Jade Knight (d'viser) 09:30, 29 November 2008 (UTC) | http://en.wikiversity.org/wiki/Wikiversity:Vision_2009 | CC-MAIN-2014-52 | refinedweb | 11,486 | 59.94 |
Setting up a Living Styleguide in Jekyll
Free JavaScript Book!
Write powerful, clean and maintainable JavaScript.
RRP $11.95
I was recently working on a small Jekyll project and wanted to see whether it would be possible to have a very component-ized approach driven by a styleguide, despite the fact that Liquid (the template engine behind Jekyll) is not meant to do that.
I found it out it is doable (not without some struggling though) and I’d like to show you how so you can consider using a similar approach in your next Jekyll project.
There is quite a bit of setup around this example, so I recommend you check the live demo then follow along with this boilerplate on GitHub.
Why a Styleguide?
When working on a site or application, it is good practice to try finding common UI patterns so they can be extracted and reused across the platform. This helps maintenance, scaling and reduces overall complexity.
When pushed further, this practice can lead to the creation of a “styleguide” (or “style guide”). Very broadly speaking and according to Wikipedia, a styleguide is:
[A] set of standards for the writing and design of documents, either for general use or for a specific publication, organization, or field. A style guide establishes and enforces style to improve communication.
Now, there is no one way to do a styleguide. Ask 10 people in this industry, you will have 10 different answers. I guess that’s also the beauty of it. Some people will tell you a styleguide should not be technical, some will tell you it should. Some will call it a pattern library… and so on.
If you ask me, a component styleguide should explain what a component does, how to use it, and provide an example. That is what I expect from such a document.
Note: if you are interested in reading more about styleguides and everything related to them, have a look at styleguides.io.
Components in Jekyll
I am getting more and more used to working with React and there is one thing I really like about it — everything, even the smallest chunk of UI, is (or could be) a component. As you will soon realize, this very concept drove my research here.
Any interface module that could theoretically be reusable ended up in its own file, inside a
components folder in the Jekyll
_includes folder.
my-project/ | |– _includes/ | | | |– components/ | | | | | |– button.html | | |– header.html | | |– headline.html | | |– nav.html | | |– footer.html | | |– … |– …
As an example, let’s build the button component together (
button.html). The minimum our button component should have is a
type attribute, a
class attribute and some content.
We’ll give a default class to the button that can be extended through the
class include parameter to make it more flexible. We’ll also define the default
type to
button, just in case it is not being passed in.
Last but not least, we’ll make sure not to render the button if no content is being passed.
Note: in Jekyll, include parameters can be accessed through the
include object.
{% assign{{ content }}</button> {% endif %}
This file is then included through an
{% include %} Liquid block when used in pages, customised with include parameters. Ultimately, this means pages are basically nothing but generic containers including components.
{% include components/button.html type = "submit" content = "Get in touch" %}
Building the Styleguide
To build the styleguide itself, we will need several things:
- A Jekyll collection for all documented components.
- An entry per component in the collection.
- A styleguide page.
- A layout dedicated to the styleguide.
Creating a Dedicated Jekyll Collection
First, let’s setup the collection in the configuration:
# Styleguide settings collections: styleguide: output: true defaults: - scope: path: "" type: "styleguide" values: layout: "default"
This tells Jekyll that we will have entries from our
styleguide collection in a
_styleguide folder at project’s root level. Each documented component will have a matching file (using the
default layout).
my-project/ | |– _includes/ | | | |– components/ | | | | | |– button.html | | |– header.html | | |– headline.html | | |– nav.html | | |– footer.html | | |– … | |– _styleguide/ | | | |– button.html | |– header.html | |– headline.html | |– nav.html | |– footer.html | |– … | |– …
An Entry Per Component
Let’s create the page for our button component (
_styleguide/button.html). This page is not really meant to be seen on its own; it is intended to show all the information we need to be able to display everything about the component in the styleguide page.
What we need is a description of the UI module, the parameters it accepts when included, and an example. The content of the page itself will be a proper Liquid include, and this is what will be rendered as a demo inside an iframe.
--- description: | The button component should be used as the call-to-action in a form, or as a user interaction mechanism. Generally speaking, a button should not be used when a link would do the trick. parameters: content: "*(mandatory)* the content of the button" type: "*(optional)* either `button` or `submit` for the `type` HTML attribute (default to `button`)" class: "*(optional)* any extra class" example: | {% include components/button.html type = "button" content = "Click me" class = "pretty-button" %} --- {% include components/button.html type = "button" content = "Click me" class = "pretty-button" %}
Note: in YAML, the pipe symbol indicates the beginning of a literal style value.
A “Styleguide” Page
We now need to create the page for the styleguide. To make it easy (and because I think this is the perfect occasion for it), I added Bootstrap to this page to make it easier to style and faster to build. This page consists of three sections:
- A header that introduces the styleguide.
- A sidebar for the navigation.
- A main content area displaying all the entries of our collection.
To avoid having a page too long and bloated with logic, I recommend having each of these sections in a partial, living in a
_includes/styleguide folder.
my-project/ | |– _includes/ | | | |– components/ | | | | | |– button.html | | |– header.html | | |– headline.html | | |– nav.html | | |– footer.html | | |– … | | | |– styleguide/ | | | | | |– component.html # HTML for a component display | | |– header.html # Styleguide header | | |– navigation.html # Styleguide navigation | |– _styleguide/ | | | |– button.html | |– header.html | |– headline.html | |– nav.html | |– footer.html | |– … | |– …
The reason I recommend this is that it makes the code for our page quite clean and makes it pretty obvious about what it does.
--- layout: styleguide --- <div class="container"> <!-- Styleguide header introducing the content --> {% include styleguide/header.html %} <div class="row"> <!-- Styleguide aside navigation --> <div class="col-md-3"> {% include styleguide/navigation.html %} </div> <!-- Styleguide main content area --> <div class="col-md-9"> {% for component in site.styleguide %} {% include styleguide/component.html component = component %} {% endfor %} </div> </div> </div>
Here is the header (
_includes/styleguide/header.html):
<div class="jumbotron"> <h1>{{ page.title | default: "Styleguide" }}</h1> <p> This document is a component styleguide. Its purpose is to list all the UI modules used across the site / application, their role, how to use them and how they look. </p> <p> Furthermore, this document can be used as a single source of truth when refactoring HTML and CSS in order to ensure no component visually broke. </p> <a href="/" class="btn btn-primary">Back to the site</a> </div>
Here is the navigation (
_includes/styleguide/navigation.html):
<div class="scrollspy"> <div class="s-styleguide-aside hidden-xs hidden-sm"> <ul class="nav"> {% for component in site.styleguide %} {% assign component_name = component.slug | replace: "-", " " | capitalize %} <li> <a href="#{{ component.slug }}">{{ component_name }}</a> </li> {% endfor %} </ul> </div> </div>
Note: if the name of your components do not necessarily match their file name (
slug), you could add a
title or
name key to each of them instead.
And finally, here is the HTML for a component showcase (
_includes/styleguide/component.html), which is admittedly the most complex part of this page:
{% assign component = include.component %} {% assign iframe_source = component.url | prepend: site.baseurl %} {% assign slug = component.slug %} {% assign title = slug | replace: "-", " " | capitalize %} {% assign description = component.description | markdownify %} {% assign html_code = component.content %} {% assign liquid_code = component.example %} {% assign parameters = component.parameters %} {% assign tab_name = slug | append: "-" | append: "-tab" %} <div class="s-styleguide-showcase" id="{{ slug }}"> <div class="panel panel-default"> <div class="panel-heading"> <h2 class="panel-title">{{ title }}</h2> </div> <div class="panel-body"> {{ description }} <!-- Component include parameters --> <table class="table"> <thead> <tr> <th>Parameter</th> <th>Description</th> </tr> </thead> <tbody> {% for parameter in parameters %} {% assign parameter_name = parameter[0] %} {% assign parameter_desc = parameter[1] | markdownify %} <tr> <td><code>{{ parameter_name }}</code></td> <td>{{ parameter_desc }}</td> </tr> {% endfor %} </tbody> </table> <!-- Nav tabs --> <ul class="nav nav-tabs" role="tablist"> <li role="presentation" class="active"> <a href="#{{ tab_name }}-demo" aria-Demo</a> </li> <li role="presentation"> <a href="#{{ tab_name }}-liquid" aria-Liquid</a> </li> <li role="presentation"> <a href="#{{ tab_name }}-html" aria-HTML</a> </li> </ul> <!-- Tab panes --> <div class="tab-content"> <div role="tabpanel" class="tab-pane active" id="{{ tab_name }}-demo"> <iframe src="{{ iframe_source }}" title="{{ title }}"></iframe> </div> <div role="tabpanel" class="tab-pane" id="{{ tab_name }}-liquid"> {% highlight liquid %}{{ liquid_code }}{% endhighlight %} </div> <div role="tabpanel" class="tab-pane" id="{{ tab_name }}-html"> {% highlight html %}{{ html_code }}{% endhighlight %} </div> </div> </div> </div> </div>
A “Styleguide” Layout
This step is not really mandatory. Your styleguide page could definitely use the default layout for your site. In our case, since it needs to include Bootstrap assets and handlers, it is different enough to deserve a separate layout.
It needs to include:
- The main stylesheet from Bootstrap.
- jQuery since it is a Bootstrap dependency.
- The main JavaScript file from Bootstrap.
- A script to resize iframes based on their content.
- A script to initialize the affix navigation.
- The
data-spy="scroll"and
data-target=".scrollspy"attributes on the
bodyelement to enhance the navigation.
Since there is quite a bit of JavaScript to make the styleguide work perfectly, it might be worth adding a file for that in
_includes/styleguide/scripts.html doing just that:
<!-- jQuery --> <script src="" integrity="sha256-BbhdlvQf/</script> <!-- Bootstrap --> > <!-- Iframes resizing --> <script type='text/javascript'> $(function () { $('iframe').on('load', function () { var height = this.contentWindow.document.body.offsetHeight + 'px' $(this).css('height', height) }) }) </script> <!-- Affix sidebar initialisation --> <script> var $nav = $('.c-styleguide-aside') $nav.affix({ offset: { top: $nav.offset().top } }) </script>
Wrapping Things up
That’s it folks! I hope you enjoyed this experiment and have considered the benefits of having a living styleguide in your projects.
Because of Liquid, Jekyll is not the easiest playground to create such a document, but as you can see, it is still is possible to end up with a lovely solution.
Admittedly, there is quite a bit of groundwork to do to setup this styleguide, but from there adding new components turns out to be super simple:
- Create your component in
_includes/components/.
- Create a matching page in
_styleguide/and fill all the information you need.
- Done! ✨
If you have any idea on how to improve things, be sure to share your thoughts in the comments, or even contribute to the demo on GitHub.
Get practical advice to start your career in programming!
Master complex transitions, transformations and animations in CSS! | https://www.sitepoint.com/setting-up-a-living-styleguide-in-jekyll/?utm_source=CSS-Weekly&utm_campaign=Issue-227&utm_medium=web | CC-MAIN-2021-04 | refinedweb | 1,821 | 56.96 |
Closed Bug 1383041 Opened 5 years ago Closed 5 years ago
Update webrender to 0748e02d1be5f889fc17de2eb81c0c363ee3aa80
Categories
(Core :: Graphics: WebRender, enhancement, P3)
Tracking
()
mozilla56
People
(Reporter: kats, Assigned: kats)
References
Details
(Whiteboard: [gfx-noted])
Attachments
(5 files)
+++ This bug was initially created as a clone of Bug #1380645 +++]
Last good cset was b83c200c657f6b6fb17d09f329ba77803420b46a, but updating to cset 8fd634882111415a65da67e947f26eb170234f2f caused a bustage. Copying from bug 1380645 comment 11 and 12: WR @ 8fd634882111415a65da67e947f26eb170234f2f Bustage, regression range: * 8fd6348 Auto merge of #1504 - glennw:lines, r=kvark |\ | * 1fd1eaa Basic implementation of line decoration display items. | * bb8f891 Add LineDisplayItem to Wrench, add reftest | * 88ac352 add LineDisplayItem * dc746ed Auto merge of #1445 - kvark:namespace, r=glennw * 6a2662c Cleaning resources on RenderApi::drop The problem is that the FontKey struct in WR has been updated to include IdNamespace [1]. Both FontKey and IdNamespace have type aliases to deal with them being tuples with unnamed fields [2], but since FontKey contains IdNamespace and not WrIdNamespace, cbindgen doesn't use the alias in this case and generates a definition for IdNamespace [3] which doesn't compile. Ryan, is there a good way to fix this (either in cbindgen, or in our bindings file)? [1] [2] [3]
Flags: needinfo?(rhunt)
There are a few options here. 1. Switch to generating typedef's for `type = ` instead of generating new types. This matches up much closer with the rust definition of a type alias, and we could then have annotations be transfered from the Foo in `type Foo = Bar` to the Bar. Allowing us to specify annotations in webrender_bindings, and have them apply to the actual type in webrender. So, /// cbindgen:field-names=[mHandle] type WrIdNamespace = IdNamespace Would generate, struct IdNamespace { uint64_t mHandle }; typedef IdNamespace WrIdNamespace; I have patches that do this in cbindgen and I can confirm it doesn't break Gecko, although there are a few slight changes needed. 2. Handle tuple structs better Eventually I think we should get rid of all of those aliases, so we should provide a way without annotations to have tuple structs work. I'm thinking a rename rule that only applies to tuple structs. So, struct IdNamespace(u64); Would generate, struct IdNamespace { uint64_t m0; }; And that would at least work and match exactly how Rust handles it. I don't have patches for that, but it wouldn't be too hard.
Flags: needinfo?(rhunt)
With approach #2, we'd lose the field names, right? So we'd have to update any Gecko code that now uses e.g. WrWindowId::mHandle to instead refer to wr::WindowId::m0 instead? If so I think I prefer approach #1.
(In reply to Kartikaya Gupta (email:kats@mozilla.com) from comment #3) > With approach #2, we'd lose the field names, right? So we'd have to update > any Gecko code that now uses e.g. WrWindowId::mHandle to instead refer to > wr::WindowId::m0 instead? If so I think I prefer approach #1. Yes. Rust refers to tuple struct members by number too, but they can use match and if let() to make it not so confusing. I just pushed a new cbindgen with #1. (version 0.1.19). Let me know if there are any issues. I'll also upload a patch with the changes I had to make to Gecko.
Here is the full patch of the new output of cbindgen and the Gecko changes necessary. One gross part is in LayersLogging due to inability to forward declare typedefs. I'm not sure what a great solution for that is. I don't know what still uses LayersLogging with WR types.
So I updated to cbindgen-0.1.19, applied your patch, and ran cbindgen again on top of that. It generates a bunch of changes, like changing "struct VecU8;" to "struct Vec_u8;" and other things. Is that expected? I'm using current m-c tip as my base revision.
Oh, I think it's because of bug 1381949 which just landed. I have it in my tree and I'm guessing you don't have it in yours. I'll see if I can get it building with that.
Yes that seems like it's from bug 1381949. Now that things are typedef'ed, we end up generating more monomorphisations which are mangled. Vec_u8 will be one of those. Let me know if it's too much bustage, we can try to find another solution.
Try push: The "WIP switch to using typedefs in bindings file" patch is the one you attached above, and "[mq]: binding-fix2" is what I need to apply on top of it to effectively rebase on top of bug 1381949. And "Re-generate FFI header" and "[mq]: wr-fixit" are what I'll need as part of this bug to deal with the upstream WR changes. It builds locally, let's see how the try push turns out.
Above try push had OS X build errors which I fixed in subsequent pushes. Here's all the pushes with WR @ 8fd634882111415a65da67e947f26eb170234f2f from over the weekend: Things are green.
WR @ 9ebb50b1e22cea566a7c0a8d2cdc6e446a90ea24 Bustage, looks like a minor API change.
WR @ 9ebb50b1e22cea566a7c0a8d2cdc6e446a90ea24, with fixup Green
WR @ d88ea62991d66583fd411f160523b2de020fcbc9 Bustage again
WR @ d88ea62991d66583fd411f160523b2de020fcbc9 with fixup: Green
WR @ 283192c41743a59da87b065cbc14c659d94c90b5 Green so far, windows still pending
WR @ 8cbc971fc2a61114ba007f7c7d11f8ed72e8317f Green so far, windows still pending.
WR @ 82ac4fbbeff9602027fa972009be7f7c5693f901 Green on linux64, the windows reftest jobs didn't run for some reason (maybe the taskcluster switchover?)
WR @ 0748e02d1be5f889fc17de2eb81c0c363ee3aa80 Ditto
Assignee: nobody → bugmail
Summary: Future webrender update bug → Update webrender to 0748e02d1be5f889fc17de2eb81c0c363ee3aa80
Comment on attachment 8890963 [details] Bug 1383041 - Update WR to cset 0748e02d1be5f889fc17de2eb81c0c363ee3aa80.
Attachment #8890963 - Flags: review?(jmuizelaar) → review+
Comment on attachment 8890965 [details] Bug 1383041 - Update bindings for API change in WR cset 9868ef4.
Attachment #8890965 - Flags: review?(jmuizelaar) → review+
Comment on attachment 8890964 [details] Bug 1383041 - Update bindings for IdNamespace changes in WR cset 6a2662c. looks reasonable
Attachment #8890964 - Flags: review?(kvark) → review+
Comment on attachment 8890966 [details] Bug 1383041 - Update bindings for API change in WR cset 9f66b56. ::: gfx/webrender_bindings/src/bindings.rs:1017 (Diff revision 1) > let content_rect: LayoutRect = content_rect.into(); > let clip_rect: LayoutRect = clip_rect.into(); > > - state.frame_builder.dl_builder.define_scroll_frame(Some(clip_id), content_rect, clip_rect, vec![], None); > + state.frame_builder.dl_builder.define_scroll_frame( > + Some(clip_id), content_rect, clip_rect, vec![], None, > + ScrollSensitivity::ScriptAndInputEvents); As far I know Gecko only sets scroll offsets in WebRender and never uses the WebRender scroll API. If that's the case ScrollSensitivity::Script may be more appropriate here.
Attachment #8890966 - Flags: review?(mrobinson) → review-
Comment on attachment 8890966 [details] Bug 1383041 - Update bindings for API change in WR cset 9f66b56.
Attachment #8890966 - Flags: review?(mrobinson) → review+
Pushed by kgupta@mozilla.com: Update WR to cset 0748e02d1be5f889fc17de2eb81c0c363ee3aa80. r=jrmuizel Update bindings for IdNamespace changes in WR cset 6a2662c. r=kvark Update bindings for API change in WR cset 9868ef4. r=jrmuizel Update bindings for API change in WR cset 9f66b56. r=mrobinson
Status: NEW → RESOLVED
Closed: 5 years ago
status-firefox56: --- → fixed
Resolution: --- → FIXED
Target Milestone: --- → mozilla56 | https://bugzilla.mozilla.org/show_bug.cgi?id=1383041 | CC-MAIN-2022-33 | refinedweb | 1,150 | 65.42 |
Process Mining Based on Regions of Languages
1 Process Mining Based on Regions of Languages Robin Bergenthum, Jörg Desel, Robert Lorenz, and Sebastian Mauser Department of Applied Computer Science, Catholic University of Eichstätt-Ingolstadt, Abstract. In this paper we give an overview, how to apply region based methods for the synthesis of Petri nets from languages to process mining. The research domain of process mining aims at constructing a process model from an event log, such that the process model can reproduce the log, and does not allow for much more behaviour than shown in the log. We here consider Petri nets to represent process models. Event logs can be interpreted as finite languages. Region based synthesis methods can be used to construct a Petri net from a language generating the minimal net behaviour including the given language. Therefore, it seems natural to apply such methods in the process mining domain. There are several different region based methods in literature yielding different Petri nets. We adapt these methods to the process mining domain and compare them concerning efficiency and usefulness of the resulting Petri net. 1 Introduction Often, business information systems log all performed activities together with the respective cases the activities belong to in so called event logs. These event logs can be used to identify the actual workflows of the system. In particular, they can be used to generate a workflow definition which matches the actual flow of work. The generation of a workflow definition from event logs is known as process mining. Application of process mining and underlying algorithms gained increasing attention in the last years, see e.g. [3] and []. There are a number of process mining tools, mostly implemented in the ProM framework [18]. The formal problem of generating a system model from a description of its behaviour is often referred to as synthesis problem. Workflows are often defined in terms of Petri nets [1]. Synthesis of Petri nets is studied since the 1980s [9, 10, 8]. Algorithms for Petri net synthesis have often been applied in hardware design [5, 4]. Obviously, process mining and Petri net synthesis are closely related problems. Mining aims at a system model which has at least the behaviour given by the log and does not allow for much more behaviour. In the optimal case the system has minimal additional behaviour. The goal is to find such a system which is not too complex, i.e., small in terms of its number of components. This is necessary, because practitioners in industry are interested in controllable and interpretable reference models. Apparently, sometimes a trade-off between the size of the model and the additional behaviour has to be found.
2 One of the main differences in Petri net synthesis is that one is interested in a Petri net representing exactly the specified behaviour. Petri net synthesis was originally assuming a behavioural description in terms of transition systems. For a transition system, sets of nodes called regions can be identified. Each region refers to a place of the synthesized net. Analogous approaches in the context of process mining are presented in [4, 0]. Since process mining usually does not start with a transition system, i.e., a state based description of behaviour, but rather with a set of sequences, i.e., a language based description of behaviour, the original synthesis algorithms are not immediately applicable. In [4, 0] artificial states are introduced to the log in order to generate a transition system. Then synthesis algorithms transforming the state-based model into a Petri net, that exactly mimics the behaviour of the transition system, are applied. The problem is that these algorithms include reproduction of the state structure of the transition system, although the artificial states of the transition system are not specified in the log. In many cases this leads to a bias of the process mining result. However, there also exist research results on algorithmic Petri net synthesis from languages [6, 1,, 1]. In these approaches, regions are defined on languages. It seems natural to directly use these approaches for process mining, because logs can directly be interpreted as languages. The aim of this paper is to adjust such language based synthesis algorithms to solve the process mining problem. The idea of language based synthesis algorithms is as follows: The transitions of the constructed net are given by the characters of the language. Adding places restricts the behaviour of the net. Only places not prohibiting sequences of the language are added. Thus the resulting net includes the behaviour specified by the language. This approach is very well suited for process mining. If the language is given by an event log, the constructed net reproduces the log. The algorithmic methods of language based synthesis, deciding which places are added to the net, will turn out to guarantee, that the constructed net does not allow for much more behaviour than shown in the log. We will present methods for process mining adapted from language based synthesis methods. We compare the methods and give a complete overview of the applicability of regions of languages to the process mining problem. The process mining algorithms discussed in this paper are completely based on formal methods of Petri net theory guaranteeing reliable results. By contrast, most existing process mining approaches are partly based on heuristic methods, although they borrow techniques from formally developed research areas such as machine learning and grammatical inference [3, 17], neural networks and statistics [3, 3], or Petri net algorithms [7, 4, 0]. The paper is organized as follows. Section provides formal definitions. Section 3 motivates and explains language based synthesis algorithms and defines the process mining problem tackled in this paper more formally. A preliminary solution to the process mining problem defines nets with infinitely many places. In Section 4, two methods for identifying finite (and small) subsets of places which suffice for representing the behaviour of the given event log are presented and compared. 
Finally, the conclusion completes the overview of the applicability of language based synthesis for process mining and provides a bridge from the more theoretical considerations of this paper to practically useful algorithms.
3 Preliminaries In this section we recall the basic notions of languages, event logs and place/transition Petri nets. An alphabet is a finite set A. The set of all strings (words) over an alphabet A is denoted by A. The empty word is denoted by λ. A subset L A is called language over A. For a word w A, w denotes the length of w and w a denotes the number of occurrences of a A in w. Given two words v, w, we call v prefix of w if there exists a word u such that vu = w. A language L is prefix-closed, if for every w L each prefix of w also belongs to L. The following definition is a formalization of typical log files. Since we focus on the control flow of activities (their ordering), we abstract from some additional information such as originators of events and time stamps of events. Definition 1 (Event log). Let T be a finite set of activities and C be a finite set of cases. An event is an element of T C. An event log is an element of (T C). Given a case c C we define the function p c : T C T by p c (t, c ) = t if c = c and p c (t, c ) = λ else. Given an event log σ = e 1... e n (T C) we define the process language L(σ) of σ by L(σ) = {p c (e 1 )... p c (e i ) i n, c C} T. Observe that the process language of an event log is finite and prefix closed. It represents the control flow of the activities given by the log. Each case of the log adds one word (drawn in italic in the following example) over the set of activities together with its prefixes to the process language. Of course several cases may add the same words to the process language (e.g. the word abba in the following example). Therefore in real life, the control flow given by an event log is a bag of words. In this paper we do not distinguish words w.r.t. their frequencies. Therefore, the process language is defined as a set of words. The following example log will serve as a running example. event log (activity,case): (a,1) (b,1) (a,) (b,1) (a,3) (d,3) (a,4) (c,) (d,) (e,1) (c,3) (b,4) (e,3) (e,) (b,4) (e,4) process language: a ab abb abbe ac acd acde ad adc adce Example 1. A net is a triple N = (P, T, F ), where P is a (possibly infinite) set of places, T is a finite set of transitions satisfying P T =, and F (P T ) (T P ) is a flow relation. Let x P T be an element. The preset x is the set {y P T (y, x) F }, and the post-set x is the set {y P T (x, y) F }. Definition (Place/transition-net). A place/transition-net (p/t-net) is a quadruple N = (P, T, F, W ), where (P, T, F ) is a net, and W : F N is a weight function. We extend the weight function W to pairs of net elements (x, y) (P T ) (T P ) with (x, y) F by W (x, y) = 0. A marking of a p/t-net N = (P, T, F, W ) is a function m : P N 0 assigning m(p) tokens to a place p P. A marked p/t-net is a pair (N, m 0 ), where N is a p/t-net, and m 0 is a marking of N, called initial marking. As usual, places are drawn as circles including tokens representing the initial marking, transitions are depicted as rectangles and the flow relation is shown by arcs which have annotated the values of the weight function (the weight 1 is not shown).
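To make Definition 1 and Example 1 concrete, the following small Python sketch (not part of the original paper; the function name and data layout are ad hoc) computes the process language of the example event log by projecting the log onto each case and collecting all non-empty prefixes.

def process_language(event_log):
    """Process language L(sigma): per case, the projected activity sequence and all its non-empty prefixes."""
    words = set()
    for case in {c for (_, c) in event_log}:
        trace = [a for (a, c) in event_log if c == case]   # projection p_c onto case c
        for i in range(1, len(trace) + 1):                 # all non-empty prefixes
            words.add(''.join(trace[:i]))
    return words

# Event log of Example 1 as (activity, case) pairs in recording order.
log = [('a', 1), ('b', 1), ('a', 2), ('b', 1), ('a', 3), ('d', 3), ('a', 4), ('c', 2),
       ('d', 2), ('e', 1), ('c', 3), ('b', 4), ('e', 3), ('e', 2), ('b', 4), ('e', 4)]

print(sorted(process_language(log)))
# ['a', 'ab', 'abb', 'abbe', 'ac', 'acd', 'acde', 'ad', 'adc', 'adce']

The empty word is omitted here, matching the listing of the process language in Example 1.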
4 A transition t T is enabled to occur in a marking m of a p/t-net N if m(p) W (p, t) for every place p t. If a transition t is enabled to occur in a marking m, then its occurrence leads to the new marking m defined by m (p) = m(p) W (p, t)+ W (t, p) for every p P. That means t consumes W (p, t) tokens from p and produces W (t, p) tokens in p. We write m t m to denote that t is enabled to occur in m and that its occurrence leads to m. A finite sequence of transitions w = t 1... t n, n N, is called an occurrence sequence enabled in m and leading to m n if there exists a sequence of markings m 1,..., m n such that m t1 t m t 1... n mn. In this case m k (1 k n) is given by m k (p) = m(p) + k i=1 (W (t i, p) W (p, t i )) for p P. The set of all occurrence sequences enabled in the initial marking m 0 of a marked p/t-net (N, m 0 ) forms a language over T and is denoted by L(N, m 0 ). Observe that L(N, m 0 ) is prefix closed. L(N, m 0 ) models the (sequential) behaviour of (N, m 0 ). There is the following straightforward characterization of L(N, m 0 ): Lemma 1. Let (N, m 0 ) be a marked p/t-net. Then w = t 1... t n T, n N, is in L(N, m 0 ) if and only if for each 1 k n and each p P there holds: m 0 (p) + k 1 i=1 (W (t i, p) W (p, t i )) W (p, t k ). Let w = t 1... t n L(N, m 0 ) and t T. Then wt L(N, m 0 ) if and only if for one p P there holds: m 0 (p) + n i=1 (W (t i, p) W (p, t i )) < W (p, t). Figure 1 shows a marked p/t-net having exactly the process language of the running example as its language of occurrence sequences. That means this Petri net model is a process model describing the process given by the event log. a c b e d Fig. 1. Petri net model describing the event log of the running example. 3 Theory of Regions Applied to Process Mining In this section we formally define the process mining problem and show how the classical language based theory of regions can be adjusted to solve this problem. The regions
5 definition introduced in this section is an adaption of the definition in [6, 1] to the setting of process mining. In [6, 1] languages given by regular expressions instead of finite languages given by event logs and pure nets instead of p/t-nets are considered. In the following section we will develop concrete algorithms from the considerations presented in this section. Process mining aims at the construction of a process model from an event log which is able to reproduce the behaviour (the process) of the log, and does not allow for much more behaviour than shown in the log. Moreover, as argued already in the introduction, the process model in the ideal case serves as a reference model which can be interpreted by practitioners. Therefore the model should be as small as possible. As we will show, there is a trade-off between the size of the constructed model and the degree of the match of the behaviour generated by the model and the log. In this paper we formalize process models as Petri nets and consider the following process mining problem: Given: An event log σ. Searched: A preferably small finite marked p/t-net (N, m 0 ) such that (1) L(σ) L(N, m 0 ) and () L(N, m 0 ) \ L(σ) is small. In the following we will consider a fixed process language L(σ) given by an event log σ with set of activities T. An adequate method to solve the process mining problem w.r.t. L(σ) is applying synthesis algorithms using regions of languages. Region-based synthesis algorithms all follow the same principle: Given a language L(σ), the set of transitions of the searched net is given by the set of characters T used in L(σ). Then each w L(σ) is an enabled occurrence sequence w.r.t. the resulting marked p/t-net (, T,,, ) consisting only of these transitions (having an empty set of places), because there are no causal dependencies between the transitions. That means L(σ) L(, T,,, ) = T and thus (, T,,, ) fulfills (1). But this net has many enabled occurrence sequences not specified in L(σ), because L(σ) is finite. That means () is not fulfilled. Thus, the behaviour of this net is restricted by adding places leading to a marked p/t-net (N, m 0 ), N = (P, T, F, W ). Every place p P is defined by its initial marking m 0 (p) and the weights W (p, t) and W (t, p) of the arcs connecting them to each transition t T. In order to preserve (1), only places are added, which do not prohibit sequences of L(σ). Such places are called feasible (w.r.t. L(σ)). Definition 3 (Feasible place). Let (N, m p ), N = ({p}, T, F p, W p ) be a marked p/tnet with only one place p (F p, W p, m p are defined according to the definition of p). The place p is called feasible (w.r.t. L(σ)), if L(σ) L(N, m p ), otherwise non-feasible. Adding only feasible places yields a net fulfilling (1), while adding any non-feasible place yields a net not fulfilling (1). The more feasible places we add the smaller is the set L(N, m 0 ) \ L(σ). Adding all feasible places minimizes L(N, m 0 ) \ L(σ) (preserving (1)). That means the resulting net called the saturated feasible net is an optimal solution for the process mining problem concerning (1) and () (but it is not small). Definition 4 (Saturated feasible net). The marked p/t-net (N sat, m sat ), N sat = (P, T, F, W ), such that P is the set of all places feasible w.r.t. L(σ) is called saturated feasible (w.r.t. L(σ)) (F, W, m 0 are defined according to the definitions of the feasible places). Theorem 1. The saturated feasible p/t-net (N sat, m sat ) w.r.t. 
L(σ) satisfies L(σ) ⊆ L(N_sat, m_sat) and, for every marked p/t-net (N, m_0), L(σ) ⊆ L(N, m_0) implies L(N_sat, m_sat) ⊆ L(N, m_0).

In particular there holds either L(N_sat, m_sat) = L(σ) or there is no p/t-net (N, m_0) satisfying L(N, m_0) = L(σ). The problem is that (N_sat, m_sat) is not finite. Therefore, Theorem 1 has only theoretical value. Moreover, in the above considerations we did not include the formulated aim to construct a small p/t-net. Here the trade-off between the size of the constructed net and (2) comes into play: the more feasible places we add, the better (2) is reached, but the bigger becomes the constructed net. The central question is which feasible places should be added. There are two basic algorithmic approaches throughout the literature to synthesize a finite net (N, m_0) from a language. In both approaches (N, m_0) represents (N_sat, m_sat) in the sense that L(N, m_0) = L(σ) if and only if L(N_sat, m_sat) = L(σ). The crucial idea in these approaches is to define feasible places structurally on the level of the given language. Every feasible place is defined by a so called region of the language. A region is simply a tuple of natural numbers which represents the initial marking of a place and the number of tokens each transition consumes respectively produces in that place, satisfying some property which ensures that no occurrence sequence of the given language is prohibited by this place.

Definition 5 (Region). Denoting T = {t_1, ..., t_n}, a region of L(σ) is a tuple r = (r_0, r_1, ..., r_{2n}) ∈ ℕ_0^(2n+1) satisfying for every wt ∈ L(σ) (w ∈ L(σ), t ∈ T):

(∗)  r_0 + Σ_{i=1..n} ( |w|_{t_i} · r_i − |wt|_{t_i} · r_{n+i} ) ≥ 0.

Every region r of L(σ) defines a place p_r via m_0(p_r) := r_0, W(t_i, p_r) := r_i and W(p_r, t_i) := r_{n+i} for 1 ≤ i ≤ n. The place p_r is called corresponding place to r. From Lemma 1, we deduce:

Theorem 2. Each place corresponding to a region of L(σ) is feasible w.r.t. L(σ), and each place feasible w.r.t. L(σ) corresponds to a region of L(σ).

Thus, the set of feasible places w.r.t. L(σ) corresponds to the set of regions of L(σ). The set of regions can be characterized as the set of non-negative integral solutions of a homogenous linear inequation system A_L(σ) · r ≥ 0. The matrix A_L(σ) consists of rows a_wt = (a_{wt,0}, ..., a_{wt,2n}) for all wt ∈ L(σ), such that a_wt · r ≥ 0 is equivalent to (∗). This is achieved by setting for each wt ∈ L(σ):

a_{wt,i} = 1 for i = 0,
a_{wt,i} = |w|_{t_i} for i = 1, ..., n,
a_{wt,i} = −|wt|_{t_{i−n}} for i = n+1, ..., 2n.

The next table shows this inequation system for the process language of Example 1.
a:    r_0 − r_6 ≥ 0
ab:   r_0 + r_1 − r_6 − r_7 ≥ 0
abb:  r_0 + r_1 + r_2 − r_6 − 2·r_7 ≥ 0
abbe: r_0 + r_1 + 2·r_2 − r_6 − 2·r_7 − r_{10} ≥ 0
ac:   r_0 + r_1 − r_6 − r_8 ≥ 0
acd:  r_0 + r_1 + r_3 − r_6 − r_8 − r_9 ≥ 0
acde: r_0 + r_1 + r_3 + r_4 − r_6 − r_8 − r_9 − r_{10} ≥ 0
ad:   r_0 + r_1 − r_6 − r_9 ≥ 0
adc:  r_0 + r_1 + r_4 − r_6 − r_9 − r_8 ≥ 0
adce: r_0 + r_1 + r_4 + r_3 − r_6 − r_9 − r_8 − r_{10} ≥ 0

The inequation system may have fewer inequations than the number of words in the considered language. In this example the inequations for acde and adce coincide. Altogether the idea of adding feasible places is very well suited for process mining, because this guarantees (1). But so far it is not clear which feasible places should be added such that the resulting net does not become too big and (2) is still satisfactorily fulfilled. The two mentioned region based approaches to synthesize a finite net from a language propose two different procedures to add a finite set of feasible places. These procedures are the candidates to yield a good solution of the process mining problem. Both approaches are based on linear programming techniques and convex geometry to calculate a certain finite set of regions based on the above characterization of the set of regions by a linear inequation system. In the following section we adjust both approaches to the considered process mining problem and discuss their applicability and their results in this context.

4 Solving the Process Mining Problem

We first introduce three basic principles to identify redundant places. Redundant places can be omitted from a marked p/t-net (N, m_0) without changing L(N, m_0). That means, when constructing a net by adding feasible places, we do not add redundant feasible places (since this does not influence (2)).

Definition 6 (Redundant place). Given a marked p/t-net (N, m_0), N = (P, T, F, W), a place p ∈ P is called redundant if L(N, m_0) = L(P \ {p}, T, F ∩ ((P \ {p} × T) ∪ (T × P \ {p})), W|_{F ∩ ((P \ {p} × T) ∪ (T × P \ {p}))}, m_0|_{P \ {p}}).

A place p fulfilling W(p, t) ≤ W(t, p) for each t ∈ T and m_0(p) ≥ max{W(p, t) | t ∈ T} induces no behavioural restriction and is therefore called useless. A place p is called a non-negative linear combination of places p_1, ..., p_k if there are non-negative real numbers λ_1, ..., λ_k (k ∈ ℕ) such that m_0(p) = Σ_{i=1..k} λ_i · m_0(p_i), W(p, t) = Σ_{i=1..k} λ_i · W(p_i, t) and W(t, p) = Σ_{i=1..k} λ_i · W(t, p_i) for all transitions t. In such a case we shortly write p = Σ_{i=1..k} λ_i · p_i. A place p is called less restrictive than a place p′ if λ · m_0(p) ≥ m_0(p′) and λ · W(t, p) ≥ W(t, p′) as well as λ · W(p, t) ≤ W(p′, t) for all transitions t and some λ > 0. In such a case we shortly write p ⊒ p′ (p is less restrictive than p′).

Lemma 2. Let (P, T, F, W, m_0) be a marked p/t-net and let p, p′, p_1, ..., p_k ∈ P be pairwise different places. Then there holds:
8 (i) p useless = p is redundant. (ii) p p = p is redundant. (iii) p = k i=1 λ i p i = p is redundant. Proof. (i) and (ii) are clear by definition, (iii) is proven in [1]. These results can be used to effectively construct a finite net solving the process mining problem. In the following subsections the two mentioned existing basic approaches are introduced, optimized w.r.t. the process mining problem and compared. 4.1 Method 1: Finite Basis of Feasible Places The first strategy to add a certain finite set of feasible places, used in [1], computes a so called finite basis of the set of all feasible places. Such a basis is a finite set of feasible places P b = {p 1,..., p k }, such that each other feasible place p is a non-negative linear combination of p 1,..., p k. The idea is to add only basis places. Adding all basis places leads to a finite representation of the saturated feasible net. By Lemma (iii) the resulting marked p/t-net (N b, m b ), N = (P b, T, F b, W b ), fulfills L(N sat, m sat ) = L(N b, m b ). Consequently, this approach leads to an optimal solution of the process mining problem concerning (). The following considerations show that such a finite basis always exists and how it can be computed. As mentioned, the set of feasible places can be defined exactly as the set of nonnegative integer solutions of A L(σ) r 0. The set of non-negative real solutions of such a system is a pointed polyhedral cone [19]. According to a theorem of Minkowski [15, 19] polyhedral cones are finitely generated, that means there are finitely many solutions y 1,..., y k, called basis solutions, such that each element r of the polyhedral cone is a non-negative linear sum r = k i=1 λ i y i for some λ 1,..., λ k 0. Pointed polyhedral cones have a unique (except for scaling) minimal (w.r.t. set inclusion) set of basis solutions given by the rays of the cone [19]. If all entries of A L(σ) are integers, then also the entries of the basis solutions can be chosen as integers. If r = k i=1 λ i y i for basis solutions y 1,..., y k of A L(σ) r 0, r 0, then p r = k i=1 λ i p yi. Thus, to compute a finite representation of (N sat, m sat ), we compute a finite set of integer basis solutions of A L(σ) r 0, r 0. The set of places P b corresponding to such basis solutions forms a basis of the set of all feasible places. The minimal set of basis solutions y 1,..., y k can be effectively computed from A L(σ) (see for example [16]). The time complexity of the computation essentially depends on the number k of basis solutions which is bounded by k ( ) L(σ) + T +1 T +1. That means, in the worst case the time complexity is exponential in the number of words of L(σ), whereas in most practical examples of polyhedral cones the number of basis solutions is reasonable. The finite set P b usually still includes redundant places. The redundant places described in Lemma by (i) and (ii) are deleted from (N b, m b ) in order to get a preferably small net solving the process mining problem. Algorithm 1 computes (N b, m b ). An example for a net calculated by this algorithm is shown for the event log of Example 1. We used the Convex Maple [14] package from [11] to calculate the rays of the inequation system A L(σ) r 0, r 0. The number of basis places corresponding to rays is 55 in this example. Steps 11 to 13 of Algorithm 1 delete 15 of these places.
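Algorithm 1 below assembles the matrix A_L(σ) row by row (steps 4–6). The following Python sketch (not from the paper; all names are made up) shows how a single row a_wt can be derived directly from a word of the process language, following the definition of A_L(σ) given after Theorem 2.

def region_row(w, t, transitions):
    """Row a_wt of A_L(sigma) for the word wt = w followed by activity t.

    transitions is the ordered list [t_1, ..., t_n]; the row has 2n+1 entries,
    so that a_wt . r >= 0 is exactly condition (*) of Definition 5.
    """
    n = len(transitions)
    wt = w + t
    row = [1] + [0] * (2 * n)
    for i, ti in enumerate(transitions, start=1):
        row[i] = w.count(ti)          # coefficient of r_i: tokens produced by occurrences of t_i in w
        row[n + i] = -wt.count(ti)    # coefficient of r_{n+i}: tokens consumed up to and including t
    return row

T = ['a', 'b', 'c', 'd', 'e']
# Row for wt = 'abb' (w = 'ab', t = 'b'), i.e. r_0 + r_1 + r_2 - r_6 - 2*r_7 >= 0:
print(region_row('ab', 'b', T))       # [1, 1, 1, 0, 0, 0, -1, -2, 0, 0, 0]
# The full matrix consists of one such row per word of L(sigma):
# A = [region_row(w[:-1], w[-1], T) for w in process_language(log)]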
9 1: L(σ) getp rocesslanguage(σ) : A EmptyMatrix 3: (P, T, F, W, m 0 ) (, getactivities(σ),,, ) 4: for all w L(σ) do 5: A.addRow(a w) 6: end for 7: Solutions getintegerrays(a r 0, r 0) 8: for all r Solutions do 9: P.addCorrespondingP lace(r) 10: end for 11: for all (p, p ) P P, p p do 1: if p.isuseless() p p then P.delete(p) end if 13: end for 14: return (P, T, F, W, m 0) Algorithm 1: Computes (N b, m b ) from an event log σ. Many of the 40 places of the resulting net (N b, m b ) are still redundant. It is possible to calculate a minimal subset of places, such that the resulting net has the same behaviour as (N b, m b ). This would lead to the net shown in Figure with only five places. But this is extremely inefficient. Thus, more efficient heuristic approaches to delete redundant places are of interest. The practical applicability of Algorithm 1 could be drastically improved with such heuristics. In the considered example, most of the redundant places are so called loop places. A loop place is in the pre- and the postset of one transition. If we delete all loop places from (N b, m b ), there remain the five places shown in Figure plus the eight redundant places shown in Figure 3. In this case this procedure did not change the behaviour of the net (i.e. all loop places were redundant). a c b e a c b e d d Fig.. Key places of the net constructed from the event log of Example 1 with method 1. Fig. 3. Further places of the net constructed from the event log of Example 1 with method 1. In this example the process language is exactly reproduced by the constructed net, i.e. L(N b, m b ) = L(σ). Usually this is not the case. For example omitting the word acde (but not its prefixes) from the process language, the inequation system is not changed, since adce defines the same inequation. Therefore the net constructed from
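Whichever tool is used to compute the rays, every place obtained this way can be validated directly against the log with the replay condition of Lemma 1. A small Python sketch of this feasibility test of Definition 3 (not from the paper; the candidate places below are made-up examples):

def is_feasible(place, words):
    """Definition 3 via Lemma 1: the one-place net must replay every word of the process language."""
    m0, produce, consume = place            # produce[t] = W(t, p), consume[t] = W(p, t)
    for word in words:
        tokens = m0
        for t in word:
            if tokens < consume.get(t, 0):  # the occurrence of t would be prohibited by p
                return False
            tokens += produce.get(t, 0) - consume.get(t, 0)
    return True

L = {'a', 'ab', 'abb', 'abbe', 'ac', 'acd', 'acde', 'ad', 'adc', 'adce'}
# Candidate place: a produces two tokens, b, c and d each consume one token.
print(is_feasible((0, {'a': 2}, {'b': 1, 'c': 1, 'd': 1}), L))   # True
# A place where b consumes two tokens prohibits the word 'abb' and is therefore non-feasible.
print(is_feasible((0, {'a': 2}, {'b': 2}), L))                   # False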
10 this changed language with Algorithm 1 coincides with the above example. This net has the additional occurrence sequence adce not belonging to the changed process language. Since the net calculated by Method 1 is the best approximation to the given process language, the changed process language (given by a respective log) has to be completed in this way to be describable through a p/t-net. The main advantage of method 1 is the optimality w.r.t. (). The resulting process model may be seen as a natural completion of the given probably incomplete log file. Problematic is that the algorithm in some cases may be inefficient in time and space consumption. Moreover, the resulting net may be relatively big. 4. Method : Separating Feasible Places The second strategy, used e.g. in [1, ], is to add such feasible places to the constructed net, which separate specified behaviour from non-specified behaviour. That means for each w L(σ) and each t T such that wt L(σ), one searches for a feasible place p wt, which prohibits wt (as shown in the second part of Lemma 1). Such wt is called wrong continuation (also called faulty word in [1]) and such places are called separating feasible places. If there is such a separating feasible place, it is added to the net. The number of wrong continuations is bounded by L(σ) T. Thus the set P s containing one separating feasible place for each wrong continuation, for which such place exists, is finite. The resulting net (N s, m s ), N s = (P s, T, F s, W s ) yields a good solution for the process mining problem: It holds L(N sat, m sat ) = L(σ) there exists a separating feasible place for each wrong continuation L(N s, m s ) = L(σ). That means, if the process language of the log can exactly be generated by a p/t-net, the constructed net (N s, m s ) is such a net. Consequently, in this case () is optimized. But in general (N s, m s ) does not necessarily optimize (), since L(N s, m s ) L(N sat, m sat ) = L(N b, m b ) is possible (because even if there is no feasible place prohibiting wt, there might be one prohibiting wtt but such places are not added). However, in most practical cases L(N s, m s ) = L(N b, m b ) is fulfilled (see Subsection 4.3). In situations, where this is not the case, L(N s, m s ) \ L(N b, m b ) is usually small and thus L(N s, m s ) \L(σ) is small. The following heuristic can be used to further reduce L(N s, m s ) \ L(σ) in such situations: If there is no feasible place prohibiting a wrong continuation wt, try to construct a feasible place prohibiting wtt, and if there is no such place, try to construct a feasible place prohibiting wtt t, and so on, until you reach a satisfactory result. In order to compute a separating feasible place which prohibits a wrong continuation wt, one defines so called separating regions defining such places: Definition 7 (Separating region). Let r be a region of L(σ) and let wt be a wrong continuation. The region r is a separating region (w.r.t. wt) if ( ) r 0 + n ( w ti r i wt ti r n+i ) < 0. i=1 Lemma 1 shows that each separating feasible place prohibiting a wrong continuation wt corresponds to a separating region w.r.t. wt and vice versa. A separating region
11 r w.r.t. a wrong continuation wt can be calculated (if it exists) as a non-negative integer solution of a homogenous linear inequation system with integer coefficients of the form A L(σ) r 0 b wt r < 0. The vector b wt = (b 0,..., b n ) is defined in such a way that b wt r < 0 ( ). This is achieved by setting 1 for i = 0, b wt,i = w ti for i = 1,..., n wt ti n for i = n + 1,..., n. The matrix A L(σ) is defined as before. For example the inequation b wt r < 0 for the wrong continuation abc of the process language of Example 1 reads as follows: r 0 + r 1 + r r 6 r 7 r 8 < 0. If there exists no non-negative integer solution of this system, there exists no separating region w.r.t. wt and thus no separating feasible place prohibiting wt. If there exists a non-negative integer solution of the system, any such solution defines a separating feasible place prohibiting wt. There are several linear programming solver to decide the solvability of such a system and to calculate a solution if it is solvable. The choice of a concrete solver is a parameter of the process mining algorithm, that can be used to improve the results or the runtime. Since the considered system is homogenous, we can apply solvers searching for rational solutions, because each rational solution of the system can be transformed to an integer solution by multiplying with the common denominator. In order to decide if there is a non-negative rational solution and to find such solution in the positive case, the ellipsoid method by Khachiyan [19] can be used. The runtime of this algorithm is polynomial in the size of the inequation system. Since there are at most L(σ) T wrong continuations, the time complexity for computing (N s, m s ) is polynomial in the size of the input event log σ. Although the method of Khachiyan yields an algorithm to solve the process mining problem in polynomial time, usually a better choice is the classical Simplex algorithm or variants of the Simplex algorithm [5]. While the Simplex algorithm is exponential in the worst case, probabilistic and experimental results [19] show that the Simplex algorithm has a significant faster average runtime than the algorithm of Khachiyan. The standard procedure to calculate a starting edge with the Simplex algorithm is a natural approach to decide, if there is a non-negative integer solution of the linear inequation system and to find such solution in the positive case. But it makes also sense to use the whole Simplex method including a linear objective function that is optimized (minimized or maximized). The choice of a reasonable objective function for the Simplex solver is a parameter of the algorithm to improve the results. An appropriate example for this is a function minimizing the resulting separating region, i.e. generating minimal arc weights and a minimal initial marking. Moreover, there are several variants of the Simplex algorithm that can improve the runtime of the mining algorithm [5]. For example the inequation systems for the different wrong continuations only differ in the last inequation b wt r < 0. This enables the efficient application of incremental Simplex methods.
12 Independently from the choice of the solver, certain separating feasible places may separate more than one wrong continuation. For not yet considered wrong continuations, that are prohibited by feasible places already added to the constructed net, we do not have to calculate a separating feasible place. Therefore we choose a certain ordering of the wrong continuations. We first add a separating feasible place for the first wrong continuation (if such place exists). Then we only add a separating feasible place for the second wrong continuation, if it is not prohibited by an already added feasible places, and so on. This way we achieve, that in the resulting net (N s, m s ), various wrong continuations are prohibited by the same separating feasible place. The chosen ordering of the wrong continuations can be used as a parameter to positively adjust the algorithm. In particular, given a fixed solver, there always exists an ordering of the wrong continuations, such that the constructed net has no redundant places. In general (N s, m s ) may still include redundant places. There exist no redundant places w.r.t. (i) of Lemma, but there may exist redundant places w.r.t. (ii). These places can finally be deleted from (N s, m s ) in order to get a preferably small net. Algorithm calculates (N s, m s ). 1: L(σ) getp rocesslanguage(σ) : W C getw rongcontinuations(l(σ)) 3: A EmptyMatrix 4: (P, T, F, W, m 0 ) (, getactivities(σ),,, ) 5: for all w L(σ) do 6: A.addRow(a w ) 7: end for 8: for all w W C do 9: if isoccurrencesequence(w, (P, T, F, W, m 0 )) then 10: r Solver.getIntegerSolution(A r 0, r 0, b w r < 0) 11: if r null then 1: p correspondingp lace(r) 13: for all p P do 14: if p p then P.delete(p ) end if 15: end for 16: P.add(p) 17: end if 18: end if 19: end for 0: return (P, T, F, W, m 0 ) Algorithm : Computes (N s, m s ) from an event log σ. An example for a net calculated by this algorithm is shown for the log of Example 1. We considered the length-plus-lexicographic order of the 45 wrong continuations: b, c, d, e, aa, ae, aba, abc, abd, abe, aca, acb, acc, ace, ada, adb, add, ade, abba, abbb, abbc, abbd, acda, acdb, acdc, acdd, adca, adcb, adcc, adcd, abbea, abbeb, abbec, abbed, abbee, acdea, acdeb, acdec, acded, acdee, adcea, adceb, adcec, adced, adcee. To calculate a separating feasible place for a given wrong continuation, we used the Simplex
13 method of the Maple Simplex package [14]. We chose an objective function (for the Simplex algorithm) that minimizes all arc weights outgoing from the constructed place as well as the initial marking. Figure 4 shows the places resulting from the first five wrong continuations b, c, d, e and aa. In Figures we annotate the constructed separating feasible places with the wrong continuation, for which the place was calculated. The next wrong continuation ae leads the ae-place in Figure 5. Then aba is already prohibited by the aa-place and thus steps 10 to 17 of Algorithm are skipped for this wrong continuation. The next three wrong continuations abc, abd and abe lead to the respective separating feasible places in Figure 5. All remaining 35 wrong continuations are prohibited by one of the already calculated feasible places. The b-, c-, and d-place from Figure 4 are deleted in Figure 5, because each of these places is less restrictive than either the abc- or the abd-place. Thus, they are deleted as redundant places according to step 13 to 15 of Algorithm. Consequently the net in Figure 5 with six places results from Algorithm (only the e-place is still redundant). Altogether the Simplex algorithm was used to calculate nine separating feasible places, of which we deleted three as redundant places. aa a c b d e c b d e aa a abc abd e c b d ae abe e Fig. 4. First five places calculated from the event log of Example 1 with method. Fig. 5. Final net constructed from the event log of Example 1 with method. The resulting net exactly reproduces the behaviour of the log. Omitting the word acde (but not its prefixes) from the process language, the algorithm calculates also the net from Figure 5 (the inequation systems are not changed). This net does not exactly match the behaviour of this changed process language. But it is still an optimal solution regarding () (although this is not guaranteed, since L(N b, m b ) L(σ)). The main advantage of method is that the number of added places is bounded by L(σ) T and that in most practical cases it is a lot smaller. Usually the resulting net is small and concise. The calculation of the net is efficient. There exists a polynomial time algorithm. Problematic is, that a good solution regarding () is not guaranteed, i.e. there may be intricate examples leading to a bad solution of the process mining problem. The next subsection shows an example, where the constructed net is not optimal regarding (), but this example was really hard to find. Therefore, in most cases the net should be an optimal solution. In the special case L(N sat, m sat ) = L(σ), the optimality of
14 method is even guaranteed. Moreover if the constructed net is not optimal, the example of the next subsection indicates that it is usually still a good solution of the process mining problem. Altogether the process model resulting from method is a reasonable completion of the given probably incomplete log file. Lastly it remains to mention that in contrast to method 1, method also computes if the process language of the log is exactly reproduced by the constructed net, since there holds L(N s, m s ) = L(σ) if and only if there is a separating feasible place for every wrong continuation. 4.3 Comparison of Method 1 and Method While L(N b, m b ) is the smallest net language including L(σ) and thus (N b, m b ) is optimal w.r.t. (), this must not be the case for (N s, m s ). On the other hand, the examples and considerations in Subsection 4.1 and 4. have shown, that method (calculating (N s, m s )) is more efficient than method 1 (calculating (N b, m b )) and that method leads to significantly smaller nets. The nets resulting from method 1 usually need some heuristical adaptions to get a manually tractable size. In the following we consider an example event log leading to a situation, in which method 1 can lead to a better solution than method (dependent on the chosen parameters of method ) regarding (). event log (activity,case): (a,1) (a,1) (b,) (b,) (b,1) process language: a aa aab b bb Example. Method 1 computes the net (N b, m b ) in Figure 6. The behaviour of the net (N b, m b ) completes L(σ) by the occurrence sequence ab, i.e. L(N b, m b ) \ L(σ) = {ab}. The net resulting from method is dependent on two parameters: the order of the wrong continuations and the solver calculating the separating feasible places (i.e. which solutions of the inequation systems are computed). We fixed the length-plus-lexicographic order for the wrong continuations. Our standard procedure using the Maple Simplex algorithm calculated the net on the bottom in Figure 7. This net is optimal regarding (). But choosing another solver could also lead to the net depicted on the top in Figure 7. This net contains another separating feasible place for the wrong continuation aabb (the corresponding region also solves the respective inequation system). It has one additional occurrence sequence abb in contrast to the nets in Figure 6 and Figure 7. Thus it is not optimal regarding (). Since the region corresponding to this place is not an edge of the polyhedron defined by the inequation system corresponding to the wrong continuation aabb and the Simplex algorithm always computes edges, the Simplex solver computed the edge-solution defining the aabb-place on the bottom of Figure 7. Thus, we could not find a solver leading to a non-optimal net regarding (). But of course we cannot rule out the possibility, that there are log files such that a solver generates such a nonoptimal solution. It remains to mention that we searched a long time for the presented example event log, in which it is at least possible that method computes a non-optimal solution.
15 aaa a aabb ba b 3 a b aaa a aabb ba b Fig. 6. Net calculated with method 1. Fig. 7. Two alternative nets calculated with method We showed in this subsection that it is actually possible that method 1 leads to a better solution regarding () than method. As argued, it is reasonable to assume that this happens only in really rare cases. Method still provides a good solution in these cases. It can be optimized w.r.t. two parameters the chosen solver and the ordering of the wrong continuations. The distinct advantages of method concerning the runtime and the size of the calculated net altogether argue for method. But method 1 can still lead to valuable results, in particular if combined with some heuristics to decrease the number of places of the constructed net. Mainly, algorithms deleting redundant places are of interest. 5 Conclusion The presented methods were restricted in two ways. Firstly we only considered p/t-nets as process models. Of course there are several other net classes of interest, such as for example workflow nets and elementary nets. Secondly there is one other definition of regions of languages (to define feasible places) in the literature, that could be applied. We also adapted and tested region based synthesis methods w.r.t. such other net classes and region definitions. In this section we will shortly argue that these methods follow the same lines of computing a finite representation of (N sat, m sat ) through basis or separating solutions of linear inequation systems. We compare these methods with the presented ones. First, we discuss alternative Petri net classes. In the example of Subsection 4.1, we proposed to omit loops to simplify the constructed net. Leaving loops from p/t-nets in general, leads to the simpler class of pure nets. The connection between a transition and a place can then be described by one integer number z: If z is positive, the transition produces z tokens in the place, and if z is negative, the transition consumes z tokens from the place. The process mining approach can be developed for this net class analogously as for p/t-nets. The inequation systems get simpler in this case, in particular the
16 number of variables is halved. Therefore the process mining approach based on regions of languages gets more efficient for pure nets, but the modelling power is restricted in contrast to p/t-nets. Typical workflow Petri nets often have unweighted arcs. To construct such nets from a log with the presented methods, one simply has to add additional inequations ensuring arc weights smaller or equal than one to the considered inequation systems. The problem is that the resulting systems are inhomogeneous. Method 1 is not applicable in this case (adaptions of this method are in some cases still possible). Method is still useable, but the linear programming techniques to find separating feasible places have to be adapted. The approaches become less efficient in the inhomogeneous case [19]. A popular net class with unweighted arcs are elementary nets. In elementary nets the number of tokens in a place is bounded by one. This leads to additional inhomogeneous inequations ensuring this property. The process mining methods can in this case be applied as described in the last paragraph. Note that the total number of possible places is finite in the case of elementary nets. Thus also the number of feasible places is finite. This leads to some simplifications concerning a compact representation of the saturated feasible net (similar to method 1), which can be calculated itself. In [13] regions of partial languages are introduced (in partial languages concurrency between activities can be specified), and in [1] their calculation for the case of a finite partial language is shown. Since the process language of an event log considered in this paper is a special case of a finite partial language, this approach can directly be applied in our setting. Since the set of regions in this case can also be characterized as the set of non-negative integer solutions of a homogeneous inequation system, the two computation methods of Section 4 can analogously be used with this alternative regions definition. But the number of variables as well as the number of inequations is larger than with the regions definition of this paper, in particular the dimension of the resulting cone is bigger. We tested this approach, but the complexity of the algorithm as well as the size of the resulting nets are worse. Nevertheless the approach can be interesting for process mining, if there is some additional information, that can be used to identify independent (concurrent) events in the event log. Extracting such independency information in logs is already applied in [18]. The big advantage of the presented process mining approaches based on regions of languages is that they lead to reliable results. Other process mining algorithms are often more or less heuristic and their applicability is shown only with experimental results. We showed theoretical results that justify that the presented methods lead to a good or even optimal solution regarding (), while (1) is guaranteed. A problem of the algorithms may be the required time and space consumption as well as the size of the resulting nets. The presented algorithms can be seen as a basis, that can be improved in several directions. Method for computing separating feasible places is flexible w.r.t. the used solver and the chosen ordering of the wrong continuations. Varying the solver could improve time and space consumption, heuristics for fixing an appropriate ordering of the wrong continuations could lead to smaller nets. 
Moreover, both methods could be improved by additional approaches to find redundant places yielding smaller nets. For example, in this paper we used a simple special objective function in the simplex algorithm to rule out some redundant places. To develop such approaches, experimental
17 results and thus an implementation of the algorithms is necessary. An implementation of the presented process mining algorithms is the next step to realize a practically usable tool. References 1. E. Badouel, L. Bernardinello, and P. Darondeau. Polynomial algorithms for the synthesis of bounded nets. In P. D. Mosses; M. Nielsen; M. I. Schwartzbach, editor, TAPSOFT, volume 915 of Lecture Notes in Computer Science, pages Springer, E. Badouel and P. Darondeau. Theory of regions. In W. Reisig; G. Rozenberg, editor, Petri Nets, volume 1491 of Lecture Notes in Computer Science, pages Springer, J. E. Cook and A. L. Wolf. Discovering models of software processes from event-based data. ACM Trans. Softw. Eng. Methodol., 7(3):15 49, J. Cortadella, M. Kishinevsky, A. Kondratyev, L. Lavagno, and A. Yakovlev. Petrify: A tool for manipulating concurrent specifications and synthesis of asynchronous controllers. IEICE Trans. of Informations and Systems, E80-D(3):315 35, J. Cortadella, M. Kishinevsky, A. Kondratyev, L. Lavagno, and A. Yakovlev. Hardware and petri nets: Application to asynchronous circuit design. In M. Nielsen; D. Simpson, editor, ICATPN, volume 185 of Lecture Notes in Computer Science, pages Springer, P. Darondeau. Deriving unbounded petri nets from formal languages. In D. Sangiorgi; R. de Simone, editor, CONCUR, volume 1466 of Lecture Notes in Computer Science, pages Springer, A. K. A. de Medeiros, W. M. P. van der Aalst, and A. J. M. M. Weijters. Workflow mining: Current status and future directions. In R. Meersman, Z. Tari, and D. C. Schmidt, editors, CoopIS/DOA/ODBASE, volume 888 of Lecture Notes in Computer Science, pages Springer, J. Desel and W. Reisig. The synthesis problem of petri nets. Acta Inf., 33(4):97 315, A. Ehrenfeucht and G. Rozenberg. Partial (set) -structures. part i: Basic notions and the representation problem. Acta Inf., 7(4):315 34, A. Ehrenfeucht and G. Rozenberg. Partial (set) -structures. part ii: State spaces of concurrent systems. Acta Inf., 7(4): , M. Franz. Convex - a maple package for convex geometry., franz/convex/. 1. R. Lorenz, R. Bergenthum, S. Mauser, and J. Desel. Synthesis of petri nets from finite partial languages. In Proceedings of ACSD 007, R. Lorenz and G. Juhás. Towards synthesis of petri nets from scenarios. In S. Donatelli and P. S. Thiagarajan, editors, ICATPN, volume 404 of Lecture Notes in Computer Science, pages Springer, Maplesoft. Maple-homepage H. Minkowski. Geometrie der Zahlen. Teubner, T. Motzkin. Beiträge zur Theorie der linearen Ungleichungen. PhD thesis, Jerusalem, R. Parekh and V. Honavar. Automata induction, grammar inference, and language acquisition. In R. Dale, H. Moisl, and H. Somers, editors, Handbook of Natural Language Processing. New York: Marcel Dekker, Process mining group eindhoven technical university: Prom-homepage. cgunther/dev/prom/. 19. A. Schrijver. Theory of Linear and Integer Programming. Wiley, 1986.
18 0. W. van der Aalst, V. Rubin, B. van Dongen, E. Kindler, and C. Guenther. Process mining: A two-step approach using transition systems and regions. Technical Report BPM Center Report BPM-06-30, Department of Technology Management, Eindhoven University of Technology, W. van der Aalst and K. van Hee. Workflow Management: Models, Methods, and Systems. MIT Press, Cambridge, Massachsetts, 00.. W. van der Aalst, A. Weijters, H. W. Verbeek, and et al. Process mining: research tools application. 3. W. M. P. van der Aalst, B. F. van Dongen, J. Herbst, L. Maruster, G. Schimm, and A. J. M. M. Weijters. Workflow mining: A survey of issues and approaches. Data Knowl. Eng., 47():37 67, B. van Dongen, N. Busi, G. Pinna, and W. van der Aalst. An iterative algorithm for applying the theory of regions in process mining. Technical Report Beta rapport 195, Department of Technology Management, Eindhoven University of Technology, R. J. Vanderbei. Linear Programming: Foundations and Extensions. Kluwer Academic Publishers, 1996.
Chapter 15 Introduction to Linear Programming
Chapter 15 Introduction to Linear Programming An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Brief History of Linear Programming The goal of linear programming is to determine the values
Genet A tool for the synthesis and mining of Petri nets. Josep Carmona jcarmonalsi.upc.edu Software Department Universitat Politcnica de Catalunya
Genet A tool for the synthesis and mining of Petri nets Josep Carmona jcarmonalsi.upc.edu Software Department Universitat Politcnica de Catalunya 2 Contents 1.1 Overview of the tool.......................,
Min-cost flow problems and network simplex algorithm
Min-cost flow problems and network simplex algorithm The particular structure of some LP problems can be sometimes used for the design of solution techniques more efficient than the simplex algorithm.
Process Mining Using BPMN: Relating Event Logs and Process Models
Noname manuscript No. (will be inserted by the editor) Process Mining Using BPMN: Relating Event Logs and Process Models Anna A. Kalenkova W. M. P. van der Aalst Irina A. Lomazova Vladimir A. Rubin Received:
Chapter 6: Episode discovery process
Chapter 6: Episode discovery process Algorithmic Methods of Data Mining, Fall 2005, Chapter 6: Episode discovery process 1 6. Episode discovery process The knowledge discovery process KDD process of analyzing
Process Mining Framework for Software Processes
Process Mining Framework for Software Processes Vladimir Rubin 1,2, Christian W. Günther 1, Wil M.P. van der Aalst 1, Ekkart Kindler 2, Boudewijn F. van Dongen 1, and Wilhelm Schäfer 2 1 Eindhoven University
Formal Languages and Automata Theory - Regular Expressions and Finite Automata -
Formal Languages and Automata Theory - Regular Expressions and Finite Automata - Samarjit Chakraborty Computer Engineering and Networks Laboratory Swiss Federal Institute of Technology (ETH) Zürich March
24. The Branch and Bound Method
24. The Branch and Bound Method It has serious practical consequences if it is known that a combinatorial problem is NP-complete. Then one can conclude according to the present state of science that no
Linear Programming. Widget Factory Example. Linear Programming: Standard Form. Widget Factory Example: Continued.
Linear Programming Widget Factory Example Learning Goals. Introduce Linear Programming Problems. Widget Example, Graphical Solution. Basic Theory:, Vertices, Existence of Solutions. Equivalent formulations.,
5.1 Bipartite Matching
CS787: Advanced Algorithms Lecture 5: Applications of Network Flow In the last lecture, we looked at the problem of finding the maximum flow in a graph, and how it can be efficiently solved using the Ford-Fulkerson
Regular Languages and Finite Automata
Regular Languages and Finite Automata 1 Introduction Hing Leung Department of Computer Science New Mexico State University Sep 16, 2010 In 1943, McCulloch and Pitts [4] published a pioneering work on a
Linear Programming I
Linear Programming I November 30, 2003 1 Introduction In the VCR/guns/nuclear bombs/napkins/star wars/professors/butter/mice problem, the benevolent dictator, Bigus Piguinus, of south Antarctica penguins 3 Numbers and Numeral Systems
CHAPTER 3 Numbers and Numeral Systems Numbers play an important role in almost all areas of mathematics, not least in calculus. Virtually all calculus books contain a thorough description of the natural,
CHAPTER 1 INTRODUCTION
CHAPTER 1 INTRODUCTION 1.1 Research Motivation In today s modern digital environment with or without our notice we are leaving our digital footprints in various data repositories through our daily activities,
3 Does the Simplex Algorithm Work?
Does the Simplex Algorithm Work? In this section we carefully examine the simplex algorithm introduced in the previous chapter. Our goal is to either prove that it works, or to determine those circumstances
Designing and Evaluating an Interpretable Predictive Modeling Technique for Business Processes
Designing and Evaluating an Interpretable Predictive Modeling Technique for Business Processes Dominic Breuker 1, Patrick Delfmann 1, Martin Matzner 1 and Jörg Becker 1 1 Department for Information Systems,
Practical Guide to the Simplex Method of Linear Programming
Practical Guide to the Simplex Method of Linear Programming Marcel Oliver Revised: April, 0 The basic steps of the simplex algorithm Step : Write the linear programming problem in standard form Linear
8 Simulatability of Net Types
8 Simulatability of Net Types In chapter 7 was presented different higher order net types. Now we discuss the relations between these net types, place transition nets, and condition event nets. Especially
Complexity and Completeness of Finding Another Solution and Its Application to Puzzles
yato@is.s.u-tokyo.ac.jp seta@is.s.u-tokyo.ac.jp Π (ASP) Π x s x s ASP Ueda Nagao n n-asp parsimonious ASP ASP NP Complexity and Completeness of Finding Another Solution and Its Application to Puzzles Takayuki
Linear Programming. March 14, 2014
Linear Programming March 1, 01 Parts of this introduction to linear programming were adapted from Chapter 9 of Introduction to Algorithms, Second Edition, by Cormen, Leiserson, Rivest and Stein [1]. 1
Factoring & Primality
Factoring & Primality Lecturer: Dimitris Papadopoulos In this lecture we will discuss the problem of integer factorization and primality testing, two problems that have been the focus of a great amount
MA257: INTRODUCTION TO NUMBER THEORY LECTURE NOTES
MA257: INTRODUCTION TO NUMBER THEORY LECTURE NOTES 2016 47 4. Diophantine Equations A Diophantine Equation is simply an equation in one or more variables for which integer (or sometimes rational) ()
Coverability for Parallel Programs
2015 Coverability for Parallel Programs Lenka Turoňová* Abstract We improve existing method for the automatic verification of systems with parallel running processes. The technique
The Ideal Class Group
Chapter 5 The Ideal Class Group We will use Minkowski theory, which belongs to the general area of geometry of numbers, to gain insight into the ideal class group of a number field. We have already mentioned
Approximation Algorithms
Approximation Algorithms or: How I Learned to Stop Worrying and Deal with NP-Completeness Ong Jit Sheng, Jonathan (A0073924B) March, 2012 Overview Key Results (I) General techniques: Greedy algorithms,
MATH 590: Meshfree Methods
MATH 590: Meshfree Methods Chapter 7: Conditionally Positive Definite Functions Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2010 fasshauer@iit.edu MATH 590 Chapter
Faster and More Focused Control-Flow Analysis for Business Process Models through SESE Decomposition
Faster and More Focused Control-Flow Analysis for Business Process Models through SESE Decomposition Jussi Vanhatalo 1,2, Hagen Völzer 1, and Frank Leymann 2 1 IBM Zurich Research Laboratory, Säumerstrasse.
Compact Representations and Approximations for Compuation in Games
Compact Representations and Approximations for Compuation in Games Kevin Swersky April 23, 2008 Abstract Compact representations have recently been developed as a way of both encoding the strategic interactions
The Dirichlet Unit Theorem
Chapter 6 The Dirichlet Unit Theorem As usual, we will be working in the ring B of algebraic integers of a number field L. Two factorizations of an element of B are regarded as essentially the same if
Advanced Higher Mathematics Course Assessment Specification (C747 77)
Advanced Higher Mathematics Course Assessment Specification (C747 77) Valid from August 2015 This edition: April 2016, version 2.4 This specification may be reproduced in whole or in part for educational
INTEGER PROGRAMMING. Integer Programming. Prototype example. BIP model. BIP models
Integer Programming INTEGER PROGRAMMING In many problems the decision variables must have integer values. Example: assign people, machines, and vehicles to activities in integer quantities. If this is
Model Discovery from Motor Claim Process Using Process Mining Technique
International Journal of Scientific and Research Publications, Volume 3, Issue 1, January 2013 1 Model Discovery from Motor Claim Process Using Process Mining Technique P.V.Kumaraguru *, Dr.S.P.Rajagopalan
2.5 Gaussian Elimination
page 150 150 CHAPTER 2 Matrices and Systems of Linear Equations 37 10 the linear algebra package of Maple, the three elementary 20 23 1 row operations are 12 1 swaprow(a,i,j): permute rows i and j 3
I. Solving Linear Programs by the Simplex Method
Optimization Methods Draft of August 26, 2005 I. Solving Linear Programs by the Simplex Method Robert Fourer Department of Industrial Engineering and Management Sciences Northwestern University Evanston,
GOAL-BASED INTELLIGENT AGENTS
International Journal of Information Technology, Vol. 9 No. 1 GOAL-BASED INTELLIGENT AGENTS Zhiqi Shen, Robert Gay and Xuehong Tao ICIS, School of EEE, Nanyang Technological University, Singapore 639798
What is Linear Programming?
Chapter 1 What is Linear Programming? An optimization problem usually has three essential ingredients: a variable vector x consisting of a set of unknowns to be determined, an objective function of x
Chapter 6. The stacking ensemble approach
82 This chapter proposes the stacking ensemble approach for combining different data mining classifiers to get better performance. Other combination techniques like voting, bagging etc are also described
Inverse Optimization by James Orlin
Inverse Optimization by James Orlin based on research that is joint with Ravi Ahuja Jeopardy 000 -- the Math Programming Edition The category is linear objective functions The answer: When you maximize.
Notes from Week 1: Algorithms for sequential prediction
CS 683 Learning, Games, and Electronic Markets Spring 2007 Notes from Week 1: Algorithms for sequential prediction Instructor: Robert Kleinberg 22-26 Jan 2007 1 Introduction In this course we will be looking)
MATH REVIEW KIT. Reproduced with permission of the Certified General Accountant Association of Canada.
MATH REVIEW KIT Reproduced with permission of the Certified General Accountant Association of Canada. Copyright 00 by the Certified General Accountant Association of Canada and the UBC Real Estate: Linear Programming Relaxations and Rounding
Lecture 3: Linear Programming Relaxations and Rounding 1 Approximation Algorithms and Linear Relaxations For the time being, suppose we have a minimization problem. Many times, the problem at hand can,
Ideal Class Group and Units
Chapter 4 Ideal Class Group and Units We are now interested in understanding two aspects of ring of integers of number fields: how principal they are (that is, what is the proportion of principal ideals
Largest Fixed-Aspect, Axis-Aligned Rectangle
Largest Fixed-Aspect, Axis-Aligned Rectangle David Eberly Geometric Tools, LLC Copyright c 1998-2016. All Rights Reserved. Created: February 21, 2004 Last Modified: February
Virtual Time and Timeout in Client-Server Networks
Virtual Time and Timeout in Client-Server Networks Jayadev Misra July 13, 2011 Contents 1 Introduction 2 1.1 Background.............................. 2 1.1.1 Causal Model of Virtual
State Space Analysis: Properties, Reachability Graph, and Coverability graph. prof.dr.ir. Wil van der Aalst
State Space Analysis: Properties, Reachability Graph, and Coverability graph prof.dr.ir. Wil van der Aalst Outline Motivation Formalization Basic properties Reachability graph Coverability graph PAGE 2. Finite Automata. 2.1 The Basic Model
Chapter 2 Finite Automata 2.1 The Basic Model Finite automata model very simple computational devices or processes. These devices have a constant amount of memory and process their input in an online manner.
Generation of a Set of Event Logs with Noise
Generation of a Set of Event Logs with Noise Ivan Shugurov International Laboratory of Process-Aware Information Systems National Research University Higher School of Economics 33 Kirpichnaya Str., Moscow,
Solving Systems of Linear Equations
LECTURE 5 Solving Systems of Linear Equations Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations In today s lecture I shall show how | http://docplayer.net/1067612-Process-mining-based-on-regions-of-languages.html | CC-MAIN-2018-05 | refinedweb | 11,032 | 53.41 |
CodePlexProject Hosting for Open Source Software
Currently, I cannot access the Views count in a region. Being told that IViewsCollection does not have a Count() method. I have an instance of the shell region manager in a presenter that was passed in via the constructor as IRegionManager.
I recently made two changes to the TabControlRegionAdaptor and recompiled the Composite.Presentation.dll - they were the TabItem HeaderTemplate fix (suggested here:) and I also wrapped TabItem
with a custom TabItem which does some stuff OnApplyTemplate. I don't see how that could suddenly make the Count() method unavailable in the region, but could it?
Just really confused as I see plenty of examples like this:
IRegion mainRegion = regionManager.Regions[regionName];
int viewsAmount = mainRegion.Views.Count();
That snippet will not compile for me....
Thanks in advance for any suggestions and apologies if I am missing something really obvious....
Hi headbiznatch,
You are probably not using the
System.Linq namespace. The count method that usually can be used in
IViewCollection is just the extension method that linq provides on
IEnumerable. Try adding using System.Linq;
If this does not resolve your issue, what is the compile error that you are receiving?
Hope it helps!
Matias Bonaventura
You beat me to the punch! Just figured it out and yeah, I feel like a moron. Thanks....
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later. | http://compositewpf.codeplex.com/discussions/65744 | CC-MAIN-2017-51 | refinedweb | 256 | 67.96 |
SAR to RGB image translation using CycleGAN
The ability of SAR data to let us see through clouds make it more valuable specially in cloudy areas and bad weather. This is the time when earth observation can reap maximum benefits, but optical sensors prevent us doing that. Now a days a lot of organizations are investing in SAR data making it more available to users than before. The only disadvantage of SAR data is the unavailability of labelled data as it is more difficult for users to understand and label SAR data than optical imagery.
In this sample notebook, we will see how we can make use of benefits of SAR and optical imagery to perform all season earth observation. We will train a deep learning model to translate SAR imagery to RGB imagery, thereby making optical data (translated) available even in extreme weather days and cloudy areas.
We will train a CycleGAN model for this case..
import os, zipfile from pathlib import Path from arcgis.gis import GIS from arcgis.learn import prepare_data, CycleGAN
# Connect to GIS gis = GIS('home')
For this usecase, we have SAR imagery from Capella Space and world imagery in the form of RGB tiles near Rotterdam city in the Netherlands. We have exported that data in a new “CycleGAN” metadata format available in the
Export Training Data For Deep Learning tool. This
Export Training Data For Deep Learning tool available in ArcGIS Pro as well as ArcGIS Image Server.
Input Raster: SAR imagery tile
Additional Raster: RGB imagery
Tile Size X & Tile Size Y: 256
Stride X & Stride Y: 128
Meta Data Format: CycleGAN
Environments: Set optimum
Cell Size,
Processing Extent.
In the exported training data, 'A' and 'B' folders contain all the image tiles exported from SAR imagery and RGB imagery (world imagery cache), respectively. Each folder will also have other files like 'esri_accumulated_stats.json', 'esri_model_definition.emd', 'map.txt', 'stats.txt'. Now, we are ready to train the
CycleGAN model.
Alternatively, we have provided a subset of training data containing a few samples that follows the same directory structure mentioned above. You can use the data directly to run the experiments.
training_data = gis.content.get('25ed4a30219e4ba7acb3633e1a75bae1') training_data
filepath = training_data.download(file_name=training_data.name)
with zipfile.ZipFile(filepath, 'r') as zip_ref: zip_ref.extractall(Path(filepath).parent)
output_path = Path(os.path.join(os.path.splitext(filepath)[0]))
We will train CycleGAN model [1] that performs the task of Image-to-Image translation where it learns mapping between input and output images using unpaired dataset. This model is an extension of GAN architecture which involves simultaneous training of two generator models and two discriminator models. In GAN, we can generate images of domain Y from domain X, but in CycleGAN, we can also generate images of domain X from domain Y using the same model architecture.
It has two mapping functions: G : X → Y and F : Y → X, and associated adversarial discriminators Dy and Dx. G tries to generate images that look similar to images from domain Y, while Dy aims to distinguish between translated samples G(x) and real samples y. G aims to minimize this objective against an adversary D that tries to maximize it. The same process happens in generation of the images of domain X from domain Y using F as a generator and Dx as a discriminator.
We will specify the path to our training data and a few hyperparameters.
path: path of the folder containing training data.
batch_size: Number of images your model will train on each step inside an epoch, it directly depends on the memory of your graphic card. 4 worked for us on a 11GB GPU.
data = prepare_data(output_path, batch_size=8)
To get a sense of what the training data looks like,
arcgis.learn.show_batch() method randomly picks a few training chips and visualizes them.
rows: Number of rows to visualize
data.show_batch()
model = CycleGAN(data)
Learning rate is one of the most important hyperparameters in model training. ArcGIS API for Python provides a learning rate finder that automatically chooses the optimal learning rate for you.
lr = model.lr_find()
We will train the model for a few epochs with the learning rate we have found. For the sake of time, we can start with 25 epochs. Unlike some other models, we train CycleGAN from scratch with a learning rate of 2e-04 for some initial epochs and then linearly decay the rate to zero over the next epochs.
model.fit(25, lr)
Here, with 25 epochs, we can see reasonable results — both training and validation losses have gone down considerably, indicating that the model is learning to translate SAR imagery to RGB and vice versa.
It is a good practice to see results of the model viz-a-viz ground truth. The code below picks random samples and shows us ground truth and model predictions, side by side. This enables us to preview the results of the model within the notebook.
model.show_results(4)
("SAR_to_RGB_25e", publish=True)
WindowsPath('D:/CycleGAN/Data/data_for_cyclegan_le_3Bands/models/SAR_to_RGB_25e')
We can translate SAR imagery to RGB and vice versa with the help of
predict() method.
Using predict function, we can apply the trained model on the image which we want to translate.
img_path: path to the image file.
convert_to: 'a' or 'b' type of fake image we want to generate.
#un-comment the cell to run predict over your desired image. # model.predict(r"D:\CycleGAN\Data\exported_data_CycleGAN\A\images\000002800.tif", convert_to="b")
In the above step, we are translating an image of
type a i.e. SAR imagery to an image of
type b i.e. RGB imagery. We can also perform
type b to
type a translation by changing the image file and
convert_to parameter.
#un-comment the cell to run predict over your desired image. # model.predict(r"D:\CycleGAN\Data\exported_data_CycleGAN\B\images\000008007.tif", convert_to="a")
Also, we can make use of
Classify Pixels Using Deep Learning tool available in both ArcGIS Pro and ArcGIS Enterprise.
Input Raster: The raster layer you want to classify.
Model Definition: It will be located inside the saved model in 'models' folder in '.emd' format.
Padding: The 'Input Raster' is tiled and the deep learning model classifies each individual tile separately before producing the final 'Output Classified Raster'. This may lead to unwanted artifacts along the edges of each tile as the model has little context to predict accurately. Padding as the name suggests allows us to supply some extra information along the tile edges, this helps the model to predict better.
Cell Size: Should be close to the size used to train the model. This was specified in the Export training data step.
Processor Type: This allows you to control whether the system's 'GPU' or 'CPU' will be used to classify pixels, by 'default GPU' will be used if available.
The gif below was achieved with the model trained in this notebook and visualizes the generated RGB image over original RGB image near Rotterdam.
In this notebook, we demonstrated how to use
CycleGAN model using
ArcGIS API for Python in order to translate imagery of one type to the other.
[1] Jun-Yan Zhu, Taesung Park, Phillip Isola, Alexei A. Efros: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks;. | https://developers.arcgis.com/python/samples/sar-to-rgb-image-translation-using-cyclegan/ | CC-MAIN-2022-27 | refinedweb | 1,213 | 54.32 |
Test APIs Failing from Client
In the case where you can't edit a local version of your APIs in order to cause them to fail, you need to be able to edit your client so that it looks like the API request fails. Here's a quick code change that should make this easy for you.
This is one of many possible manual testing techniques. This allow you to see the change in your running app as you work through it. For instance, I like to use this as I'm testing through the error alerting features that are common in an app when an API request fails.
Go to the point in your app where your bits touch the network. This is usually the place where you use your HTTP wrapper or, if you're a free spirit, call some XHR goodness yourself. These days, I like using the venerable axios library. Axios is promise based, so that means that we can use async/await as well, which is like a fine chocolate next to a warm fire in winter (which is good).
So, all my http code might look in essence like this:
import axios from 'axios' export const fetch = { // deserialize, etc ... request(url) { return axios({ method: 'get', url }) } }
So, this
request function returns a Promise. The logic around the request might look like:
import * as api from './api' async function fetchWater() { const { request} = api.fetch try { const res = await request('/my/own/home') // ... handle success response } catch (res) { // ... handle error response, where failure code should execute (what we want to TEST!) } }
To make this request fail, we need only make the Promise fail, rejecting it like a mouthful of stewed tomatoes. So, just change
api.js:
import axios from 'axios' export const fetch = { // ... request(url) { return Promise((resolve, reject) => { reject({ status: 500, data: { errors: [{ detail: 'Shere Khan is back!' }] } }) }) } }
Now note that you must be aware of what your format HTTP library, in this case axios, uses in its responses. What you
reject manually must be exactly the same format that usually is returned in an error situation (eg,
status and
data). You must also know what your server is designed to return as an error response (eg,
errors). The above application code is using the JSON API format.
What other little tricks do you find useful in getting APIs to fail? | https://jaketrent.com/post/test-apis-failing-from-client/ | CC-MAIN-2022-40 | refinedweb | 396 | 72.87 |
This tutorial introduces intermediate level information on how to develop RubyGems. This builds on and expects the knowledge covered in the Basic RubyGem Development post. If you're new to RubyGem development, please start by reading that tutorial. Otherwise, please at least skim the article to make sure you know everything covered there, since the rest of this tutorial will assume you know it all.
Instead of focusing on what a RubyGem is, like we did in the tutorial on RubyGem basics we're going to cover more details on how to work in a RubyGem. We'll cover adding and using dependencies to a RubyGem, testing a RubyGem, more details on managing releases of RubyGems, and more.
At the end of reading this tutorial, you'll be equipped with the knowledge required to mimic the behavior of most RubyGems out there.
Testing RubyGems
A Simple Test
The Ruby programming language has a fantastic testing culture around it. It is difficult to find a library or application written in Ruby that isn't tested, and most publicly available code that isn't tested is quickly called out or simply ignored.
As such, it should come as no surprise that the idea of testing is well engrained into the idiomatic process of creating a RubyGem.
Let's augment our
redundant_math library by adding some basic tests. We'll start by using the builtin
Test::Unit library that comes standard with every Ruby installation.
First, create a "test" directory in the directory containing the gemspec of the RubyGem. Bundler doesn't create this directory for us, but it is easy enough to get going. Once the "test" directory is created, let's write our first test. Create a file "test/testredundantmath.rb" with the following contents:
require 'test/unit' require 'redundant_math' class RedundantMathTest < Test::Unit::TestCase def test_add assert_equal 7, RedundantMath.add(3, 4) end end
This is a standard
Test::Unit test. This tutorial won't cover how
Test::Unit works, but the basic idea should be clear from the code above.
To run the test, just run the ruby file:
$ bundle exec ruby test/test_redundant_math.rb Run options: # Running tests: . Finished tests in 0.000507s, 1972.3866 tests/s, 1972.3866 assertions/s. 1 tests, 1 assertions, 0 failures, 0 errors, 0 skips
We need to prefix the command with "bundle exec" so that Bundler will setup our load paths so that the test can properly require the
redundant_math library. If you run the test without the "bundle exec" prefix then it will give an error about not being able to load the library.
As you can see from the output, the test passed, which is some nice validation that our code probably works as expected.
Executing Tests with Rake
While manually executing the test file works for small projects or running individual tests, most gems are made up of many test files and the idiomatic way of running tests is via
rake. Rake has built-in support for adding a task to run
Test::Unit tests. Let's modify the Rakefile to support this now. The Rakefile should end up looking like this:
require "bundler/gem_tasks" require 'rake/testtask' Rake::TestTask.new
Now, run
rake test and it'll run our tests:
$ rake test Run options: # Running tests: . Finished tests in 0.000638s, 1567.3981 tests/s, 1567.3981 assertions/s. 1 tests, 1 assertions, 0 failures, 0 errors, 0 skips
The Rake test task will automatically find any files matching
test/test*.rb and run them. So if you add new tests to the "test" directory, they'll automatically be picked up and ran!
A Word About Test Files
For the "redundant_math" library, there is only a single file to test. More often, however, a library is made up of multiple files. In this case, it is common and often expected for the test files to map to this directory structure.
For example, let's say we had a "lib/redudantmath/abs.rb" that had a class
RedundantMath::Abs that had methods for doing absolute value calculations on it. To test this, the test file should be "test/testredundantmathabs.rb" or something similar. Namely, there should be a 1:1 mapping between test files and library files.
This convention makes it easy for developers who may not be familiar with the project to easily find the test cases related to a certain class.
Development Dependencies
One of the best features of RubyGems is the ability to offload dependency management out of your library and onto the RubyGem system. RubyGems will load all the proper versions of libraries for you, and will print errors in the case that there are unresolvable dependencies. We covered basic dependencies in the tutorial on RubyGem basics.
In addition to basic dependencies, you can also define development dependencies. These are dependencies that are only relevant or used for development purposes.
Most commonly, these are used to bring in alternate test libraries. As an example, let's convert our tests to use RSpec instead of
Test::Unit. RSpec is an extremely popular testing library for Ruby.
For the purpose of this tutorial, we'll just add RSpec alongside the
Test::Unit test cases. In the real world, you probably only want one or the other.
First, we need to install RSpec. Let's add the development dependency by modifying the gemspec to include a line like the following:
gem.add_development_dependency "rspec", "~> 2.13.0"
Next, run
bundle:
$ bundle Using diff-lcs (1.2.2) Using redundant_math (0.0.1) from source at . Using rspec-core (2.13.1) Using rspec-expectations (2.13.0) Using rspec-mocks (2.13.0) Using rspec (2.13.0) Using bundler (1.2.3) Your bundle is complete! Use `bundle show [gemname]` to see where a bundled gem is installed.
As you can see, various "rspec" related libraries are now included in our bundle. Bundler automatically installs development dependencies as well as runtime dependencies since bundle installations are typically only done in development for gems. However, when someone does a
gem install for your gem after it is published, the development dependencies are not installed.
Let's write the RSpec test. Create a file "spec/redundantmathspec.rb" with the following contents:
require "redundant_math" describe RedundantMath do it "can add numbers" do described_class.add(3, 4).should == 7 end end
Again, this tutorial won't cover RSpec, but the above is a very basic example of an RSpec test. Note also that the filename is now suffixed with "spec" rather than prefixed with "test". This is important because the RSpec test runner looks for the suffix by default, rather than the
Test::Unit prefix.
The test can be run by calling
spec:
$ bundle exec spec . Finished in 0.00036 seconds 1 example, 0 failures
Once again, our tests pass, but this time powered by RSpec tests.
Just like
Test::Unit, there are ways to easily integrate RSpec with Rake, but we'll avoid covering that since we only wanted to highlight using development dependencies here, rather than how to use RSpec.
Code Layout
RubyGems are rarely a single file. They usually quickly become dozens of Ruby files. Beginners are often scared or unsure how to split up a complicated single-file project into multiple files. The Ruby community has a standard expected practice for doing this sort of thing. It is easy to understand and follow.
First, create as few Ruby files as possible directly in the "lib" folder. The "lib" folder of a RubyGem is added to the global Ruby
$LOAD_PATH. For example, if you were to create a file "lib/thread.rb", then
require "thread" might accidentally require your RubyGem library rather than the "thread" standard library. It depends on how the
$LOAD_PATH is setup, but it is possible.
Therefore, it is standard RubyGem practice to only make a single file in "lib" based on the name of your gem. For example, for our library there is only "lib/redundant_math.rb". Based on the name of the library, a developer can reasonably expect that
require "redundant_math" will pull in our library, without resorting to the documentation.
When adding additional files, they should go in a sub-directory within "lib" that is named the same as the main Ruby file. For example, a class to add absolute value support may be at "lib/redundant_math/abs.rb". Within this sub-directory, you may add as many files as possible, since the name for
require is namespaces by the directory, so you're far less likely to collide with another library. Developers can then do
require "redundant_math/abs" and so on. Again, this is expected behavior.
As an additional note, most RubyGems expose all functionality of the library by only requiring the top-level Ruby file. Splitting up the code into subdirectories is mostly done for code organization for development purposes. Therefore, following our previous example, if we split out absolute value functionality into "redundantmath/abs.rb", we should still be able to access it only by doing a
require "redundant_math". This works because "lib/redundantmath.rb" should require the absolute value code, like the following:
require "redundant_math/abs" require "redundant_math/version" module RedundantMath def self.add(x, y) x + y end end
Naming Conventions
Following code layout conventions, there are also standard module/class naming conventions within RubyGems. By following these conventions, you again make it easier for developers to quickly begin using your RubyGem, since most other RubyGems follow these patterns.
Modules in classes in Ruby are generally CamelCased. This probably comes as no surprise if you've been using it for some time already. What you may not know, however, is that the name of the RubyGem often maps to the name of module/class, and also the way the code is laid out into files typically maps to module/class namespacing.
For the purpose of this tutorial, assume "module" and "class" are interchangeable. The naming conventions are the same either way and it is beyond the scope of this tutorial to cover the functional differences of each and when to use each.
You've already seen a bit of this with "lib/redundant_math.rb", which includes a module
RedundantMath.
The first important point is that the name of the top-level module should be the camel cased name of the gem itself. In our case, the RubyGem is named "redundant_math" and the top-level module is "RedundantMath". Easy to remember, and easy to expect when using a RubyGem.
Next, files within sub-directories map directly to additional modules or classes. "lib/redundant_math/abs.rb" should contain a module named
RedundantMath::Abs.
Finally, additional sub-directories map to additional namespaces. If there was a file "lib/redundant_math/util/thing.rb", it would map to a module
RedundantMath::Util::Thing.
I want to stress that these aren't required rules. You will certainly find RubyGems that do not follow this pattern. However, most library RubyGems do follow this pattern and by doing so makes it easier to use the library, find tests for the library, know what functionality a library may have, etc. There are many upsides and very few downsides to following this pattern.
An Intermediate-level RubyGem
While the tutorial on RubyGem development basics covered how to setup a basic RubyGem development environment and a bit about how to package and release them, this tutorial covered more of how to actually write Ruby code for a RubyGem.
Given the basics knowledge plus this knowledge, you're now capable of writing an idiomatic, high quality, well-tested RubyGem and releasing that RubyGem to the public. You know how to add dependencies to the RubyGem, executables, tests, and more. You should be comfortable splitting up your RubyGem implementation into multiple files and how those files map to implementation details of the gem itself.
There are a few more tricks to know about RubyGems, but they're rarely used and quite advanced: platform-specific RubyGems, C-extensions, etc. These might be covered in an advanced post in the future, but are unnecessary to write almost every RubyGem.
Have fun and good luck writing your RubyGems! | http://tech.pro/tutorial/1277/intermediate-rubygem-development | CC-MAIN-2015-22 | refinedweb | 2,025 | 57.06 |
Created on 2016-10-06 09:13 by wiggin15, last changed 2016-10-10 07:29 by Drekin.
When I wrap sys.stdout with a custom object that overrides the 'write' method, regular prints use the custom write method, but the input() function prints the prompt to the original stdout.
This is broken on Python 3.5. Earlier versions work correctly.
In the following example 'write' does nothing, so I expect no output, but the input() function outputs to stdout anyway:
import sys

class StreamWrapper(object):
    def __init__(self, wrapped):
        self.__wrapped = wrapped

    def __getattr__(self, name):
        # 'write' is overridden but for every other function, like 'flush', use the original wrapped stream
        return getattr(self.__wrapped, name)

    def write(self, text):
        pass

orig_stdout = sys.stdout
sys.stdout = StreamWrapper(orig_stdout)
print('a')    # this prints nothing
input('b')    # this should print nothing, but prints 'b' (in Python 3.5 and up only)
Looks like this was broken in . Adding the 'fileno' function from this issue fixes the problem, but it's just a workaround. This affects the colorama package:
A related issue is that the REPL doesn't use sys.stdin for input, see #17620. Another related issue is #28333. I think that the situation around stdio in Python is complicated and inflexible (by stdio I mean all the interactions between REPL, input(), print(), sys.std* streams, sys.displayhook, sys.excepthook, C-level readline hooks). It would be nice to tidy up these interactions and document them in one place.
Currently, input() tries to detect whether sys.stdin and sys.stdout are interactive and have the right filenos, and handles the cases in different ways. I propose that input() be a thin wrapper (stripping a newline, generating EOFError) around a proposed sys.readlinehook(). By default, sys.readlinehook would be GNU readline on Unix and stdio_readline (which just uses sys.stdout and sys.stdin) on Windows. I think that would fix all the problems like this one, and changing/wrapping sys.std* streams would just work.
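A rough sketch of what such a default hook could look like (purely illustrative; the name stdio_readline follows the proposal above, and nothing like this exists in CPython today):

import sys

def stdio_readline(prompt=''):
    # A possible default sys.readlinehook: always go through the current
    # sys.stdout / sys.stdin objects, so wrapped or replaced streams are honored.
    sys.stdout.write(prompt)
    sys.stdout.flush()
    return sys.stdin.readline()   # input() itself would strip the newline and raise EOFError on ''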
My proposal is at and there is discussion at #17620. Recently, the related issue #1602 was fixed and there is hope there will be progress with #17620.
Other related issues are #1927 and #24829.
From memory, there are at least three code paths for input():
1. Fallback implementation, used when stdout is a pipe or other non-terminal
2. Default PyOS_ReadlineFunctionPointer() hook, used when stdout is a terminal:
3. Gnu Readline (or another hook installed by a C module)
Arnon’s problem only seems to occur in the last two cases, when PyOS_ReadlineFunctionPointer() is used. The problem is that PyOS_ReadlineFunctionPointer() uses C <stdio.h> FILE pointers, not Python file objects. Python calls sys.stdout.fileno() to see if it can substitute the C-level stdout FILE object.
Adam: I think your sys.readlinehook() proposal is independent of Arnon’s problem. We would still need some logic to decide what to pass to libraries like Gnu Readline that want a FILE pointer instead of a Python file object.
The fileno() method is documented for all file objects, including text files like sys.stdout. So I think implementing your own fileno() method is completely valid.
One other idea that comes to mind is if Python checked if the sys.stdout object has been changed (e.g. sys.stdout != sys.__stdout__), rather than just comparing fileno() values. But I am not sure if this change is worth it.
BTW if I revert the fix for Issue 24402 (I also tried 3.3.3), the problem occurs even when a custom fileno() is defined. On the other hand, Python 2’s raw_input() is not affected, presumably because it uses PyFile_AsFile() and fails immediately if stdout is a custom class.
Does GNU readline do anything fancy about printing the prompt? Because you may want to use GNU readline for autocompletition while still enable colored output via wrapped stdout. Both at the same time with one call to input(). It seems that currently either you go interactive and ignore sys.std*, or you lose readline capabilities. | https://bugs.python.org/issue28373 | CC-MAIN-2021-31 | refinedweb | 670 | 68.77 |
On Ubuntu, enter this command in your terminal:
sudo apt-get install python-mysqldb
or on Windows, download this and install:
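Either way, a quick import check confirms the installation worked (illustrative only):

import MySQLdb
print(MySQLdb.apilevel)   # prints "2.0" when the DB-API module is importable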
For testing purposes I want my code to raise a socket "connection reset by peer" error, so that I can test how I handle it, but I am not sure how to raise the error.
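One common approach (a sketch, not from the original thread) is to construct the exception yourself and have a test double raise it from the call your code makes:

import errno
import socket
from unittest import mock

# The error the OS would produce; on Python 2 use socket.error instead.
reset = ConnectionResetError(errno.ECONNRESET, 'Connection reset by peer')

# Hand the code under test a fake socket whose recv() raises that error.
fake_sock = mock.Mock(spec=socket.socket)
fake_sock.recv.side_effect = reset

# call your handler with fake_sock here and assert on how it reacts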
The problem description:
There is a set of processes running on my system, say process_set_1. Build a process agent which runs an asynchronous socket listening for incoming requests from process_set_1. Each process sends an id. The agent stores these ids in a dictionary and sends a response that the id has been accepted. Now the agent process receives some data from another process (which is some sort of sister process for the agent). This data contains an id along with a command which is to be sent by the agent to process_set_1 through an HTTP client over an AF_UNIX socket, since each process in process_set_1 has an HTTP listening CLI. The process agent sends an HTTP request, by looking up the id stored in the dictionary, to process_set_1. A service running in process_set_1 routes this HTTP command to the respective process.
Now my problem is the HTTP request which to be sent must go through AF_UNIX socket. I got this solution.
import socket
import httplib

class UnixStreamHTTPConnection(httplib.HTTPConnection):
    def __init__(self, path, host='localhost/rg_cli', port=None, strict=None,
                 timeout=None):
        httplib.HTTPConnection.__init__(self, host, port=port, strict=strict,
                                        timeout=timeout)
        self.path = path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.path)
But this cannot work since a plain blocking socket will not do, and thus I thought of the asyncore module in Python. To use the asyncore module I will again have to subclass asyncore.dispatcher. This class also contains a connect() method.
Another problem is that I don't know how the asyncore module works, and thus I am not able to find a way to combine the two jobs: 1) listening forever, accepting connections, and storing each id and its sock_fd;
2) accepting data from the agent's sister process, retrieving the sock_fd by matching the id in the dictionary, and sending the command through the AF_UNIX socket.
Please help, since I have already spent 2 days digging into it. Sorry if I could not explain my problem very well.
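A rough sketch of the "listen forever, accept, store the id" half with asyncore might look like this (names, framing and the one-shot id message are invented for illustration):

import asyncore
import socket

registered = {}   # id -> channel, filled as processes announce themselves

class ProcessChannel(asyncore.dispatcher_with_send):
    def handle_read(self):
        proc_id = self.recv(64).strip()
        if proc_id:
            registered[proc_id] = self          # remember who is behind this socket
            self.send(b'your id has been accepted\n')

class AgentServer(asyncore.dispatcher):
    def __init__(self, path):
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.bind(path)
        self.listen(5)

    def handle_accept(self):
        pair = self.accept()
        if pair is not None:
            sock, _addr = pair
            ProcessChannel(sock)

AgentServer('/tmp/agent.sock')
asyncore.loop()   # drives the listener "forever"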
I want to do getpeername() on stdin. I know I can do this by wrapping a socket object around stdin, with
s = socket.fromfd(sys.stdin.fileno(), family, type)
but that requires that I know what the family and type are. What I want to do is discover the family and type by looking at what getpeername() and/or getsockname() return. Can this be done with the standard library??
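One possibility (a sketch, not from the original thread): on recent Python 3 versions the socket constructor can adopt an existing descriptor and detect the family and type itself, and getsockopt() can report the type explicitly:

import os
import socket
import sys

fd = os.dup(sys.stdin.fileno())     # dup so closing the socket object won't close stdin

s = socket.socket(fileno=fd)        # family/type/proto are read from the descriptor
print(s.family, s.type)
print(s.getpeername())

print(s.getsockopt(socket.SOL_SOCKET, socket.SO_TYPE))   # e.g. SOCK_STREAM; on Linux, SO_DOMAIN gives the family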
| https://www.queryhome.com/tech/81590/how-do-i-install-the-mysqldb-under-python-programming | CC-MAIN-2021-04 | refinedweb | 453 | 64.3
uuid_time - extract the time at which the UUID was created
Synopsis
#include <uuid/uuid.h>
time_t uuid_time(uuid_t uu, struct timeval *ret_tv)
Description
The uuid_time function extracts the time at which the supplied UUID uu was created. Note that the UUID creation time is encoded within the UUID, and this function can only reasonably expect to extract the creation time for UUIDs created with the uuid_generate(3) function. It may or may not work with UUIDs created by OSF DCE uuidgen.
Author
uuid_time was written by Theodore Y. Ts'o for the ext2 filesystem utilities.
Availability
uuid_time is part of libuuid from the e2fsprogs package and is available from.
See Also
libuuid(3), uuid_clear(3), uuid_compare(3), uuid_copy(3), uuid_generate(3), uuid_is_null(3), uuid_parse(3), uuid_unparse(3) | http://www.squarebox.co.uk/cgi-squarebox/manServer/uuid_time.3 | crawl-003 | refinedweb | 125 | 57.27 |
You’re unlikely to be reading this if you haven’t used the .NET Framework to build managed applications. As such, you’ll know how extensive the .NET Framework is in the range of support it provides for application developers.
One feature that is notable by its absence has been direct support for building applications that can host add-ins. Such applications include Visual Studio, Windows Media Player (aka plug-ins), and Internet Explorer (aka add-ons). Add-in host applications expose a range of services for add-ins via a set of interfaces that are collectively referred to as the contract. Through a contract, an add-in integrates with and extends the host application. Examples of add-ins include encoder and decoder plug-ins for Windows Media Player, the Silverlight add-on for Internet Explorer, and code line counter add-ins for Visual Studio.
The .NET Framework does of course provide basic support you need to build add-in support into your applications. Unfortunately, this support is at too low a level to enable developers to be productive, especially when considering the infrastructure that a host application needs to host add-ins typically requires the following functionality:
With the .NET Framework 3.5 (formerly known as Orcas), native, high-level support for this functionality is now included, and it is located within the System.AddIn namespace. This namespace contains a set of types that encapsulate the various building blocks of add-in infrastructure development, and they are designed to be easily and quickly used by you to build add-in support for your applications.
.NET Framework 3.5 add-in support is divided into two layers. The first and most important layer provides the foundation for building add-in extensibility into your applications, and addresses a wide variety of scenarios that generally involve a host application exposing functionality for an add-in to integrate with (although functionality can be exposed by the add-in as well). This foundation is known as the CLR Add-In Model, and you can find out plenty about it in the CLR Add-In Team’s blog. Additionally, check out these great MSDN Magazine articles to provide some background: .NET Application Extensibility Part 1 and .NET Application Extensibility Part 2.
The second layer is implemented by WPF, and extends the CLR add-in model to allow host applications (standalone and XBAP) to display UIs provided by add-ins (and vice versa), on top of the extensive abilities provided by the CLR add-in model. Now, WPF developers can address a slew of interesting scenarios that include:
The next drop of the WPF SDK, due soon, will include a bunch of documentation for building add-ins in WPF. For now, though, I hope the attached sample and How To (from the next SDK drop) provide a useful starting point.
Note – In .NET Framework 3.5 RTM, the name of the VisualAdapters class and its members will change. See this post for more information.
Hola! I just returned from TechEd 2007 held in Barcelona, Spain. Barcelona is a beautiful city with incredible | http://blogs.msdn.com/wpfsdk/archive/2007/11/09/building-visual-add-ins-with-wpf-new-in-net-3-5.aspx | crawl-002 | refinedweb | 513 | 54.63 |
Microcontroller Programming » Compiling the bootloader
While compiling the bootloader I get this error:
miniMac:bootloader328p Me$ make
avrdude -c avr109 -p m328p -F -b 115200 -P /dev/cu.PL2303-0000101D -U lock:w:0x2f950f
avrdude: reading input file "0x2f"
avrdude: writing lock (1 bytes):
Writing |
| 0% 0.00savrdude: error: programmer did not respond to command: write byte
make: *** [fuses] Error 1
miniMac:bootloader328p Me$
Any idea what might cause this?
Thanks,
Ralph
What programmer are you using... I'm assuming some sort of ISP programmer since it looks like you were trying to program fuses.
Are you trying to upload a bootloader to a new chip? Or an existing one with a bootloader installed already?
On a different subject, any luck with the files I emailed you?
Rick
Hi Rick, so far no luck with the DS3232. I just received some EEPROM (I2C) so I am going to try Peter Fleury's test program to see if I can get the communications working. I also loaded the Nunchuck code and got that to work.
I was thinking that possible the reason I am having problems is because all of the chips I am using were new chips I got from mouser that did not have the bootloader installed. I had just copied the bootloader.hex to the device using my Dragon programmer.
I had never set the fuses and could not read/understand the fuse setting files from the bootloader foders. I am using Atmel's AVR Studio when using the Dragon. Using AVR Studio it is rather convoluted to set the fuses and you do not have the full range of settings available.
I have no idea what I am doing so I thought I'd just try to load the bootloader just using the Nerdkit breadboard and that is how I got the error.
I can try loading the bootloader project in AVR Studio and programing the mcu from there. In hindsight that would make more sense. I'll try that tomorrow.
Ralph
I guess you need a parallel dapa (lpt1) programmer to compile the bootloader foodloader code.
Oh well, I've ordered some new mcus from the NerdKit Store so I'll use them.
On another note: Can one just copy the foodloader.hex to a mcu and expect it to work? What about fuse settings?
I had just loaded the foodloader.hex as that was all I was told to do with a beta ISP programmer. Should I have also set the fuses? It appeared to work just loading the .hex but I have had problems running code that I did not originate like Ricks's DS3232 RTC code, which I really will need.
Ralph, did you read the thread I just posted a couple hours ago in the support forum? I typed up an instruction for the bootloader installation. I thought it might help you out as well as some others.
As far as compilation goes, you don't need any programmer to compile. You only need a programmer to send the compiled file over to the micro-controller. You also don't have to have a parallel port programmer (dapa). I use a USBASP programmer. You AVRDragon will also work fine.
The fuses will only effect code running if certain things have been played with. (Or not) For instance, if you bought a new chip from Mouser (like you stated) but did not program the fuses, the chip will be using it's own internal RC oscillator instead of the crystal. It will also have the CKDIV8 fuse programmed which will make the clock run at 1/8 speed or 1Mhz on a stock chip. Even if the fuses were set to use an external clock, if the CKDIV8 fuse is still set, the clock would be 1/8 the crystal speed.
This would definately throw all timing out of whack and could prevent the LCD from working right, Any timing critical serial communications like I2C could be out of whack etc...
Let me know exactly what problem you are having. I'm sure we can get through this and we may be on the right track.
Read my bootloader post though, it may help a bit. Also, make sure you have a good clean set of bootloader files from the download section.
Rick's most excellent bootloader instructions.
re: timing, that is one of the reasons I was questioning my method of just loading the bootloader (foodloader.hex).
I have had a number of ATmega mcus some purchased from the Nerdkits Store and others from Mouser or other suppliers from the web. I have no way of knowing which is which.
I believe I have a blank ATmega328p (1) purchased online without a boot loader.
When I power up I get two black bars on the LCD in run mode and in program mode I get two black bars.
I tried to load a program using the command line on (1) which I believe does not have a bootloader and it eventually times out with:
Found programmer: Id = "½ ("; type = ⌡
Software Version = . ; Hardware Version = .
avrdude: error: buffered memory access not supported. Maybe it isn't
a butterfly/AVR109 but a AVR910 device?
make: *** [tempsensor-upload] Error 1
The infamous "butterfly" error. So chalk up another reason for the butterfly error no bootloader.
The fuse settings on (1) are:
Lock: 0xFF
Extended: 0xFF
High: 0xD9
Low: 0x62
On another ATmega328p (2) the fuse settings are:
Lock: 0xFF
Extended: 0xFD
High 0xD2
Low 0xF7
The fuse settings from the bootloader328P Makefile are:
Lock: 0x2F 101111
Extended: 0x05
High: 0xD2
Low: 0xF7
I will definitely try to load the bootloader following Rick's great instructions.
Another problem I have had which might be related is that I have not been able to load any programs using ISP. This is on an ISP breakout board I had made up, using my Dragon (I use PP/HVSP) or on my STK600 (a very expensive but poorly supported programmer from Atmel).
I never had a problem with the LCD but I am having problems with I2C so I will follow Rick's instructions to see if I get the bootloader installed. I know I can always load the foodloader.hex directly and now I see how to set the fuses using the Makefile. Using AVRstudio to set the fuses is very convoluted as I have said and getting some settings like the Lock setting from the bootloader folder is really hard. Using the Makefile should get the settings desired.
Wow, this sure is interesting.
ps Rick your instructions do not address compiling the bootloader/foodloader. I believe you have changed the bootloader so you must have compiled it. "We" could also use (and appreciate) instructions on changing the bootloader. Especially changing the bootloader to run on other ATmel mcus. I have a ATmega32 that came with the Dragon that I will eventually like to use so that would be neat to have the bootloader though since I have a programmer it is not critical to have.
Now this is a little strange, I have no idea what it means or especially "WHY". But I just received my new ATmega168 and ATmega328P microprocessors from the Nerdkits Store.
The Nerdkit ATmega328p's and ATmega168's fuse settings do not match the fuse settings in the bootloader Makefile!!
The new Nerdkit ATmega328P: The bootloader ATmega328 fuse settings:
Lock: 0xEF Lock: 0x2F
Extended: 0xFD Extended: 0x05
High: 0xD2 High: 0xD2
Low: 0xF7 Low: 0xF7
The new Nerdkit ATmega168: The bootloader ATmega168 fuse settings:
Lock: 0xEF Lock: 0x2F
Extended: 0xF8 Extended: 0x00
High: 0xD5 High: 0xD5
Low: 0xF7 Low: 0xF7
Like I said I have no idea what this means, eventually I'll go through the specsheets and determine what the different hex values mean but for now I will just standardize on the Nerdkit fuse settings as that works and should be what everyone is using unless they follow Rick's instructions and Make the fuses according to the bootloader Makefile. Of course the bootloader Makefile could be modified to match the Nerdkits Store devices fuse settings.
Just the Lock and Extended fuse settings do not match.
The fuses_mike.txt file agrees with the bootloader Makefile
Geesch, this could get confusing.
They are the same. Both the lock and efuse bits only use part of the byte; the rest remain at 1 (unprogrammed). If you convert the hex values you read to binary and compare that with the numbers in the bootloader fuse settings, you'll see the same bits are programmed (0). Depending on what you read the chip with, the unused bits may be reported, which is what you are seeing.
328/168 lock only bits 0 thru 5 are used
0xEF = 0b11101111
0x2F = 0b 101111
328/168 efuse only uses bits 0 thru 2
0xFD = 0b11111101
0x05 = 0b 101
0xF8 = 0b11111000
0x00 = 0b 000
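A quick way to sanity-check that, for example with a couple of lines of Python (the masks just encode the bit usage described above):

LOCK_MASK  = 0b00111111   # lock byte: only bits 0-5 are implemented on the 328/168
EFUSE_MASK = 0b00000111   # extended fuse byte: only bits 0-2 are implemented

def same_fuse(a, b, mask):
    return (a & mask) == (b & mask)   # compare only the bits the chip actually has

print(same_fuse(0xEF, 0x2F, LOCK_MASK))    # True
print(same_fuse(0xFD, 0x05, EFUSE_MASK))   # True
print(same_fuse(0xF8, 0x00, EFUSE_MASK))   # True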
Like I said this can get confusing. I was going to go to the specsheet to see what the settings meant but once again thanks.
I just looked at Rick's "How to install the NK bootloader" and want to give credit to Rick for a job well done - that is a great tutorial!
However, you don't absolutely have to buy one ... you can build your own usb/isp programmer using the nerdkit. The bootloader source that comes with the nerdkit can be modified to use the spi programming commands to load your new chips. It is probably only worthwhile for the educational aspects because it will take a lot of time and effort but then you would have a custom programmer that you could change/fix as desired and gain good expertise on avrdude, memory programming, fuses, and spi in the process.
Ok Noter, how? We need a schematic and the bootloader source change and a outline of exactly what is going on "educational aspects".
I have programmers but also I tried to build one (which didn't work). So if you show me how I will makeup another.
Between your instructions and Rick's that would make a nice package.
Then maybe someone will publish a Eagle PCB board and we could really be off to the races.
The only issue I would see with that would be twofold. One, if you modify the bootloader, you would have to have a way to re-load it. If you change the bootloader to be a regular program loaded in via the normal NK method, it could possibly work... but it would definitely require a programming effort and would not really be a USB programmer, it would be a serial programmer.
A true USB programmer can be done but most I've seen require a 12Mhz crystal. A notable one is the USBASP firmware I have on my programmer. That firmware shows it being loaded on a smaller mega8 or mega 48 but I see no reason why a mega168 wouldn't work. The firmware for the device can be found here. I have built one of these in the past using a mega8. The only part I would have to check is whether or not the standard NK fuse settings would work with the USBASP.
I kind of like the original idea though because it would require no additional hardware. I might have to look at this. It would be pretty cool and for my part it would be a MAJOR programming accomplishment.
If I got it going, an eagle PCB would be no biggie. However, it most likely could just be breadboarded.
Dang... another project to occupy my time :D
Yes, that is what I meant - make the bootloader (start with foodloader.c) into a regular program that accepts the same bootloader commands from avrdude and translates them into serial programming commands for a mcu attached via the spi. A good place to start is to become familiar with serial programming - Section 27.8 of the mcu datasheet explains how to program the mcu via the serial interface.
// GET_AVR_ID.c
#include <stdio.h>
#include <math.h>
#include <string.h>
#include <avr/io.h>
#include <avr/interrupt.h>
#include <avr/pgmspace.h>
#include <inttypes.h>
#include "../libnerdkits/lcd.h"
#ifndef PB0
# include "../libnerdkits/io_328p.h"
#endif

//------------------------------------------------------------------------
// _ms timer variables
volatile uint16_t wait_count;

// initialize timer
void _wait_ms_init(){
  // set up timer for 1 ms interrupt
  //8000000/64/125 = 1khz
  //20000000/256/78
  OCR0A=78; // match value
  TCCR0B = (1<<CS02) ; //256
  TCCR0A = (1<<WGM01); // CTC mode
  TIMSK0 = (1<<OCIE0A); // interrupt on OCR0A match
  //
  // be sure interrupts are enabled - sei();
}

// set ms timer value
void _wait_ms(uint16_t ms){
  wait_count=ms;
  while(wait_count>0);
}

SIGNAL(TIMER0_COMPA_vect){
  if(wait_count>0){
    wait_count--;
  }
}

FILE lcd_stream;

// PIN Def's
//
// PB1 -- reset pin on target ATmega
// PB3 -- serial input (MOSI)
// PB4 -- serial output (MSIO)
// PB5 -- serial clock (SCK)
//
#define P_MOSI PB3
#define P_MSIO PB4
#define P_SCK PB5

#define set_AVR_1_output DDRB |= (1<<PB2)
#define clear_AVR_1_output DDRB &= ~(1<<PB2)
#define AVR_1_on PORTB |= (1<<PB2)
#define AVR_1_off PORTB &= ~(1<<PB2)

#define uchar unsigned char
#define SPI_CMD_LEN 4

uchar SPI_Cmd[5];
uchar SPI_Reply[5];
uchar AVR_Signature[4];

void SPI_Init(void){
  // set MOSI, SCK, SS as output
  DDRB |= (1<<P_MOSI) | (1<<P_SCK);
  // set MSIO as input
  DDRB &= ~(1<<P_MSIO);
  /* Enable SPI, Master, set clock rate fck/128 */
  SPCR = (1<<SPE)|(1<<MSTR)|(1<<SPR1)|(1<<SPR0);
  // initialize reset pin
  set_AVR_1_output;
}

uchar SPI_SendRcv(uchar cData){
  /* Start transmission */
  SPDR = cData;
  /* Wait for transmission complete */
  while(!(SPSR & (1<<SPIF)));
  return(SPDR);
}

void do_spi_cmd(){
  uint8_t i;
  i=0;
  while(i<SPI_CMD_LEN){
    SPI_Reply[i]=SPI_SendRcv(SPI_Cmd[i]);
    i++;
  }
}

void enter_programming_mode(){
  SPI_Reply[2]=0x00;
  int tries;
  tries=5;
  // 0x53 echoed back in the third byte means the target acknowledged "Programming Enable"
  while((SPI_Reply[2]!=0x53)&&(tries>0)) {
    AVR_1_on;
    _wait_ms(25);
    AVR_1_off;
    memcpy_P(SPI_Cmd,PSTR("\xAC\x53\x00\x00"),4);
    do_spi_cmd();
    tries--;
  }
  if(tries==0){
    SPI_Reply[2]='*';
  }
}

void read_ARV_Signature(){
  memcpy_P(SPI_Cmd,PSTR("\x30\x00\x00\x00"),4);
  do_spi_cmd();
  AVR_Signature[0]=SPI_Reply[3];
  memcpy_P(SPI_Cmd,PSTR("\x30\x00\x01\x00"),4);
  do_spi_cmd();
  AVR_Signature[1]=SPI_Reply[3];
  memcpy_P(SPI_Cmd,PSTR("\x30\x00\x02\x00"),4);
  do_spi_cmd();
  AVR_Signature[2]=SPI_Reply[3];
}

void release_AVR(void){
  // raise signal on reset pin
  AVR_1_on;
  // turn off the spi too in case the avr pins have other uses
  SPCR = 0;
  DDRB &= ~((1<<P_MOSI) | (1<<P_SCK));
}

int main() {
  _wait_ms_init();
  SPI_Init();

  lcd_init();
  fdev_setup_stream(&lcd_stream, lcd_putchar, 0, _FDEV_SETUP_WRITE);

  sei();

  enter_programming_mode();
  read_ARV_Signature();
  release_AVR();

  lcd_clear_and_home();
  fprintf_P(&lcd_stream, PSTR("0x%02x 0x%02x 0x%02x"), AVR_Signature[0], AVR_Signature[1], AVR_Signature[2]);

  while(1);
}
Is that a working program? If so, that was quick. I'll admit I haven't looked much at programmer development though since I have a couple ISP programmers as it is. This would be a nice step forward for the community.
Rick
It works on my nerdkit so it should work on yours too. Much of that code I already had and just consolidated it into the example. I was hoping to provide a nice schematic for Ralph but I've a long way to go with the Eagle design package. So in the mean time I hope you don't mind my hand drawn version:
Please note that in the code in my last post, PB2 in the define statements will need to be changed to PB1 to match drawing exactly or just use PB2 to connect to the reset pin of the target ATmega. The spi pins are straight across, 17 to 17, 18 to 18, and 19 to 19. As you can see, the hookup is pretty simple.
The next steps are to get each one of the serial programming instructions coded into a subroutine and tested - see table 27-19 on page 310 of the datasheet. After that, it's a matter of pulling in the UART code to communicate with AVRdude on the PC to call the appropriate function (as in the foodloader.c program) and we will have a nerdkit DIY serial programmer. ;-)
Gee, this is looking just like a BETA ISP loader I had.
Wonder why nothing was ever done with that?
It worked fine once I figured out the syntax of the AVRDUDE line.
It referenced the dapa programmer.
| http://www.nerdkits.com/forum/thread/1354/ | CC-MAIN-2019-39 | refinedweb | 2,691 | 70.84
TemplateSyntaxError at /admin/
Hi - I just installed the 0.1.1 version using easy_install, and configured the app as per the docs, and I'm receiving the following error:
TemplateSyntaxError at /admin/
Caught an exception while rendering: 'admin' is not a registered namespace
The error occurs in one of the menu template tags on the last line below: {% admin_tools_render_menu_css %}
{{{
!python
{% extends "admin/base.html" %}
{% load i18n admin_tools_menu_tags %}
{% block title %}{{ title }} | {% trans 'Django site admin' %}{% endblock %}

{% block extrastyle %}
{{ block.super }}
{% admin_tools_render_menu_css %}
}}}
Any idea what might be going on here? I don't see anyone else having similar issues.
Thanks!
Rob
Hi, what version of django are you using ? did you configure the django admin urls correctly ?
-- David
I'm using Django 1.1.1, and I am attempting to add the admin tools to an already working Django installation. The Django admin works fine if I remove the admin_tools entries.
after some quick research, it looks like you need to configure the urls in the "django 1.1" way, e.g.:
url(r'^admin/', include(admin.site.urls))
can you try this and see if it works ? if it's the case, I'll have to update the docs...
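For reference, a minimal urls.py in that style might look like the following (the admin_tools prefix follows its docs of that era; adjust to your project):

from django.conf.urls.defaults import *
from django.contrib import admin

admin.autodiscover()

urlpatterns = patterns('',
    url(r'^admin_tools/', include('admin_tools.urls')),
    url(r'^admin/', include(admin.site.urls)),
)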
That was exactly it! Thank you!
-Rob
I want to add one more thing. I know it's bad practice, but I didn't have my MEDIA_URL configured in my settings.py, so the media from the admin_tools didn't appear until I stuck that in there. | https://bitbucket.org/izi/django-admin-tools/issues/7/templatesyntaxerror-at-admin | CC-MAIN-2015-40 | refinedweb | 245 | 68.06 |
Valid Parentheses — Day 7(Python)
Today we will be looking into one of Leetcode’s Easy tagged questions. Let us jump into the problem without losing much time.
Given a string s containing just the characters '(', ')', '{', '}', '[' and ']', determine if the input string is valid. An input string is valid if open brackets are closed by the same type of brackets, and open brackets are closed in the correct order.
Example 4:
Input: s = "([)]"
Output: false
Example 5:
Input: s = "{[]}"
Output: true
Constraints:
1 <= s.length <= 10^4
s consists of parentheses only: '()[]{}'.
Solution
Since we have a limited set of brackets to worry about in this problem, we can use a dictionary that maps each closing bracket (key) to its corresponding opening bracket (value). We also need a stack to hold the opening brackets in the order they appear; we can pop an opening bracket off the stack when we find the matching closing bracket.
- Construct a dictionary that holds opening brackets (value) and closing brackets (key).
- Keep a collection of opening brackets (a set or tuple). This makes it easy to check whether a character is an opening bracket.
- Run the loop for each character in the string, check if it is an opening bracket. If yes, push it into the stack.
- If the current character is a closing bracket, check if the topmost element in the stack is the right opening bracket for the current closing bracket. If yes, pop the topmost element in the stack.
- If any of the above condition is False, return False
- Once the entire loop is completed, check for any remaining items in the stack. If the stack is empty return true, else return false.
class Solution:
    def isValid(self, s: str) -> bool:
        dictionary_brackets = {'}':'{', ')':'(', ']':'['}
        open_bracket = ('{','(','[')
        stack_bracket = []
        for each_char in s:
            if stack_bracket == [] or each_char in open_bracket:
                stack_bracket.append(each_char)
            elif(dictionary_brackets[each_char]==stack_bracket[-1]):
                stack_bracket.pop()
            else:
                return False
        return False if stack_bracket else True
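A quick sanity check against the examples above (illustrative only):

solver = Solution()
print(solver.isValid("([)]"))    # False (Example 4)
print(solver.isValid("{[]}"))    # True  (Example 5)
print(solver.isValid("()[]{}"))  # True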
Complexity analysis
Time Complexity
We would be required to traverse through each character in the string which would take O(N).
Space Complexity
In the worst-case scenario, if our input string is '(((([[[[[{{{{', the entire string will be stored in the stack, which means the space complexity will be O(N).
I would like to improve my writing skills so any suggestions or critiques are highly welcomed. | https://atharayil.medium.com/valid-parentheses-day-7-python-b37c7009a4c6?source=post_internal_links---------4---------------------------- | CC-MAIN-2021-39 | refinedweb | 341 | 64.2 |
Type: Posts; User: skycrazy123
Came across the following stuff, which does work, but I want the character representation, not the file. Is there something similar???
ofstream out("binary.txt", ios::binary);
...
Hi,
I am trying to send an array of objects over a socket using winsock. Obviously I can't send the array in its current form and need to somehow convert it to an appropriate format. I have heard...
The problem has been resolved. I added: char data[512] = {'\0'}; at the receiving end.
----RESOLVED-----
Hi All,
This should be an easy one for you.
I have defined and set a char as follows:
char *data = "1";
I know, I just hadn't got around to updating the files yet (but thanks for reminding me)
Dave.
Thanks for the explanation why Lindley. And thanks for identifying the problem JohnW@Wessex.
Problem sorted. I officially love this forum.
I also officially appologise for my sucky ability when...
I declared them private in the header, and it seemed to work. I'll test it with the rest and get back to you
No Dice.
The same problem occurs.
the call's:
cout << test[1].get_info() << "\n";
cout << test[0].get_info() << "\n";
both return "bar"
The MyClass header file is as follows:
#include "stdafx.h"
#include <iostream>
#include <string.h>
using namespace std;
class MyClass
{
public:
Hi,
Thanks for the help earlier on on another problem, now for a new one....
I wish to store objects (I think I actually mean object pointers, but I am sure you will tell me if I'm wrong) in...
never mind. I hadn't included the header file either.
Thanks for all your help.
Ahh, that figures. I do however get a different error now:
error C2653: 'ProductInfo' : is not a class or namespace name
Any ideas on how I'd go about fixing that one?
Ta,
Dave.
Would you mind elaborating GCDEF? An example would be useful.
Cheers,
Dave.
Hi,
I'm new to c++ and have an issue with the LNK2019 error. I've done some googling and as of yet haven't found a solution to my problem. So here I am!!
The exact error I receive is: error... | http://forums.codeguru.com/search.php?s=22de28a9e772b91125217aebfecb9f71&searchid=6640133 | CC-MAIN-2015-14 | refinedweb | 366 | 77.43 |
Getting started
Installation
pip install prompt_toolkit
For Conda, do:
conda install -c conda-forge prompt_toolkit
prompt_toolkit can be used in several situations: as a readline replacement for simple prompts, or for full-screen applications where we define a custom layout ourselves.
Further, there is a very flexible key binding system that can be programmed for all the needs of full screen applications.
A simple prompt
The following snippet is the most simple example; it uses the prompt() function to ask the user for input and returns the text. Just like (raw_)input.
from prompt_toolkit import prompt

text = prompt('Give me some input: ')
print('You said: %s' % text)
Learning prompt_toolkit
In order to learn and understand prompt_toolkit, it is best to go through all the sections in the order below. Also don't forget to have a look at all the examples in the repository.
First, learn how to print text. This is important, because it covers how to use “formatted text”, which is something you’ll use whenever you want to use colors anywhere.
Secondly, go through the asking for input section. This is useful for almost any use case, even for full screen applications. It covers autocompletions, syntax highlighting, key bindings, and so on.
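For example, a prompt with simple word completion looks roughly like this (the word list is made up):

from prompt_toolkit import prompt
from prompt_toolkit.completion import WordCompleter

completer = WordCompleter(['create', 'read', 'update', 'delete'])
text = prompt('> ', completer=completer)
print('You said: %s' % text)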
Then, learn about Dialogs, which is easy and fun.
Finally, learn about full screen applications and read through the advanced topics. | https://python-prompt-toolkit.readthedocs.io/en/latest/pages/getting_started.html | CC-MAIN-2022-33 | refinedweb | 208 | 56.76 |
Continuing the story of replacing Windows Live Writer with some custom tools...
In addition to image uploads and code formatting, I also needed the ability to:
All of this metadata was information I wanted to track for each markdown file, but I didn't want to keep the metadata outside of the markdown file in a separate store, because synchronization only adds complexity. I was looking for a way to embed metadata into the markdown file itself, but the CommonMark specification doesn't provide for metadata.
Fortunately, I found that a number of tools and libraries support YAML front matter in markdown. Front matter is easy to write, easy to parse, and a good solution if you need to keep metadata associated with your markdown, with your markdown.
The picture below shows a post after publication, so the tools have added an ID and a URL. I add the date to publish and the title.
Continuing the story of replacing Windows Live Writer with some custom tools...
One of the best features of WLW was the ability to paste an image into a post and have the image magically appear on the server in an expected location. For my new workflow, I needed a second Markdig extension to look for images in a post and automatically upload the images through a WebAPI I built into this blog. The WebAPI returns the URL of the uploaded (or updated) image. This API, by the way, was a simple replacement for the ancient XML RPC API that WLW relied on.
For this extension I didn't need to custom render an image tag. Instead, I could wait until Markdig finished converting markdown into HTML and go looking for img tags. One way to implement this style of extension for Markdig is to implement the low level IMarkdownExtension interface.
public class ImageProcessingExtension : IMarkdownExtension
{
    private readonly IFileSystem fileSystem;
    private readonly IBlogServer blogServer;

    public ImageProcessingExtension(IFileSystem fileSystem, IBlogServer blogServer)
    {
        this.fileSystem = fileSystem;
        this.blogServer = blogServer;
    }

    public void Setup(MarkdownPipelineBuilder pipeline)
    {
        pipeline.DocumentProcessed += Pipeline_DocumentProcessed;
    }

    public void Setup(MarkdownPipeline pipeline, IMarkdownRenderer renderer)
    {
    }

    private void Pipeline_DocumentProcessed(MarkdownDocument document)
    {
        var damn = document.Descendants().ToArray();
        foreach (var node in document.Descendants()
                                     .OfType<LinkInline>()
                                     .Where(l => l.IsImage))
        {
            var localName = node.Url;
            if (!fileSystem.Exists(localName))
            {
                throw new ArgumentException($"Cannot find file {localName}");
            }
            var bytes = fileSystem.ReadBinary(localName);
            var newUrl = blogServer.UploadMedia(localName, bytes);
            node.Url = newUrl;
        }
    }
}
This extension searches the rendered HTML nodes for images, uploads the image in the URL, and then updates the URL with the remote address.
Note that all this happens inside an event handler which must return void. However, the UploadMedia method uses HttpClient behind the scenes, and must be async. As you know, async and void return types mix together like potassium and water - an example of why I say we must always await the async letdown.
A couple years ago I decided to stop using Windows Live Writer for authoring blog posts and build my own publishing tools using markdown and VSCode. Live Writer was a fantastic tool during its heyday, but some features started to feel cumbersome. Adding code into a blog post, as one example.
This blog uses SyntaxHighlighter to render code blocks, which requires HTML in a specific format. With WLW the HTML formatting required a toggle into HTML mode, or using an extension which was no longer supported in the OSS version of WLW.
What I really wanted was to author a post in markdown and use simple code fences to place code into a post.
``` csharp
public void AnOdeToCode()
{
}
```
Simple!
All I'd need is a markdown processor that would allow me to add some custom rendering for code fences.
Markdig is a fast, powerful, CommonMark compliant, extensible Markdown processor for .NET. Thanks to Rick Strahl for bringing the library to my attention. I use Markdig in my tools to transform a markdown file into HTML for posting here on the site.
There are at least a couple different techniques you can use to write an extension for Markdig. What I needed was an extension point to render SyntaxHighlighter flavored HTML for every code fence in a post. With Markdig, this means adding an HtmlObjectRenderer into the processing pipeline.
public class PreCodeRenderer : HtmlObjectRenderer<CodeBlock>
{
    private CodeBlockRenderer originalCodeBlockRenderer;

    public PreCodeRenderer(CodeBlockRenderer originalCodeBlockRenderer = null)
    {
        this.originalCodeBlockRenderer = originalCodeBlockRenderer ?? new CodeBlockRenderer();
    }

    public bool OutputAttributesOnPre { get; set; }

    protected override void Write(HtmlRenderer renderer, CodeBlock obj)
    {
        renderer.EnsureLine();
        var fencedCodeBlock = obj as FencedCodeBlock;
        if (fencedCodeBlock?.Info != null)
        {
            renderer.Write($"<pre class=\"brush: {fencedCodeBlock.Info}; gutter: false; toolbar: false; \">");
            renderer.EnsureLine();
            renderer.WriteLeafRawLines(obj, true, true);
            renderer.WriteLine("</pre>");
        }
        else
        {
            originalCodeBlockRenderer.Write(renderer, obj);
        }
    }
}
Note that the Info property of a FencedCodeBlock will contain the info string, which is commonly used to specify the language of the code (csharp, xml, javascript, plain, go). The renderer builds a pre tag that SyntaxHighlighter will know how to use. The last step, the easy step, is to add PreCodeRenderer into a MarkdownPipelineBuilder before telling Markdig to process your markdown.
The C# Interactive window in VS is not the best interactive C# experience (LINQPad still holds that honor, I believe), but it is a quick and convenient option for trying out a few lines of code.
There's actually two reasons why I tend to avoid using debuggers. The first reason is a genuine belief that debuggers encourage short term thinking and quick fixes in software. The second reason is the terrible sights and sounds I witness when I launch a debugger like the VS debugger. It is the noise of combustion and heavy gears engaging. My window arrangement shatters and the work space transforms into a day-trading app with real time graphs and event tickers. A modal dialog pops up and tells me a thread was caught inside the reverse flux capacitor and allowed to execute freely with side-effects. I don't know what any of this means or has to do with finding my off-by-one error, which only adds to my sense of fear and confusion.
One way I avoid the debugger is by adding better logging to my software. The best time to think about what you need to log is when the software is misbehaving, and ideally before the software misbehaves in front of strangers. Sometimes the logging becomes verbose.
logger.Verbose($"ABBREVIATIONS");; for (var index = ABBRSTART; index <= ABBREND; index++) { for (var number = 0; number < NUMABBR; number++) { var offset = (NUMABBR * (index - 1)) + (WORDSIZE * 2); logger.Verbose($"For [{index}][{number}] the offset is {offset}"); var ppAbbreviation = machine.Memory.WordAt(Header.ABBREVIATIONS); logger.Verbose($"For [{index}][{number}] the ppointer is {ppAbbreviation:X}"); var pAbbreviation = machine.Memory.WordAddressAt(ppAbbreviation + offset); logger.Verbose($"For [{index}][{number}] the pointer is {pAbbreviation:X}"); var location = machine.Memory.SpanAt(pAbbreviation); var result = decoder.Decode(location).Text; logger.Verbose($"Abbreviation [{index}][{number}] : {result}"); } }
Verbosity works well if you categorize correctly. Again, the best proving ground for a logging strategy is when the software is misbehaving. You can learn what knobs you need to tweak and what categories work well. With Serilog, which I still prefer, you can set the category to match type names in your software, then configure the log messages you want to see using code or configuration files.
public ILogger CreateLogger()
{
    var logger = new LoggerConfiguration()
        .MinimumLevel.Warning()
        .MinimumLevel.Override(typeof(FrameCollection).FullName, LogEventLevel.Verbose)
        .MinimumLevel.Override(typeof(Machine).FullName, LogEventLevel.Verbose)
        .MinimumLevel.Override(typeof(DebugOutputStream).FullName, LogEventLevel.Verbose)
        .MinimumLevel.Override(typeof(Instruction).FullName, LogEventLevel.Verbose)
        .MinimumLevel.Override(typeof(ZStringDecoder).FullName, LogEventLevel.Verbose)
        .Enrich.FromLogContext()
        .WriteTo.File(@"trace.log",
            outputTemplate: "{SourceContext:lj}\n{Message:lj}{NewLine}{Exception}")
        .CreateLogger();
    return logger;
}
To use logs during test runs you need to sink log events into XUnit's ITestOutputHelper. The logs are available from the VS test runner by clicking on an "Open additional output for this result" link.
For one particular integration style test I have, the logs can get lengthy, which leads to an amusing message from VS.
An editor like Notepad? Am I not already in a piece of software that can edit text? It's like having the GPS in a Tesla tell me I'll need to use a 1988 Oldsmobile to reach my destination.
I always feel a sense of satisfaction when I move a piece of complexity from outside an object to inside an object. It doesn't need to be a large amount of code, I've learned. Every little step helps in the long run.
If I could go back 20 years and give myself some programming tips, one of those tips would certainly be this: You don't move code into an abstraction to reuse the code, you move code into an abstraction to use the code.
In this article we'll look at settings and strategies for improving the performance of web applications and web APIs running in an Azure App Service. We'll start with some easy configuration changes you can make for an instant improvement.
Microsoft announced support for HTTP/2 in App Services early in 2018. However, when you create a new App Service today, Azure will start you with HTTP 1.1 configured as the default protocol. HTTP/2 brings major changes to our favorite web protocol, and many of the changes aim to improve performance and reduce latency on the web. For example, header compression and binary formatting in HTTP/2 will reduce payload sizes. An even better example is the use of request pipelineing and multiplexing. These features allow for more concurrent requests using fewer network sockets and help to avoid one slow request from blocking all subsequent requests, which is a frequent problem in HTTP 1.1 that we call the "head-of-line" blocking problem.
To configure your App Service to use HTTP/2 with the portal, go to Platform Settings in the Configuration blade. Here you will find a dropdown to specify the HTTP version. With 2.0 selected, any clients that support HTTP/2 will upgrade their connection automatically.
HTTP/2 might not benefit every application, so you will want to run performance tests and stress tests to document your improvements. Here's a simple test where I used the network tools in Firefox against a page hosted in an App Service. The page references a handful of script and CSS resources, and also includes 16 images. Each image is over 200 KB in size. First, I used the developer tools to record what happens on an App Service using HTTP 1.1. Notice how the later requests start in a blocked state (the red section of the bars). This is the dreaded "head-of-line blocking" problem where limitations on the number of connections and concurrent requests throttle the throughput between the client and the server. The client doesn't receive the final bytes for the page until 800ms after the first request starts.
Next, I switched on HTTP/2 support in the App Service. I didn't need to make any other configuration changes on the client or the server. The last byte arrives in less than 500ms. We avoid blocking thanks to the improved network utilization of HTTP/2.
In front of every Azure App Service is a load balancer, even if you only run a single instance of your App Service Plan. The load balancer intercepts every request heading for your app service so when you do move to multiple instances of an app service plan, the load balancer can start to balance the request load against available instances. By default, Azure will make sure clients continue reaching the same app service instance during a session, because Azure can't guarantee your application isn't storing session state in server memory. To provide this behavior the load balancer will inject a cookie into the first response to a client. This cookie is what Azure calls the Application Request Routing Cookie.
If you have a stateless application and can allow the load balancer to distribute requests across instances without regard to previous requests, then turn off the routing cookie in the Configuration blade to improve performance and resiliency. You won't have requests waiting for a server restart, and when failures do happen, the load balancer can shift clients to a working instance quickly.
The routing configuration is another item you'll find in the Platform Settings of the App Service Configuration blade.
If you've deployed applications into IIS in the past, you'll know that IIS will unload idle web sites after a period of inactivity. Azure App Services will also unload idle web sites. Although the unloading can free up resources for other applications that might be running on the same App Service Plan, this strategy hurts the performance of the app because the next incoming request will wait as the web application starts from nothing. Web application startup time can be notoriously slow, regardless of the technologies involved. The caches are empty, the connection pools are empty, and all requests are slower than normal as the site needs to warms up.
To prevent the idle shutdown, you can set the Always On flag in the App Service Configuration blade.
By default, the file system for your App Service is mounted from Azure Storage. The good news is your file system is durable, highly available, and accessible from multiple App Service instances. The sad news is your application makes a network call every time the app touches the file system. Some applications require the Azure Storage solution. These are the applications that write to the file system, perhaps when a user uploads a file, and they expect the file system changes to be durable, permanent, and immediately visible across all running instances of the application. Other applications might benefit from having a faster, local, read-only copy of the web site content. If this sounds like your application, or you want to run a test, then create a new App Setting for the app with a key of WEBSITE_LOCAL_CACHE_OPTION and a value of Always. You'll then have a d:\home folder pointing to a local cache on the machine and populated with a copy of your site content.
Although I say the cache is read-only, the truth is you can write into the local cache folder. However, you'll lose any changes you make after an app restart. For more information on the tradeoffs involved and how the local cache works with log files, see the Azure App Service Local Cache overview.
All the performance improvements we've looked at so far only require configuration changes. The next set of improvements require some additional infrastructure planning or restructuring, and in some cases changes to the application itself. The common theme in the next set of tips is to reduce the distance bits need to travel over the network. The speed of light is finite, so the further a bit has to travel, the longer the bit needs to reach a destination.
In Azure you assign most resources you create to a specific region. For example, when I create an App Service, I can place the service close to me in the East US region, or, if I'm on an extended visit to Europe, I can select the North Europe region. If you create multiple resources that work together closely, you'll want to place the resources together in the same region. In the past I've seen performance suffer when someone at the company accidentally places an App Service in one region and an associated Azure SQL instance in a different region. Every database query from the App Service becomes a trip across the continent, or around the world.
How do you check your existing subscriptions to make sure your resources are properly co-located? Assuming you don't want to click through the portal to check manually, you can write a custom script or program, or use Azure Policy. Azure Policy has a built-in rule to check every resource to ensure the resource location matches the location of the resource's parent resource group. All you need to do with this rule in place is make sure your associated resources are all in the same resource group. The policy definition for this audit rule looks like the following.
{ "if": { "field": "location", "notIn": [ "[resourcegroup().location]", "global" ] }, "then": { "effect": "audit" } }
If most of your customer traffic originates from a specific area of the world, it makes sense to place your resources in the Azure region closest to your customers. Of course, many of us have customers fairly distributed around the world. In this case, you might consider geo-replicating your resources across multiple Azure regions and stay close to everyone. For App Services, this means creating multiple App Service plans inside of multiple Azure data centers around the world. Then, you'll typically use a technology like Azure Traffic Manager to direct customer traffic to the closest App Service instance.
Note: since I wrote this article, Microsoft introduced Azure Front Door. Front Door offers some additional capabilities that are not available from Traffic Manager. Features like SSL offload, instead failover, and DDoS protection. If you need global load balancing, you should also look at what the Front Door Service offers.
Traffic Manager is a DNS based load balancer. So, when a customer's web browser asks for the IP address associated with your application's domain, Traffic Manager can use rules you provide and other heuristics to select the IP address of a specific App Service. Traffic Manager can select the App Service with the lowest latency for a given customer request, or, you can also configure Traffic Manager to enforce geo-fencing where the load balancer sends all customers living in a specific province or country to the App Service you select. You can see the routing methods built into Traffic Manager in the Create Traffic Manager profile blade below.
There are tradeoffs and complications introduced by Traffic Manager. It is easy to replicate stateless web applications and web services across data centers around the world, but you'll need to spend some time planning a data access strategy. Keeping one database as the only source of truth is the easiest data access approach. But, if your App Service in Australia is reading data from a database in the U.K., you might be losing the performance benefits of geo-replicating the App Service. Another option is to replicate your data, too, but much depends on your business requirements for consistency. Data replication is typically asynchronous and delayed, and your business might not be able to live with the implications of eventual consistency.
Azure's content delivery network allows you to take static content from Azure Storage, or from inside your App Service, and distribute the content to edge servers around the world. Again, the idea is to reduce the distance information need to travel, and therefore reduce the latency in network requests. Static files like script files, images, CSS files, and videos, and are all good candidates for caching on the CDN edge servers. A CDN can have other benefits, too. Since your App Service doesn't need to spend time or bandwidth serving files cached on a CDN, it has more resources available to produce your dynamic content.
When setting up a CDN profile in Azure, you can select a pricing plan with the features you need from a set of providers that includes Microsoft, Verizon, and Akamai.
Today's architecture fashion is to decompose systems into a set of microservices. These microservices need to communicate with each other to process customer requests, and each of those calls adds network latency. One way to keep that latency down is to host closely related services in the same App Service Plan, which is ultimately a set of web servers. You can place as many applications on the web server as you like, and keeping services together can reduce network latency. However, keep in mind that having too many services on the same machine can stretch resources thin. It will take some experimentation and testing to figure out the best distribution of services, the ideal size of the App Service Plans, and the number of instances you need to handle all your customer requests.
We've looked at several strategies we can use to improve the performance of web applications and web APIs we've deployed to Azure App Services. Just remember that your first step before trying one of these strategies should be to measure your application performance and obtain a good baseline number. Not every strategy in this article will benefit every application. Starting with baseline performance numbers will allow you to compare strategies and see which ones are the most effective for your application. | https://odetocode.com/blogs/scott/ | CC-MAIN-2020-24 | refinedweb | 3,422 | 53.61 |
Maze generation algorithms with matrices in Python (II) — Sidewinder
The sidewinder algorithm is a slight variation of the binary tree one: we again flip a coin in every cell of the matrix; if we obtain tails (0) we carve east, and if we obtain heads (1) we look back at the cells we have visited so far in the current run, randomly choose one, and carve north.
We start again with a binomial distribution over a matrix of a chosen size:
grid = np.random.binomial(n,p, size=(size,size))
It makes sense again to preprocess the grid to avoid digging outside the maze, replacing ones in the first row with zeros and zeros in the last column with ones:
def preprocess_grid(grid, size):
    # fix first row and last column to avoid digging outside the maze
    first_row = grid[0]
    first_row[first_row == 1] = 0
    grid[0] = first_row
    for i in range(1, size):
        grid[i, size-1] = 1
    return grid
In order to choose a random cell among the ones we previously visited, we need to keep track of a list “previous_l”.
if toss == 0 and k+2 < size*3:
    output_grid[w, k+1] = ' '
    output_grid[w, k+2] = ' '
    previous_l.append(j)

if toss == 1:
    # it's impossible to carve outside after preprocessing
    # look back, choose a random cell
    if grid[i, j-1] == 0:  # reaching from 0
        # mandatory to be sure that previous_l has at least one element
        # if coming from a list of previous cells, choose one and...
        r = rd.choice(previous_l)
        k = r * 3 + 1
    # ...just carve north
    # just carve north also if this is the 1st of the row (1 element loop)
    output_grid[w-1, k] = ' '
    output_grid[w-2, k] = ' '
    # and clear the list for this run
    previous_l = []
If the coin is tails, we just carve east and append the cell to the list of cells visited in this run; if the coin is heads, we look back at the visited cells, randomly choose one, and carve north. Since that closes the run, we clear the list of visited cells.
Note that:
- if we are on the first row of the grid, it’s useless to check for current row index because we will surely carve east as a result of preprocessing the grid. 😉
- if this is the first column of the row, then we are just closing the run on the current cell (in this case, the previous_l list is empty and it would be impossible to choose among a previously visited cell).
Running the script gives the following result:
Curious about the complete code? Check out the complete source here: | https://1littleendian.medium.com/maze-generation-algorithms-with-matrices-in-python-ii-sidewinder-56c563235471 | CC-MAIN-2021-43 | refinedweb | 436 | 54.8 |
Best way to detect user presencefvitorc Feb 7, 2012 11:36 PM
Hi,
I am trying to find the best way to detect user presence (or queue presence).
I've been looking at the source code for sometime now, and I think that one way to know when a user has connected is to listen to the FinishStateSync command on the ServerBus. Am I correct? Is there a better way for that?
As for the disconnect I haven't find anything.
I know that I could send Connected and Disconnected messages from the client, but I would not like to rely on these methods. For example, if the browser crashes, I would never know that the user has disconnected, because he didn't even had the chance to send the Disconnected message.
Regards,
Vitor.
1. Re: Best way to detect user presencefvitorc Feb 9, 2012 12:56 AM (in response to fvitorc)
Actually, it's not possible to listen to incoming messages on the ServerBus. I get the following exception if I try to do so:
java.lang.IllegalArgumentException: cannot modify or subscribe to reserved service: ServerBus
So I would like to request a new feature for errai. The proposal would be to get queue connect/disconnect events.
The end user could write something like that to get these events:
@QueuePresence
public class MyQueuePresence implements QueuePresenceListener {
@Override
public void onConnect(String sessionId) {
// do whatever I want
}
@Override
public void onDisconnect(String sessionId) {
// do whatever I want
}
}
Of course, the interface, annotation and methods names could be different.
Where can I request such feature? Is it writing in this forum enough?
Regards,
Vitor.
2. Re: Best way to detect user presencefvitorc Feb 11, 2012 1:52 PM (in response to fvitorc)
After a while, I found a solution for that. For disconnects it's quite simple, but for connects you must disable the monitor tool.
Anyway, the ServerMessageBusImpl (just cast to it from a MessageBus) has a addQueueClosedListener for disconnects.
It also has a attachMonitor method, which provides not only connect/disconnect events, but a lot of other notifications.
The bad part is that you won't be able to use the monitor tool, because only one monitor can be active at a time.
I believe the best thing to do then, is to initially send your own connect message (do not rely on ServerBus), and use a QueueClosedListener for disconnects.
Regards,
Vitor. | https://community.jboss.org/thread/194917?tstart=0 | CC-MAIN-2014-10 | refinedweb | 402 | 60.24 |
wcsxfrm - wide-character string transformation
#include <wchar.h> size_t wcsxfrm(wchar_t *ws1, const wchar_t *ws2, size_t n);
The wcsxfrm() function transforms the wide-character string pointed to by ws2 and places the resulting wide-character string into the array pointed to by ws1. The transformation is such that if wcscmp() is applied to two transformed wide strings, it returns a value greater than, equal to or less than 0, corresponding to the result of wcscoll() behaviour is undefined.
The wcsxfrm() function will not change the setting of errno if successful.
The wcsxfrm() function returns the length of the transformed wide-character string (not including the terminating null wide-character code). If the value returned is n or more, the contents of the array pointed to by ws1 are indeterminate.
On error, the wcsxfrm() function returns (size_t)-1, and sets errno to indicate the error.
The wcsxfrm() function may fail if:
- [EINVAL]
- The wide-character string pointed to by ws2 contains wide-character codes outside the domain of the collating sequence.
None.
The transformation function is such that two transformed wide-character strings can be ordered by wcscmp().
Because no return value is reserved to indicate an error, an application wishing to check for error situations should set errno to 0, then call wcsxfrm(), then check errno.
None.
wcscmp(), wcscoll(), <wchar.h>.
Derived from the MSE working draft. | http://pubs.opengroup.org/onlinepubs/007908799/xsh/wcsxfrm.html | crawl-003 | refinedweb | 227 | 53.31 |
From: Joel de Guzman (joel_at_[hidden])
Date: 2003-11-03 19:01:15
David Abrahams <dave_at_[hidden]> wrote:
> Ahem. Tuples should already be functioning mpl type sequences, IMO.
> It would be nice if variant would be the same. If not possible in
> either case for some reason, there should be a *non-intrusive*
> type_sequence<X>::type metafunction invocation which gets the type
> sequence.
>
> "typelist" is the wrong name; it implies exposure of an implementation
> detail.
Ok. I like the type_sequence<X>::type spelling. That's better.
>> Another alternative is to ask MPL to open up its namespace to allow
>> the specialization of mpl::begin<s> and mpl::end<s>,
>
> They're already specializable (why wouldn't they be?). More
> conveniently and portably, you can nest a ::tag type and then
> fully-specialize mpl::begin_traits/end_traits.
>
> [But then, you should know this from the book preview, Joel ;-)]
Indeed! I guess I still had the hangover from yesterday when I wrote
that reply :-)
>> so we can add our own typelist-savvy classes. It wouldn't be
>> possible, due to potential name clashes to put the begin<s>, end<s>
>> in the class' namespace. For example, Fusion already has a begin(t)
>> and end(t) functions and it's not legal to have both a class/struct
>> and a function in the same namespace.
>
> I don't see why that's a problem. mpl::begin<T> and fusion::begin(t)
> can be different.
Please disregard that. That's a non-issue (a hangover issue, in fact :-)
So, we have 2 solutions:
1) provide a type_sequence<X>::type
2) provide an mpl::begin<X> and mpl::end<X>
No. 1 is the easiest to implement. No. 2 would require some knowledge of
mpl's tag mechanism to do easily, otherwise would require partial specialization.
No. 2 will make X a conforming mpl type sequence.
Thoughts?
-- Joel de Guzman
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2003/11/55780.php | CC-MAIN-2019-47 | refinedweb | 336 | 67.35 |
i have been given some source codes for which i have to make an application.
Here is a part of it:
private void manageAccount()
{
Account acc = null;
int accId;
int amount;
int choice;
accId = KeyboardInput.getInt("Enter Account number: ");
if (!(myBank.contains(accId)))
{
System.out.println("Account "+ accId+" does not exist");
}
else // account is found so process it as required
{
acc = (Account) myBank.get(accId);
choice = manageMenu.getMenuChoice();
while (choice!=0)
{
switch (choice) // select required action on the account
{
case 1 :
System.out.println(acc.toString());
break;
case 2 :
amount = KeyboardInput.getIntInRange("Enter Amount: ", 0, 9999);
acc.deposit(amount);
System.out.println(acc.toString());
break;
case 3 :
amount = KeyboardInput.getIntInRange("Enter Amount: ", 0, 9999);
acc.withdraw(amount);
System.out.println(acc.toString());
break;
case 4 :
if (acc.getClass().getName().equals("CurrentAccount"))
{
int newODLimit;
CurrentAccount currentAcc= (CurrentAccount) acc;
System.out.println("Overdraft limit is: "+currentAcc.getOverdraftLimit());
newODLimit = KeyboardInput.getIntInRange("Enter new overdraft limit: ", 0, 9999);
currentAcc.setOverdraftLimit(newODLimit);
}
else
i have to make an application using these source codes. Howeverr i am not allowed to edit it.
is there any way i can make an application which can get to these codes?
for example if the user presses the "Add an Account" button from my application, it will read the code from the AddAccount class.
or will i need to write out the codes all over again into my new Interface class?
i know i sound very confused, thats because, well i am.
i was in hospital for most of my OOp classes and now have to do this resit assignment and there is no one to help! so if anyone can help me please reply! im desperate!!
p.s , i proberbly havent explaind my situation well so here is the page where my assignment is:
My guess would be that they want you to put all the source code, including the file you write with the main method, into a package. That way you will be able to call on the methods that you were given.
eg.
package ass2r;
import java.awt.*;
public class etc
Thanx for that i was hoping that that was what i am meant to do. However can you suggest how i can go about doing that? I just need a little guide on how to get started.
For example this is part of a code in a class in the package:
public void withdraw(int amount)
{
if (balance >= amount)
{
balance= balance - amount;
charges= charges + 1;
}
}
so how can i say that if Button "Withraw" is pressed then carry out this method from the ManageAccount class?
sorry to be a pain again, but any suggestions?
thankyou
Hi Boo,
When your post came in I did go look at your assmt site; it looks like what your instructor is asking for is to demonstrate your knowledge and understanding of OOP: object oriented programming. This can be the most confusing part of java: private, public, static, final, instantiation, object creation, encapsulation, packages, jars... oh my... it's a mouthful.
Your assmt is simple, yet the concept is pretty detailed and much too complicated to get into here.
What the assignment appears to call for is creating new objects from the classes in the package, and then accessing the class methods and properties from the new objects you've created.
I see that there are class notes on the site link you provided. You might take a quick peek thru them to see if there are examples for this particular topic.
Creating objects, creating their private and public properties & methods, and then creating programs/classes to utilize, change and access those 'super' parent class qualities, it's all the main part of OOP.
As a brief example:
Say you had a parent 'super' class Bike with methods of
setColor( ) and setType( ).
For the parent or 'super' class, it creates a Bike object and lets you change the color and model on it. But what if you don't want a Red Tricycle Bike object, and instead you want t
you could then go and create your own separate class (BoosHomework) to create your own bike object:
public class BoosHomework{
public static void main.....{
Bike booBike = new Bike( ); // create new bike object
booBike.setColor( ) // call to the Bike superclass method to set color on booBike.
It's a very interesting concept and I recommend that if you really want to learn Java, that you take your time with it and understand it.
I realize this doesn't help much, with your assmt due by Monday, but perhaps over the weekend you can take a couple hours and experiment with OOP basics and get an idea of where to go with it.
Hope this helps!
~Jim
Forum Rules
Development Centers
-- Android Development Center
-- Cloud Development Project Center
-- HTML5 Development Center
-- Windows Mobile Development Center | http://forums.devx.com/showthread.php?137669-How-to-connect-Ms-Sql-Serveur-with-JDBC&goto=nextoldest | CC-MAIN-2015-32 | refinedweb | 803 | 64.41 |
Introduction to PureScript: Twitter Search API
TLDR: I wrote this in a fiction format for fun. The actual code is in the repo. Also, I’m new to FP so this is newbie code. I can refactor it to be elegant but I want to keep this simple for beginners.
Twitter Storm
Kim Kardashian felt uneasy as soon as she woke up. She had just used the r-word yesterday and suffered a huge backlash. She felt vulnerable about her twitter following and needed to be reassured. She had to do something different. Yes, she could just type her name in the Twitter App and see what people were saying about her. But she had secretaries for that. No, she had to do what no other celeb had done before. She would code!
What language though? A language which is nice and clean and pure. So she googles around and discovers PureScript! She installs it in a breeze while wondering about this Mr. Java Script guy who was always complaining online on how difficult it was. Sigh. Ok, what next?
Reading Twitter credentials
First, she has to read her Twitter credentials from a file. Yes, she could hard code the passwords in the program but she’s a celeb. She knows Security.
So, she got her credentials from Twitter and created a file like below at
config/twitter_credentials.json
{
"consumer_key": "KimMama",
"consumer_secret": "KimLikesToCode",
"access_token": "KimDoesNotKnowWhatThisIsFor",
"access_token_secret": "KimThinksTwitterHasGoneMad"
}
She built a JavaScript like object in PureScript(called records) using
type:
type TwitterCredentials =
{ consumer_key :: String
, consumer_secret :: String
, access_token :: String
, access_token_secret :: String
}
How do we read this file?
import Node.Encoding (Encoding(..))
import Node.FS.Aff (readTextFile)
readConfigStr :: String -> Aff String
readConfigStr path = readTextFile UTF8 path
import Node.Encoding (Encoding(..)) meant import the type constructor
Encoding and the
.. meant import all it’s data constructors as well, one of which is
UTF8. Since she is a celeb and she is never wrong, type constructors are like abstract base types and data constructors are like normal OOP constructors but fancier. You can have data constructors with different names and you can even treat them like Enumerations in switch/case like statements(Kim’s BFF liked to call them pattern matching).
Aff stands Asynchronous Effect(the synchronous effect is called
Effect). These effects represent an action that the program would like to take, but not executed yet. Whaaa?
If Kim wanted to call Khloe for lunch, buy flowers for her mother and type her next tweet… She wouldn’t be the person doing it, would she? It would be her secretary! All, she would do is
text her secretary commands to do this thing but it wouldn’t happen until her secretary actually executed the commands at a later time!
In the same way,
Aff(and
Effect) were like
texts by Kim to her new secretary
PureScript. It was a way of telling
PureScript that she wanted them to be done but it was just a representation of a command, not the actual execution of the command. By representation, it just meant it was a value, just like the way number
3 or
"a_string" or a JavaScript object were values.
For e.g., imagine the following pseudocode in an imperative language(e.g. Python):
1: x = print("A String")
2: x
3: x
The output would be
A string
The
execution and
evaluation of the
But in a functional language, the above pseudocode would be something like
1: run(
2: let x = print("A String")
3: x
4: x
5: )
And the output would be
A String
A String
let is like the variable assignment in imperative code.
Only the evaluation happens at lines 2–4 but not the execution. The execution happens inside
run. So before the program is given to run,
x replaced at lines 3 and 4 to be
print("A String"). Note, the
run procedure.
Another viewpoint is that most applications always start with the
main function. In PureScript, perhaps the simplest program one could write is.
import Effect.Console (log)
main:: Effect Unit
main = log "Product Placement Here. ;)"
The signature for
log is
log :: String -> Effect Unit.
Unit stands for nothing, as in, we don’t expect anything back from the console.
And like the pseudocode above, what happens within PureScript code, unseen by the programmer is something like
run(main)
Kim felt a chill through her spine. She regretted not taking programming seriously in school.
Ok,
readConfigStr returned a
Aff String but she needed to convert it to our
TwitterCredentials record. She asked her secretary for technology to find a library for her and she found PureScript-Simple-JSON by a guy called Justin Woo.
import Simple.JSON as SimpleJSON
import Data.Either (Either(..))
parseConfig :: String -> Either String TwitterCredentials
parseConfig s =
case SimpleJSON.readJSON s of
Left error -> Left (show error)
Right (creds :: TwitterCredentials) -> Right creds
parseConfig has an
Either String TwitterCredentials in it’s signature. It’s like an union type. The result could either be a String(an error string) or the actual credentials. PureScript defines
Either as
data Either a b = Left a | Right b
So if we want to return a string, we return
Left "my error string", the actual credentials as
Right creds. That way, the person calling
parseConfig knows which is which.
In
parseConfig,
SimpleJSON.readJSON returned an
Either but Kim didn’t want to deal with the complex
Left type, so she just converted that to a string using
show.
Now it was just a matter of calling
readConfigStr and passing the value to
parseConfig. Something like this pseudocode
cStr = readConfigStr path
return parseConfig cStr
But she couldn’t make it compile! She started panicking and thought of what would happen if the word got out and Taylor Swift found out. The Shame
“Try the do notation”, said a voice from behind.
Kim swivelled back and her mouth opened with surprise.
“Kanye! I didn’t know you knew PureScript!”
“Nah, PureScript is for hipsters. I’m old school. I like my Haskell.”
He continued, “The do notation allows you to extract the
String from
Aff String and gives you the illusion of the pseudocode above.”
readConfig :: String -> Aff (Either String TwitterCredentials)
readConfig path = do
cStr <- readConfigStr path
pure $ parseConfig cStr
“What’s
pure $ for?”, asked Kim?
Kanye sighed. He knew the author of this post was in a hurry to move on to doing cooler stuff and didn’t want to get into monads in this post. So he bailed too.
First
$. That’s just a simple way of saying consider everything after as one value. For e.g.
show $ SimpleJSON.readJSON s meant
show (SimpleJSON.readJSON s) instead of
(show SimpleJSON.readJSON) s. Kim approved. She liked
$ signs.
Kanye then braced himself for his ‘simplification’ of
pure.
“You noticed that it was
cStr <- readConfigStr path and not
let cStr = readConfigStr path. The
<- is syntax sugar which make it look like an
=. But what is really happening underneath is something very similar to callbacks. The
Aff String type has to be given a function to work on the
String value within it. But this function can’t just be
cStr -> parseConfig cStr. The function has to return back an
Aff something.
pure is a constructor. In this context of
Aff, when we say
pure something, it’s like saying
new Aff(something) or in our case, it’s like saying
new Aff(parseConfig(cStr))"
Kim beamed at Kanye. He looked so hot right now. She wanted his baby.
Bearer Token from Twitter.
Great, that gave her the credentials but she needed a bearer token from Twitter which she would then use to get the results. How does one call the Twitter endpoint in PureScript? She beckoned her secretary for technology to find her a library. Her secretary came back running.
“I found a library called Milkis… again by Justin Woo!”
Kim’s eyes sharpened with intent. She wondered out aloud, “Do you think this Justin guy is a celebrity in the PureScript world? Hmmmm make my agent call his agent. Let’s do a reality show together.”
Kim first created a method to construct the authorization string from the credentials and encode it in
Base64. The
<> was like an append operator.
import Data.String.Base64 as S
authorizationStr :: TwitterCredentials -> String
authorizationStr credentials =
S.encode $ credentials.consumer_key <> ":" <> credentials.consumer_secret
She then made a simple
fetch helper method from
Milkis.
import Milkis as M
import Milkis.Impl.Node (nodeFetch)
fetch :: M.Fetch
fetch = M.fetch nodeFetch
She then created a method to get the bearer token string or return a string as error(in the
Left part of the code).
import Milkis as M
import Effect.Aff (Aff, attempt)
getTokenCredentialsStr :: String -> Aff (Either String String)
getTokenCredentialsStr basicAuthorizationStr = do
let
opts =
{ body: "grant_type=client_credentials"
, method: M.postMethod
, headers: M.makeHeaders { "Authorization": basicAuthorizationStr
, "Content-Type": "application/x-www-form-urlencoded;charset=UTF-8"
}
}
_response <- attempt $ fetch (M.URL "") opts
case _response of
Left e -> do
pure (Left $ show e)
Right response -> do
theText <- M.text response
pure (Right theText)
Now to bring it all together.
type BearerAuthorization =
{ token_type :: String
, access_token :: String
}
basicHeader :: String -> String
basicHeader base64EncodedStr = "Basic " <> base64EncodedStr
toBearerAuthorization :: String -> Either String BearerAuthorization
toBearerAuthorization tokenString = do
case SimpleJSON.readJSON tokenString of
Left e -> do
Left $ show e
Right (result :: BearerAuthorization) -> do
Right result
getTokenCredentials :: TwitterCredentials -> Aff (Either String BearerAuthorization)
getTokenCredentials credentials = do
tokenCredentialsStrE <- getTokenCredentialsStr $ basicHeader $ authorizationStr credentials
case tokenCredentialsStrE of
Left error -> do
pure (Left error)
Right tokenCredentialsStr -> do
let tokenCredentialsE = toBearerAuthorization(tokenCredentialsStr)
case tokenCredentialsE of
Left error -> do
pure (Left error)
Right authResult -> do
pure (Right authResult)
Great, we had the bearer token. It’s finally time to search for
Kim Kardashian! PureScript had this interesting signature format though. What it was saying below was that
showResults took as input a
BearerAuthorization and a
String and returned an
Aff (Either String SearchResults)
Also, the
SearchResults and
Status had lots of fields but she just wanted the basic stuff.
type Status =
{ created_at :: String
, id_str :: String
, text :: String
}
type SearchResults =
{ statuses :: Array Status
}
twitterURL :: String -> M.URL
twitterURL singleSearchTerm = M.URL $ "" <> singleSearchTerm
showResults :: BearerAuthorization -> String -> Aff (Either String SearchResults)
showResults credentials singleSearchTerm = do
let
opts =
{ method: M.getMethod
, headers: M.makeHeaders { "Authorization": "Bearer " <> credentials.access_token}
}
_response <- attempt $ fetch (twitterURL singleSearchTerm) opts
case _response of
Left e -> do
pure (Left $ show e)
Right response -> do
stuff <- M.text response
let aJson = SimpleJSON.readJSON stuff
case aJson of
Left e -> do
pure $ Left $ show e
Right (result :: SearchResults) -> do
pure (Right result)
Finally, reaching the very end to the
main command!
import Effect.Class.Console (errorShow, log)
import Effect.Aff (Aff, launchAff_)
main :: Effect Unit
main = launchAff_ do
let searchTerm = "Kim Kardashian"
config <- readConfig "./config/twitter_credentials.json"
case config of
Left errorStr -> errorShow errorStr
Right credentials -> do
tokenCredentialsE <- getTokenCredentials credentials
case tokenCredentialsE of
Left error ->
errorShow error
Right tokenCredentials -> do
resultsE <- showResults tokenCredentials searchTerm
case resultsE of
Left error ->
errorShow error
Right result ->
log $ show $ "Response:" <> (show result.statuses)
launchAff_ was required because the entire computation returned
Aff something but
main was of type
Effect Unit. So
launchAff_ just converted
Aff something to
Effect Unit
As Kim beamed with pride at her code, she flashed her eyes at Kanye and asked him, “Isn’t the code beautiful?”
Kanye gazed into her eyes and said, “Actually, it sucks. There are so many case statements in that code that I feel cross eyed.”
And the next thing Kanye knew, was that he was flat on the ground, his jaw felt like it had been displaced and he was seeing double.
For there are three things you don’t tell your wife:
- Honey, you have gained weight
- Your code sucks
- I miss my mother’s cooking.
As Kanye massaged his jaw, he muttered, “… I guess she does not want to know about the
ExceptT Monad…” | https://medium.com/@rajiv.abraham/introduction-purescript-twitter-fec6df5276dc | CC-MAIN-2020-29 | refinedweb | 1,963 | 57.47 |
Here is the code:
using UnityEngine;
using System.Collections;
public class bulletactions : MonoBehaviour
{
private float yposV;
private float xposV;
public Rigidbody2D rb;
private float PLAyposV;
private float PLAxposV;
// Use this for initialization
void Start()
{
rb = GetComponent<Rigidbody2D>();
// This stuff makes the object point towards the mouse pointer when it first spawns, but not again.
var pos = Camera.main.WorldToScreenPoint(transform.position);
var dir = Input.mousePosition - pos;
var angle = Mathf.Atan2(dir.y, dir.x) * Mathf.Rad2Deg;
transform.rotation = Quaternion.AngleAxis(angle, Vector3.forward);
//
PLAyposV = GameObject.Find("Player").GetComponent<PlayerShoot>().playerYpos; //Finding the players X and Y position from another script and then moving the projectile to it.
PLAxposV = GameObject.Find("Player").GetComponent<PlayerShoot>().playerXpos;
transform.position = new Vector2(PLAxposV, PLAyposV);
}
void Update()
{
yposV = transform.position.y;
xposV = transform.position.x;
transform.position += transform.forward * Time.deltaTime; // This line
if (yposV == -200)
Destroy(gameObject);
if (yposV == 200)
Destroy(gameObject);
if (xposV == -200)
Destroy(gameObject);
if (xposV == 200)
Destroy(gameObject);
} //If the object goes out of map range, delete it.
}
What this script is supposed to do: The script is attached to a missile entity that upon being cloned, faces towards the mouse pointer. Then, it will travel a speed of X in the direction it is currently facing until it goes out of the region and is destroyed.
What the script actually does right now The missile appears, rotates correctly, but does not move.
This is because i do not understand how to put transform.position += transform.forward * Time.deltaTime; // This line into action.
transform.position += transform.forward * Time.deltaTime; // This line
Please help! Thanks :)
Answer by navot
·
Feb 22, 2017 at 11:16 PM
transform.forward is the Vector pointing in the z Direction of the object. In 3D space, this is usually in front of you, but in 2D this points "into" the screen.
You need to replace it with either transform.right or transform.up depending on how your gameobject is rotated.
You can find which one either by trial and error or looking in the editor how it is rotated. The x axis is right and the y axis is up
Answer by xXSilverswordXx
·
Feb 27, 2017 at 10:34 PM
thank you my.
transform.forward problem
2
Answers
How to make a NavMeshAgent move in the direction it is looking?
4
Answers
transform.forward giving unwanted movement
1
Answer
transform.localPosition += transform.forward moves object toward (0, 0, 0)
2
Answers
How do you maintain gravity using transform.forward and transform.right?
0
Answers | https://answers.unity.com/questions/1317241/transformforward-in-2d.html | CC-MAIN-2019-09 | refinedweb | 418 | 50.33 |
The code below runs grep in one machine through SSH and prints the results:
import sys, os, string
import paramiko
cmd = "grep -h 'king' /opt/data/horror_20100810*"
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('10.10.3.10', username='xy', password='xy')
stdin, stdout, stderr = ssh.exec_command(cmd)
stdin.write('xy\n')
stdin.flush()
print stdout.readlines()
You'll need to put the calls into separate threads (or processes, but that would be overkill) which in turn requires the code to be in a function (which is a good idea anyway: don't have substantial code at a module's top level).
For example:
import sys, os, string, threading import paramiko cmd = "grep -h 'king' /opt/data/horror_20100810*" outlock = threading.Lock() def workon(host): ssh = paramiko.SSHClient() ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) ssh.connect(host, username='xy', password='xy') stdin, stdout, stderr = ssh.exec_command(cmd) stdin.write('xy\n') stdin.flush() with outlock: print stdout.readlines() def main(): hosts = ['10.10.3.10', '10.10.4.12', '10.10.2.15', ] # etc threads = [] for h in hosts: t = threading.Thread(target=workon, args=(h,)) t.start() threads.append(t) for t in threads: t.join() main()
If you had many more than five hosts, I would recommend using instead a "thread pool" architecture and a queue of work units. But, for just five, it's simpler to stick to the "dedicated thread" model (especially since there is no thread pool in the standard library, so you'd need a third party package like threadpool... or a lot of subtle custom code of your own of course;-). | https://codedump.io/share/xpBJltV535wV/1/creating-multiple-ssh-connections-at-a-time-using-paramiko | CC-MAIN-2018-13 | refinedweb | 267 | 57.57 |
This article has been dead for over three months: Start a new discussion instead
Reputation Points: 0 [?]
Q&As Helped to Solve: 0 [?]
Skill Endorsements: 0 [?]
0
Hello.
I was writing a java code for class and I was unable to figure out how strings and sub-strings work. Can anyone provide some insight on the program.
Here is the problem: Given a bit string expression, such as 10110 OR 11101, evaluate it.
Input: Each string represents a bit-string expression with one operator in it. The operators are AND, OR, and NOT. The operands are bit strings, from 1 to 8 bit long. If the expression contains two operands, they will be the same length. Each input must read as a single string. A single blank will separate the operators and operands.
Output: the value of the expression.
My code so far...
import java.util.Scanner; import java.io.File; import java.io.IOException; public class BitStringsClient { public static void main(String[] args) throws IOException { //communicating with the data Scanner inFile = new Scanner(new File("BitStrings.txt")); //declare variables String firststr, secondstr = null; int charValue = 0; //process data while(inFile.hasNext()) { firststr = inFile.next(); if(firststr.equals("NOT")); { secondstr = inFile.nextLine(); //for loop to convert bits to their opposite for(int index = 0; index < firststr.length(); index++) { if() } } //else if(firststr.equals("AND")); { secondstr = inFile.nextLine(); //for loop to multiply strings for(int index = 0; index < firststr.length(); index++) { } } //else if(firststr.equals("OR")); { secondstr = inFile.nextLine(); //for loop to add strings for(int index = 0; index < firststr.length(); index++) { } } //else } } }
Reputation Points: 563 [?]
Q&As Helped to Solve: 792 [?]
Skill Endorsements: 16 [?]
•Team Colleague
You | http://www.daniweb.com/software-development/java/threads/410368/bit-strings-in-java | CC-MAIN-2014-10 | refinedweb | 277 | 63.56 |
df_trn, y_trn, nas = proc_df(df_raw, ‘SalesPrice’)
This should do the trick Jeremy will explain this another video about what this option, but this was fixed in an update.
df_trn, y_trn, nas = proc_df(df_raw, ‘SalesPrice’)
This should do the trick Jeremy will explain this another video about what this option, but this was fixed in an update.
I am trying to apply RandomForestRegression to a dataset which has unsorted and Null dates in training as well as test data. I suspect this is causing the severe drop in my validation score. Can anyone guide me how to handle such scenarios?
P.S. Pardon me if this has been taught in future lessons, I’ve just watched the first two lectures.
Why is that we have only 14 splits. If the data cannot be split into two roughly equal halves, then we may have more that 14 splits for 20K samples.
If we are looking for the upper bound on the splits, worst case we may end up with 20K-1 splits. It’s not clear to me on why we use log2(samples) for the splits.
Fill missing values with fillna using median values for quantitative columns, for categorical cols missing vals are interpreted as -1 (0). There is a utility method proc_df that does this.
That is a good point! Thinking about equal sized splits helped me build intuition about how constructing a tree works, but you are absolutely right that this is neither precise nor correct.
In practice I think we are much more likely to see a number of splits closer to the lower bound than the upper of 20k-1. We probably do not want to have leaves of size 1 (this can be configured) due to overfitting. Also I am under the impression that the methods used for calculating value of splits will favor splitting into two splits of similar size vs one large and one small (the gini index and the information gain are both summation based). Not sure if this is correct though?
Anyhow, I started learning about trees from the point of completely not understanding how they worked to where now I hope I have some semi useful and correct intuition - really appreciate you chiming in @tutysara as this is definitely helps to refine my understanding!
Why are we worrying about the splits?
Because if I remember the tree diagram, the tree splits the data according to the feature Importance and other parameters…
Also as radek and you said earlier, we can have a path to the individual leaves in the worst case…(that’s bounded by the data size)
But we don’t want to do so…
We want the RF to split the data accordingly which might be sufficient for it to generalize eventually.
Please shed some more light into the ongoing discussion…
Thanks…
I am also on my learning journey and reading through the comments really helped understanding the points better and checking my understanding, thanks @radek for the writeups.
I was talking about the worst case scenario, since I was thinking it as the upper time bound on running the algorithm (asymptotic analysis). Your reply clarified this – its the most probable scenario we are talking about and not the worst case scenario. I guess we are on the same page, ty.
@ecdrid did it answered your question too? if not we can discuss more on this
tim thanks for these.
I loaded up lecture 11 on jeremy’s list and it fit the lecture 12 here
I found reading through the whole lecture first help me understand better. As I have some misunderstanding on Regularization on Logistic Model.
When Regularization is LARGE, weight are suppressed to 0, while regularization is small, weight are allowed to float freely from what the loss function determines.
I find plotting the weight of coefficients help me understand better. I randomly sample 1000 coefficient in the logistic model and plot some scatter plots and histogram. In general, reducing regularization increase the range and stand deviation of coefficients.
C = 1e-5 (Strong regularization) ,weights are suppressed to 0
C = 1e-1 (medium regularization) ,weight ranges from [-0.1 0.1]
C = 1e 10 (Weak Regularization), weights can go very high (y,axis, up to 4)
Now for the NVSVM++ part
I played around with the w_adj paramters, and I am surprised that w_adj = 0 performs better than the default setting w_adj = 0.4.
Gist:
Just figured out the colors of rectangles when we used to plot the Rf trees has a symbolic meaning(Jeremy had said that it meant something but left it for us to figure it out eventualy)
Try
~$ conda install graphviz
There is this independently organised ML competition at my workplace in which I have to predict prices. The training and test data contain overlapping dates (also missing dates). This may not be a correct way to formulate an ML problem but it is what it is. Is there a way to deal with this and still create a representable validation set which generalizes well? Tips or tricks?
I was going to test NBSVM on Kaggle Toxic comment competition and found out there is already a Kernel on it.
Did you also try NBSVM++? I am thinking to give it a try.
I stuck with Kaggle Kernel today, cannot import Pytorch on Kaggle Kernel today. Will copy over to Paperspace tomorrow and try.
ImportError: dlopen: cannot load any more object with static TLS
It’s because PyTorch is broken on the Kaggle’s Docker image…
Nothing to do with fast.ai
To see it yourself,
Just fire up a kernel and do
import torch
I tried installing with conda and ran into a (rather odd) conflict with pygpu.
pip install treeinterpreter
worked
I see, any temporary fix that works? I follow Github issue seems that it works before but is broken again.
I love the fast.ai courses and recognize that @jeremy optimizes for his iteration speed, but I would like to respectfully suggest that more explanatory variable names would be helpful in the long run for achieving the goal of making ML uncool.
Maybe
x,
y, and even
df are be okay (like
i) because of how standard they are, but IMHO some of the other abbreviations are overly terse. For example,
batch_size would be way more informative than
bs;
image_size or
img_size would be more understandable than
sz.
Yes,
bs and
sz are faster to type, and thus bring the iteration time down, but I suspect that your goals will be better reached by improving readability than by improving typing speed.
I would be willing to go through the code and make a PR with IMHO more understandable variable names. | http://forums.fast.ai/t/another-treat-early-access-to-intro-to-machine-learning-videos/6826?page=24 | CC-MAIN-2018-17 | refinedweb | 1,120 | 68.2 |
Author: Ian Lance Taylor
Last update: October 15, 2018
A proposal for how to make incompatible changes from Go 1 to Go 2 while breaking as little as possible.
Currently the Go language and standard libraries are covered by the Go 1 compatibility guarantee. The goal of that document was to promise that new releases of Go would not break existing working programs.
Among the goals for the Go 2 process is to consider changes to the language and standard libraries that will break the guarantee. Since Go is used in a distributed open source environment, we cannot rely on a flag day. We must permit the interoperation of different packages written using different versions of Go.
Every language goes through version transitions. As background, here are some notes on what other languages have done. Feel free to skip the rest of this section.
C language versions are driven by the ISO standardization process. C language development has paid close attention to backward compatibility. After the first ISO standard, C90, every subsequent standard has maintained strict backward compatibility. Where new keywords have been introduced, they are introduced in a namespace reserved by C90 (an underscore followed by an uppercase ASCII letter) and are made more accessible via a
#define macro in a header file that did not previously exist (examples are
_Complex, defined as
complex in
<complex.h>, and
_Bool, defined as
bool in
<stdbool.h>). None of the basic language semantics defined in C90 have changed.
In addition, most C compilers provide options to define precisely which version of the C standard the code should be compiled for (for example,
-std=c90). Most standard library implementations support feature macros that may be #define’d before including the header files to specify exactly which version of the library should be provided (for example,
_ISOC99_SOURCE). While these features have had bugs, they are fairly reliable and are widely used.
A key feature of these options is that code compiled at different language/library versions can in general all be linked together and work as expected.
The first standard, C90, did introduce breaking changes to the previous C language implementations, known informally as K&R C. New keywords were introduced, such as
volatile (actually that might have been the only new keyword in C90). The precise implementation of integer promotion in integer expressions changed from unsigned-preserving to value-preserving. Fortunately it was easy to detect code using the new keywords due to compilation errors, and easy to adjust that code. The change in integer promotion actually made it less surprising to naive users, and experienced users mostly used explicit casts to ensure portability among systems with different integer sizes, so while there was no automatic detection of problems not much code broke in practice.
There were also some irritating changes. C90 introduced trigraphs, which changed the behavior of some string constants. Compilers adapted with options like -no-trigraphs and -Wtrigraphs.
More seriously, C90 introduced the notion of undefined behavior, and declared that programs that invoked undefined behavior might take any action. In K&R C, the cases that C90 described as undefined behavior were mostly treated as what C90 called implementation-defined behavior: the program would take some non-portable but predictable action. Compiler writers absorbed the notion of undefined behavior, and started writing optimizations that assumed that the behavior would not occur. This caused effects that surprised people not fluent in the C standard. I won’t go into the details here, but one example of this (from my blog) is signed overflow.
C of course continues to be the preferred language for kernel development and the glue language of the computing industry. Though it has been partially replaced by newer languages, this is not because of any choices made by new versions of C.
The lessons I see here are that backward compatibility is both possible and valuable over long periods of time; that compiler options and library feature macros for selecting a specific language version work well, and let code compiled for different versions be linked together; that people will tolerate minor breakage when it is easy to detect and to fix; and that defining new cases as undefined behavior caused long-lasting pain that could not be fixed after the fact.
C++ language versions are also now driven by the ISO standardization process. Like C, C++ pays close attention to backward compatibility. C++ has been historically more free with adding new keywords (there are 10 new keywords in C++11). This works out OK because the newer keywords tend to be relatively long (
constexpr,
nullptr,
static_assert) and compilation errors make it easy to find code using the new keywords as identifiers.
C++ uses the same sorts of options for specifying the standard version for language and libraries as are found in C. It suffers from the same sorts of problems as C with regard to undefined behavior.
An example of a breaking change in C++ was the change in the scope of a variable declared in the initialization statement of a for loop. In the pre-standard versions of C++, the scope of the variable extended to the end of the enclosing block, as though it were declared immediately before the for loop. During the development of the first C++ standard, C++98, this was changed so that the scope was only within the for loop itself. Compilers adapted by introducing options like
-ffor-scope so that users could control the expected scope of the variable (for a period of time, when compiling with neither
-ffor-scope nor
-fno-for-scope, the GCC compiler used the old scope but warned about any code that relied on it).
Despite the relatively strong backward compatibility, code written in new versions of C++, like C++11, tends to have a very different feel than code written in older versions of C++. This is because styles have changed to use new language and library features. Raw pointers are less commonly used, range loops are used rather than standard iterator patterns, new concepts like rvalue references and move semantics are used widely, and so forth. People familiar with older versions of C++ can struggle to understand code written in new versions.
C++ is of course an enormously popular language, and the ongoing language revision process has not harmed its popularity.
Besides the lessons from C, I would add that a language can change its idiomatic feel substantially through purely backward compatible additions, to the point where code written for new versions is hard for users of older versions to follow, and that this kind of evolution need not harm the language's popularity.
I know less about Java than about the other languages I discuss, so there may be more errors here and there are certainly more biases.
Java is largely backward compatible at the byte-code level, meaning that Java version N+1 libraries can call code written in, and compiled by, Java version N (and N-1, N-2, and so forth). Java source code is also mostly backward compatible, although they do add new keywords from time to time.
The Java documentation is very detailed about potential compatibility issues when moving from one release to another.
The Java standard library is enormous, and new packages are added at each new release. Packages are also deprecated from time to time. Using a deprecated package will cause a warning at compile time (the warning may be turned off), and after a few releases the deprecated package will be removed (at least in theory).
Java does not seem to have many backward compatibility problems. The problems are centered on the JVM: an older JVM generally will not run newer releases, so you have to make sure that your JVM is at least as new as that required by the newest library you want to use.
Java arguably has something of a forward compatibility problem in that JVM bytecodes present a higher level interface than that of a CPU, and that makes it harder to introduce new features that cannot be directly represented using the existing bytecodes.
This forward compatibility problem is part of the reason that Java generics use type erasure. Changing the definition of existing bytecodes would have broken existing programs that had already been compiled into bytecode. Extending bytecodes to support generic types would have required a large number of additional bytecodes to be defined.
This forward compatibility problem, to the extent that it is a problem, does not exist for Go. Since Go compiles to machine code, and implements all required run time checks by generating additional machine code, there is no similar forward compatibility issue.
But, in general, Java shows that backward compatibility can be maintained over a long time even as the language and an enormous standard library keep growing, and that a deprecation cycle, in which a feature warns for a few releases before it is removed, is a workable way to drop library features.
Python 3.0 (also known as Python 3000) started development in 2006 and was initially released in 2008. In 2018 the transition is still incomplete. Some people continue to use Python 2.7 (released in 2010). This is not a path we want to emulate for Go 2.
The main reason for this slow transition appears to be lack of backward compatibility. Python 3.0 was intentionally incompatible with earlier versions of Python. Notably, the print statement became a function, and strings became Unicode text, with a separate bytes type for uninterpreted byte sequences.
Because Python is an interpreted language, and because there is no backward compatibility, it is impossible to mix Python 2 and Python 3 code in the same program. This means that for a typical program that uses a range of libraries, each of those libraries must be converted to Python 3 before the program can be converted. Since programs are in various states of conversion, libraries must support Python 2 and 3 simultaneously.
Python supports statements of the form
from __future__ import FEATURE. A statement like this changes the interpretation of the rest of the file in some way. For example,
from __future__ import print_function changes print from a statement into a built-in function for the rest of that file.
So, we knew it already, but: backward compatibility is essential, and incremental upgrades must be possible. When code written for two versions of a language cannot be mixed in one program, every library must be converted before its users can convert, and the transition of the whole ecosystem stalls for years.
The Perl 6 development process began in 2000. The first stable version of the Perl 6 spec was announced in 2015. This is not a path we want to emulate for Go 2.
There are many reasons for this slow path. Perl 6 was intentionally not backward compatible: it was meant to fix warts in the language. Perl 6 was intended to be represented by a spec rather than, as with previous versions of Perl, an implementation. Perl 6 started with a set of change proposals, but then continued to evolve over time, and then evolve some more.
Perl supports
use feature which is similar to Python's
from __future__ import. It changes the interpretation of the rest of the file to use a specified new language feature.
Pedantically speaking, we must have a way to speak about specific language versions. Each change to the Go language first appears in a Go release. We will use Go release numbers to define language versions. That is the only reasonable choice, but it can be confusing because standard library changes are also associated with Go release numbers. When thinking about compatibility, it will be necessary to conceptually separate the Go language version from the standard library version.
As an example of a specific change, type aliases were first available in Go language version 1.9. Type aliases were an example of a backward compatible language change. All code written in Go language versions 1.0 through 1.8 continued to work the same way with Go language 1.9. Code using type aliases requires Go language 1.9 or later.
Type aliases are an example of an addition to the language. Code using the type alias syntax
type A = B did not compile with Go versions before 1.9.
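As a small illustration (the package and type names are invented for this example), a file using the new syntax needs no version declaration at all; under Go 1.8 and earlier it simply fails to compile:

```go
package temperature

// Celsius is a defined type. Degrees is an alias: it is the same type
// as Celsius, not a new type. The "=" form is the Go 1.9 addition;
// the Go 1.8 compiler rejects this file with a syntax error.
type Celsius float64
type Degrees = Celsius
```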
Type aliases, and other backward compatible changes since Go 1.0, show us that for additions to the language it is not necessary for packages to explicitly declare the minimum language version that they require. Some packages changed to use type aliases. When such a package was compiled with Go 1.8 tools, the package failed to compile. The package author can simply say: upgrade to Go 1.9, or downgrade to an earlier version of the package. None of the Go tools need to know about this requirement; it's implied by the failure to compile with older versions of the tools.
It's true of course that programmers need to understand language additions, but the tooling does not. Neither the Go 1.8 tools nor the Go 1.9 tools need to explicitly know that type aliases were added in Go 1.9, other than in the limited sense that the Go 1.9 compiler will compile type aliases and the Go 1.8 compiler will not. That said, the possibility of specifying a minimum language version to get better error messages for unsupported language features is discussed below.
We must also consider language changes that simply remove features from the language. For example, issue 3939 proposes that we remove the conversion
string(i) for an integer value
i. If we make this change in, say, Go version 1.20, then packages that use this syntax will stop compiling in Go 1.20. (If you prefer to restrict backward incompatible changes to new major versions, then replace 1.20 by 2.0 in this discussion; the problem remains the same.)
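To make the removal concrete, here is the kind of code that would stop compiling, together with the two plausible rewrites, depending on which meaning the author wanted (the variable names are illustrative):

```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	i := 65

	// Today string(i) compiles and yields "A", the UTF-8 encoding of
	// code point 65, which is rarely what was intended. Under the
	// proposed removal that conversion no longer compiles, and the
	// code must be explicit about which result it wants:
	asRune := string(rune(i)) // "A": treat i as a code point
	asText := strconv.Itoa(i) // "65": format the number as text

	fmt.Println(asRune, asText)
}
```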
In this case, packages using the old syntax have no simple recourse. While we can provide tooling to convert pre-1.20 code into working 1.20 code, we can't force package authors to run those tools. Some packages may be unmaintained but still useful. Some organizations may want to upgrade to 1.20 without having to requalify the versions of packages that they rely on. Some package authors may want to use 1.20 even though their packages now break, but do not have time to modify their package.
These scenarios suggest that we need a mechanism to specify the maximum version of the Go language with which a package can be built.
Importantly, specifying the maximum version of the Go language should not be taken to imply the maximum version of the Go tools. The Go compiler released with Go version 1.20 must be able to build packages using Go language 1.19. This can be done by adding an option to cmd/compile (and, if necessary, cmd/asm and cmd/link) along the lines of the
-std option supported by C compilers. When cmd/compile sees the option, perhaps
-lang=go1.19, it will compile the code using the Go 1.19 syntax.
This requires cmd/compile to support all previous versions, one way or another. If supporting old syntaxes proves to be troublesome, the
-lang option could perhaps be implemented by passing the code through a convertor from the old version to the current. That would keep support of old versions out of cmd/compile proper, and the convertor could be useful for people who want to update their code. But it is unlikely that supporting old language versions will be a significant problem.
Naturally, even though the package is built with the language version 1.19 syntax, it must in other respects be a 1.20 package: it must link with 1.20 code, be able to call and be called by 1.20 code, and so forth.
The go tool will need to know the maximum language version so that it knows how to invoke cmd/compile. Assuming we continue with the modules experiment, the logical place for this information is the go.mod file. The go.mod file for a module M can specify the maximum language version for the packages that it defines. This would be honored when M is downloaded as a dependency by some other module.
The maximum language version is not a minimum language version. If a module requires features in language 1.19, but can be built with 1.20, we can say that the maximum language version is 1.20. If we build with Go release 1.19, we will see that we are at less than the maximum, and simply build with language version 1.19. Maximum language versions greater than that supported by the current tools can simply be ignored. If we later build with Go release 1.21, we will build the module with
-lang=go1.20.
This means that the tools can set the maximum language version automatically. When we use Go release 1.30 to release a module, we can mark the module as having maximum language version 1.30. All users of the module will see this maximum version and do the right thing.
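One possible shape for this, reusing the kind of directive the modules experiment already defines (the module path, the dependency, and the exact directive are illustrative, not a settled design):

```
module example.com/mymodule

// Recorded automatically when the module was released using Go 1.20.
// Older releases ignore the higher version and build with their own
// language version; newer releases pass -lang=go1.20 to the compiler.
go 1.20

require example.com/somedependency v1.2.3
```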
This implies that we will have to support old versions of the language indefinitely. If we remove a language feature after version 1.25, version 1.26 and all later versions will still have to support that feature if invoked with the
-lang=go1.25 option (or
-lang=go1.24 or any other earlier version in which the feature is supported). Of course, if no
-lang option is used, or if the option is
-lang=go1.26 or later, the feature will not be available. Since we do not expect wholesale removals of existing language features, this should be a manageable burden.
I believe that this approach suffices for language removals.
For better error messages it may be useful to permit the module file to specify a minimum language version. This is not required: if a module uses features introduced in language version 1.N, then building it with 1.N-1 will fail at compile time. This may be confusing, but in practice it will likely be obvious what the problem is.
That said, if modules can specify a minimum language version, the go tool could produce an immediate, clear error message when building with 1.N-1.
The minimum language version could potentially be set by the compiler or some other tool. When compiling each file, see which features it uses, and use that to determine the minimum version. It need not be precisely accurate.
This is just a suggestion, not a requirement. It would likely provide a better user experience as the language changes.
The Go language can also change in ways that are not additions or removals, but are instead changes to the way a specific language construct works. For example, in Go 1.1 the size of the type
int on 64-bit hosts changed from 32 bits to 64 bits. This change was relatively harmless, as the language does not specify the exact size of
int. Potentially, though, some Go 1.0 programs continued to compile with Go 1.1 but stopped working.
A redefinition is a case where we have code that compiles successfully with both versions 1.N and version 1.M, where M > N, and where the meaning of the code is different in the two versions. For example, issue 20733 proposes that variables in a range loop should be redefined in each iteration. Though in practice this change seems more likely to fix programs than to break them, in principle this change might break working programs.
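To see why this is a redefinition rather than an addition or a removal, consider the classic pattern whose meaning would change; it compiles under both the old and the proposed semantics, but behaves differently:

```go
package main

import "fmt"

func main() {
	var prints []func()
	for _, v := range []int{1, 2, 3} {
		// Under the Go 1 semantics discussed here there is a single
		// variable v shared by all iterations, so every closure sees
		// its final value and the program prints 3 3 3. With the
		// proposed per-iteration variable, each closure captures its
		// own v and the program prints 1 2 3.
		prints = append(prints, func() { fmt.Println(v) })
	}
	for _, p := range prints {
		p()
	}
}
```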
Note that a new keyword normally cannot cause a redefinition, though we must be careful to ensure that that is true before introducing one. For example, if we introduce the keyword
check as suggested in the error handling draft design, and we permit code like
check(f()), that might seem to be a redefinition if
check is defined as a function in the same package. But after the keyword is introduced, any attempt to define such a function will fail. So it is not possible for code using
check, under whichever meaning, to compile with both version 1.N and 1.M. The new keyword can be handled as a removal (of the non-keyword use of
check) and an addition (of the keyword
check).
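A short sketch of why the two meanings cannot coexist; check here is taken from the error handling draft design and is not part of any released version of Go:

```go
package demo

// Under the current language this is an ordinary function, and the call
// below is an ordinary call. Once check becomes a keyword, this very
// declaration is a syntax error, so no single package can compile with
// check carrying both meanings at once.
func check(err error) {
	if err != nil {
		panic(err)
	}
}

func run(f func() error) {
	check(f())
}
```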
In order for the Go ecosystem to survive a transition to Go 2, we must minimize these sorts of redefinitions. As discussed earlier, successful languages have generally had essentially no redefinitions beyond a certain point.
The complexity of a redefinition is, of course, that we can no longer rely on the compiler to detect the problem. When looking at a redefined language construct, the compiler cannot know which meaning is meant. In the presence of redefined language constructs, we cannot determine the maximum language version. We don't know if the construct is intended to be compiled with the old meaning or the new.
The only possibility would be to let programmers set the language version. In this case it would be either a minimum or maximum language version, as appropriate. It would have to be set in such a way that it would not be automatically updated by any tools. Of course, setting such a version would be error prone. Over time, a maximum language version would lead to surprising results, as people tried to use new language features, and failed.
I think the only feasible safe approach is to not permit language redefinitions.
We are stuck with our current semantics. This doesn‘t mean we can’t improve them. For example, for issue 20733, the range issue, we could change range loops so that taking the address of a range parameter, or referring to it from a function literal, is forbidden. This would not be a redefinition; it would be a removal. That approach might eliminate the bugs without the potential of breaking code unexpectedly.
Build tags are an existing mechanism that can be used by programs to choose which files to compile based on the release.
Build tags name release versions, which look just like language versions, but, speaking pedantically, are different. In the discussion above we've talked about using Go release 1.N to compile code with language version 1.N-1. That is not possible using build tags.
Build tags can be used to set the maximum or a minimum release, or both, that will be used to compile a specific file. They can be a convenient way to take advantage of language changes that are only available after a certain version; that is, they can be used to set a minimum language version when compiling a file.
As discussed above, though, what is most useful for language changes is the ability to set a maximum language version. Build tags don‘t provide that in a useful way. If you use a build tag to set your current release version as your maximum version, your package will not build with later releases. Setting a maximum language version is only possible when it is set to a version before the current release, and is coupled with an alternate implementation that is used for the later versions. That is, if you are building with 1.N, it’s not helpful to use a build tag of
!1.N+1. You could use a build tag of
!1.M where
M < N, but in almost all cases you will then need a separate file with a build tag of
1.M+1.
Build tags can be used to handle language redefinitions: if there is a language redefinition at language version
1.N, programmers can write one file with a build tag of
!1.N using the old semantics and a different file with a build tag of
1.N using the new semantics. However, these duplicate implementations are a lot of work, it's hard to know in general when it is required, and it would be easy to make a mistake. The availability of build tags is not enough to overcome the earlier comments about not permitting any language redefinitions.
It would be possible to add a mechanism to Go similar to Python‘s
from __future__ import and Perl’s
use feature. For example, we could use a special import path, such as
import "go2/type-aliases". This would put the required language features in the file that uses them, rather than hidden away in the go.mod file.
This would provide a way to describe the set of language additions required by the file. It's more complicated, because instead of relying on a language version, the language is broken up into separate features. There is no obvious way to ever remove any of these special imports, so they will tend to accumulate over time. Python and Perl avoid the accumulation problem by intentionally making a backward incompatible change. After moving to Python 3 or Perl 6, the accumulated feature requests can be discarded. Since Go is trying to avoid a large backward incompatible change, there would be no clear way to ever remove these imports.
This mechanism does not address language removals. We could introduce a removal import, such as
import "go2/no-int-to-string", but it's not obvious why anyone would ever use it. In practice, there would be no way to ever remove language features, even ones that are confusing and error-prone.
This kind of approach doesn't seem suitable for Go.
One of the benefits of a Go 2 transition is the chance to release some of the standard library packages from the Go 1 compatibility guarantee. Another benefit is the chance to move many, perhaps most, of the packages out of the six month release cycle. If the modules experiment works out it may even be possible to start doing this sooner rather than later, with some packages on a faster cycle.
I propose that the six month release cycle continue, but that it be treated as a compiler/runtime release cycle. We want Go releases to be useful out of the box, so releases will continue to include the current versions of roughly the same set of packages that they contain today. However, many of those packages will actually be run on their own release cycles. People using a given Go release will be able to explicitly choose to use newer versions of the standard library packages. In fact, in some cases they may be able to use older versions of the standard library packages where that seems useful.
Different release cycles would require more resources on the part of the package maintainers. We can only do this if we have enough people to manage it and enough testing resources to test it.
We could also continue using the six month release cycle for everything, but make the separable packages available separately for use with different, compatible, releases.
Still, some parts of the standard library must be treated as core libraries. These libraries are closely tied to the compiler and other tools, and must strictly follow the release cycle. Neither older nor newer versions of these libraries may be used.
Ideally, these libraries will remain on the current version 1. If it seems necessary to change any of them to version 2, that will have to be discussed on a case by case basis. At this time I see no reason for it.
The tentative list of core libraries is:
I am, perhaps optimistically, omitting the net, os, and syscall packages from this list. We'll see what we can manage.
The penumbra standard library consists of those packages that are included with a release but are maintained independently. This will be most of the current standard library. These packages will follow the same discipline as today, with the option to move to a v2 where appropriate. It will be possible to use
go get to upgrade or, possibly, downgrade these standard library packages. In particular, fixes can be made as minor releases separately from the six month core library release cycle.
The go tool will have to be able to distinguish between the core library and the penumbra library. I don't know precisely how this will work, but it seems feasible.
When moving a standard library package to v2, it will be essential to plan for programs that use both v1 and v2 of the package. Those programs will have to work as expected, or if that is impossible will have to fail cleanly and quickly. In some cases this will involve modifying the v1 version to use an internal package that is also shared by the v2 package.
Standard library packages will have to compile with older versions of the language, at least the two previous release cycles that we currently support.
The ability to support
go get of standard library packages will permit us to remove packages from the releases. Those packages will continue to exist and be maintained, and people will be able to retrieve them if they need them. However, they will not be shipped by default with a Go release.
This will include packages like
and perhaps other packages that do not seem to be widely useful.
We should in due course plan a deprecation policy for old packages, to move these packages to a point where they are no longer maintained. The deprecation policy will also apply to the v1 versions of packages that move to v2.
Or this may prove to be too problematic, and we should never deprecate any existing package, and never remove them from the standard releases.
If the above process works as planned, then in an important sense there never will be a Go 2. Or, to put it a different way, we will slowly transition to new language and library features. We could at any point during the transition decide that now we are Go 2, which might be good marketing. Or we could just skip it (there has never been a C 2.0, why have a Go 2.0?).
Popular languages like C, C++, and Java never have a version 2. In effect, they are always at version 1.N, although they use different names for that state. I believe that we should emulate them. In truth, a Go 2 in the full sense of the word, in the sense of an incompatible new version of the language or core libraries, would not be a good option for our users. A real Go 2 would, perhaps unsurprisingly, be harmful. | https://go.googlesource.com/proposal/+/bf0f39f6ff02ed46ce3967d6fba970d3656e67ea/design/28221-go2-transitions.md | CC-MAIN-2020-24 | refinedweb | 4,893 | 64 |
How can I open a new terminal and execute a rosrun command through a cpp program ?
i need a code example
asked 2017-01-19 12:49:14 -0600
updated 2017-01-20 00:53:22 -0600
i need a code example
To run any shell commands from a cpp program, you can use the
std::system function. ie
#include <cstdlib> // I think 'system' is in here? int main(void) { std::system("rosrun [package] [node]"); // replace [package] and [node] with the relevant text return EXIT_SUCCESS; }
This won't work unless the master node is running (i.e you have called
roscore or
roslaunch first)
Normally when you want to start a node, you run all the nodes you'll need in the program using a launch file. Instead of generating new nodes during runtime, you could also create services. The Cpp API gives you basically all the interfaces you'll need.
I would highly recommend doing the tutorials. They take ages, but go a long way to understanding ROS.
Also, before posting a question next time, look for the answer first, and if these don't answer your question, follow the support guidelines. A more descriptive title might be "How to execute rosrun command from cpp", then the informal text can go in the text body.
Please start posting anonymously - your entry will be published after you log in or create a new account.
Asked: 2017-01-19 12:49:14 -0600
Seen: 765 times
Last updated: Jan 23 '17
How could I use "ROS" commands in a bash file ?
ROS commands from c++ applications
If i currently have kinetic instead of indigo, and i have to install a package that was design for indigo, (ros-indigo-image-pipeline) how do i check that i can replace indigo with kinetic or if it is even available for kinetic?
Problem in AR. Drone Movement (angular.z)
What are some examples of the code I'd write that be for capturing sounds or acceleration?
Start multiple nodes at boot and give access to ports
Ar Drone 2.0 angular movements [closed]
I have a python program. How can I integrate ROS data in it? | https://answers.ros.org/question/252421/how-can-i-open-a-new-terminal-and-execute-a-rosrun-command-through-a-cpp-program/ | CC-MAIN-2021-04 | refinedweb | 362 | 71.34 |
In my original method the resulting sphere had a slight crease along the equator where the quantization error was worst. In the new method there is less error overall, and more importantly the quantization error is spread more evenly, eliminating the equatorial crease.
In my old quantization method I was basically projecting a sphere onto the XY plane. Therefore the quantization method was pretty good in places where the sphere surface is not perpendicular to the plane, but gets progressively worse as the surface becomes perpendicular to the projection plane. You can see in the diagram that normals in the red area map to a much smaller part of the plane than normals in the green area. More specificaly adding 2 bits to the sample size generally increases the accuracy by a factor 4 but near the edges the accuracy increases only twice.
One way to avoid this effect is to use another type of projection. By selecting a radial projection on the plane that goes through points X0=(1,0,0),Y0={0,1,0) and Z0=(0,0,1). That is, one casts a ray from the origin to the given point on the sphere and finds the intersection between the ray and the plane. To simplify things even further the new method uses a non-orthogonal coordinate system on the plane such that X0->(0,0), Y0->(1,0), Z0->(0,1)
Since we only consider the octant where x,y,z > 0, the sphere is nowhere perpendicular to the plane. In fact the maximum angle between the sphere and the plane is around 30 degrees. In this case the projection is everywhere nice and regular and each bit of the sample always corresponds to a 2-fold increase in accuracy. Calculations show that the spread between the maximum accuracy and the minimal accuracy is only about 5 times.
The formula for the projective transform is:
Which we can reduce to:
zbits = 1/3 * (1 - (x+y-2*z)/(x+y+z))
ybits = 1/3 * (1 - (x-2*y+z)/(x+y+z))
The ybits and zbits are now between 0 and 1, and so is ( ybits + zbits ). Thus we can apply a similar bit-packing method as in the original algorithm. In the original method since the diagonal of the table was all zero, we could quantize two values, x and y, from 0-127, since the diagonal could be shared in the two encodings. Since in this mapping the diagonal varies, we have to quantize two values ybits and zbits from 0-126 so that we code nothing on the diagonal.
zbits = z / ( x + y + z )
ybits = y / ( x + y + z )
To unpack we use the reverse transform and then normalize the vector so it again lies on the sphere. Since normalization is just a scaling by some number, we can pre-tabulate the amount of required scaling for each u and v and thus avoid costly divisions and square roots. Logically what we do is:
There is another way to understand this new projection. We know the length of unit vectors is always 1, so we don't have to store that information. We can store any vector that is colinear with the normal as the quantized vector and then just normalize the vector once we unpack it ( by multiplying it with a stored normalization constant ). So all we need to do is store the ratio of say x/(x+y+z) and y/(x+y+z) as our two quantized values. Any ratios that allow us to resconstruct the proportions of x,y,z will do. Since we are quantizing the values in the range 0-126 just do the following:
scaling = scalingTable[zbits][ybits]
v.z = zbits * scaling
v.y = ybits * scaling
v.x = (1 - zbits - ybits ) * scaling
Now xbits is the ratio of x/(x+y+z) quantized in the range of 0-126, and ybits is the ratio y/(x+y+z). Given these two ratios we can unpack a vector colinear with the original vector like so:
w = 126 / ( x + y + z )
xbits = x * w;
ybits = y * w;
Finally to get the original normal back, we just multiply the unpacked vector by the appropriate scaling constant:
x = xbits
y = ybits
z = 126 - x - y
A quick review should convince you that both methods are in fact identical.
x *= scaling;
y *= scaling;
z *= scaling;
This new method has a slightly higher cost. To pack a vector from 12 bytes to 2 bytes the old method used 4 multiplies and 2 adds, this new method requires 2 multiplies, 2 adds and a divide, which on modern processors may be almost the same speed. The unpacking method costs 3 extra multiplies and two adds. Also no big deal. And look at that near perfect accuracy, at least for lighting.
Below is a reimplementation of the original quantized vector class with the improved quantization method.
#ifndef _3D_UNITVEC_H #define _3D_UNITVEC_H #include #include #include "3dmath/vector.h" #define UNITVEC_DECLARE_STATICS \ float cUnitVector::mUVAdjustment[0x2000]; \ c3dVector cUnitVector::mTmpVec; // upper 3 bits #define SIGN_MASK 0xe000 #define XSIGN_MASK 0x8000 #define YSIGN_MASK 0x4000 #define ZSIGN_MASK 0x2000 // middle 6 bits - xbits #define TOP_MASK 0x1f80 // lower 7 bits - ybits #define BOTTOM_MASK 0x007f // unitcomp.cpp : A Unit Vector to 16-bit word conversion algorithm // based on work of Rafael Baptista (rafael@oroboro.com) // Accuracy improved by O.D. (punkfloyd@rocketmail.com) // // a compressed unit vector. reasonable fidelty for unit vectors in a 16 bit // package. Good enough for surface normals we hope. class cUnitVector : public c3dMathObject { public: cUnitVector() { mVec = 0; } cUnitVector( const c3dVector& vec ) { packVector( vec ); } cUnitVector( unsigned short val ) { mVec = val; } cUnitVector& operator=( const c3dVector& vec ) { packVector( vec ); return *this; } operator c3dVector() { unpackVector( mTmpVec ); return mTmpVec; } void packVector( const c3dVector& vec ) { // convert from c3dVector to cUnitVector assert( vec.isValid()); c3dVector tmp = vec; // input vector does not have to be unit length // assert( tmp.length() <= 1.001f ); mVec = 0; if ( tmp.x < 0 ) { mVec |= XSIGN_MASK; tmp.x = -tmp.x; } if ( tmp.y < 0 ) { mVec |= YSIGN_MASK; tmp.y = -tmp.y; } if ( tmp.z < 0 ) { mVec |= ZSIGN_MASK; tmp.z = -tmp.z; } // project the normal onto the plane that goes through // X0=(1,0,0),Y0=(0,1,0),Z0=(0,0,1). // on that plane we choose an (projective!) coordinate system // such that X0->(0,0), Y0->(126,0), Z0->(0,126),(0,0,0)->Infinity // a little slower... old pack was 4 multiplies and 2 adds. // This is 2 multiplies, 2 adds, and a divide.... float w = 126.0f / ( tmp.x + tmp.y + tmp.z ); long xbits = (long)( tmp.x * w ); long ybits = (long)( tmp.y * w ); assert( xbits < 127 ); assert( xbits >= 0 ); assert( ybits < 127 ); assert( ybits >= 0 ); // Now we can be sure that 0<=xp<=126, 0<=yp<=126, 0<=xp+yp<=126 // however for the sampling we want to transform this triangle // into a rectangle. if ( xbits >= 64 ) { xbits = 127 - xbits; ybits = 127 - ybits; } // now we that have xp in the range (0,127) and yp in the range (0,63), // we can pack all the bits together mVec |= ( xbits << 7 ); mVec |= ybits; } void unpackVector( c3dVector& vec ) { // if we do a straightforward backward transform // we will get points on the plane X0,Y0,Z0 // however we need points on a sphere that goes through these points. // therefore we need to adjust x,y,z so that x^2+y^2+z^2=1 // by normalizing the vector. We have already precalculated the amount // by which we need to scale, so all we do is a table lookup and a // multiplication // get the x and y bits long xbits = (( mVec & TOP_MASK ) >> 7 ); long ybits = ( mVec & BOTTOM_MASK ); // map the numbers back to the triangle (0,0)-(0,126)-(126,0) if (( xbits + ybits ) >= 127 ) { xbits = 127 - xbits; ybits = 127 - ybits; } // do the inverse transform and normalization // costs 3 extra multiplies and 2 subtracts. No big deal. 
float uvadj = mUVAdjustment[mVec & ~SIGN_MASK]; vec.x = uvadj * (float) xbits; vec.y = uvadj * (float) ybits; vec.z = uvadj * (float)( 126 - xbits - ybits ); // set all the sign bits if ( mVec & XSIGN_MASK ) vec.x = -vec.x; if ( mVec & YSIGN_MASK ) vec.y = -vec.y; if ( mVec & ZSIGN_MASK ) vec.z = -vec.z; assert( vec.isValid()); } static void initializeStatics() { for ( int idx = 0; idx < 0x2000; idx++ ) { long xbits = idx >> 7; long ybits = idx & BOTTOM_MASK; // map the numbers back to the triangle (0,0)-(0,127)-(127,0) if (( xbits + ybits ) >= 127 ) { xbits = 127 - xbits; ybits = 127 - ybits; } // convert to 3D vectors float x = (float)xbits; float y = (float)ybits; float z = (float)( 126 - xbits - ybits ); // calculate the amount of normalization required mUVAdjustment[idx] = 1.0f / sqrtf( y*y + z*z + x*x ); assert( _finite( mUVAdjustment[idx])); //cerr << mUVAdjustment[idx] << "\t"; //if ( xbits == 0 ) cerr << "\n"; } } void test() { #define TEST_RANGE 4 #define TEST_RANDOM 100 #define TEST_ANGERROR 1.0 float maxError = 0; float avgError = 0; int numVecs = 0; {for ( int x = -TEST_RANGE; x < TEST_RANGE; x++ ) { for ( int y = -TEST_RANGE; y < TEST_RANGE; y++ ) { for ( int z = -TEST_RANGE; z < TEST_RANGE; z++ ) { if (( x + y + z ) == 0 ) continue; c3dVector vec( (float)x, (float)y, (float)z ); c3dVector vec2; vec.normalize(); packVector( vec ); unpackVector( vec2 ); float ang = vec.dot( vec2 ); ang = (( fabs( ang ) > 0.99999 w = 0; w < TEST_RANDOM; w++ ) { c3dVector vec( genRandom(), genRandom(), genRandom()); c3dVector vec2; vec.normalize(); packVector( vec ); unpackVector( vec2 ); float ang =vec.dot( vec2 ); ang = (( x = 0; x < 50; x++ ) { c3dVector vec( (float)x, 25.0f, 0.0f ); c3dVector vec2; vec.normalize(); packVector( vec ); unpackVector( vec2 ); float ang = vec.dot( vec2 ); ang = (( fabs(; }} cerr << "max angle error: " << maxError << ", average error: " << avgError / numVecs << ", num tested vecs: " << numVecs << endl; } friend ostream& operator<< ( ostream& os, const cUnitVector& vec ) { os << vec.mVec; return os; } protected: unsigned short mVec; static float mUVAdjustment[0x2000]; static c3dVector mTmpVec; }; #endif // _3D_VECTOR_H
Search for your code on google you will find that Valve took your code and copyrighted it their own. Your name is still in the comments though. I hope you don't mind ! heh..
btw, I'm also taking it. I hope it will serve my needs.
Do you think you could describe that method as being a projection on a cube or a tetrahedron of sorts, rather than 2 planes ? | https://www.gamedev.net/resources/_/technical/math-and-physics/higher-accuracy-quantized-normals-r1252 | CC-MAIN-2017-04 | refinedweb | 1,690 | 62.27 |
Quarkus – an IO thread and a worker thread walk into a bar: a microbenchmark story
A competitor recently published a microbenchmark comparing the performance of their stack to Quarkus. The Quarkus team feels this microbenchmark shouldn’t be taken at face value because it wasn’t making a like-to-like comparison leading to incorrect conclusions. Both of the two frameworks under comparison support reactive processing. Reactive processing enables running the business logic directly on the IO thread, which ultimately performs better in microbenchmark focusing on response time and concurrency. The microbenchmark should have been written so that both frameworks (or neither framework) obtain this benefit. Anyway, this turns out to be a very interesting topic and good information for Quarkus users, so read on.
tl; dr;
Quarkus has great performance for both imperative and reactive workloads. It’s because Quarkus is itself based on Eclipse Vert.x, a mature top performing reactive framework, in such a way that allows you to layer, mix and match the IO paradigm that best fits your use-case.
If you have a REST scenario well suited to run purely on the IO thread, add a Vert.x Reactive Route using Quarkus Reactive Routes and your app will get better performance than using Quarkus RESTEasy.
We ran this low-work REST + validation competitor-written microbenchmark which features no blocking operation, just returning static data. When using Quarkus Reactive Routes to run Quarkus purely on the IO thread, we observed 2.6x times the requests/sec and 30% less memory usage (RSS) than running with Quarkus RESTEasy (which mixes IO thread and worker thread). But that’s on a microbenchmark purpose built to this specific scenario (more on that later).
More interesting read
The microbenchmark itself is uninteresting, but it is a good demonstrator of a phenomenon that can happen in reactive stacks. Let’s use it as a vehicle to learn more about Quarkus and its reactive engine.
Imperative and Reactive: the elevator pitch
This blog post does not explain the fundamental differences between the imperative execution model and the reactive execution model. However, to understand why we see so much difference in the mentioned microbenchmark, we need some notions.
In general, Java web applications use imperative programming combined with blocking IO operations. This is incredibly popular because it is easier to reason about the code. Things get executed sequentially. To make sure one request is not affected by another, they are run on different threads. When your workload needs to interact with a database or another remote service, it relies on blocking IO. The thread is blocked waiting for the answer. Other requests running on different threads are not slowed down significantly. But this means one thread for every concurrent request, which limits the overall concurrency.
On the other side, the reactive execution model embraces asynchronous development models and non blocking IOs. With this model, multiple requests can be handled by the same thread. When the processing of a request cannot make progress anymore (because it requests a remote service, or interacts with a database), it uses non blocking IO. This releases the thread immediately, which can then be used to serve another request. When the result of the IO operation is available, the processing of the request is restored and continues its execution. This model enables the usage of the IO thread to handle multiple requests. There are two significant benefits. First, the response time is smaller because it does not have to jump to another thread. Second, it reduces memory consumption as it decreases the usage of threads. The reactive model uses the hardware resources more efficiently, but… there is a significant drawback. If the processing of a request starts to block, this gets real bad. No other request can be handled. To avoid this, you need to learn how to write non blocking code, how to structure asynchronous processing, and how to use non blocking IOs. It’s a paradigm shift.
In Quarkus, we want to make the shift as easy as possible. However, we have observed that the majority of user applications are written using the imperative model. That is why, when the user application uses JAX-RS, Quarkus defaults to execute the (imperative) workload to a worker thread.
Hello world microbenchmark: IO thread or worker thread?
Back to the competitor’s microbenchmark, we have a REST endpoint doing some trivial processing and some equally trivial validation. Pretty much no meaningful business work. This is the Hello World of REST for all intents and purposes.
When you run the microbenchmark with Quarkus RESTEasy, the request is handled by the reactive engine on the IO thread but then the processing work is handed over to a second thread from the worker thread pool. That’s called dispatch. When your microbenchmark does as little as Hello World, then the dispatch overhead is proportionally big. The dispatch overhead is not visible in most (real life) applications but is very visible in artificial constructs like microbenchmarks.
The competitor’s stack, however, runs all the request operations on the IO thread by default. So what this microbenchmark was actually comparing is just the cost of dispatching to the worker thread pool. And frankly (according to the competitor’s numbers) and in spite of this extra dispatch work, Quarkus did very very well achieving ~95% of the competitor’s throughput today! I say today because we are always improving upon performance, and in fact we expect to see further gains in the soon to be released 1.4 release.
When compared at a disadvantage (dispatching to a worker thread), Quarkus is nevertheless almost as fast in throughput.
… But wait, Quarkus can also avoid the dispatch altogether and run operations on the IO Thread. This is a more accurate comparison to how the competitor’ stack was configured to do as in both case, it is the user’s responsibility to ask for a dispatch if and when needed by the application. To compare apples to apples, let’s use Quarkus Reactive Routes backed by Eclipse Vert.x. In this model, operations are run on the IO thread by default.
@ApplicationScoped public class MyDeclarativeRoutes { @Inject Validator validator; @Route(path = "/hello/:name", methods = HttpMethod.GET) void greetings(RoutingExchange ex) { RequestWrapper requestWrapper = new RequestWrapper(ex.getParam("name").orElse("world")); Set<ConstraintViolation<RequestWrapper>> violations = validator.validate(requestWrapper)); if( violations.size() == 0) { ex.ok("hello " + requestWrapper.name); } else { StringBuilder validationError = new StringBuilder(); violations.stream().forEach(violation > validationError.append(violation.getMessage())); ex.response().setStatusCode(400).end(validationError.toString()); } } private class RequestWrapper { @NotBlank public String name; public RequestWrapper(String name) { this.name = name; } } }
This is not very different from your JAX-RS equivalent.
Throughput Numbers
We ran the microbenchmark application in a docker container constrained to reflect a typical resource allocation to a container orchestrated by Kubernetes:
- 4 CPU
- 256 MB of RAM
- and
-Xmx128mheap usage for the Java process
We saw that Quarkus using Reactive Routes ran 2.6 times the requests/sec. 2.6 times! It makes sense! Remember the application code virtually does nothing, so the dispatch cost is comparatively high. If you were to write a more real life workload (maybe even having a blocking operation like a JPA access and therefore forcing a dispatch), then the results would be very different. Context matters!
You can find code and how to reproduce the microbenchmark here on GitHub.
Table 1. Microbenchmark results comparing Quarkus dispatching to a worker thread vs running purely on the IO thread
1. ‘Mean Start Time to First Request’ was measured using an application built as an UberJar
In a fair comparison (purely remaining on the IO thread – no dispatch), Quarkus more than double its throughput.
As the generated load tends towards the maximum throughput of the system under test, the response time experienced by the client increases exponentially. So the best system (for the workload) has a vertical line as far to the right as possible. Equally important is to have as flat a line as possible for the longest time. You do not want the response time to degrade before the system reaches maximum throughput.
By the way, in the competitor microbenchmark Quarkus is shown as consuming more RSS (more RAM).This is also explained by the worker thread pool being operated whereas the competitor did not have a worker thread pool. The Quarkus Reactive Routes solution (on a pure IO event run) shows a 30% RSS usage reduction.
In this graph, the lower, the better. We see that the pure IO thread solution manages to increase throughput with little to no change to the memory usage (RSS), that’s very good!
Conclusion
Quarkus offers you the ability to safely run blocking operations, run non blocking operations on the IO thread or mix both models. The Quarkus team takes performance very seriously and we see Quarkus as offering great numbers whether you use the imperative or reactive models. In more realistic workloads, the dispatch cost would be much less significant, you would not see such drastic differences between the two approaches. As usual, test as close to your real application as possible.
Mystery solved. Benchmarks are hard, challenge them. But the moral of the story is that in all bad comes some good. We’ve now learned how to run Quarkus applications entirely on the IO thread. And how in some situations that can make a big difference. Remember, don’t block! In fact, Quarkus can warn you if you do so. Oh and we also learned that Quarkus is so fast, it can even beat itself ;p
This blog post originally appeared on the Quarkus blog. | https://jaxenter.com/quarkus-io-thread-microbenchmark-171748.html | CC-MAIN-2021-10 | refinedweb | 1,603 | 55.34 |
QVariant conversion from UserType to Int
I'm trying to take advantage of the new functionality in Qt5.2 allowing conversion from a UserType to an Int. Here's my code:
@#include <QString>
#include <QMetaType>
#include <QDebug>
class UserTypeClass
{
public:
UserTypeClass(int data=0);
UserTypeClass( const UserTypeClass& );
int toInt(bool *ok) const;
bool operator==( const UserTypeClass& rhs ) const;
bool operator<( const UserTypeClass& rhs ) const;
private:
int m_data;
};
Q_DECLARE_METATYPE(UserTypeClass);
//---------------------------------------------------------------
UserTypeClass::UserTypeClass(int data) : m_data(data & 0x03)
{
qDebug() << "UserTypeClass ctor " << this << ' ' << data << " -> " << m_data;
}
//---------------------------------------------------------------
int UserTypeClass::toInt(bool* ok) const
{
if(ok)
{
*ok = (m_data >= 0 && m_data < 4); //*ok = true;
qDebug() << "UserTypeClass::toInt " << this << ' ' << *ok << ' ' << m_data;
}
else
{
qDebug() << "UserTypeClass::toInt " << this << " not set " << m_data;
}
return m_data;
}
//---------------------------------------------------------------
bool UserTypeClass::operator==( const UserTypeClass& rhs ) const
{
return m_data == rhs.m_data;
}
//---------------------------------------------------------------
bool UserTypeClass::operator<( const UserTypeClass& rhs ) const
{
return m_data < rhs.m_data;
}
//---------------------------------------------------------------
UserTypeClass::UserTypeClass( const UserTypeClass& other )
: m_data( other.m_data )
{
qDebug() << "UserTypeClass copy " << &other << " to " << this << ' ' << m_data;
}
int main(int argc, char **argv)
{
//QMetaType::registerConverter< UserTypeClass, int > ( &UserTypeClass::toInt );
int (UserTypeClass::f)(bool) const = &UserTypeClass::toInt;
QMetaType::registerConverter< UserTypeClass, int > ( f );
QVariant value;
value.setValue(UserTypeClass(3));
qDebug() << "main(): value = " << value.toInt();
}@
And here's my output:
UserTypeClass ctor 0x39fc14 3 -> 3
UserTypeClass copy 0x39fc14 to 0x758200 3
UserTypeClass::toInt 0x39fc1c false 6337224
UserTypeClass::toInt 0x758200 true 3
main(): value = 3
I'm using Qt5.2.1 on Windows 7, msvc10. Also tried Qt5.3.1. Haven't tried linux yet...
My ctor and copy ctor are called as expected, but the toInt() belonging to the object I placed in the QVariant does not seem to get called. How do I know? I cheated - In toInt(), I know m_data should be less than 4, so when I notice it's not, I set ok=false. Only then does the proper toInt() get called (Note 'this' of the second call to toInt() is the same as the copy constructed value). I have no idea where the first call to toInt() originated from.
What am I doing wrong? I can't rely on this cheat in production, so how can I make sure the correct toInt() will be called in my code?
(ps, I tried adding a call to QMetaType::registerComparators< UserTypeClass >(); in main, but that didn't help.)
thanks,
glenn
.
Debugging the problem shows that "value.toInt()" returns the value of the QVariant value's member variable to store integers and not what it gets from calling UserTypeClass's toInt(bool *) function, though it actually calls this function as the log already proved.
Sorry, this does not really help you, but at least you know that the problem still exists in Qt 5.3.1 using VS 2013 on WINDOWS 7 64-bit. An Update would not help you.
I did not use QMetaType::registerConverter before, so the problem interested me. Maybe someone else can have a look to verify the toInt function is properly registered. If well, it looks like a bug.
Thanks for looking. I have also confirmed the problem under 5.3.1/linux/g++
When I call value.toInt(), it generates a call to QVariant::qNumVariantToHelper, which can do 2 attempts to make the conversion:
@template <typename T>
inline T qNumVariantToHelper(...)
{
... some code omitted T ret = 0; if ((d.type >= QMetaType::User || t >= QMetaType::User) && QMetaType::convert(&val, d.type, &ret, t)) { return ret; } if (!handlerManager[d.type]->convert(&d, t, &ret, ok) && ok) *ok = false; return ret;
}@
The first attempt uses QMetaType's QMetaTypeConverterRegistry, which can not seem to produce the correct 'this' to find the data value in the qvariant. Fortunately (for debugging, at least), I am able to return false from there and make the second attempt.
The second attempt uses QVariant's handlerManager and is able to locate the correct data.
I'm not an expert at this stuff, so I was hoping to get a little more feedback before submitting a bug report.
Hi,
What about
@QMetaType::registerConverter< UserTypeClass, int > (&UserTypeClass::toInt);@
?
That was my first attempt at line 58, then I made my intentions more explicit at lines 59,60. Both give the same result.
Can you try with a more recent version of Qt ? I just tried with exactly your code and it seems to be working fine
Please see above. Already tested with 5.3.1. Another commentator has also confirmed with 5.3.1.
The code 'works' because of the cheat I put in UserTypeClass::toInt(). It sets ok to false when m_data != 3. See line 3 of the log. There, UserTypeClass::toInt() reports a strange value of 'this', false, and 6337224. Where did 6337224 come from? It should be 3. That is the problem.
Because of the cheat, QVariant::qNumVariantToHelper() is then able to perform a second try at the conversion (again, see above), this time with much improved results.
I might have misunderstood something, in your constructor you ensure that you can't have anything bigger than 3, so in fact when calling toInt you should always have a valid value. Correct ? If so, why implement toInt(bool *ok) and not just toInt() ?
In the other case, you should simply set ok to true if ok is valid. Otherwise the behavior will be considered as wrong, you must set it even if it's always true. | https://forum.qt.io/topic/44751/qvariant-conversion-from-usertype-to-int | CC-MAIN-2017-47 | refinedweb | 873 | 57.47 |
= SCGI WSGI =
This is a HOWTO. See BehindApache for a higher-level discussion.
A very simple setup lets your cherry run under SCGI (on apache in my setup). You need just a running apache server with [ mod_scgi] and have the scgi python module installed. You also need an SCGI front end to WSGI. In this case, I'm using the SCGI-->WSGI application proxy, aka "SWAP" from [ Python Paste]. Here's a direct link to [ "SWAP"]. If that link doesn't work, you may have to hunt down SWAP on your own.
{{{
#!python
#!/usr/bin/python
import cherrypy
from paste.util.scgiserver import serve_application
class HelloWorld:
def index(self):
return "Hello world!"
index.exposed = True
app = cherrypy.tree.mount(HelloWorld())
cherrypy.engine.start(blocking=False)
serve_application(application=app, prefix="/dynamic", port=4000)
}}}
The Apache configuration that goes along with this is as follows:
{{{ | http://tools.cherrypy.org/wiki/ScgiWsgi?format=txt | CC-MAIN-2015-27 | refinedweb | 143 | 52.76 |
Timing C/C++ Code on Linux not opposed to this, I do most of my work during the day on a Linux machine and wanted to be able to use that to develop too. For the most part, this isn't much of an issue because the code is pretty much standard C++ and is quite cross platform compatible. I just generated a Makefile as well as the Visual Studio Project [1] and I can work on the challenge whenever I feel like devoting a little time.
Anyway, the timer code supplied is not cross platform. It only works on Windows. Since it isn't that complicated, and I wanted to see times on both platforms, I modified the hr_time.h and hr_time.cpp to include an additional implementation for Linux.
Here are the details:
Linux Time Structures
In <sys/time.h>, you get a few functions and structures that make high resolution timing easy. The gettimeofday function returns the seconds and microseconds since the Epoch. The function operates on the timeval structure:
#include <sys/time.h> // snip... timeval t; gettimeofday(&t,NULL); // ignoring the 2nd parameter which is the timezone
In addition to being able to query the time, there are also functions for adding, subtracting, and comparing times. For this purpose, the timesub function works perfectly. You need two values (already queried with gettimeofday and and result timeval:
timeval start,stop,result; // query the start/stop with gettimeofday during your program timersub(&start,&stop,&result);
result now contains the difference between start/stop in seconds and microseconds.
For the sake of compatibility, the hr_time implementation wants a double for the time, where the fraction part is the number of microseconds. The nice thing is that timesub normalizes tv_usec to somewhere in the range [0,100000). Here is what I did to convert the result to a double:
return result.tv_sec + result.tv_usec/1000000.0; // 1000000 microseconds per second
Here are the completed code files if you'd like to download and look closer or use for yourself:
hr_time.h
hr_time.cpp
Have fun coding!
[1] Actually, I used Bakefile to generate the project files.
4 users commented in " Timing C/C++ Code on Linux "Follow-up comment rss or Leave a Trackback
Thanks for the mods. However, both your links to the files point to the same file.
Sure enough, I corrected the link. Thanks
Correct function:
timersub(&start,&stop,&result);
The attached cpp file had the correct function but there was a typo in the post. Good catch, thanks. | http://allmybrain.com/2008/06/10/timing-cc-code-on-linux/ | crawl-002 | refinedweb | 422 | 64.71 |
Good quality and especially javascript download file docx all this for free, this is a good choice for you. A converter and an editor. FastStone Image Viewer is a great application from FastStone featuring an image browser, if you need fast manipulation,
From the intro: One of the most powerful features in Lightroom is the image processing engine and the way the image adjustment processing is deferred until the time you choose to edit javascript download file docx in Photoshop or export an image.adobe LiveCycle Designer ES, acrobat 9 javascript download file docx Pro Extended,,
Just reformatted. Office 2007 hypptv download youtube or Open Office file and get back the original data or text so it does javascript download file docx not need to be retyped or re-entered, upload your corrupt.M Email: 3 PPS1000t/aPPS..
After the set time lapses, the picture is erased from the gadgets and the organizations servers. Snapchats particular documentation expresses that the organizations servers hold a log of the last 200 snaps that were sent and got, yet no real substance is put away. The.
Montesquieu o fio das missangas baixar livro de filmes ritual dublado. Historia o gato de botas 2011 qual melhor site filmes fiz uma musica na. Dvdrip o fio da navalha somerset maugham download filme senhor das moscas 1963 gato botas baixar.
: . Adobe. : . . Adobe Creative Cloud 2014 X-Force . .
GTA 5 News and content, and GTA 5 Videos. In addition to javascript download file docx the GTA5 Countdown, internet connection is required for news and and videos to keep this GTA 5 app light to download and save your phone space. It comes with a GTA Gallery,« ».
Convert Download videos free from Instagram, putlocker, javascript download file docx xvideos, tumblr,, soundCloud mp3, facebook, vK,license:Freeware Price: 0.00 Size: 3.3 MB Downloads (116 )) Domingo - Java-API to Lotus Notes/Domino javascript download file docx Download ActiPOINT notesD Released: December 30, domingo is compatible with Java 1.3 or higher and with Lotus Notes/Domino R5, r6,.
Supports up to four signatures for each address you have set up. Works for Compose Message and Reply / Forward. Features: Use rich HTML signatures with. 7. NyanIMG chrome Add-on - Internet/Browsers. NyanIMG chrome Add-on is a browser extension designed to upload images to a dedicated hosting service, so you can share them with your friends. You will be able to upload the images just by right-clicking them. The upload process is very speedy. 8. Pendule - Internet/Browsers. This extension adds some extended developer tools for chrome like image manipu..
InnerSoft cad is a add-on component for AutoCAD that allows you to Expo).
Auf dieser Leiste laufen dann Seiten vom Hersteller selbst sowie andere Werbeinformationen im 45-Sekunden-Takt durch. Aber im Gegensatz zu vielen anderen Bannern und Werbeformaten, die man aus dem Netz kennt, sind diese sehr dezent und kaum merklich. Die Werbung lässt sich also in jedem Fall.
Downloading videos nowadays have been in trend. You can find every second user on the Internet either wants to download videos, audios or movies. With the increasing use of the Internet and phones, people are always in search for their entertainment. Technology has made our.
Visio 2010 For Free From Title Show: All Software Free Software Only Mobile Software Only 1. Classic Menu for visio 2010 - Multimedia Design/Image Editing. One may find that the new ribbon-style interface of Microsoft visio 2010 puts all the users who were accustomed to.
Movies Hollywood From javascript download file docx Short Description 1. Rating, movies biography. View stills, classification,you can pay through your PayPal account. You must confirm you are using a genuine software.) Payment : The standard payment method of our online store is Paypal or Paypal's Credit Card. We also accept the payment via Amazon Gift Card, (For key issue,)
Alpha apple browser chrome google javascript download file docx internet linux mac microsoft web windows. Google chrome 37 dev. If you really feel you need the latest Chrome,for information on software related resources, software Geek This site has closed This website has now closed and it will not be updated javascript download file docx in future.: Brush Script is a Trademark of Monotype javascript download file docx Typography ltd.
As featured in: WhatsApp is a cross-platform messaging service javascript download file docx that uses the same internet data plan you use for email and web browsing, download Specs What's New Alternatives 18 News. There is no cost to message and stay in touch with your friends.platform: Windows flash version 11 7 download Publisher: javascript download file docx EagleGet. Aac, wma, mov, ac3, wav, mp3, date: Size: 5525 KB Free Pocket PC video converter can convert almost any popular video formats such as avi, vob, rmvb, divx, rm, xvid, 3gp, flv, mkv, wmv,windows 8, windows 7, categorie: Office/Word/PowerPoint/Excel Marime Fisier: Compatibil: Windows 10, 2016 admin Program Utile Leave a comment Info: Puteti deschide fisiere de tip word sau powerpoint. Windows Vista, licenta: Free. O varianta gratuit ce va ofera o solutie optima la fisierele de tip office. Windows XP. July 6,
Autodesk sketchbook unlocked free download!
2013 Visits: 284 PyQtX provides stable up to date binary Installers for javascript download file docx PyQt on Mac OS sit the files section to Download the Installer, license:Freeware Price: 0.00 Size: 20 KB Downloads (172 )) SearchIt Download PyQtX Released: August 11,free microsoft javascript download file docx word trial version. In all views, outlook 2013 brings together fade-in menus for appointments, download Microsoft Office 2013.all the effects are available via the FX menu (FX button javascript download file docx at the bottom of the turntable)) RECORD SHARE YOUR CREATIONS WITH YOUR FRIENDS : Record your mixes and share them with your friends on Facebook and Twitter thanks to the eMix feature.#1 Free PDF to Word Converter 10.1 download - Windows 7 - Absolutely free and user friendly application that converts PDF to Word format.
: Generate. Adobe Creative Cloud 2014 X-Force, adobe javascript download file docx Illustrator CC. :. :.swanand Kirkire javascript download file docx Shaan. Download 8.30 MB Hits : 906 2 Aal-Izz-Well-(Remix)) Sonu Nigam, 1 Aal-Izz-Well Sonu Nigam, swanand Kirkire Shaan. Download 8.26 MB Hits : 424 3 Behti-Hawa-Sa-Tha-Woh Shaan Shantanu Moitra Download 8.33 MB.you may have seen a number of new TVs, has been widespread in our daily. And other products sporting a 4K logo, javascript download file docx 4K resolution refers to a display device or content having horizontal resolution on the order of 4,000 pixels, camcorders,8 wondershare filmora javascript download file docx 8 2 wondershare filmora 7 8 wondershare filmora.
Users can continue using javascript download file docx current version or upgrade to a later version such as 2019. Bear in mind that the pricing may differ for different versions. However, after purchasing a product key,
import and visualize the existing ones, edit them and export to other formats javascript download file docx and also get access to properties of entities. Cad VCL is a library for creating cad software in Delphi and CBuilder applications. With its help a developer can create new drawings,
Tessellation Enabled. DirectX 11 Free Download For Windows 7 64 bit. Direct javascript download file docx Compute 11. Ati radeon HD 3200, key Features Direct X 11 Download. ATI radeon HD 3200. Direct X 11 Compatible Video Cards / Graphic Card List dx 11 in AMD radeon HD 7660d, ati radeon HD 4550, hD 3000, aTI graphic card b276, ati graphics card support, hDR Texture Compression. ATI radeon 3000, multi-threading. Below are the eset security free download main noticeable features of DX 11 or directx download windows 7 64 bit. Shader Model 5.0.adding ingredients in the correct order, you will be in charge of food prep, it's up to you to prepare over 50 different recipes! Cooking the right amount of time and more. From Ravioli to Creme Brulee, from Eggrolls javascript download file docx to Pancakes,
Funny punjabi skit :D - Download Songs and javascript download file docx Music Videos for Free - t. Like us if you love us close Gosong Shake Set t as Homepage Displaying 1 - 21 of about results for.if you cant live without music and you want to javascript download file docx own something that could get you to listen to your favorite tracks even while on-the-go, officially dubbed as Galaxy Music, better not miss this new Samsung smartphone!java Applets are also application and broadly they do not require you to download an application, exe file (which of-course requires you to download it)) which is an application and can be affected by virus or malware. Java or.NET applications are often distributed with javascript download file docx a setup.
Panda protects you. Do you have Windows Vista? CODE Windows Registry Editor Version 5.00 HKEY flash downloader in chrome _LOCAL _MACHINESOFTWAREP anda SoftwarePanda Antivirus javascript download file docx Lite "LANGUAGE "dword: mukam, 7bddeb7242a6 7bddeb7242a6 7bddeb7242a6 7bddeb7242a6 7bddeb7242a6 g,,. Just install and forget. | http://blogtosani.info/javascript-download-file-docx.html | CC-MAIN-2020-34 | refinedweb | 1,526 | 63.7 |
I’m trying to implement GQN and as for my data set,
after some transformations I’ve got 400 compressed .pt files, each including 2000 training samples.
The easiest way to apply a Dataset over it would be to use getitem able to calculate and decompress file in which given sample is stored in order to access it.
However it has huge disadvantage of having to decompress a file upon drawing any sample from Database.
I’d like avoid decompressing whole data set as it would use lots of my ROM, so I define two other approaches:
Store each of 2000 sample from file in separate compressed file - takes same amount of space and still needs to decompress on calling getitem but decompressed files are significantly smaller (still takes time to decompress).
Lastly I decide to iterate randomly over 2000 samples from one file and then move on to another random compressed file. This allows me to iterate over Dataset faster, but with my current implementation using multithreading actually slows my script down.
The code I’m using is as follows:
import torch from torch.utils.data import Dataset, DataLoader, Sampler import gzip import os from tqdm import tqdm class ShepardMetzler(Dataset): def __init__(self, root_dir, transform= None): self.root_dir = root_dir self.transform = transform self.zipPaths = sorted([os.path.join(self.root_dir, fileName) for fileName in os.listdir(root_dir)]) with gzip.open(self.zipPaths[0], "rb") as f: self.length = len(torch.load(f))*len(self.zipPaths) self.perZip = self.length / len(self.zipPaths) self.currentZip = -1 def __len__(self): return self.length def __getitem__(self, idx): if idx[0] != self.currentZip: self.currentZip = idx[0] with gzip.open(self.zipPaths[idx[0]], "rb") as f: self.currentTensor = torch.load(f) return self.currentTensor[idx[1]] def zipsLength(self): return self.perZip def zipsNumber(self): return len(self.zipPaths) class SubSampler(Sampler): def __init__(self, dSet): self.dSet = dSet self.idxSplit = dSet.zipsLength() self.setsNumber = dSet.zipsNumber() def __len__ (self): return len(self.dSet) def __iter__(self): subSets = torch.randperm(self.setsNumber) for sSet in subSets: perm = torch.randperm(self.idxSplit) for sample in perm: yield [sSet, sample] return dataset = ShepardMetzler(root_dir = "../shepard_metzler_5_parts/train") mySampler = SubSampler(dataset) dataloader = DataLoader(dataset, batch_size = 32, sampler=mySampler, num_workers = 0) for batch in tqdm(dataloader): pass
The question is, how could I allow it to use multithreading?
I know DataLoader has worker_init_fn parameter but I’ve got no idea how to use it nor if it’s what I should be actually using.
Or could I somehow abuse multithreading creating DataLoader on top of samples provided by another DataLoader?
It would be great if I could get somehow each worker to work on different compressed files at same time. | https://discuss.pytorch.org/t/using-multithreading-with-custom-sampler-dataloader/29523 | CC-MAIN-2022-27 | refinedweb | 455 | 50.73 |
In my previous post () I talked about an issue where RCWs (Runtime Callable Wrappers) awaiting garbage collection were holding references to a COM object and preventing it from being deterministically shut down. In this post, we'll continue the discussion--this time focusing on how to use the debugger to track down such a problem.
To facilitate this blog entry, I started to code up a simple example of the problem. What I discovered is that the insidious nature of RCW leaks can be reproduced in code that is even simpler than I had originally intended to write. Hopefully, this example will hammer home just how careful you have to be when using RCWs.
using System;
using System.Collections.Generic;
using System.Text;
using Microsoft.Office.Interop.Excel;
using System.Runtime.InteropServices;
namespace RCWLeak
{
    class Program
    {
        static void Main(string[] args)
        {
            ApplicationClass excel = new ApplicationClass();
            Workbook workbook = excel.Workbooks.Add(Type.Missing);
            Marshal.FinalReleaseComObject(workbook);
            excel.Quit();
            Marshal.FinalReleaseComObject(excel);
            Console.ReadKey();
        }
    }
}
So here is a very simple routine that does nothing more than start Excel, add a workbook, and then shut down. It looks correct, right? Unfortunately, it is not. When we run this code, execution pauses at the end and waits for input. What we can see by viewing Task Manager (or tasklist) is that Excel is still running despite the fact that we thought we had taken all of the necessary steps to ensure a clean shut down.
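If you prefer the command line to Task Manager, a filtered tasklist query is a quick way to spot the orphaned process (the filter string below is just one way to narrow the listing to Excel):
tasklist /FI "IMAGENAME eq EXCEL.EXE"
An EXCEL.EXE entry that survives after our process thinks it has cleaned up is exactly the symptom we are chasing.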
Clearly there is an RCW that is still holding a reference to Excel somewhere, but it is non-obvious from looking at the code. To get to the bottom of this we'll need to take a look at the managed heap. Fortunately, we can do this with the SOS ("Son Of Strike") debugging extension. If you have installed WinDBG, you already have SOS. If you haven't installed WinDBG, you can get it free from Microsoft here: http://www.microsoft.com/whdc/devtools/debugging/default.mspx.
If you are familiar with WinDBG, you are probably aware that it has an open interface that allows for the development of custom, pluggable debugging tools to extend the base debugging functionality. As I mentioned, SOS is one such extension. To load SOS in WinDBG, you would type the following in the command window:
.loadby sos mscorwks
Once SOS is loaded, you can see what it does by typing "!help". To get help for individual commands, you can simply type "!help <command>".
What we want to do is figure out what is holding on to Excel. To start with, we can find out all of the objects on the managed heap by using the !dumpheap command.
0:005> !dumpheap -stat
total 6901 objects
Statistics:
MT Count TotalSize Class Name
79132f9c 1 12 System.Collections.Generic.GenericEqualityComparer`1[[System.String, mscorlib]]
79116b3c 1 12 System.Security.Permissions.ReflectionPermission
79116a1c 1 12 System.Security.Permissions.FileDialogPermission
79116998 1 12 System.Security.PolicyManager
791135c8 1 12 System.Resources.FastResourceComparer
7910a9e8 1 12 System.RuntimeTypeHandle
7910746c 1 12 System.DBNull
791073ac 1 12 System.Empty
7910556c 1 12 System.__Filters
7910551c 1 12 System.Reflection.Missing
791045e4 1 12 System.RuntimeType+TypeCacheQueue
7912ed84 1 16 System.SByte[]
7912d8dc 1 16 System.Collections.ObjectModel.ReadOnlyCollection`1[[System.Reflection.CustomAttributeData, mscorlib]]
7912d274 1 16 System.Collections.ObjectModel.ReadOnlyCollection`1[[System.Reflection.CustomAttributeTypedArgument, mscorlib]]
7911baf8 1 16 System.Enum+HashEntry
79113ea0 1 16 System.Security.Permissions.FileIOAccess
79113744 1 16 System.Resources.ResourceReader+TypeLimitingDeserializationBinder
79112510 1 16 System.Globalization.GlobalizationAssembly
791106e4 1 16 System.Security.Permissions.UIPermission
79108934 1 16 System.Reflection.Cache.InternalCache
790fa4f8 1 16 System.__ComObject
008cdbfc 1 16 Microsoft.Office.Interop.Excel.WorkbookClass
008c5e84 1 16 Microsoft.Office.Interop.Excel.ApplicationClass
...
790fcb30 2466 217980 System.String
Total 6901 objects
The "stat" flag limits the output to just the objects themselves--very handy. If we put our cursor on the first line of the output, we can use the find window to search for "Microsoft.Office.Interop". This shows us the typed RCWs that are on the heap. However, all we see are:
008cdbfc 1 16 Microsoft.Office.Interop.Excel.WorkbookClass
008c5e84 1 16 Microsoft.Office.Interop.Excel.ApplicationClass
We know from checking our code that we are calling Marshal.FinalReleaseComObject on these, so they can't be the problem. (As a side note, if we wanted to dump these objects, we could call !dumpheap -mt <method table> and it would give us the address that we could then use to call !dumpobj.)
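For example, using the method table value for the ApplicationClass from the listing above (your values will differ from run to run), the sequence would look something like this:
!dumpheap -mt 008c5e84
!dumpobj <address reported by the -mt listing>
We don't need to do that here, though, because the code shows that both of these typed RCWs are released explicitly.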
What else could be going on? Well, we have accounted for all of the typed RCWs, but we haven't checked for System.__ComObject. To look for a specific type, we can make use of the !dumpheap type flag:
0:005> !dumpheap -type System.__ComObject
Address MT Size
02893294 790fa4f8 16
total 1 objects
Statistics:
MT Count TotalSize Class Name
790fa4f8 1 16 System.__ComObject
Total 1 objects
Aha! This must be the RCW that is causing problems. Now we just need to figure out where it is coming from. Let's start by trying to figure out who owns it. We'll do that by looking for the root using the !gcroot command:
0:005> !gcroot 02893294
Note: Roots found on stacks may be false positives. Run "!help gcroot" for
more info.
Scan Thread 0 OSTHread f48
Scan Thread 2 OSTHread c3c
Uh oh, the object is already eligible for collection. We know this because there are no roots shown. It looks like we need to start debugging earlier when this object is still alive. The problem is that managed source level debugging doesn't work very well in WinDBG at the moment.
What to do? Take advantage of one of the best kept secrets out there: the Visual Studio 2005 debugger can load debugger extensions. To load SOS in the Visual Studio Debugger, enable native debugging in your project, start debugging and then simply go to the immediate window and type ".load sos". Subsequent SOS commands can be entered in the immediate window exactly as they would be entered in the WinDBG command window.
So we'll open our project in Visual Studio, turn on native debugging, hit F10 (single step) and load SOS:
.load sos
extension C:\Windows\Microsoft.NET\Framework\v2.0.50727\sos.dll loaded
Now that we have SOS loaded, we'll try setting a breakpoint on the first Marshal.FinalReleaseComObject call. After running to that location, we'll take a look at the ComObject again to see if we've managed to catch it while it is still rooted.
!dumpheap -type System.__ComObject
Address MT Size
02a83294 790fa4f8 16
02a83494 790fa4f8 16
total 2 objects
Statistics:
MT Count TotalSize Class Name
790fa4f8 2 32 System.__ComObject
Total 2 objects
Well, we see two ComObjects now. Lets find out who they belong to.
!gcroot 02a83294
!gcroot 02a83494
DOMAIN(004514F0):HANDLE(Strong):18118c:Root:02a83494(System.__ComObject)
The second ComObject is rooted to a GCHandle, which means that it was probably created by some internal implementation and we can ignore it for now. The first ComObject is not rooted--which means that it is probably the one we are looking for, but we still don't know who is allocating it. To find out, we'll try stopping at each line of code and dumping ComObjects.
What we discover with this approach is that the ComObject does not appear until we step over the following line of code.
Workbookworkbook = excel.Workbooks.Add(Type.Missing);
However, it is still unrooted. That can only mean one thing--this line of code is creating an RCW on the heap but not referencing it. Looking carefully at the code, we finally see what we have been missing. The Workbooks accessor (of the ApplicationClass) creates a Workbooks interface instance and returns it. Since we don't assign the return value to anything, there is nothing to reference it beyond the scope of the call. Presumably, the object backing the Workbooks interface does't implement IProvideClassInfo, so the RCW is created as a System.__ComObject.
We can actually confirm this with a little creative stepping in the dissembly window:
Workbook workbook = excel.Workbooks.Add(Type.Missing);
0000003e mov ecx,esi
00000040 mov eax,dword ptr [ecx]
00000042 call dword ptr [eax+000000F8h]
00000048 mov edi,eax
After stepping over the first call (which invokes the Workbooks accessor), we dump the ComObjects again:
!dumpheap -type System.__ComObject
Address MT Size
02763294 790fa4f8 16
027632a4 790fa4f8 16
027632b4 790fa4f8 16
total 3 objects
Statistics:
MT Count TotalSize Class Name
790fa4f8 3 48 System.__ComObject
Total 3 objects
The last one proves our hypothesis:
!gcroot 027632b4
Note: Roots found on stacks may be false positives. Run "!help gcroot" for
more info.
Error during command: Warning. Extension is using a callback which Visual Studio does not implement.
Scan Thread 3924 OSTHread f54
ESP:47ef64:Root:027632b4(System.__ComObject)
Scan Thread 4672 OSTHread 1240
What we see is that the ComObject is currently rooted to the stack. Now after we step over the remainder of the code for that line, we can dump the heap again and we will see that object has lost its root (which is what we expect).
!gcroot 027632b4
Note: Roots found on stacks may be false positives. Run "!help gcroot" for
more info.
Error during command: Warning. Extension is using a callback which Visual Studio does not implement.
Scan Thread 3924 OSTHread f54
Scan Thread 4672 OSTHread 1240
In order to effect a clean shutdown, it would seem that we need to refactor our code so that we can call Marshal.FinalReleaseComObject on the Workbooks object. We'll need to assign the return value of the Workbooks accessor to a variable. Here is the corrected code:
static void Main(string[] args)
{
ApplicationClass excel = new ApplicationClass();
Workbooks workbooks = excel.Workbooks;
Workbook workbook = workbooks.Add(Type.Missing);
Marshal.FinalReleaseComObject(workbook);
Marshal.FinalReleaseComObject(workbooks);
excel.Quit();
Marshal.FinalReleaseComObject(excel);
Console.ReadKey();
}
Upon running it, we see that we have solved the problem and Excel now shuts down cleanly when we expect it to.
Hopefully, this demonstration has reinforced the perils of RCW usage. When writing interop code, it is important to avoid calls that tunnel into the object model because they will orphan RCWs on the heap that you will not be able to access in order to call Marshal.ReleaseComObject.
PingBack from
useful article.
i found that my inspector wrapper NewInspector() event was causing problem. One line of code referenced Inspector.MailItem and that caused the file lock. Even if the only line was
If TypeOf Inspector.CurrentItem Is MailItem Then Exit Sub
It would still lock file. I solved it by including following garbage collection lines in that sub:
GC.Collect()
GC.WaitForPendingFinalizers()
GC.Collect() | https://blogs.msdn.microsoft.com/geoffda/2007/09/07/the-designer-process-that-would-not-terminate-part-2/ | CC-MAIN-2018-22 | refinedweb | 1,780 | 50.73 |
oktraMembers
Content count79
Joined
Last visited
Community Reputation144 Neutral
About Conoktra
- RankMember
Conoktra replied to Conoktra's topic in General and Gameplay ProgrammingThank you to everyone the help! It turns out it was the "sliver" problem. Marching cubes was making some triangles that where simply too small and narrow in certain places. Tweaking the scale was able to increase the stability. Thanks again.
Conoktra posted a topic in General and Gameplay Programming);
OpenGL
Conoktra replied to Conoktra's topic in Graphics and GPU ProgrammingThanks for the help! It's being rendered as two triangles. I've read that OpenGL uses the "Barycentric coordinate system" which might cause the issue as seen here. But that individual was rendering in 2D, so he was able to bypass the problem by generating his texture coordinates from screen space (rather then interpolating them from the vertices). Drawing two quads (one red and one green) that overlay and blend together to create a linear fade from red to green (as is seen in image B). Somehow the interpolation of the color channel is non-linear, resulting in the background bleeding through as is seen in image A. Code: // Draw the red quad glBegin(GL_TRIANGLES); glVertex3f(0, 1, 0); glColor4f(1, 0, 0, 0); glVertex3f(1, 1, 1); glColor4f(1, 0, 0, 0); glVertex3f(1, 1, 0); glColor4f(1, 0, 0, 1); glVertex3f(0, 1, 1); glColor4f(1, 0, 0, 1); glVertex3f(1, 1, 1); glColor4f(1, 0, 0, 0); glVertex3f(0, 1, 0); glColor4f(1, 0, 0, 0); glEnd(); // Draw the green quad glBegin(GL_TRIANGLES); glVertex3f(0, 1, 0); glColor4f(0, 1, 0, 1); glVertex3f(1, 1, 1); glColor4f(0, 1, 0, 1); glVertex3f(1, 1, 0); glColor4f(0, 1, 0, 0); glVertex3f(0, 1, 1); glColor4f(0, 1, 0, 0); glVertex3f(1, 1, 1); glColor4f(0, 1, 0, 1); glVertex3f(0, 1, 0); glColor4f(0, 1, 0, 1); glEnd(); See the above for vertex colors. The blend mode is glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). I am using sRGB. I am using Ogre3D so I can't verify that gamma correction isn't happening for sure. I got Googling and added "fragColour.a = pow(outColour.a, 1.0 / 5.0);" to my shader rather then "fragColour.a = outColor.a;" and it seems to have done the trick! Reading the OpenGL registry though says gamma correction shouldn't happen on alpha values for textures. What about vertex colors? That and a gamma factor of 5 is a lot bigger then the standard 2.2.
OpenGL
Conoktra posted a topic in Graphics and GPU ProgrammingI have a rather simple problem that I am positive has a simple solution. I am rendering a quad twice as two layers, each with a color. The first is green, the second is red. I am using the alpha channel of the color component for the quad's vertices to blend the two colors together across the quads. Attached bellow are two images. Image A is how it looks when rendered with OpenGL, image B is how it should look. The black color in image A is the background bleeding through. If OpenGL interpolated the color values linearly there would be no bleeding, it would be a nice transition from red to green and no black would show. But that's obviously not the case. I've tried using a GLSL program and setting the color variable to noperspective but it makes no difference. Is there a way to force OpenGL to do plain linear interpolation of my vertex colors, so that the red and green blend evenly across the quad like in image B? Additional information: Red layer alpha values: 1.0 -------- 0.0 | | | | | | 0.0 -------- 1.0 Green layer alpha values: 0.0 -------- 1.0 | | | | | | 1.0 -------- 0.0 . Thanks!
- Even the most sound code can be broken when you throw scenarios at it that are specifically designed to break it. My compiler doesn't mangle the class names nor do I use namespaces, hence neither is an issue.
- tolua++ needs to know the class's typename or else its treated as a void* inside lua. The "typeid(T).name + 6" skips the "class " prefix and only passes the classes unique type name ("class MyClass" -> "MyClass") which is what tolua++ expects. luaPushArg() has specialized template instances for all non-class types (thus avoiding the issue of instantiating the template with, say, an int). That way I can pass any class to lua without having to define a specialized instance of luaPushArg() for each class type. With hundreds of classes exported to lua, this saves a lot of time and code. Portability isn't an issue. If it was fixing it would be a matter of a couple of #ifdefs. Whoopdeedoo. Its not possible to have two classes with the same typename. Period. Try to compile this: class A { }; class A { }; // error C2011: 'A' : 'class' type redefinition
Conoktra posted a topic in Coding HorrorsConsider the following: class Vector3 { public: f64 x, y, z; }; // pass an unkown userdata type to lua template<typename T> inline void luaPushArg(lua_State * const LuaState, T arg) { tolua_pushusertype(LuaState, (void*)&arg, typeid(T).name() + 6); } void testFunction(const Vector3 &v) { luaPushArg(LuaState, v); luaCallFunction("TestFunction"); // CRASH (only sometimes though!) luaPop(); } What went wrong here? Can you spot the bug? This one was a real pain in the tush. luaPushArg() would work with all my specialized types (int, float, etc) that I was passing to Lua, but when I passed classes it would sometimes crash. Turns out that luaPushArg() is taking a T arg instead of a T &arg. This means that a new copy of 'v' is created inside testFunction() when it calls luaPushArg(). luaPushArg() then pushes the newly created object onto the Lua stack. Upon luaPushArg()'s return, the pointer too the class object that was just pushed onto the lua stack is now invalidated. Sometimes it would crash, sometimes it wouldn't. This one was a real nightmare. Hehehe . I can't wait for C11 support in GCC, that way bugs like this can be avoided using type-generic expression macros.
Conoktra replied to Conoktra's topic in Production and ManagementThanks for the feedback guys, I appreciate it.
Conoktra posted a topic in Production and ManagementHi [img][/img].
Conoktra replied to Conoktra's topic in General and Gameplay Programming[quote name='wolfscaptain' timestamp='1342012884' post='4958014'] Nodes are not necessarily bones. [/quote] Yah, I kinda put 2+2 together and figured that out (some nodes are linked to bones others are not, but all nodes can have channels. So what is the difference between a boneless node and a weightless bone, you may ask? Well, I would like to know the answer to that myself xD). I just wish the assimp team could put some half-decent documentation up, it would sure save a bunch of people a lot of time. Anyway, thanks for the help
Conoktra posted a topic in General and Gameplay ProgrammingHello! I have written a tool based off of [url=""]Assimp[/url]. [u]Why does assimp do this and how would I fix it?[/u] It would be great if assimp would store all bones in the bone list. [CODE] > [/CODE] I really appreciate any and all help! Thanks.
Conoktra replied to Conoktra's topic in General and Gameplay ProgrammingThanks all, I appreciate the help. Sounds like I need to buckle down and just write a exporter or converter myself. Thanks again!
Conoktra posted a topic in General and Gameplay Programming... Someone please wake me . ). I can't seem to find documentation, source code, or anything that would provide me with alternative formats, exporters, or model loading libraries that would solve this problem. (very) long story short, this is where you guys come in. [u]I am looking for some way to get a model from Maya into my game with minimal effort.[/u].
Conoktra replied to codeToad's topic in General and Gameplay ProgrammingI don't ever use the STL, period. Not even when time is tight and I just "need to get it done". But then again I am a die-hard [url=""]Lisper[/url], so... take it for a grain of salt.
Conoktra replied to Conoktra's topic in 2D and 3D ArtThanks Ashaman73 [img][/img] @polycount: I know this one is hard to answer because there are so many variables that can have an effect. Hardware varies, shaders can drastically skew the performances, and the scene dynamically changes with more/less being rendered on a continual basis. I guess a better way to phrase the question is is there a optimal # of polygons you want to aim for in a scene? Say 50,000? Then from there you can calculate shader costs & individual polycounts? @hardware + implementation: The target platform is the average user's PC (as according to [url=""]Steam's statistics[/url]). The engine uses octree scene culling and renders everything in batches organized by their type & render settings (it can't really be any more optimal, practically every fancy feature has been added in). It does utilize advanced shading & post processing effects (real-time lighting, bloom/HDR, light scattering, etc), but those can be disabled in a options menu. Thanks again! | https://www.gamedev.net/profile/172956-conoktra/?tab=classifieds | CC-MAIN-2017-30 | refinedweb | 1,534 | 63.09 |
posix_trace_eventset_empty, posix_trace_eventset_fill,
posix_trace_eventset_ismember - manipulate trace event type sets (TRAC-
ING)
SYNOPSIS
#include <trace.h>);
DESCRIPTION
These primitives manipulate sets of trace event types. They operate on
data objects addressable by the application, not on the current trace
event filter of any trace stream.
The posix_trace_eventset_add() and posix_trace_eventset_del() func-
tions, follow-
ing
IEEE Std 1003.1-2001.
POSIX_TRACE_ALL_EVENTS
All trace event types defined, both system and user, are
included in the set..
RETURN VALUE
Upon successful completion, these functions shall return a value of
zero. Otherwise, they shall return the corresponding error number.
ERRORS
These functions may fail if:
EINVAL The value of one of the arguments is invalid.
The following sections are informative.
EXAMPLES
None.
APPLICATION USAGE
None.
RATIONALE
None.
FUTURE DIRECTIONS
None.
SEE ALSO
posix_trace_set_filter() , posix_trace_trid_eventid . | http://www.linux-directory.com/man3/posix_trace_eventset_ismember.shtml | crawl-003 | refinedweb | 127 | 52.56 |
Algorithm implementation/Sorting/Pigeonhole sort
From Wikibooks, the open-content textbooks collection
All the examples currently on this page seem to be implementations of counting sort and not pigeonhole sort. Pigeonhole sort, when sorting a complex data structure with a key, keeps track of all the elements on a certain key (because equal elements are still distinct); rather than just keep a count like counting sort (which is only applicable to simple value types).
[edit] C99
void pigeonhole_sort(int *low, int *high, int min, int max) { /* used to keep track of the size of the list we're sorting */ int count; /* pointer into the list we're sorting */ int *current; /* size of range of values in the list (ie, number of pigeonholes we need)*/ const int size = max - min + 1; /* our array of pigeonholes */ int holes[size]; /* make absolutely certain that the pigeonholes start out empty */ for (int i = 0; i < size; ++i) holes[i] = 0; /* Populate the pigeonholes. */ for (current = low; current <= high; current++) holes[*current - min] += 1; /* Put the elements back into the array in order. */ for (current = low, count = 0; count < size; count++) while (holes[count]-- > 0) *current++ = count + min; }
Note that
min and
max could also easily be determined within the function.
[edit] Java
public static void pigeonhole_sort(int[] a) { // size of range of values in the list (ie, number of pigeonholes we need) int min = a[0], max = a[0]; for (int x : a) { min = Math.min(x, min); max = Math.max(x, max); } final int size = max - min + 1; // our array of pigeonholes int[] holes = new int[size]; // Populate the pigeonholes. for (int x : a) holes[x - min]++; // Put the elements back into the array in order. int i = 0; for (int count = 0; count < size; count++) while (holes[count]-- > 0) a[i++] = count + min; }
[edit] PHP
function pigeon_sort($arr) { //search min and max $min = $max = $arr[0]; foreach ($arr as $num) { if ($num < $min) $min = $num; if ($num > $max) $max = $num; } foreach ($arr as $num) $d[$num-$min]++; for ($i = 0; $i <= $max - $min; $i++ ) while ($d[$i + $min]-- > 0)$res[] = $i+$min; return $res; }
[edit] Python
def pigeonhole_sort(a): # size of range of values in the list (ie, number of pigeonholes we need) my_min = min(a) my_max = max(a) size = my_max - my_min + 1 # our list of pigeonholes holes = [0] * size # Populate the pigeonholes. for x in a: assert type(x) is int, "integers only please" holes[x - my_min] += 1 # Put the elements back into the array in order. i = 0 for count in xrange(size): while holes[count] > 0: holes[count] -= 1 a[i] = count + my_min i += 1 | http://en.wikibooks.org/wiki/Algorithm_implementation/Sorting/Pigeonhole_sort | crawl-002 | refinedweb | 436 | 54.6 |
Servers: hosting Pyro objects¶
This chapter explains how you write code that publishes objects to be remotely accessible. These objects are then called Pyro objects and the program that provides them, is often called a server program.
(The program that calls the objects is usually called the client. Both roles can be mixed in a single program.)
Make sure you are familiar with Pyro’s Key concepts before reading on.
See also
Configuring Pyro for several config items that you can use to tweak various server side aspects.
Creating a Pyro class and exposing its methods and properties¶
Exposing classes, methods and properties is done using the
@Pyro4.expose decorator.
It lets you mark the following items to be available for remote access:
- methods (including classmethod and staticmethod). You cannot expose a ‘private’ method, i.e. name starting with underscore. You can expose a ‘dunder’ method with double underscore for example
__len__. There is a short list of dunder methods that will never be remoted though (because they are essential to let the Pyro proxy function correctly).
- properties (these will be available as remote attributes on the proxy) It’s not possible to expose a ‘private’ property (name starting with underscore). You can’t expose attributes directly. It is required to provide a @property for them and decorate that with
@expose, if you want to provide a remotely accessible attribute.
- classes as a whole (exposing a class has the effect of exposing every nonprivate method and property of the class automatically)
Anything that isn’t decorated with
@expose is not remotely accessible.
Here’s a piece of example code that shows how a partially exposed Pyro class may look like:
import Pyro4 class PyroService(object): value = 42 # not exposed def __dunder__(self): # exposed pass def _private(self): # not exposed pass def __private(self): # not exposed pass @Pyro4.expose def get_value(self): # exposed return self.value @Pyro4.expose @property def attr(self): # exposed as 'proxy.attr' remote attribute return self.value @Pyro4.expose @attr.setter def attr(self, value): # exposed as 'proxy.attr' writable self.value = value
Note
Prior to Pyro version 4.46, the default behavior was different: Pyro exposed everything, no special
action was needed in your server side code to make it available to remote calls. Probably the easiest way
to make old code that was written for this model to fit the new default behavior is to add a single
@Pyro4.expose decorator on all of your Pyro classes. Better (safer) is to only add it to the methods
and properties of the classes that are accessed remotely.
If you cannot (or don’t want to) change your code to be compatible with the new behavior, you can set
the
REQUIRE_EXPOSE config item back to
False (it is now
True by default). This will restore
the old behavior.
Notice that it has been possible for a long time already for older code to utilize
the
@expose decorator and the current, safer, behavior by having
REQUIRE_EXPOSE set to
True.
That choice has now simply become the default.
Before upgrading to Pyro 4.46 or newer you can try setting it to
True yourself and
then adding
@expose decorators to your Pyro classes or methods as required. Once everything
works as it should you can then effortlessly upgrade Pyro itself.
Specifying one-way methods using the @Pyro4.oneway decorator:
You decide on the class of your Pyro object on the server, what methods are to be called as one-way.
You use the
@Pyro4.oneway decorator on these methods to mark them for Pyro.
When the client proxy connects to the server it gets told automatically what methods are one-way,
you don’t have to do anything on the client yourself. Any calls your client code makes on the proxy object
to methods that are marked with
@Pyro4.oneway on the server, will happen as one-way calls:
import Pyro4 @Pyro4.expose class PyroService(object): def normal_method(self, args): result = do_long_calculation(args) return result @Pyro4.oneway def oneway_method(self, args): result = do_long_calculation(args) # no return value, cannot return anything to the client
See Oneway calls for the documentation about how client code handles this.
See the
oneway example for some code that demonstrates the use of oneway methods.
Exposing classes and methods without changing existing source code¶
In the case where you cannot or don’t want to change existing source code,
it’s not possible to use the
@expose decorator to tell Pyro what methods should be exposed.
This can happen if you’re dealing with third-party library classes or perhaps a generic module that
you don’t want to ‘taint’ with a Pyro dependency because it’s used elsewhere too.
There are a few possibilities to deal with this:
Don’t use @expose at all
You can disable the requirement for adding
@expose to classes/methods by setting
REQUIRE_EXPOSE back to False.
This is a global setting however and will affect all your Pyro classes in the server, so be careful.
Use adapter classes
The preferred solution is to not use the classes from the third party library directly, but create an adapter class yourself
with the appropriate
@expose set on it or on its methods. Register this adapter class instead.
Then use the class from the library from within your own adapter class.
This way you have full control over what exactly is exposed, and what parameter and return value types
travel over the wire.
Create exposed classes by using “@expose“ as a function
Creating adapter classes is good but if you’re looking for the most convenient solution we can do better.
You can still use
@expose to make a class a proper Pyro class with exposed methods,
without having to change the source code due to adding @expose decorators, and without having
to create extra classes yourself.
Remember that Python decorators are just functions that return another function (or class)? This means you can also
call them as a regular function yourself, which allows you to use classes from third party libraries like this:
from awesome_thirdparty_library import SomeClassFromLibrary import Pyro4 # expose the class from the library using @expose as wrapper function: ExposedClass = Pyro4.expose(SomeClassFromLibrary) daemon.register(ExposedClass) # register the exposed class rather than the library class itself
There are a few caveats when using this:
- You can only expose the class and all its methods as a whole, you can’t cherrypick methods that should be exposed
- You have no control over what data is returned from the methods. It may still be required to deal with serialization issues for instance when a method of the class returns an object whose type is again a class from the library.
See the
thirdpartylib example for a little server that deals with such a third party library.
Pyro Daemon: publishing Pyro objects¶
To publish a regular Python object and turn it into a Pyro object, you have to tell Pyro about it. After that, your code has to tell Pyro to start listening for incoming requests and to process them. Both are handled by the Pyro daemon.
In its most basic form, you create one or more classes that you want to publish as Pyro objects, you create a daemon, register the class(es) with the daemon, and then enter the daemon’s request loop:
import Pyro4 @Pyro4.expose class MyPyroThing(object): # ... methods that can be called go here... pass daemon = Pyro4.Daemon() uri = daemon.register(MyPyroThing) print(uri) daemon.requestLoop()
Once a client connects, Pyro will create an instance of the class and use that single object to handle the remote method calls during one client proxy session. The object is removed once the client disconnects. Another client will cause another instance to be created for its session. You can control more precisely when, how, and for how long Pyro will create an instance of your Pyro class. See Controlling Instance modes and Instance creation below for more details.
Anyway, when you run the code printed above, the uri will be printed and the server sits waiting for requests.
The uri that is being printed looks a bit like this:
PYRO:obj_dcf713ac20ce4fb2a6e72acaeba57dfd@localhost:51850
Client programs use these uris to access the specific Pyro objects.
Note
From the address in the uri that was printed you can see that Pyro by default binds its daemons on localhost.
This means you cannot reach them from another machine on the network (a security measure).
If you want to be able to talk to the daemon from other machines, you have to
explicitly provide a hostname to bind on. This is done by giving a
host argument to
the daemon, see the paragraphs below for more details on this.
Note
Private methods:
Pyro considers any method or attribute whose name starts with at least one underscore (‘_’), private.
These cannot be accessed remotely.
An exception is made for the ‘dunder’ methods with double underscores, such as
__len__. Pyro follows
Python itself here and allows you to access these as normal methods, rather than treating them as private.
Note
You can publish any regular Python object as a Pyro object. However since Pyro adds a few Pyro-specific attributes to the object, you can’t use:
- types that don’t allow custom attributes, such as the builtin types (
strand
intfor instance)
- types with
__slots__(a possible way around this is to add Pyro’s custom attributes to your
__slots__, but that isn’t very nice)
Note
Most of the the time a Daemon will keep running. However it’s still possible to nicely free its resources
when the request loop terminates by simply using it as a context manager in a
with statement, like so:
with Pyro4.Daemon() as daemon: daemon.register(...) daemon.requestLoop()
Oneliner Pyro object publishing: serveSimple()¶
Ok not really a one-liner, but one statement: use
serveSimple() to publish a dict of objects/classes and start Pyro’s request loop.
The code above could also be written as:
import Pyro4 @Pyro4.expose class MyPyroThing(object): pass obj = MyPyroThing() Pyro4.Daemon.serveSimple( { MyPyroThing: None, # register the class obj: None # register one specific instance }, ns=False)
You can perform some limited customization:
- static
Daemon.
serveSimple(objects [host=None, port=0, daemon=None, ns=True, verbose=True])¶
Very basic method to fire up a daemon that hosts a bunch of objects. The objects will be registered automatically in the name server if you specify this. API reference:
Pyro4.core.Daemon.serveSimple()
If you set
ns=True your objects will appear in the name server as well (this is the default setting).
Usually this means you provide a logical name for every object in the
objects dictionary.
If you don’t (= set it to
None), the object will still be available in the daemon (by a generated name) but will not be registered
in the name server (this is a bit strange, but hey, maybe you don’t want all the objects to be visible in the name server).
When not using a name server at all (
ns=False), the names you provide are used as the object names
in the daemon itself. If you set the name to
None in this case, your object will get an automatically generated internal name,
otherwise your own name will be used.
Important
- The names you provide for each object have to be unique (or
None). For obvious reasons you can’t register multiple objects with the same names.
- if you use
Nonefor the name, you have to use the
verbosesetting as well, otherwise you won’t know the name that Pyro generated for you. That would make your object more or less unreachable.
The uri that is used to register your objects in the name server with, is of course generated by the daemon. So if you need to influence that, for instance because of NAT/firewall issues, it is the daemon’s configuration you should be looking at.
If you don’t provide a daemon yourself,
serveSimple() will create a new one for you using the default configuration or
with a few custom parameters you can provide in the call, as described above.
If you don’t specify the
host and
port parameters, it will simple create a Daemon using the default settings.
If you do specify
host and/or
port, it will use these as parameters for creating the Daemon (see next paragraph).
If you need to further tweak the behavior of the daemon, you have to create one yourself first, with the desired
configuration. Then provide it to this function using the
daemon parameter. Your daemon will then be used instead of a new one:
custom_daemon = Pyro4.Daemon(host="example", nathost="example") # some additional custom configuration Pyro4.Daemon.serveSimple( { MyPyroThing: None }, daemon = custom_daemon)
Creating a Daemon¶
Pyro’s daemon is
Pyro4.Daemon (shortcut to
Pyro4.core.Daemon).
It has a few optional arguments when you create it:
Registering objects/classes¶
Every object you want to publish as a Pyro object needs to be registered with the daemon. You can let Pyro choose a unique object id for you, or provide a more readable one yourself.
Daemon.
register(obj_or_class[, objectId=None, force=False])¶
Registers an object with the daemon to turn it into a Pyro object.
It is important to do something with the uri that is returned: it is the key to access the Pyro object. You can save it somewhere, or perhaps print it to the screen. The point is, your client programs need it to be able to access your object (they need to create a proxy with it).
Maybe the easiest thing is to store it in the Pyro name server.
That way it is almost trivial for clients to obtain the proper uri and connect to your object.
See Name Server for more information (Registering object names), but it boils down to
getting a name server proxy and using its
register method:
uri = daemon.register(some_object) ns = Pyro4.locateNS() ns.register("example.objectname", uri)
Note
If you ever need to create a new uri for an object, you can use
Pyro4.core.Daemon.uriFor().
The reason this method exists on the daemon is because an uri contains location information and
the daemon is the one that knows about this.
Intermission: Example 1: server and client not using name server¶
A little code example that shows the very basics of creating a daemon and publishing a Pyro object with it. Server code:
import Pyro4 @Pyro4.expose class Thing(object): def method(self, arg): return arg*2 # ------ normal code ------ daemon = Pyro4.Daemon() uri = daemon.register(Thing) print("uri=",uri) daemon.requestLoop() # ------ alternatively, using serveSimple ----- Pyro4.Daemon.serveSimple( { Thing: None }, ns=False, verbose=True)
Client code example to connect to this object:
import Pyro4 # use the URI that the server printed: uri = "PYRO:obj_b2459c80671b4d76ac78839ea2b0fb1f@localhost:49383" thing = Pyro4.Proxy(uri) print(thing.method(42)) # prints 84
With correct additional parameters –described elsewhere in this chapter– you can control on which port the daemon is listening, on what network interface (ip address/hostname), what the object id is, etc.
Intermission: Example 2: server and client, with name server¶
A little code example that shows the very basics of creating a daemon and publishing a Pyro object with it, this time using the name server for easier object lookup. Server code:
import Pyro4 @Pyro4.expose class Thing(object): def method(self, arg): return arg*2 # ------ normal code ------ daemon = Pyro4.Daemon(host="yourhostname") ns = Pyro4.locateNS() uri = daemon.register(Thing) ns.register("mythingy", uri) daemon.requestLoop() # ------ alternatively, using serveSimple ----- Pyro4.Daemon.serveSimple( { Thing: "mythingy" }, ns=True, verbose=True, host="yourhostname")
Client code example to connect to this object:
import Pyro4 thing = Pyro4.Proxy("PYRONAME:mythingy") print(thing.method(42)) # prints 84
Unregistering objects¶
When you no longer want to publish an object, you need to unregister it from the daemon:
Running the request loop¶
Once you’ve registered your Pyro object you’ll need to run the daemon’s request loop to make Pyro wait for incoming requests.
This is Pyro’s event loop and it will take over your program until it returns (it might never.)
If this is not what you want, you can control it a tiny bit with the
loopCondition, or read the next paragraph.
Integrating Pyro in your own event loop¶
If you want to use a Pyro daemon in your own program that already has an event loop (aka main loop),
you can’t simply call
requestLoop because that will block your program.
A daemon provides a few tools to let you integrate it into your own event loop:
Pyro4.core.Daemon.sockets- list of all socket objects used by the daemon, to inject in your own event loop
Pyro4.core.Daemon.events()- method to call from your own event loop when Pyro needs to process requests. Argument is a list of sockets that triggered.
For more details and example code, see the
eventloop and
gui_eventloop examples.
They show how to use Pyro including a name server, in your own event loop, and also possible ways
to use Pyro from within a GUI program with its own event loop.
Combining Daemon request loops¶
In certain situations you will be dealing with more than one daemon at the same time. For instance, when you want to run your own Daemon together with an ‘embedded’ Name Server Daemon, or perhaps just another daemon with different settings.
Usually you run the daemon’s
Pyro4.core.Daemon.requestLoop() method to handle incoming requests.
But when you have more than one daemon to deal with, you have to run the loops of all of them in parallel somehow.
There are a few ways to do this:
- multithreading: run each daemon inside its own thread
- multiplexing event loop: write a multiplexing event loop and call back into the appropriate daemon when one of its connections send a request. You can do this using
selectorsor
selectand you can even integrate other (non-Pyro) file-like selectables into such a loop. Also see the paragraph above.
- use
Pyro4.core.Daemon.combine()to combine several daemons into one, so that you only have to call the requestLoop of that “master daemon”. Basically Pyro will run an integrated multiplexed event loop for you. You can combine normal Daemon objects, the NameServerDaemon and also the name server’s BroadcastServer. Again, have a look at the
eventloopexample to see how this can be done. (Note: this will only work with the
multiplexserver type, not with the
threadtype)
Cleaning up¶
To clean up the daemon itself (release its resources) either use the daemon object
as a context manager in a
with statement, or manually call
Pyro4.core.Daemon.close().
Of course, once the daemon is running, you first need a clean way to stop the request loop before you can even begin to clean things up.
You can use force and hit ctrl-C or ctrl-or ctrl-Break to abort the request loop, but
this usually doesn’t allow your program to clean up neatly as well.
It is therefore also possible to leave the loop cleanly from within your code (without using
sys.exit() or similar).
You’ll have to provide a
loopCondition that you set to
False in your code when you want
the daemon to stop the loop. You could use some form of semi-global variable for this.
(But if you’re using the threaded server type, you have to also set
COMMTIMEOUT because otherwise
the daemon simply keeps blocking inside one of the worker threads).
Another possibility is calling
Pyro4.core.Daemon.shutdown() on the running daemon object.
This will also break out of the request loop and allows your code to neatly clean up after itself,
and will also work on the threaded server type without any other requirements.
If you are using your own event loop mechanism you have to use something else, depending on your own loop.
Controlling Instance modes and Instance creation¶
While it is possible to register a single singleton object with the daemon, it is actually preferred that you register a class instead. When doing that, it is Pyro itself that creates an instance (object) when it needs it. This allows for more control over when and for how long Pyro creates objects.
Controlling the instance mode and creation is done by decorating your class with
Pyro4.behavior
and setting its
instance_mode or/and
instance_creator parameters. It can only be used
on a class definition, because these behavioral settings only make sense at that level.
By default, Pyro will create an instance of your class per session (=proxy connection) Here is an example of registering a class that will have one new instance for every single method call instead:
import Pyro4 @Pyro4.behavior(instance_mode="percall") class MyPyroThing(object): @Pyro4.expose def method(self): return "something" daemon = Pyro4.Daemon() uri = daemon.register(MyPyroThing) print(uri) daemon.requestLoop()
There are three possible choices for the
instance_mode parameter:
session: (the default) a new instance is created for every new proxy connection, and is reused for all the calls during that particular proxy session. Other proxy sessions will deal with a different instance.
single: a single instance will be created and used for all method calls, regardless what proxy connection we’re dealing with. This is the same as creating and registering a single object yourself (the old style of registering code with the deaemon). Be aware that the methods on this object can be called from separate threads concurrently.
percall: a new instance is created for every single method call, and discarded afterwards.
Instance creation
Normally Pyro will simply use a default parameterless constructor call to create the instance.
If you need special initialization or the class’s init method requires parameters, you have to specify
an
instance_creator callable as well. Pyro will then use that to create an instance of your class.
It will call it with the class to create an instance of as the single parameter.
See the
instancemode example to learn about various ways to use this.
See the
usersession example to learn how you could use it to build user-bound resource access without concurrency problems.
Autoproxying¶
Pyro will automatically take care of any Pyro objects that you pass around through remote method calls. It will replace them by a proxy automatically, so the receiving side can call methods on it and be sure to talk to the remote object instead of a local copy. There is no need to create a proxy object manually. All you have to do is to register the new object with the appropriate daemon:
def some_pyro_method(self): thing=SomethingNew() self._pyroDaemon.register(thing) return thing # just return it, no need to return a proxy
This feature can be enabled or disabled by a config item, see Configuring Pyro.
(it is on by default). If it is off, a copy of the object itself is returned,
and the client won’t be able to interact with the actual new Pyro object in the server.
There is a
autoproxy example that shows the use of this feature,
and several other examples also make use of it.
Note that when using the marshal serializer, this feature doesn’t work. You have to use one of the other serializers to use autoproxying. Also, it doesn’t work correctly when you are using old-style classes (but they are from Python 2.2 and earlier, you should not be using these anyway).
Server types and Concurrency model¶
Pyro supports multiple server types (the way the Daemon listens for requests). Select the
desired type by setting the
SERVERTYPE config item. It depends very much on what you
are doing in your Pyro objects what server type is most suitable. For instance, if your Pyro
object does a lot of I/O, it may benefit from the parallelism provided by the thread pool server.
However if it is doing a lot of CPU intensive calculations, the multiplexed server may be more
appropriate. If in doubt, go with the default setting.
- threaded server (servertype
"thread", this is the default)
This server uses a dynamically adjusted thread pool to handle incoming proxy connections. If the max size of the thread pool is too small for the number of proxy connections, new proxy connections will fail with an exception. The size of the pool is configurable via some config items:
THREADPOOL_SIZEthis is the maximum number of threads that Pyro will use
THREADPOOL_SIZE_MINthis is the minimum number of threads that must remain standby
Every proxy on a client that connects to the daemon will be assigned to a thread to handle the remote method calls. This way multiple calls can potentially be processed concurrently. This means your Pyro object may have to be made thread-safe! If you registered the pyro object’s class with instance mode
single, that single instance will be called concurrently from different threads. If you used instance mode
sessionor
percall, the instance will not be called from different threads because a new one is made per connection or even per call. But in every case, if you access a shared resource from your Pyro object, you may need to take thread locking measures such as using Queues.
- multiplexed server (servertype
"multiplex")
- This server uses a connection multiplexer to process all remote method calls sequentially. No threads are used in this server. It uses the best supported selector available on your platform (kqueue, poll, select). It means only one method call is running at a time, so if it takes a while to complete, all other calls are waiting for their turn (even when they are from different proxies). The instance mode used for registering your class, won’t change the way the concurrent access to the instance is done: in all cases, there is only one call active at all times. Your objects will never be called concurrently from different threads, because there are no threads. It does still affect when and how often Pyro creates an instance of your class.
Note
If the
ONEWAY_THREADED config item is enabled (it is by default), oneway method calls will
be executed in a separate worker thread, regardless of the server type you’re using.
When to choose which server type? With the threadpool server at least you have a chance to achieve concurrency, and you don’t have to worry much about blocking I/O in your remote calls. The usual trouble with using threads in Python still applies though: Python threads don’t run concurrently unless they release the GIL. If they don’t, you will still hang your server process. For instance if a particular piece of your code doesn’t release the GIL during a longer computation, the other threads will remain asleep waiting to acquire the GIL. One of these threads may be the Pyro server loop and then your whole Pyro server will become unresponsive. Doing I/O usually means the GIL is released. Some C extension modules also release it when doing their work. So, depending on your situation, not all hope is lost.
With the multiplexed server you don’t have threading problems: everything runs in a single main thread. This means your requests are processed sequentially, but it’s easier to make the Pyro server unresponsive. Any operation that uses blocking I/O or a long-running computation will block all remote calls until it has completed.
Serialization¶
Pyro will serialize the objects that you pass to the remote methods, so they can be sent across a network connection. Depending on the serializer that is being used for your Pyro server, there will be some limitations on what objects you can use, and what serialization format is required of the clients that connect to your server.
You specify one or more serializers that are accepted in the daemon/server by setting the
SERIALIZERS_ACCEPTED config item. This is a set of serializer names
that are allowed to be used with your server. It defaults to the set of ‘safe’ serializers.
A client that successfully talks to your server will get responses using the same
serializer as the one used to send requests to the server.
If your server also uses Pyro client code/proxies, you might also need to
select the serializer for these by setting the
SERIALIZER config item.
See the Configuring Pyro chapter for details about the config items. See Serialization for more details about serialization, the new config items, and how to deal with existing code that relies on pickle.
Note
Since Pyro 4.20 the default serializer is “
serpent”. It used to be “
pickle” in older versions.
The default set of accepted serializers in the server is the set of ‘safe’ serializers,
so “
pickle” and “
dill” are not among the default.
Other features¶
Attributes added to Pyro objects¶
The following attributes will be added to your object if you register it as a Pyro object:
_pyroId- the unique id of this object (a
str)
_pyroDaemon- a reference to the
Pyro4.core.Daemonobject that contains this object
Even though they start with an underscore (and are private, in a way), you can use them as you so desire. As long as you don’t modify them! The daemon reference for instance is useful to register newly created objects with, to avoid the need of storing a global daemon object somewhere.
These attributes will be removed again once you unregister the object.
Network adapter binding and localhost¶
All Pyro daemons bind on localhost by default. This is because of security reasons. This means only processes on the same machine have access to your Pyro objects. If you want to make them available for remote machines, you’ll have to tell Pyro on what network interface address it must bind the daemon. This also extends to the built in servers such as the name server.
There are a few ways to tell Pyro what network address it needs to use.
You can set a global config item
HOST, or pass a
host parameter to the constructor of a Daemon,
or use a command line argument if you’re dealing with the name server.
For more details, refer to the chapters in this manual about the relevant Pyro components.
Pyro provides a couple of utility functions to help you with finding the appropriate IP address to bind your servers on if you want to make them publicly accessible:
Cleaning up / disconnecting stale client connections¶
A client proxy will keep a connection open even if it is rarely used. It’s good practice for the clients to take this in consideration and release the proxy. But the server can’t enforce this, some clients may keep a connection open for a long time. Unfortunately it’s hard to tell when a client connection has become stale (unused). Pyro’s default behavior is to accept this fact and not kill the connection. This does mean however that many stale client connections will eventually block the server’s resources, for instance all workers threads in the threadpool server.
There’s a simple possible solution to this, which is to specify a communication timeout on your server. For more information about this, read ‘After X simultaneous proxy connections, Pyro seems to freeze!’ Fix: Release your proxies when you can..
Daemon Pyro interface¶
A rather interesting aspect of Pyro’s Daemon is that it (partly) is a Pyro object itself.
This means it exposes a couple of remote methods that you can also invoke yourself if you want.
The object exposed is
Pyro4.core.DaemonObject (as you can see it is a bit limited still).
You access this object by creating a proxy for the
"Pyro.Daemon" object. That is a reserved
object name. You can use it directly but it is preferable to use the constant
Pyro4.constants.DAEMON_NAME. An example follows that accesses the daemon object from a running name server:
>>> import Pyro4 >>> daemon=Pyro4.Proxy("PYRO:"+Pyro4.constants.DAEMON_NAME+"@localhost:9090") >>> daemon.ping() >>> daemon.registered() ['Pyro.NameServer', 'Pyro.Daemon'] | http://pythonhosted.org/Pyro4/servercode.html | CC-MAIN-2017-39 | refinedweb | 5,341 | 53.61 |
*
Coupling
Sahil Kapoor
Ranch Hand
Joined: Sep 12, 2009
Posts: 316
posted
Jul 04, 2010 23:06:35
0
Following is a code snippet from Cert pal
Q6:- How do you improve coupling between these classes ?
class Duck { int z; public int quacker (Main m) { z=m.getX(); return m.x; } } public class Main extends Duck { int x=5; public static void main(String... args) { int y = new Duck().quacker(new Main()); } }
Answers are :-
1) mark variable y private // No problem in this
2) Dont pass main object // Problem ???
and why not the answer is
Mark the variable z private.
SCJP 6.0 96%
(Connecting the Dots ....)
Prasad Kharkar
Ranch Hand
Joined: Mar 07, 2010
Posts: 446
1
I like...
posted
Jul 04, 2010 23:25:10
0
passing the main object is allowing the quacker method in Duck class to change the variable values
Improving coupling means we should reduce the class to class interaction
this is required because of following reasons
Suppose some programmer has written Duck class and another has written the Main class
now the programmer of the Main class wants to write something very very important to the class and does not want it to give it to another classes
but wait here
we are passing the object of the main class to the Duck class quacker method which can modify the Main class which is not desired at all
so we should not pass the Main type parameter to the quacker method
I hope this helps a little
and yes
about z
no need to make it private because we are not accessing it from some other class (in our case, the main class)
Happy preparation
SCJP 6 [86%], OCPWCD [84%], OCEJPAD [83%]
If you find any post useful, click the "plus one" sign on the right
Bob Ruth
Ranch Hand
Joined: Jun 04, 2007
Posts: 320
posted
Jul 05, 2010 10:22:06
0
Just a little "point of view" ( I had a hard time with the idea of coupling early on...)
changing the variable from default to private is more about encapsulation than coupling because another class in the same package can have direct access to the variable and THAT is, as I say, an encapsulation problem!
The way you might want to think about the second issue: the quacker() method in class Duck has the following signature:
public int quacker(Main m);
so you can't ever pass it anything BUT a Main. This is a) inflexible and b) it makes Duck class highly dependent on the existence of the Main class.
Now, how do you decrease that coupling? That is where the magic of the "interface" comes in to play.
What if you created and interface called "Avian" and, in that interface you specified the signatures of some behaviors (like flaps, flies, chirps.... things that all avians do) and then made Duck implement Avian. But, at the same time, you could make a Sparrow that implements Avian, an Ostrich, a seagull, a parakeet..... whatever you want.
Now, alter quacker to the following signature:
public int quacker(Avian a);
Now, quacker no longer has to have a "concrete class" Main object.... it can accept any implementer of Avian. This makes quacker() dependent on the interface type NOT the concrete type Main. That makes the code much more flexible.
I realize that this might be a little confusing because your example class IS Duck and so you might think that...."why would I want anything BUT a duck?" But try to separate this specific example from the issue of coupling that we are discussing.
If I had a method that was designed to implement a logging class called PrintLogger. Let's say that I have another class LogSetup that has a method that is designed to set a log class and it has a signature like this:
PrintLogger printLog; printLog = setLogger(PrintLogger pl) {
The only thing that you can ever pass to setLogger() is a PrintLogger.
Let's change that a little...
Let's design an interface called Loggable. It might have method signatures like logFine(), logInfo(), logFatal()...just to make an example.
Then we change PrintLogger class to implement Loggable
public class PrintLogger implements Loggable { ....... class coding ......}
now, you can change LogSetup to look like this:
Loggable printLog; // <----------- interface type rather than concrete type printLog = setLogger(Loggable pl) { <------------ argument is the interface type Loggable
now, you can implement a FileLogger, a NetworkLogger, a TapeDriveLogger, a ScreenLogger that all implement the Loggable
interface and you can pass any of them to the setLogger method, not JUST the PrintLogger.
This is much more flexible design and you can, if you care to add and remove logger types without breaking the LogSetup code.
So, cutting it down to the essentials, any time you specify the use of a concrete class and pass concrete classes around it increases coupling.
When you design interface types, then design concrete classes that implement the interfaces, then have your code specify and pass by interface
type rather than concrete type, this reduces coupling.
I hope that this will give you some help in understanding.
------------------------
Bob
SCJP - 86% - June 11, 2009
Sahil Kapoor
Ranch Hand
Joined: Sep 12, 2009
Posts: 316
posted
Jul 07, 2010 07:21:22
0
@ Prasad Thankyou so much
@ Bob It certainly added new things.....Thanks for you stupendous explanation !!!
I agree. Here's the link:
subject: Coupling
Similar Threads
main as a static method
Inheritance confusion
Private variables in superclass accessed by subclass
enclosing class can access its inner class private members??
inner classses
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/501555/java-programmer-SCJP/certification/Coupling | CC-MAIN-2014-52 | refinedweb | 946 | 66.27 |
new to Jython -- though I've been coding in Python for about 5 years --
and would like to use it to build some GUI applications for my users to
access a Microsoft SQL-Server (we're running Windows XP on the desktops). I
cannot find an example of how to connect via the zxJDBC libraries. Is this
how I *should* be trying to connect and, if it is, can you show me "what
button to push". I am using the ODBC Data Sources via the control panel, so
I will have to connect hitting that.
If there is an easier way to do it, I'm certainly willing to learn!
Thanks!
--greg
-----------------------------------------------------------------------------
This message was sent using Conway Corporation WebMail --
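Note that the replies below use plain JDBC rather than zxJDBC; for reference, a minimal zxJDBC connection through the JDBC-ODBC bridge (the DSN name, user and password here are placeholders) looks roughly like this:

# Jython only: zxJDBC wraps JDBC in a Python DB-API 2.0 interface.
from com.ziclix.python.sql import zxJDBC

url    = "jdbc:odbc:MyDSN"                  # DSN defined in the ODBC control panel
driver = "sun.jdbc.odbc.JdbcOdbcDriver"     # JDBC-ODBC bridge driver

conn = zxJDBC.connect(url, "username", "password", driver)
cursor = conn.cursor()
cursor.execute("SELECT TOP 5 * FROM sometable")
for row in cursor.fetchall():
    print row
cursor.close()
conn.close()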
Here are two snippets (not related) that show standard JDBC stuff I use.
from java.lang import Class
from java.sql import *
#jdbc:mysql://[host:port],[host:port].../[database]
# [?propertyName1][=propertyValue1]
# [&propertyName2][=propertyValue2]...
# [?user=monty&password=greatsqldb]
DRIVER = "com.mysql.jdbc.Driver"
DBURL = "jdbc:mysql://";
class JdbcTools:
    def __init__(self, server=""):
        self.Connection = None
        self.Server = server
        Class.forName(DRIVER).newInstance();

    def getConnection(self, database="", user=None, password=None):
        url = ""+DBURL
        url += self.Server +"/"+database
        if user:
            url += "?user="+user+"&password="+password
        connection = DriverManager.getConnection(url)
        return connection

import string  # needed for string.lower below

class DataRecord:
    def __init__ ( self, rs ):  # std constructor __init__(self, ...
        meta = rs.getMetaData ( )
        for col in range ( 1, meta.getColumnCount( ) + 1):
            name = meta.getColumnName ( col )
            setattr(self, string.lower ( name ), rs.getString ( col ) )

S = DriverManager.getConnection(DbUrl, user, pword).createStatement()
rs = S.executeQuery("SELECT * FROM sometable")
while ( rs.next( ) ):
    record = DataRecord ( rs )
    print record.id, record.fname
-----Original Message-----
From: chai ang [mailto:chai.ang@...]
Sent: Wednesday, September 08, 2004 7:10 PM
To: jython-users@...
Cc: catcher@...
Subject: [Jython-users] Re: Oracle examples
Importance: Low
On Wed, 25 Aug 2004 22:44:42 -0400, Robert
<catcher@...> wrote:
> Are there any examples of using Jython to connect to an
> Oracle (or any
> database) using JDBC?
In case this might be of use....
Of course, in this case u'd need the jdbc thin
driver. I think u can get one from Oracle?
Regards,
Chai
ps
is there any policy against HTML mail? There should be.
#--- cut here
from java.lang import Class
from java.sql import Connection
from java.sql import Driver
from java.sql import DriverManager
Class.forName( "oracle.jdbc.driver.OracleDriver" )
url = "jdbc:oracle:thin:@yourDbServerName:1521:yourDbName"
user = "yourUserName" # f***, not case sensitive
passw = "yourPassword"
con = DriverManager.getConnection(url, user,passw)
stt = con.createStatement()
query = "SELECT column1 FROM myTable"
print "Exec'ing query", query
rs = stt.executeQuery(query)
while rs.next() == 1 :
    print rs.getString(1)
Jython-users@... | https://sourceforge.net/p/jython/mailman/jython-users/?viewmonth=200409&viewday=24&style=flat | CC-MAIN-2018-26 | refinedweb | 443 | 54.9 |
/* ** (c) COPYRIGHT MIT 1995. ** Please first read the full copyright statement in the file COPYRIGH. */
This is the include file for the directory handling modules for parsing and presenting directory listings - often in the form of a HTML document.
#ifndef WWWDIR_H
#define WWWDIR_H

The directory manager generates directory listings for FTP and HTTP requests. This module contains the protocol independent code and it produces the HTML object. It is only included if either the FTP or the File module is included.
#include "HTDir.h"
No directory listings without icons! The WWWDir interface contains support for including references (URLs and ALT text tags) to icons in directory listings. The icons are selected as a function of the media type and the content encoding of the file in question. That is - you can set up icons for compressed files, postscript files etc. There is also a small set of specific icons representing directories etc.
Note: Icons are not set up by default! You must enable them yourself.
The Library distribution contains a small
set of default icons which you can find at
$(datadir)/www-icons, and they can be set up using the
HTIconInit() initialization function in the
WWWInit startup interface
#include "HTIcons.h"
Descriptions appearing in directory listings are produced by this module. This may be overridden by another module for those who wish descriptions to come from somewhere else. It's only HTTP directory listings that contain a description field (if enabled by the Directory browsing module).
#include "HTDescpt.h"
End of DIR module
#ifdef __cplusplus
} /* end extern C definitions */
#endif

#endif /* WWWDIR_H */ | http://www.w3.org/Library/src/WWWDir.html | CC-MAIN-2015-22 | refinedweb | 261 | 57.06
I'm using the following to count the number of files in a directory, and its subdirectories:
find . -type f | wc -l
But I have half a million files in there, and the count takes a long time.
Is there a faster way to get a count of the number of files, that doesn't involve piping a huge amount of text to something that counts lines? It seems like an inefficient way to do things.
If you have this on a dedicated file-system, or you have a steady number of files overhead, you may be able to get a rough enough count of the number of files by looking at the number of inodes in the file-system via "df -i":
root@dhcp18:~# df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda1 60489728 75885 60413843 1% /
On my test box above I have 75,885 inodes allocated. However, these inodes are not just files, they are also directories. For example:
root@dhcp18:~# mkdir /tmp/foo
root@dhcp18:~# df -i /tmp
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda1 60489728 75886 60413842 1% /
root@dhcp18:~# touch /tmp/bar
root@dhcp18:~# df -i /tmp
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda1 60489728 75887 60413841 1% /
NOTE: Not all file-systems maintain inode counts the same way. ext2/3/4 will all work, however btrfs always reports 0.
If you have to differentiate files from directories, you're going to have to walk the file-system and "stat" each one to see if it's a file, directory, sym-link, etc... The biggest issue here is not the piping of all the text to "wc", but seeking around among all the inodes and directory entries to put that data together.
Other than the inode table as shown by "df -i", there really is no database of how many files there are under a given directory. However, if this information is important to you, you could create and maintain such a database by having your programs increment a number when they create a file in this directory and decrement it when deleted. If you don't control the programs that create them, this isn't an option.
I would also try:
find topDir -maxdepth 3 -printf '%h %f\n'
And then process the output, reducing into a count for the directories.
This is especially useful if you anticipate the directory structure.
Try this handy little Python script to see if it's any faster.
from os import walk
print sum([len(files) for (root, dirs, files) in walk('/some/path')])
Andrew
2 years ago | http://serverfault.com/questions/205071/fast-way-to-recursively-count-files-in-linux/205135 | CC-MAIN-2015-06 | refinedweb | 476 | 63.32 |
Raven - Running in Debug mode
Raven can be deployed in several ways, but the simplest method is to simply start the server located in the release zip under "/Server/Raven.Server.exe". That will start the server as a console application, which displays the server log. The server logs all requests, including status and duration. You can use that to get an idea how fast Raven really is.
- Close the server by hitting Enter on the console.
- If you want to clear the log and keep the server running, you can type "cls" and then enter.
Debug Configuration
You can set the following configuration options in the Raven.Server.exe.config file's appSettings section (a sample section follows the list):
- Raven/DataDir - The physical location for the Raven data directory.
- Raven/AnonymousAccess - What access rights anonymous users have. The default is Get (anonymous users can only read data). The other options are None and All.
- Raven/Port - The port that Raven will listen to. The default is 8080.
- Raven/VirtualDirectory - The virtual directory that Raven will listen to. The default is empty.
- Raven/PluginsDirectory - The plugin directory for extending Raven. The default is a directory named "Plugins" under Raven base directory.
- Raven/MaxPageSize - The maximum number of results a Raven query can return (overrides any page size set by the client). The default is 1024.
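For illustration, an appSettings section using some of these keys could look like the following (the values shown are placeholders only):

<configuration>
  <appSettings>
    <add key="Raven/DataDir" value="~\Data" />
    <add key="Raven/AnonymousAccess" value="Get" />
    <add key="Raven/Port" value="8080" />
  </appSettings>
</configuration>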
Running in this configuration is useful when you just want to try things out; for permanent production deployment, it is recommended to use the Service or IIS modes. | https://ravendb.net/docs/article-page/2.0/python/server/deployment/docs-deploy-debug | CC-MAIN-2022-27 | refinedweb | 248 | 59.5
Spin Concepts and Terminology (Rancher 2)¶
Container image: Blueprint for how a container is created and started. Similar to a tarball with some other important metadata added.
Container: Running instance of an image with a private process and storage space. Similar to a regular process - especially a jailed one - and can have child processes. Containers are ephemeral; when the process exits, the container no longer exists.
Image Registry: Versioned repository for container images. Organized into namespaces (like directories). Labels are typically applied to images for version control. For example,
registry.nersc.gov/myproject/myimage:mylabel
Pod: One or more very-closely-coupled containers. Pods allow for scaling; for example, a web front-end that is heavily used might be deployed as a pod with a scale of three, meaning it is configured to run three identical containers based on the same image, and load distributed across them.
Workload: Set of parameters and rules that define how to create and run a particular pod. Includes the image, scale, settings such as environment variables, storage volumes, etc.
Deploy: Create a workload.
Ingress: Proxy that allows a workload to be accessible on the network using a DNS name. The ingress in Spin uses the nginx web server software internally.
Namespace: Group of workloads. Typically used to group all of the workloads that make up a particular system, but can also be used in other ways; for example, to group all of the workloads that belong to a particular user.
Project: Group of workloads, namespaces, ingresses, etc. In Spin, these correspond to NERSC projects. Used for access control; all objects in a project are accessible only to members of the project.
Kubernetes: Container scheduling system. Responsible for running all of the above on cluster nodes.
Rancher: Orchestration system for Kubernetes clusters. Responsible for managing the Kubernetes configuration and installation. Provides an overarching web user interface, CLI, and internal API; provides authentication and access control to clusters and projects. | https://docs.nersc.gov/services/spin/rancher2/concepts/ | CC-MAIN-2021-17 | refinedweb | 324 | 50.63 |
Difference between revisions of "Janitorial tasks"
Revision as of 16:30, 3 October 2011
Before embarking on any of these tasks, be sure to contact the developers on the mailing list.
- There is a Python script that can help with this kind of replacement. Have a look at it: ReplacementScript and/or references to it:
//#include "foo.h"  /* <-- kill the header include! */
class Foo;           /* <-- replace with a forward declaration */

class Bar {
    Foo* _foo;
};
There are also forward declaration headers that should be used in preference to handcoded forward declarations:
- XML namespace - #include "xml/xml-forward.h"
- Extension namespace - #include "extension/extension-forward.h"
- SPObject related things - #include "forward.h"
- Every source file should have the following Emacs local variable block and Vim modeline at the end:
/* :
- Include style. In-tree includes should use quotes, while system headers should use angle brackets. An exception is 2Geom, which should use angle brackets, though it is local (we are preparing for it to become a standalone library one day). Includes in each group should be sorted alphabetically. The path should be relative to the src/ directory. If there is a config.h include, it should go at the top. Here is an example:
#ifdef HAVE_CONFIG_H
# include "config.h"
#endif

#include <cairo.h>
#include <cstdio>
#include <glib.h>
#include <math.h>

#include "display/cairo-utils.h"
#include "document.h"
#include "sp-use.h"
#include "xml/node.h"
- Order of the file. Each file should contain the following, in precisely that order:
- Include guard (only for header files)
- System includes
- Local includes
- Forward declarations (note: although using an explicit forward declaration file used to be recommended, it no longer is. Using one had been found to be inefficient and error-prone.)
Coding style
- Replace C-style casts with the appropriate C++ casts (a short example follows this list). You can compile with -Wold-style-cast to find them easily.
- static_cast when the conversion is obvious, for example a floating point to integer type.
- const_cast if the only difference between the types are const qualifiers.
- reinterpret_cast if the conversion does not compile with static_cast, for example pointer to integer.
- dynamic_cast for downcasting to derived class type.
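A contrived example (not actual Inkscape code) showing each of the four casts:

#include <cstdint>

struct Shape { virtual ~Shape() {} };
struct Circle : public Shape { double radius; };

void cast_examples(Shape *s, double ratio, const char *label) {
    int whole = static_cast<int>(ratio);                   // obvious numeric conversion
    char *editable = const_cast<char *>(label);            // only const-ness differs
    std::uintptr_t addr = reinterpret_cast<std::uintptr_t>(s); // pointer-to-integer reinterpretation
    Circle *c = dynamic_cast<Circle *>(s);                 // checked downcast to a derived type
    if (c) c->radius = ratio;
    (void)whole; (void)editable; (void)addr;
}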
Elimination of old utest tests
It should be double-checked that the old utest tests (also see TestSuite-blueprint) are indeed all obsolete. If there happen to be any left that are not obsolete they should of course be converted to the CxxTest framework (if you don't feel up to it, ask me). Finally, the obsolete files should be removed from the repository and Makefiles, making sure that nothing breaks in the process. | https://wiki.inkscape.org/wiki/index.php?title=Janitorial_tasks&diff=prev&oldid=72895 | CC-MAIN-2020-05 | refinedweb | 429 | 59.6
Let's make a shit javascript interpreter! Part two.
As a learning exercise, I've begun writing a shit JavaScript interpreter.
Homework from part One - A simple tokeniser.
We ended "Let's make a shit JavaScript interpreter! Part One." by setting some homework to create a tokeniser for simple expressions like "1 + 2 * 4". Two readers sent in their versions of the tokenisers (ps, put a link to your homework results from Part One in the comments, and I'll link to it here).
Our simple tokeniser
operators = ['/', '+', '*', '-']

class ParseError(Exception):
    pass

def is_operator(s):
    return s in operators

def is_number(s):
    return s.isdigit()

def tokenise(expression):
    pos = 0
    for s in expression.split():
        t = {}
        if is_operator(s):
            t['type'] = 'operator'
            t['value'] = s
        elif is_number(s):
            t['type'] = 'number'
            t['value'] = float(s)
        else:
            raise ParseError(s, pos)
        t.update({'from': pos, 'to': pos + len(s)})
        pos += len(s) + 1
        yield t
>>> pprint(list(tokenise("1 + 2 * 4")))
[{'from': 0, 'to': 1, 'type': 'number', 'value': 1.0},
{'from': 2, 'to': 3, 'type': 'operator', 'value': '+'},
{'from': 4, 'to': 5, 'type': 'number', 'value': 2.0},
{'from': 6, 'to': 7, 'type': 'operator', 'value': '*'},
{'from': 8, 'to': 9, 'type': 'number', 'value': 4.0}]
Code for shitjs.
You can follow along with the code for shitjs at:
- pip install shitjs
- hg clone
What next? Parsing with the tokens of our simple expression.
Since we have some tokens from the input, we can now move onto the parser. Remember that we are not making a parser for all of JavaScript to start with; we are starting on simple expressions like "1 + 2 * 4". As mentioned in Part One, we are using an algorithm called "Top Down Operator Precedence", where actions are associated with tokens, along with an order of operations. Here you can see the precedence rule (order of operations): parentheses around the (1 + 2) addition change the result.
>>> 1 + 2 * 4
9
>>> (1 + 2) * 4
12
A number is supplied for the left, and the right of each token. These numbers are used to figure out which order the operators are applied to each other. So we take our token structure from tokenise() above, and we create some Token objects from them, and depending on their binding powers evaluate them.
What's new is old is new.
The "Top Down Operator precedence" paper is from the 70's. In the 70's lisp programmers loved to use three letter variable names, and therefore the algorithm and the variable names are three letter ones. They also wore flares in the 70's (which are back in this season) and I'm not wearing them, and I'm not using three letter variable names!
Sorry, I digress... So we call 'nud' prefix, and 'led' infix. We also call rbp right_binding_power, and lbp left_binding_power.
nud - prefix
led - infix
rbp - right_binding_power
lbp - left_binding_power
Prefix is to the left, and infix is to the right.
Manually stepping through the algorithm.
Let's manually step through the algorithm for the simple expression "1 + 2 * 4".
>>> pprint(list(tokenise("1 + 2 * 4")))
[{'from': 0, 'to': 1, 'type': 'number', 'value': 1.0},
{'from': 2, 'to': 3, 'type': 'operator', 'value': '+'},
{'from': 4, 'to': 5, 'type': 'number', 'value': 2.0},
{'from': 6, 'to': 7, 'type': 'operator', 'value': '*'},
{'from': 8, 'to': 9, 'type': 'number', 'value': 4.0}]
Let's give left binding powers to each of the token types.
- number - 0
- + operator - 10
- * operator - 20
('token', Literal({'to': 1, 'type': 'number', 'value': 1.0, 'from': 0}))
('expression right_binding_power: ', 0)
('token', OperatorAdd({'to': 3, 'type': 'operator', 'value': '+', 'from': 2}))
('left from prefix of first token', 1.0)
('token', Literal({'to': 5, 'type': 'number', 'value': 2.0, 'from': 4}))
('expression right_binding_power: ', 10)
('token', OperatorMul({'to': 7, 'type': 'operator', 'value': '*', 'from': 6}))
('left from prefix of first token', 2.0)
('token', Literal({'to': 9, 'type': 'number', 'value': 4.0, 'from': 8}))
('expression right_binding_power: ', 20)
('token', End({}))
('left from prefix of first token', 4.0)
('leaving expression with left:', 4.0)
('left from previous_token.infix(left)', 8.0)
right_binding_power:10: token.left_binding_power:0:
('leaving expression with left:', 8.0)
('left from previous_token.infix(left)', 9.0)
right_binding_power:0: token.left_binding_power:0:
('leaving expression with left:', 9.0)
You can see that it is a recursive algorithm. Each indentation is where it is entering a new expression.
Also, see how it manages to use the binding powers to make sure that the multiplication of 2 and 4 is done first before the addition. Otherwise the answer might be (1 + 2) * 4 == 12! Not the correct answer 9 that it gives at the end.
The operations are ordered this way because the + operator has a lower left binding power than the * operator.
You should also be able to see from that trace that a tokens infix and prefix operators are used. The OperatorAdd for example, takes what is on the left and adds it to the expression of what is on the right with it's prefix operator.
Here is an example Operator with prefix(left) and infix(right) methods.
class OperatorAdd(Token):
    left_binding_power = 10

    def prefix(self):
        return self.context.expression(100)

    def infix(self, left):
        return left + self.context.expression(self.left_binding_power)
Pretty simple right? You can see the infix method takes the value in from the left, and adds it to the expression of what comes on the right.
Exercises for next time
Make this work:
>>> evaluate("1 + 2 * 4")
9.0
(ps... if you want to cheat the code repository has my part 2 solution in it if you want to see. The code is very short).
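If you would rather not peek, here is one possible shape for the parser side. It is only a sketch (not necessarily the same as the code in the repository), and it reuses the tokenise() function from the top of this post:

class Token(object):
    left_binding_power = 0
    def __init__(self, context, value=None):
        self.context = context   # the parser, so tokens can call expression()
        self.value = value
    def prefix(self):
        raise SyntaxError("unexpected token")
    def infix(self, left):
        raise SyntaxError("unexpected token")

class Literal(Token):
    def prefix(self):
        return self.value

class End(Token):
    pass

class OperatorAdd(Token):
    left_binding_power = 10
    def prefix(self):                     # unary plus
        return self.context.expression(100)
    def infix(self, left):
        return left + self.context.expression(self.left_binding_power)

class OperatorMul(Token):
    left_binding_power = 20
    def infix(self, left):
        return left * self.context.expression(self.left_binding_power)

# '-' and '/' are left for you to fill in, the same way as '+' and '*'.
operator_classes = {'+': OperatorAdd, '*': OperatorMul}

class Parser(object):
    def __init__(self, tokens):
        self.tokens = iter(tokens)
        self.token = None
        self.advance()

    def advance(self):
        # Make the next token current, returning the previous one.
        previous = self.token
        for t in self.tokens:
            if t['type'] == 'number':
                self.token = Literal(self, t['value'])
            else:
                self.token = operator_classes[t['value']](self)
            return previous
        self.token = End(self)
        return previous

    def expression(self, right_binding_power=0):
        previous = self.advance()
        left = previous.prefix()
        while right_binding_power < self.token.left_binding_power:
            previous = self.advance()
            left = previous.infix(left)
        return left

def evaluate(source):
    return Parser(tokenise(source)).expression()

Calling evaluate("1 + 2 * 4") then gives 9.0, matching the trace above.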
Until next time...
Further reading (for the train, or the bathtub).
Below is some further reading about parsers for JavaScript, and parsers in python. | http://renesd.blogspot.com/2010/07/lets-make-shit-javascript-interpreter.html | CC-MAIN-2015-18 | refinedweb | 954 | 67.04 |
Ruby: A lot of 1 Liners
May 14, 2009
Ruby: Just rescue…
May 13, 2009
Mysql: benchmarking many writes
April 28, 2009
Purpose:
- Benchmarking Mysql writes to be compared with key-value databases writes.
Basic info:
- 2.53 G Core 2 Duo Mac Book Pro
- 4 GB RAM
- Ruby client
Results for 100,000 rows with 16 char length value:
Write 100000 rows with string-length: 16
Thread ID: 659670
Total: 9.095328

 %self    total     self     wait    child     calls  name
 66.58     6.06     6.06     0.00     0.00    100000  Mysql#query (ruby_runtime:0}
 20.30     9.10     1.85     0.00     7.25         1  Integer#times (ruby_runtime:0}
 13.11     1.19     1.19     0.00     0.00    100000  Object#insert_statement (/Users/didip/projects/ruby/mysql-profile/write_profile.rb:27}
  0.00     9.10     0.00     0.00     9.10         1  Object#write_many_profile (/Users/didip/projects/ruby/mysql-profile/write_profile.rb:37}
Results for 1,000,000 rows with 16 char length value:
Write 1000000 rows with string-length: 16
Thread ID: 659670
Total: 88.175784

 %self    total     self     wait    child      calls  name
 66.33    58.49    58.49     0.00     0.00    1000000  Mysql#query (ruby_runtime:0}
 20.48    88.18    18.06     0.00    70.12          1  Integer#times (ruby_runtime:0}
 13.19    11.63    11.63     0.00     0.00    1000000  Object#insert_statement (/Users/didip/projects/ruby/mysql-profile/write_profile.rb:27}
  0.00    88.18     0.00     0.00    88.18          1  Object#write_many_profile (/Users/didip/projects/ruby/mysql-profile/write_profile.rb:37}
Even Matz love Python
March 27, 2009
Matz love Python:
Ruby on Rails: YellowPages.com
March 8, 2009
Don’t know if many readers know this, but yellowpages.com uses Ruby on Rails.
Below is the slide presentation by one of their developers.
Ruby’s Time.now
December 2, 2008
I really miss Ruby’s Time.now.to_i, too bad Python does not have something as convenient.
So I created one:
import datetime
import time

class TimeUtil():
    @classmethod
    def to_i(cls, t=None):
        if t is None:
            t = datetime.datetime.utcnow()
        return time.mktime(t.timetuple())
99 Bottles of Beer
August 20, 2008
99 Bottles of Beer site is an archive of 1 program, re-implemented in 1214 languages. The program is to sing 99 bottles of beer.
I’ll give you guys some of the popular ones:
References: | http://rapd.wordpress.com/category/ruby/ | CC-MAIN-2014-15 | refinedweb | 420 | 67.35 |
I'd tried using iTerm2 beta Build 1.0.0.20120724 and using
bind C-y run-shell "reattach-to-user-namespace -l zsh -c 'tmux show-buffer | pbcopy'" in .tmux.conf but none works.
The solutions I found aren't specific about the system and conditions. Hence I hope the problem statement here is clear ie. Copying text from remote to OS X clipboard via iTerm2 with Tmux.
Problem:
Copy text output from cat of a log file that's longer than a screen.
Copy text from vertically* split screen (left and right panes) without copying the text from the other pane.
*not sure if it should be called vertically or horizontally split.
Copy text through Vim that's longer than a screen.
I'm aware of holding alt while clicking and drag to select the text. But the problem arises when you need to scroll, or are working in more than 1 pane.
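One commonly suggested setup (assuming reattach-to-user-namespace is installed, for example via Homebrew) is to wrap the default shell and pipe the tmux buffer through pbcopy in ~/.tmux.conf:

# ~/.tmux.conf: make pbcopy/pbpaste work inside tmux on OS X
set-option -g default-command "reattach-to-user-namespace -l zsh"

# prefix + y copies the current tmux buffer to the OS X clipboard
bind-key y run-shell "tmux save-buffer - | reattach-to-user-namespace pbcopy"

With default-command set like this, plain pbcopy also works in pipes inside tmux panes.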
tmux on the remote side of an ssh connection? – chepner Feb 13 '13 at 15:25
save-buffer command. – Paulo Almeida Jul 30 '13 at 1:49 | https://superuser.com/questions/542666/copy-text-from-remote-via-iterm2-with-tmux-to-os-x-clipboard | CC-MAIN-2021-21 | refinedweb | 179 | 77.53
Modules
Tabris.js uses the same basic module system as Node.js, also known as the “CommonJS” module system, a detailed explanation in the Node.js docs.
The Node.js implementation is the standard that Tabris.js follows and aims to be compatible with.
Virtual Modules
A module is usually define by a file, but it is also possible to define a module at runtime at any path. This is done using
Module.define:
const {Module} = require('tabris'); Module.define('/src/my/virtual/module', myExports);
It could then be imported - for example from file
/src/my/real/module.js - like this:
const myExports = require('../virtual/module');
Of course you can also define virtual modules in
node_modules to make them easily importable from anywhere in your application code.
If you use this feature in a TypeScript based project the virtual modules can be declared in a separate
d.ts file. The ES6 module system and different output folder also need to be considered:
import {Module} from 'tabris'; Module.define('/dist/my/virtual/module', {default: myDefaultExport});
Import from
/src/my/real/module.ts:
import myDefaultExport from '../virtual/module';
Make sure the module that defines the virtual module is always parsed first. Assuming this is done is done in
src/virtualModules.ts, your main module may begin like this:
import './virtualModules'; import './my/real/module';
Module Mapping
Since modules (except npm modules) are always imported using relative paths, any project with sufficiently deeply nested directories may end up with code like this:
import {texts} from '../../../resources'; import MainPage from '../../../ui/pages/MainPage';
One way to deal with this problem is module mapping. Using the
Module.addPath method any non-relative import can be mapped to any module path within the project. Create a new module
'src/moduleMapper.ts' (or
.js):
import {Module} from 'tabris'; Module.addPath('resources', ['./dist/resource']); Module.addPath('pages/*', ['./dist/ui/pages/*']);
Or for a non-compiled project:
const {Module} = require('tabris'); Module.addPath('resources', ['./src/resource']); Module.addPath('pages/*', ['./src/ui/pages/*']);
In your entry-point module, load this module before any other. Now your imports can always look like this:
import {texts} from 'resources'; import MainPage from 'pages/MainPage';
This only affects application modules, import from npm modules (installed in
node_modules) are not affected. However, it is still possible to (accidentally or not) register a pattern that overrides a npm module for imports in application modules. It’s therefore common to choose a specific prefix for these import patterns, e.g.
@pages/*.
For a TypeScript project the compiler needs to be configured to respect this mapping. The compiler option
'paths' is supported by
addPath directly, the configuration can be read from
tsconfig.json at runtime. The
baseUrl needs to be different though, since the JavaScript files are loaded from the
dist folder at runtime. Also, no file endings (
.ts) may be used since these also change, and the replacement paths must start with “
./”.
Example
tsconfig.json:
{ "compilerOptions": { "module": "commonjs", "outDir": "dist", "baseUrl": "./", "paths": { "@resources": ["./resources"], "@pages/": ["./ui/pages/*"] } // ... other config }, "include": [ "./src/*.ts", "./src/*.tsx" ] }
Runtime configuration:
const paths = (Module.readJSON('./tsconfig.json') as any).compilerOptions.paths; Module.addPath({baseUrl: '/dist', paths}); // unlike tsconfig.json baseUrl is absolute! | http://docs.tabris.com/latest/modules.html | CC-MAIN-2022-27 | refinedweb | 530 | 51.65 |
Does anyone know of a good way to compress or decompress files and folders in C# quickly? Handling large files might be necessary.
The .Net 2.0 framework namespace System.IO.Compression supports the GZip and Deflate algorithms. Here are two methods that compress and decompress a byte stream, which you can get from your file object. You can substitute GZipStream for DeflateStream in the methods below to use that algorithm. This still leaves the problem of handling files compressed with different algorithms, though.
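For reference, a pair of such methods could look like this (a sketch, not necessarily the answerer's original code; swap DeflateStream for GZipStream to change algorithms):

using System.IO;
using System.IO.Compression;

public static class CompressionHelper
{
    public static byte[] Compress(byte[] data)
    {
        using (MemoryStream output = new MemoryStream())
        {
            using (DeflateStream deflate = new DeflateStream(output, CompressionMode.Compress))
            {
                deflate.Write(data, 0, data.Length);
            }
            // DeflateStream is flushed and closed by now, so the buffer is complete.
            return output.ToArray();
        }
    }

    public static byte[] Decompress(byte[] data)
    {
        using (MemoryStream input = new MemoryStream(data))
        using (DeflateStream deflate = new DeflateStream(input, CompressionMode.Decompress))
        using (MemoryStream output = new MemoryStream())
        {
            byte[] buffer = new byte[4096];
            int read;
            while ((read = deflate.Read(buffer, 0, buffer.Length)) > 0)
            {
                output.Write(buffer, 0, read);
            }
            return output.ToArray();
        }
    }
}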
I've always used the SharpZip Library.
As of .Net 1.1 the only available method is reaching into the java libraries.
Using the Zip Classes in the J# Class Libraries to Compress Files and Data with C#
Not sure if this has changed in recent versions.
My answer would be close your eyes and opt for DotNetZip. It's been tested by a large community.
This is very easy to do in java, and as stated above you can reach into the java.util.zip libraries from C#. For references see:
java.util.zip javadocs
sample code
I used this a while ago to do a deep (recursive) zip of a folder structure, but I don't think I ever used the unzipping. If I'm so motivated I may pull that code out and edit it into here later.
GZipStream is a really good utility to use.
You can use a 3rd-party library such as SharpZip as Tom pointed out.
Another way (without going 3rd-party) is to use the Windows Shell API. You'll need to set a reference to the Microsoft Shell Controls and Automation COM library in your C# project. Gerald Gibson has an example at:
Internet Archive's copy of the dead page
Another good alternative is also DotNetZip.
2017年02月19日49分56秒 | http://www.91r.net/ask/165.html | CC-MAIN-2017-09 | refinedweb | 308 | 75.5 |
For a positive integer n, we have to determine if n is prime, where n is small (i.e., it can be assigned to a ulong type in C#). Thus, n will be constrained to be less than 2^64-1. The algorithm presented here will make some notable improvements over the brute force method by using Wheel Factorization.
Prime numbers are interesting numbers to the math community. And, the search for prime numbers seems to be the hobby (or profession) of many mathematicians. Prime numbers are also important in the RSA encryption algorithm. There is even prize money for finding very large primes.
Our goal is to determine if a positive integer is prime or not. The brute force way would look something like this:
// Warning: Slow code ahead, do not use.
// The brute force way of determining if a number is prime.
public bool IsPrime(ulong primeSuspect)
{
    if (primeSuspect < 2) return false;
    if (primeSuspect == 2) return true;
    for (ulong divisor = 2; divisor < primeSuspect; divisor++)
    {
        // If no remainder after dividing by the divisor
        if (primeSuspect % divisor == 0)
        {
            return false; // It is not prime
        }
    }
    // If we did not find a divisor, it is prime
    return true;
}
The above code is inefficient because it often checks a number as a possible divisor that has already been eliminated because it is a multiple of a smaller number. No need to check 4 as a divisor if 2 was already checked. And, no need to check 9 as a divisor if 3 has already been checked, and so on. Another problem is that it checks all numbers up to our prime candidate as a divisor. We will fix this first.
So, our first improvement to the algorithm will be to only search for factors for our prime candidate up to the square root of our prime candidate. If there is a factor to our candidate greater than its square root, then there must also be a factor less than its square root such that when these two factors are multiplied, it gives us our original candidate.
Our next improvement will be to use a method called Wheel Factorization. If we already know all the primes less than or equal to the square root of our candidate, this would be optimal. However, searching for all these lesser primes comes at the price of more computation time, and negates much of our gains. So, I'll use Wheel Factorization to speed up the search.
In Wheel Factorization, you start with the first few primes. In this example, I will use 2, 3, and 5, the first three primes, to make it simple. (In the downloaded code, I use more than the first three primes, which will give us some more improvement.) This gives us a Wheel Factorization of 30, the product of the first three primes (2*3*5). You then make a list of the integers from 1 to 30, and eliminate all the numbers in the list that are multiples of 2, 3, or 5. This gives us this list: {1, 7, 11, 13, 17, 19, 23, 29} of sieved numbers. These sieved numbers give us a pattern of numbers that repeat and are not multiples of 2, 3, or 5. Thus, if you add 30, or 60, or 90, etc. to each of these numbers in the list, none are divisible by 2, 3, or 5. I will make a small modification to this sieved list of numbers to make the loop simpler. I will remove the 1, and add it to 30 at the tail of the list. So now, our list of numbers is {7, 11, 13, 17, 19, 23, 29, 31}. This is so I can do a pass = 0 and don't have to divide by 1.
So, here is the heart of the program (simplified for this article by using just the first three primes to create the sieve):
private static ulong[] aSieve30 = new ulong[]
    {7, 11, 13, 17, 19, 23, 29, 31};

// Find the first divisor greater than 1 of our candidatePrime.
public static ulong FirstDivisor(ulong candidatePrime)
{
    if (candidatePrime == 0)
        throw new ArgumentException ("Zero is an invalid parameter!");
    // A List of the first three primes
    List<ulong> firstPrimes =
        new List<ulong>(new ulong[] {2, 3, 5});
    WheelFactor = 30; // The product of the primes in firstPrimes
    if (candidatePrime == 1)
    {
        // 1 is not considered a prime or a composite number.
        return 0; // So return any number other than 1.
    }
    foreach (ulong prime in firstPrimes)
    {
        if (candidatePrime % prime == 0) return prime;
    }
    // No need to search beyond the square root for divisors
    ulong theSqrt = (ulong)Math.Sqrt((double)candidatePrime);
    for (ulong pass = 0; pass < theSqrt; pass += WheelFactor)
    {
        foreach (ulong sieve in aSieve30)
        {
            if (candidatePrime % (pass + sieve) == 0)
            {
                return pass + sieve;
            }
        }
    }
    // If we got this far our number is a prime
    return candidatePrime;
}

public static bool IsPrime(ulong primeSuspect)
{
    if (primeSuspect == 0) return false;
    return (FirstDivisor(primeSuspect) == primeSuspect);
}
As you can see from the for loop above, it is incremented by our WheelFactor, which is 30 in this case. And, the inner loop checks all 8 primes in our sieved list. Therefore, just 8 of 30 numbers are checked, which is more than a 73% improvement over the brute force way. Unfortunately, increasing the number of primes in our first primes list has a diminishing improvement on our search. The downloaded code uses the first 8 primes, which gives about an 83% improvement over the brute force way.
The figure illustrates a wheel factorization of 30. After performing trial divisions of 2, 3, and 5, then you only have to do trial divisions for those spokes of the wheel that are white. The spokes of the wheel that are red have been eliminated for consideration as possible divisors.
In Wheel Factorization, you get some good performance improvement in determining if a number is prime by skipping all the multiples of the first few primes.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
| http://www.codeproject.com/Articles/31085/Prime-Number-Determination-Using-Wheel-Factorizati?fid=1530801&df=90&mpp=10&noise=1&prof=False&sort=Position&view=Expanded&spc=None&fr=11 | CC-MAIN-2013-48 | refinedweb | 1,027 | 67.89
# Errors that static code analysis does not find because it is not used
Readers of our articles occasionally note that the PVS-Studio static code analyzer detects a large number of errors that are insignificant and don't affect the application. That is often true. For the most part, important bugs have already been fixed thanks to manual testing, user feedback, and other expensive methods. At the same time, many of these errors could have been found at the code writing stage and corrected with minimal loss of time, reputation and money. This article will provide several examples of real errors which could have been fixed immediately if the project authors had used static code analysis.

The idea is very simple. We'll search for examples of pull requests on GitHub that specify that an issue is a bugfix. Then we'll try to find these bugs using the PVS-Studio static code analyzer. If an error could be found by the analyzer, then it is a bug which could have been found at the code writing stage. The earlier the bug is corrected, the cheaper it costs.
Unfortunately, GitHub let us down and we didn't manage to make a big posh article on the subject. GitHub itself has a glitch (or a feature) that doesn't allow you to search the comments of pull requests only in projects written in certain programming languages. Or maybe I just don't know how to do it. Even though I specify that the search should cover comments in C, C++ and C# projects, the results are given for all languages, including PHP, Python, JavaScript, and others. As a result, looking for suitable cases proved to be extremely tedious, and I'll go with just a few examples. However, they are enough to demonstrate the usefulness of static code analysis tools when used regularly.
What if the bug had been caught at the earliest stage? The answer is simple: programmers wouldn't have to wait for it to show itself, then search and correct the defective code.
Let's look at the errors that PVS-Studio could have immediately detected:
The [first example](https://github.com/satisfactorymodding/SatisfactoryModLoader/commit/9e4b32e8f6b5c1e8d117e4cecc2586d4d2f3d8e1) is taken from the SatisfactoryModLoader project. Before fixing the error, the code looked as follows:
```
// gets an API function from the mod handler
SML_API PVOID getAPIFunction(std::string name) {
bool found = false;
for (Registry reg : modHandler.APIRegistry) {
if (reg.name == name) {
found = true;
}
}
if (!found) {
std::string msg = ...;
MessageBoxA(NULL,
msg.c_str(),
"SatisfactoryModLoader Fatal Error",
MB_ICONERROR);
abort();
}
}
```
This code contained an error that PVS-Studio would immediately issue a warning for:
[V591](https://www.viva64.com/en/w/v591/) Non-void function should return a value. ModFunctions.cpp 44
The above function has no *return* statement, so it will return a formally undefined value. The programmer didn't use the code analyzer, so he had to look for the bug on his own. The function after editing:
```
// gets an API function from the mod handler
SML_API PVOID getAPIFunction(std::string name) {
bool found = false;
PVOID func = NULL;
for (Registry reg : modHandler.APIRegistry) {
if (reg.name == name) {
func = reg.func;
found = true;
}
}
if (!found) {
std::string msg = ...;
MessageBoxA(NULL,
msg.c_str(),
"SatisfactoryModLoader Fatal Error",
MB_ICONERROR);
abort();
}
return func;
}
```
Curiously, in the commit, the author marked the bug as critical: "*fixed* ***critical bug*** *where API functions were not returned*".
In the second [commit](https://github.com/spc476/mc6809/commit/815a5577c006b2b2c812c7cff86dc72191d47003) from the mc6809 project history, edits were introduced in the following code:
```
void mc6809dis_direct(
mc6809dis__t *const dis,
mc6809__t *const cpu,
const char *const op,
const bool b16
)
{
assert(dis != NULL);
assert(op != NULL);
addr.b[MSB] = cpu->dp;
addr.b[LSB] = (*dis->read)(dis, dis->next++);
...
if (cpu != NULL)
{
...
}
}
```
The author corrected only one line. He replaced the expression
```
addr.b[MSB] = cpu->dp;
```
with the following one:
```
addr.b[MSB] = cpu != NULL ? cpu->dp : 0;
```
In the old code version there was not any check for a null pointer. If it happens so that a null pointer is passed to the *mc6809dis\_direct* function as the second argument, its dereference will occur in the body of the function. The result is [deplorable and unpredictable](https://www.viva64.com/en/b/0306/).
Null pointer dereference is one of the most common patterns we are told about: «It's not a critical bug. Who cares that it is thriving in code? If dereference occurs, the program will quietly crash and that's it.» It's strange and sad to hear this from C++ programmers, but life happens.
Anyway, in this project such dereference has turned into a bug, as the commit's subject tells us: "*Bug fix---NULL dereference*".
If the project developer had used PVS-Studio, he could have checked and found the warning two and a half months ago. This is when the bug was introduced. Here is the warning:
[V595](https://www.viva64.com/en/w/v595/) The 'cpu' pointer was utilized before it was verified against nullptr. Check lines: 1814, 1821. mc6809dis.c 1814
Thus, the bug would have been fixed at the time of its appearance, which would have saved the developer's time and nerves :).
An example of another interesting [fix](https://github.com/Forceflow/libmorton/commit/9bee7af145f24b653b8b195ffc0f4147e0268e02) was found in the libmorton project.
Code to be fixed:
```
template<typename morton>
inline bool findFirstSetBitZeroIdx(const morton x,
                                   unsigned long* firstbit_location)
{
#if _MSC_VER && !_WIN64
    // 32 BIT on 32 BIT
    if (sizeof(morton) <= 4) {
        return _BitScanReverse(firstbit_location, x) != 0;
    }
    // 64 BIT on 32 BIT
    else {
        *firstbit_location = 0;
        if (_BitScanReverse(firstbit_location, (x >> 32))) { // check first part
            firstbit_location += 32;
            return true;
        }
        return _BitScanReverse(firstbit_location, (x & 0xFFFFFFFF)) != 0;
    }
#elif _MSC_VER && _WIN64
    ....
#elif __GNUC__
    ....
#endif
}
```
In his edit, the programmer replaces the expression `firstbit_location += 32` with `*firstbit_location += 32`. The programmer expected that 32 would be added to the value of the variable referred to by the `firstbit_location` pointer, but 32 was added to the pointer itself. The changed value of the pointer wasn't used anywhere any more, and the expected variable value remained unchanged.
PVS-Studio would issue a warning to this code:
[V1001](https://www.viva64.com/en/w/v1001/) The 'firstbit_location' variable is assigned but is not used by the end of the function. morton_common.h 22
Well, what is so bad about the modified but further unused expression? The V1001 diagnostic doesn't look like it's meant for detecting particularly dangerous bugs. Despite this, it found an important error that influenced the program logic.
Moreover, it turned out that that error wasn't so easy to find! Not only has it been in the program since the file was created, but it has also experienced many edits in neighboring lines and existed in the project for as many as 3 (!) years! All this time the logic of the program was broken, and it didn't work in the way developers expected. If they had used PVS-Studio, the bug would have been detected much earlier.
In the end, let's look at another nice example. While I was collecting bug fixes on GitHub, I came across a fix with the [following content](https://github.com/torvalds/linux/commit/ca09f02f122b2ecb0f5ddfc5fd47b29ed657d4fd) several times. The fixed error was here:
```
int kvm_arch_prepare_memory_region(...)
{
...
do {
struct vm_area_struct *vma = find_vma(current->mm, hva);
hva_t vm_start, vm_end;
...
if (vma->vm_flags & VM_PFNMAP) {
...
phys_addr_t pa = (vma->vm_pgoff << PAGE_SHIFT) +
vm_start - vma->vm_start;
...
}
...
} while (hva < reg_end);
...
}
```
PVS-Studio issued a warning for this code snippet:
[V629](https://www.viva64.com/en/w/v629/) Consider inspecting the 'vma->vm_pgoff << 12' expression. Bit shifting of the 32-bit value with a subsequent expansion to the 64-bit type. mmu.c 1795
I checked the declarations of the variables used in the expression `phys_addr_t pa = (vma->vm_pgoff << PAGE_SHIFT) + vm_start - vma->vm_start;` and found that the code given above is equivalent to the following synthetic example:
```
void foo(unsigned long a, unsigned long b)
{
unsigned long long x = (a << 12) + b;
}
```
If the value of the 32-bit variable *a* is greater than *0xFFFFF*, at least one of its 12 highest bits is nonzero. After shifting this variable left, these significant bits will be lost, resulting in incorrect information being written to *x*.
To eliminate loss of high bits, we need first to cast *a* to the *unsigned long long* type and only after this shift the variable:
```
pa = (phys_addr_t)vma->vm_pgoff << PAGE_SHIFT;
pa += vm_start - vma->vm_start;
```
This way, a correct value will always be written in *pa.*
That'd be okay, but this bug, the same as the first example from the article, also turned out to be critical. Its author wrote about it in the comment. Moreover, this error found its way to an enormous number of projects. To fully appreciate the scale of the tragedy, I [suggest looking](https://github.com/search?q=arm%3A+KVM%3A+Fix+incorrect+device+to+IPA+mapping&type=Commits) at the number of results when searching for this bugfix on GitHub. Scary, isn't it?

So I've taken a new approach to demonstrating the benefits of regular static code analyzer usage. I hope you enjoyed it. [Download and try](https://www.viva64.com/en/pvs-studio-download/) the PVS-Studio static code analyzer to check your own projects. At the time of writing, it has about 700 implemented diagnostic rules to detect a variety of error patterns. It supports C, C++, C# and Java. | https://habr.com/ru/post/460121/ | null | null | 1,610 | 57.27
Bottom-up (RDF-driven) overlays allow existing Firefox GUIs to be enhanced.
In the context of Mozilla, an overlay is a hunk of user-interface (UI) data, most commonly XUL, that is extracted to separate files and pulled in at runtime to parts of the UI that include it. Think of it as a reusable widget or set of widgets.
In general, there are two types of overlays. One is a directly specified overlay that you put in your extension. These are used to keep your code modular or to reuse in multiple places in the extension. The second type is something that you want to appear in the Firefox default application, perhaps to carry out a particular function of your extension. For example, you might want to add a menu item to the Tools menu to launch your extension. In a sense, in that case, you are adding to the default UI for Firefox.
This hack focuses on the latter approach, with instructions on which type of overlays you can have, how to make them, and some general insight into how Firefox handles the overlays.
For the overlay puzzle to fall into place, you need three parts of the puzzle:
Entries in contents.rdf to register the overlay
The name of file that is the target of the overlay
The code that will be pulled in from the overlay to its destination
The best way to understand each piece is by walking through an
example. Here, we are going to add a menu item to the Firefox Tools
menu to launch the
chromeEdit application.
[Hack #86] explained contents.rdf files for the purpose of registering extensions. Lets add to the sample contents.rdf file in that hack to show how to register overlays:
<?xml version="1.0"?>
<RDF:RDF xmlns:RDF="http://www.w3.org/1999/02/22-rdf-syntax-ns#" ...>

  <!-- extension information omitted -->

  <!-- overlay information -->
  <RDF:Seq about="urn:mozilla:overlays">
    <RDF:li resource="chrome://browser/content/browser.xul"/>
  </RDF:Seq>

  <RDF:Seq about="chrome://browser/content/browser.xul">
    <RDF:li>chrome://chromedit/content/chromeditBrowserOverlay.xul</RDF:li>
  </RDF:Seq>

</RDF:RDF>
This time, we skip over the extension information and move on to overlay information. This information tells Firefox to take some extra chrome from the chromeditBrowserOverlay.xul file. The first bolded sequence says that this extension has overlays that it wants registered and lists the files in Firefox that will be overlaid. In this instance, it is the main browser window file, browser.xul.
The second highlighted sequence lists all the extension overlay files that will be pulled into the overlaid Firefox file. So, at this point, all we have is a correlation between Firefox files and an extension file, defining their connection from a high level. The low-level detail of what widget goes where is done in the overlay file.
Now that we've registered the
chromeditBrowserOverlay.xul,
let's look inside the overlay file to create the
menu item and place it in the Tools menu. The structure of an overlay
file is the same as any XUL file, with the main difference being that
the root element is
<overlay>. This tag
needs a namespace attribute only, though an ID is also recommended:
<?xml version="1.0"?>
<!DOCTYPE overlay SYSTEM "chrome://chromedit/locale/chromeditMenuOverlay.dtd">
<?xml-stylesheet ... ?>

<overlay id="ceTasksOverlay"
         xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">

  <script type="application/x-javascript">
  <![CDATA[
    // functions go here
  ]]>
  </script>

  <menupopup id="taskPopup">
    <menuitem oncommand="showChromEdit( )"
              label="Edit User Files"
              id="chromedit-aviary"
              insertbefore="menu_preferences"
              class="menuitem-iconic menu-iconic icon-chromedit16"/>
  </menupopup>

</overlay>
The overlay has a name:
ceTasksOverlay. The chunk
to be placed in the browser UI is a menu-item widget. It is to be
placed in an existing, identified container tag. In this case, this
is the Tools
<menupopup>, which has an
id of
taskPopup. The
id is the bit of the overlay that is matched up
with an identical
id in the Firefox target file,
so this must not be left out. Everything contained within that
id tag in the overlay will be placed in the
<menupopup> in the target file. The new
<menuitem> has a label, and a command to
call a function to launch the extension. Separate from the detail
that coordinates the overlay fragment with the rest of Firefox, the
class attribute is used by the
chromedit package to apply icon-laden styles to
the menu item, for a pleasing effect.
At runtime, Firefox looks up the list of overlay files, scans them,
finds matches for widget IDs, and merges the new XUL into the UI.
Figure 8-4 shows how the new Edit User Files menu
item looks for
chromEdit. Note the styled-in icon.
Figure 8-4. Overlaid Tools menu in Firefox
So far, our example has concerned adding a menu item to the Firefox
Tools menu, but there are other areas of the Firefox UI into which
you can overlay. It's possible to overlay into any
area of the visible UI, once you know the
id of
the widget to which you want to add. For example, the Download
Manager Tweak extension () adds a button
to the Downloads panel in Firefox's Options window.
The ForecastFox extension () gives the option of showing temperatures and conditions in either the menu bar, the main toolbar, or the status bar. Here is a list of the most-used areas where extensions put overlays, but it is by no means definitive:
Menu bar
Main toolbar
Personal toolbar
Status bar
Options
Firefox has a sidebar that can be accessed via the
View→Sidebar menu item. The default features that are
available as sidebar panels are Bookmarks, History, and Downloads.
You can add a sidebar panel for your extension.
Here's an example of how to do so with
chromEdit:
<menupopup id="viewSidebarMenu">
  <menuitem observes="viewChromEditSidebar" class="icon-chromedit16"/>
</menupopup>

<broadcasterset id="mainBroadcasterSet">
  <broadcaster id="viewChromEditSidebar"
               autoCheck="false"
               label="ChromEdit"
               type="checkbox"
               group="sidebar"
               sidebarurl="chrome://chromedit/content/"
               sidebartitle="ChromEdit"
               oncommand="toggleSidebar('viewChromEditSidebar');"/>
</broadcasterset>
The first thing to note in this example is that a sidebar needs two
XUL blocks. Similar to the Tools menu overlay, we have a menu item.
In this case, it is attached to the View→Sidebar menu using
the
viewSidebarMenu
id, which
opens or closes
chromEdit in the sidebar when
activated. The menu item is then hooked up to a broadcaster via the
observes attribute. The new broadcaster for the
sidebar is overlaid into Firefox's main broadcaster
set. The menu-item attributes are abstracted to the overlay simply to
allow it to be used in more than one place.
TIP: The sidebar holds the information on what file to open in the sidebar (
sidebarurl), and the name it will have there (
sidebartitle).
Just for debugging purposes, there is a place to look to see if your overlays are being pulled into Firefox—a section of chrome called overlayinfo. You will find it in two places on disk. The first is in the install area under chrome/overlayinfo, and the second is in the profile area under chrome/overlayinfo. Because Firefox extensions are installed in the user profile directory, you will find your overlay information in the latter.
Under the overlayinfo folder you will find
multiple subfolders, but the relevant one for Firefox is
browser. Enter the content folder and locate the
file overlays.rdf. If the completed
chromeditcontents.rdf file
registered correctly when Firefox was restarted, you should see the
following entry in that overlay file:
<?xml version="1.0"?>
<RDF:RDF xmlns:RDF="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <RDF:Seq RDF:about="chrome://browser/content/browser.xul">
    <RDF:li>chrome://chromedit/content/chromeditBrowserOverlay.xul</RDF:li>
  </RDF:Seq>
</RDF:RDF>
If your overlay file is not listed here, there was a problem with registration and you should go back and double-check your contents.rdf file. You should not manually add entries to this file.
When you overlay the browser, the widgets you add are now in that
scope. You might also want to add styles to those widgets. For
example, you might want to add an icon to the menu item via a
class attribute. The best way to do this is via a
skin
overlay.
The first step is to add entries to the skin contents.rdf manifest with target and stylesheet information. Start with the standard skin registration contents.rdf manifest [Hack #86] . Then, add the following entries:
<RDF:Seq about="urn:mozilla:stylesheets">
  <RDF:li resource="..."/>
  <RDF:li resource="..."/>
</RDF:Seq>

<RDF:Seq about="...">
  <RDF:li>chrome://chromedit/skin/chromedit.css</RDF:li>
</RDF:Seq>

<RDF:Seq about="...">
  <RDF:li>chrome://chromedit/skin/chromedit.css</RDF:li>
</RDF:Seq>
The stylesheet sequence (
urn:mozilla:stylesheets)
lists the Firefox files to be overlaid. The
chromedit.css file is added to each one. This is
the same system that is used to develop a theme [Hack #89] .
Inside overlayinfo, the location to search for the stylesheet overlay listing is in a file called stylesheets.rdf in the profile area under chrome/overlayinfo/browser/skin.
—Brian King
| http://www.linuxdevcenter.com/lpt/a/5786 | CC-MAIN-2014-35 | refinedweb | 1,467 | 63.8
Reproducible Implicits Bug in Netbeans Plugin Nightly Build
Sat, 2010-04-03, 17:43
Hi, I'm using a certain type of implicit conversion to fix a particular
design issue, and it triggers a type parsing bug.
I have two files in a NetBeans project, first is a file Main.scala that
looks like this:
------------
package breakit

class Testy {
  Main.inst = this
  def boob = println("booby")
}

class Extendy {}

object Main extends Extendy {
  var inst:Testy = null

  implicit def main2testy(x:Extendy):Testy = return inst

  def main(args: Array[String]): Unit = {
    println("Hello, world!")
    Main.boob
  }
}
-------
Also a second file Other.scala that contains
package breakit

class Other {
  Main.boob
}
So the bug is this: on opening NetBeans, everything is fine. The first time
you view Other.scala there is no error on Main.boob. However, once you
view/open Main.scala, then go back to Other.scala, (if you view Main first
you have to switch back and forth again) it "forgets" about the implicit
conversion and you're left with an error on Main.boob, "not a member."
My guess is that it parses the types from Main correctly, but then after
parsing Main again, it overwrites what it previously had. Would this be an
easy bug to fix by any chance?
Thanks,
Jamie | http://www.scala-lang.org/old/node/5842 | CC-MAIN-2014-42 | refinedweb | 214 | 75.5 |
gnutls_record_send_range(3) gnutls gnutls_record_send_range(3)
NAME
       gnutls_record_send_range - API function

SYNOPSIS
       #include <gnutls/gnutls.h>

       ssize_t gnutls_record_send_range(gnutls_session_t session,
                                        const void * data,
                                        size_t data_size,
                                        const gnutls_range_st * range);

ARGUMENTS
       gnutls_session_t session
              is a gnutls_session_t type.

       const void * data
              contains the data to send.

       size_t data_size
              is the length of the data.

       const gnutls_range_st * range
              is the range of lengths in which the real data length must be
              hidden.
DESCRIPTION
       This function operates like gnutls_record_send() but, while gnutls_record_send() adds minimal padding to each TLS record, this function uses the TLS extra-padding feature to conceal the real data size within the range of lengths provided. Some TLS sessions do not support extra padding (e.g. stream ciphers in standard TLS or SSL3 sessions). To know whether the current session supports extra padding, and hence length hiding, use the gnutls_record_can_use_length_hiding() function.

       This function currently is limited to blocking sockets.
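EXAMPLE
       An illustrative sketch (not an official GnuTLS example); it assumes the
       usual low/high members of gnutls_range_st and an established blocking
       session, and omits error handling.

       #include <gnutls/gnutls.h>
       #include <string.h>

       /* Send msg while hiding its true length somewhere within 0..512 bytes. */
       ssize_t send_hidden(gnutls_session_t session, const char *msg)
       {
               gnutls_range_st range;
               range.low  = 0;
               range.high = 512;

               /* Fall back to a plain send if length hiding is unavailable. */
               if (gnutls_record_can_use_length_hiding(session) == 0)
                       return gnutls_record_send(session, msg, strlen(msg));

               return gnutls_record_send_range(session, msg,
                                               strlen(msg), &range);
       }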
RETURNS
       The number of bytes sent (that is data_size in a successful invocation), or a negative error code. | http://man7.org/linux/man-pages/man3/gnutls_record_send_range.3.html | CC-MAIN-2017-39 | refinedweb | 158 | 57.67
There have been times when I've wanted to keep track of content on the net, specifically track the changes in content. Python-Requests + Regex is the usual way to go, but it requires a lot of overhead code. I guess that's why people create libs. Anyway, so Scrapy has a bunch of utils that allow us to automate the process. Here's an example on how to scrape information off of the response from a form submission.
I am using this to track the stats on my USCIS case. I intend to deploy it to a Django-Celery server instance and have Tasker (on my android) pull it and give me an update every morning. More on that later.
So let's define a simple container for the information. Here's my container.
#items.py
import datetime

import scrapy


class UscisItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    timestamp = datetime.datetime.now()
    case_number = scrapy.Field()
    status_headline = scrapy.Field()
    status_details = scrapy.Field()
    links = scrapy.Field()
Now, to create the spider. The spider is what crawls the urls and gets the data.
import scrapy
from scrapy.http import FormRequest
from ..items import UscisItem

USCIS_CASE_NUMBER = "SOME_STRING"

class USCISCaseStatusSpider(scrapy.Spider):
    name = 'case_status'
    allowed_domains = ['uscis.gov']
    start_urls = ['']

    def parse(self, response):
        yield FormRequest.from_response(response,
                                        formname="caseStatusForm",
                                        formdata={"appReceiptNum": USCIS_CASE_NUMBER},
                                        callback=self.parseUSCISCaseResponse)
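The callback that actually fills the item is not shown here; a rough sketch of it, as a method of the same spider class, could look like this (the CSS selectors are placeholders, not taken from the real USCIS markup):

    def parseUSCISCaseResponse(self, response):
        item = UscisItem()
        item['case_number'] = USCIS_CASE_NUMBER
        # Placeholder selectors for wherever the status text lives on the page.
        item['status_headline'] = response.css('.current-status-sec h1::text').extract_first()
        item['status_details'] = response.css('.current-status-sec p::text').extract_first()
        item['links'] = response.css('.current-status-sec a::attr(href)').extract()
        yield item
        # (Or skip the item and fire the desktop notification here, as shown further below.)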
And BOOM done.
scrapy crawl case_status will launch the spider and UscisItem() will contain the relevant resultant data. Of course the next step is to insert it into a DB if you want and do whatever with it. I'll build it later.
Oh interestingly, if you don't want to deploy it on a remote machine, you could just cron it up and run it every morning. And of course since I am on Mac OS X I prefer the pretty notifications.
I use pync for that sort of thing. The repo contains the fixes that are not deployed on PyPI so it's better to just clone the repo and use that particular package.
So in the end our parse method just displays a notification with the status headline.
from pync import Notifier
Notifier.notify(item['status_headline'], title='USCIS Case Status')
That should be fun, seeing changes every morning. It's trivial, but powerful. | https://www.pronoy.in/2016/05/04/scraping-response-from-a-form-submission-using-scrapy/ | CC-MAIN-2018-22 | refinedweb | 381 | 62.04 |
A step by step tutorial on how to use AngularJS with ASP.Net MVC
Precap
Our simple AngularJS with ASP.Net MVC application is already showing off a mixture of two worlds. We started with a plain ASP.Net web application and molded it, making a base for demonstrating how, at the core level, an MVC application can be served as an AngularJS application. Nothing so serious about that. Till now you have just served the contents of an MVC view in place of a static HTML page. By the way, if you haven't gone through the previous posts, you can always start from the beginning here. In this post of the series we will be talking about dividing the application into modules (AngularJS) and Areas (ASP.Net MVC).
Match at Next Level
We have already shown how an MVC view can be used as the view template for the AngularJS application. The next thing in AngularJS is modules, and these are useful in dividing and clubbing together the business logic of the application. Say, for example, you would like to club all the things a Customer can do on your site, or there is an Administration part of the site where the Admin can control things. All these interlinked things and functionalities can be, and should be, bundled together in a single AngularJS module. In the ASP.Net world these things are called Areas, where you club the similar and related functionalities together.
AngularJS modules are not just for bundling together the business logic behind the application navigation, but also for other components: all the API interfaces for a single service can go in one module, for example, and your custom directives in another one. The same is not usually done with ASP.Net Areas. Don't worry, we are going to see that with a small example here very soon.
Soon! why ? … why not just jump into it right away. And do sit tight, as in this post you are going to see a lot of code.
Create an Area
It is very simple and Visual Studio makes it very easy for us: just right click on your project and select Add -> Area, and give it a name (in my case I am going to name it Customer). Visual Studio creates a few folders and files. Once the new area is created, we will create 2 controllers named Default and Profile in there. Here are the code snippets for these 2 controllers.
// DefaultController.cs using System.Web.Mvc; namespace AngularJSwithMVC.Areas.Customer.Controllers { public class DefaultController : Controller { public ActionResult Index() { return PartialView(); } } }
// ProfleController.cs using System.Web.Mvc; namespace AngularJSwithMVC.Areas.Customer.Controllers { public class ProfileController : Controller { public ActionResult Index() { return PartialView(); } public ActionResult Edit() { return PartialView(); } } }
In the above code you can see that there are 3 action methods defined in total. The methods are also returning partial views. One point to be noted here is that as soon as you add a view to the area, the NuGet packages for Bootstrap, jQuery and Modernizr will be added to the project. You can safely remove those packages for this project.
Let's add views for each respective action method. Here are the snippets for the views.
<!-- Default/Index.cshtml --> <div class="panel" ui-view="section"></div>
<!-- Profile/Index.cshtml --> <div class="panel-body"> <div class="row"> <div class="col-md-12"> <b>Name:</b> {{customer.name}}<br /> <b>Age:</b> {{customer.age}}<br /> </div> </div> </div> <div class="panel-footer"> <button class="btn btn-warning" ui-sref="customer.profile.edit">Edit</button> </div>
<!-- Profile/edit.cshtml --> <div class="panel-body"> <div class="row"> <div class="col-md-12"> <b>Name:</b><input class="form-control" type="text" ng-model="editingCustomer.name" /><br /> <b>Age:</b><input class="form-control" type="text" ng-model="editingCustomer.age" /><br /> </div> </div> </div> <div class="panel-footer"> <button class="btn btn-success" ng-click="save()">Save</button> <button class="btn btn-warning" ng-click="cancel()">Cancel</button> </div>
Area Registration
With all that defined, you have to register the MVC Area properly so that the default controller and action method of the area is known by the application. Here is how the CustomerAreaRegistration.cs file looks.
using System.Web.Mvc; namespace AngularJSwithMVC.Areas.Customer { public class CustomerAreaRegistration : AreaRegistration { public override string AreaName { get { return "Customer"; } } public override void RegisterArea(AreaRegistrationContext context) { context.MapRoute( "Customer_default", "Customer/{controller}/{action}/{id}", new {controller= "Default", action = "Index", id = UrlParameter.Optional } ); } } }
Notice the highlighted line in the above code; this is the only line we have modified from the default.
It's time for the explanation now. This new area (Customer) has the responsibility of showing and editing the customer profile. In the future, if we need to perform any other operations on Customers, like viewing/editing contact information or resetting the password, all the templates for these features should be clubbed together in this area.
AngularJS Module
The real control is in the hands of the AngularJS module. We have to define a new module in AngularJS. This module will have its own config and routing, all related to the customers. To do that, first of all we need to define this new module. For the sake of code manageability, we should divide the code for the module and its configs into separate files. Let's create these 3 files in a customer sub-directory under the (newly created) modules directory of the app directory (Scripts -> app -> modules -> customer).
// customer.module.js
(function () {
    'use strict';
    angular.module('customer', ['ui.router']);
})();

// customer.config.js
(function () {
    'use strict';
    angular.module('customer')
        .config(function () {
            // nothing as of now
        });
})();

// customer.route.js
(function () {
    'use strict';
    angular.module('customer')
        .config(['$stateProvider', function ($stateProvider) {
            $stateProvider
                .state('customer', {
                    url: '/Customer',
                    templateUrl: '/Customer',
                    resolve: {
                        customer: function () {
                            console.log('Resolving Customer');
                            var customer = { name: 'John Doe', age: 23 };
                            return customer;
                        }
                    },
                    controller: ['$scope', 'customer', function ($scope, customer) {
                        console.log('Customer Controller');
                        $scope.customer = customer;
                        $scope.saveEdits = function (customer) {
                            $scope.customer = customer;
                        };
                    }]
                })
                .state('customer.profile', {
                    url: '/Profile',
                    views: {
                        'section@customer': {
                            templateUrl: '/Customer/Profile'
                        }
                    }
                })
                .state('customer.profile.edit', {
                    url: '/Edit',
                    views: {
                        'section@customer': {
                            templateUrl: '/Customer/Profile/Edit',
                            controller: ['$scope', '$state', function ($scope, $state) {
                                console.log('Customer Edit Controller');
                                $scope.editingCustomer = angular.copy($scope.customer);
                                var save = function () {
                                    $scope.saveEdits($scope.editingCustomer);
                                    $state.go('^');
                                };
                                var cancel = function () {
                                    $state.go('^');
                                };
                                $scope.save = save;
                                $scope.cancel = cancel;
                            }]
                        }
                    }
                });
        }]);
})();
Remember, for the sake of this tutorial only, I have defined the controllers in-line for these route definitions. But it is always a good practice to code the controllers in separate files and refer to the controllers here by name.
Let’s Goto Meeting …
We now have all the required components. We have to first include these new scripts in the script package for the clients to receive them, and you know how to do that, yes, include them in the BundleConfig.cs. Here is the code snippet showing the modification.
"~/bower_components/bootstrap/dist/js/bootstrap.js", "~/Scripts/app/modules/customer/customer.module.js", "~/Scripts/app/modules/customer/customer.config.js", "~/Scripts/app/modules/customer/customer.route.js", "~/Scripts/app/app.module.js",
Now, inject this newly created module in app module. Your code should look something like the one shown in the snippet below.
(function () { 'use strict'; angular.module('app', ['ui.router', 'customer']); })();
It's time to change the angular routing to load that new module on navigation to a particular route. First of all, let's modify the _Layout.cshtml of our main MVC application (remember that we may have a _Layout.cshtml in the Area folder also, which we are ignoring as of now). Here are the lines modified for the file.
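A rough equivalent of that change (the exact markup depends on your layout; the link text and the surrounding nav structure below are assumptions, while the ui-sref target comes from the states defined above):

<ul class="nav navbar-nav">
    <li><a ui-sref="home">Home</a></li>
    <li><a ui-sref="customer.profile">Customer</a></li>
</ul>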
As you might have noticed, we have removed the page2 link from the navigation. We should now remove the state definition for the same from app.route.js. This completes the code for this post, and we should be able to compile and run the application.
The output
Our new module is now active and is available as a link in the navigation. The code is totally separate and can be easily removed in case we are not interested in loading the module or don't want that module at all, by simply removing the Area, the module directory and a few lines from the app module. This increases the manageability of the application code and separates the concerns from each other. Hope you have also completed the code till here and not faced any issues so far.
In case you have any difficulties or need any clarification, please feel free to leave your comments below. I will try my best to answer the queries as soon as possible.
We are now nearing the end of this series on the topic “AngularJS with ASP.Net MVC”. The only thing which remains here as per the scope of this series is “Content Authorization”. I will continue to the next post and leave you here with your code so that you can play with it and get more clarity on it.
Series Links
Part 1 Part 2 Part 3 Part 4 Part 5 Part 6 Part 7 Part 8 Part 10 Part 11 | https://www.mindzgrouptech.net/2017/02/06/angularjs-with-asp-net-mvc-part-9/ | CC-MAIN-2017-13 | refinedweb | 1,500 | 57.67 |
Creating a Custom Task Pane to Consume SharePoint Server 2007 Query Services in Word 2007
Summary: Create a custom task pane for Microsoft Office Word 2007 to perform a search query by using the Microsoft Office SharePoint Server 2007 Enterprise Search Query Web service.
Applies to: Microsoft Office SharePoint Server 2007, Windows SharePoint Services 3.0, Microsoft Office Word 2007, Microsoft Visual Studio 2005, Microsoft Visual Studio 2005 Tools for the 2007 Microsoft Office System Second Edition
Joel Krist, Akona Systems
October 2007
The following example uses Microsoft Visual Studio 2005 Tools for the 2007 Office system Second Edition to create a custom task pane. The custom task pane presents a simple user interface (UI) that accepts a search keyword from the user, and then displays the results of the search in a DataGridView control. The search results can be inserted into a Word 2007 document by using an Insert button that is displayed in the DataGridView control. The process includes five major steps:
Creating a Word 2007 Add-in project in Microsoft Visual Studio 2005.
Adding a reference to the Enterprise Search Query Web service to the Add-in project.
Adding a user control to the project that implements the custom task pane.
Designing the UI for the custom task pane.
Adding the Enterprise Search functionality to the custom task pane.
Creating a Word Add-in Project in Visual Studio 2005
First, create a Word 2007 Add-in project.
To create a Word Add-in project in Visual Studio 2005
Start Visual Studio 2005.
On the File menu, click New Project.
In the New Project dialog box, expand the Visual Basic node or Visual C# node to view the project types. Then expand the Office node, and select 2007 Add-ins.
In the Templates pane, select Word Add-in.
Name the project SearchTaskPane.
Specify a Location for the project, and then click OK.
Visual Studio 2005 generates a Word Add-in project that contains a file named ThisAddIn.cs or ThisAddIn.vb, depending on the language selected in Step 3.
Adding a Reference to the Enterprise Search Query Web Service
Web service discovery is the process by which a client locates a Web service and obtains its service description. The process of Web service discovery in Visual Studio 2005 involves querying a Web site to locate the service description, which is an XML document that uses the Web Services Description Language (WSDL). When a Web reference is added to a project, Visual Studio 2005 generates a proxy class that provides a local representation of the Web service, allowing the client code to interface with the Web service. You can access the Web service methods by calling the methods in the proxy class. The proxy class handles the communication between the client application and the Web service.
To add a Web reference to the Enterprise Search Query Web service
In Solution Explorer, right-click the SearchTaskPane project, and click Add Web Reference.
Type the URL of the Enterprise Search Query Web service. By default, the Web service is located at the following URL:
Click Go to retrieve and display the information about the Web service.
Specify QueryService for the Web reference name, and click Add Reference to add the Web reference to the project, as shown in Figure 1.
Visual Studio downloads the service description and generates a proxy class to interface between the application and the Enterprise Search Query Web service.
Figure 1. Add Web Reference dialog box
Adding a User Control to Implement the Custom Task Pane
The custom task pane is implemented as a user control.
To add a user control to the SearchTaskPane project
In Solution Explorer, right-click the SearchTaskPane project, and click Add New Item.
In the Add New Item dialog box, click User Control, and name the control SearchControl, as shown in Figure 2.
Figure 2. Add New Item dialog box
Click Add.
Visual Studio adds the user control to the project and opens the control in the designer.
Designing the User Interface of the Custom Task Pane
The custom task pane that is implemented in this example displays a simple UI that contains a text box, a button, and a data grid view. The data grid view displays a button in Column 1. When you click the button, the path of the returned item is inserted into the Word 2007 document. Figure 3 shows the task pane displaying the results of a search that uses the keyword Inventory.
To design the task pane user interface in Visual Studio
Set the Width and Height properties of the user control to 300 by 400.
Add the following controls to the user control.
Table 1. Control types to add to the user control
TextBox - txtKeyword
Button - btnDoQuery
DataGridView - dgvQueryResults
Size and position the controls so that they appear in the task pane, as shown in Figure 3.
Figure 3. Custom Task Pane user interface
Add a button column to the DataGridView control.
Click the arrow in the upper-right corner of the DataGridView control to display the DataGridView Tasks pane, and then click the Edit Columns link.
Click Add.
In the Add Column dialog box, type Insert for the name of the column, select DataGridViewButtonColumn for the type, and clear the Header text, as shown in Figure 4.
Figure 4. Adding a Button column
Click Add to add the column.
Add a column to the DataGridView control to display the path of returned search items.
In the Add Column dialog box, type Path for the name of the column, select DataGridViewTextBoxColumn for the type, enter Item Path for the Header text, and select Read Only, as shown in Figure 5.
Figure 5. Adding a Path column
Click Add to add the column.
Click Close to close the Add Column dialog box.
In the Edit Columns dialog box, select the Button column. Set the Text property to Insert, set the UseColumnTextForButtonValue property to True, and set the Width property to 50, as shown in Figure 6.
Figure 6. Setting Button Column properties
In the Edit Columns dialog box, select the Item Path column. Set the DataPropertyName property to Path, and set the AutoSizeMode property to DisplayedCells, as shown in Figure 7.
Figure 7. Setting Item Path Column properties
Click OK.
In the DataGridView Tasks pane, clear the Enable Adding, Enable Editing, and Enable Deleting options.
Adding Enterprise Search Functionality to the Custom Task Pane
Next, add Enterprise Search functionality to the custom task pane.
To add code to the custom task pane to allow it to use Enterprise Search
Double-click the btnDoQuery button to open the source file for the user control.
Visual Studio 2005 opens the SearchControl.vb or SearchControl.cs source file and displays the btnDoQuery_Click event handler.
Add the following Imports statement or using statement to the top of the SearchControl.vb or SearchControl.cs source file. The Imports and using statements make it possible to use the classes and types defined in the Microsoft.Office.Interop.Word namespace without having to use fully qualified namespace paths.
For the SearchControl.cs file, add the statement after the using statements generated by Visual Studio 2005 when the user control was created.
Set the AutoGenerateColumns property of the DataGridView control to False to prevent it from displaying all columns returned in the search results.
If you are implementing the custom task pane using Visual Basic, add a constructor for the SearchControl class. Select SearchControl in the Class Name drop-down list, and then select New in the Method Name drop-down list, as shown in Figure 8.
Figure 8. Adding a SearchControl constructor
Add the following code to the constructor of the SearchControl class, after the Visual Studio-generated call to the InitializeComponent method.
Add the following code to the body of the btnDoQuery_Click event handler.
' The string containing the keyword to use in the search. Dim keywordString As String = txtKeyword.Text ' The XML string containing the query request information ' for the Web service. Dim qXMLString As String = _ ". Dim queryService As QueryService.QueryService = _ New SearchTaskPane.QueryService.QueryService() queryService.Credentials = _ System.Net.CredentialCache.DefaultCredentials ' Perform the query and bind the results to the DataViewGrid. Try Dim queryResults As System.Data.DataSet = _ queryService.QueryEx(qXMLString) dgvQueryResults.DataSource = queryResults.Tables(0) Catch ex As Exception MessageBox.Show(ex.Message) End Try
// The string containing the keyword to use in the search. string keywordString = txtKeyword.Text; //. QueryService.QueryService queryService = new SearchTaskPane.QueryService.QueryService(); queryService.Credentials = System.Net.CredentialCache.DefaultCredentials; // Perform the query and bind the results to the DataViewGrid. try { System.Data.DataSet queryResults = queryService.QueryEx(qXMLString); dgvQueryResults.DataSource = queryResults.Tables[0]; } catch (Exception ex) { MessageBox.Show(ex.Message); }
Add an event handler for the DataGridView control CellContentClick event so that the task pane can respond when the Insert button is clicked. Switch to the design view of SearchControl.vb or SearchControl.cs and select the dgvQueryResults DataGridView control. Display the control's Events property page, and then double-click the CellContentClick event.
Visual Studio 2005 switches to the SearchControl.vb or SearchControl.cs code file and displays the dgvQueryResults_CellContentClick event handler. Add the following code to the definition of the dgvQueryResults_CellContentClick method.
If TypeOf dgvQueryResults.Columns(e.ColumnIndex) Is _ DataGridViewButtonColumn And e.RowIndex <> -1 Then Dim myRange As Word.Range = _ Globals.ThisAddIn.Application.ActiveDocument.Content Dim resultPath As String = _ dgvQueryResults.Rows(e.RowIndex).Cells("Path").Value.ToString() Try myRange.InsertAfter(resultPath) Catch MessageBox.Show( _ "The result's path cannot be added to the document.") End Try End If
if (dgvQueryResults.Columns[e.ColumnIndex] is DataGridViewButtonColumn && e.RowIndex != -1) { Word.Range myRange = Globals.ThisAddIn.Application.ActiveDocument.Content; string resultPath = dgvQueryResults.Rows[e.RowIndex].Cells["Path"].Value.ToString(); try { myRange.InsertAfter(resultPath); } catch { MessageBox.Show( "The result's path cannot be added to the document."); } }
Modify the Word add-in so that the custom task pane appears automatically. Open the ThisAddIn.cs or ThisAddIn.vb file.
Add the following variable declaration to the ThisAddIn class.
Add the following code to the ThisAddIn_Startup handler to create an instance of the custom task pane and add it to the collection of custom task panes that belong to the add-in.
Add the following code to the ThisAddIn_Shutdown handler to remove the custom task pane from the collection of custom task panes in the add-in.
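The code listings for the previous three steps are not reproduced here; a typical VSTO pattern for them looks like the following C# sketch (the field names and the task pane title are illustrative, not taken from the article):

private SearchControl searchControl;
private Microsoft.Office.Tools.CustomTaskPane searchTaskPane;

private void ThisAddIn_Startup(object sender, System.EventArgs e)
{
    // Create the user control and add it to the add-in's collection of custom task panes.
    searchControl = new SearchControl();
    searchTaskPane = this.CustomTaskPanes.Add(searchControl, "Enterprise Search");
    searchTaskPane.Visible = true;
}

private void ThisAddIn_Shutdown(object sender, System.EventArgs e)
{
    // Remove the task pane from the add-in's collection when the add-in unloads.
    if (searchTaskPane != null)
    {
        this.CustomTaskPanes.Remove(searchTaskPane);
    }
}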
Press CTRL+F5 to build and run the application. An instance of Word 2007 starts, and the custom task pane is docked on the right side of the document. Specify a search keyword and click Do Query. The search results are displayed in the DataGridView control. Click Insert for a search result item to insert the path of the item into the Word document.
This article explores how to create a custom task pane for Word 2007 that lets you perform a search query by using the Enterprise Search Query Web service. The following steps were performed:
Creating a Word Add-in project in Visual Studio 2005.
Adding a reference to the Enterprise Search Query Web service to the project.
Adding a user control to the project that implements the custom task pane.
Designing the UI of the custom task pane.
Adding the Enterprise Search functionality to the custom task pane.
Creating Word 2007 Templates Programmatically
Creating Custom Task Panes Using Visual Studio 2005 Tools for the Office System SE
Blog: Microsoft Visual Studio 2005 Tools for the Microsoft Office System
Enterprise Search Query Web Service Overview
How to: Submit a Keyword Query to Enterprise Search from a Client Application
Windows SharePoint Services Query Web Service
Comparisons of SharePoint Search Versions | http://msdn.microsoft.com/en-us/library/bb870301.aspx | CC-MAIN-2014-49 | refinedweb | 1,933 | 57.67 |
DEBSOURCES
sources / python-django-imagekit / 4.0479
|Build Status|_
.. _Build Status:
Image,
**For the complete documentation on the latest stable version of ImageKit, see**
`ImageKit on RTD`_.
.. _`ImageKit on RTD`:
__
Installation
============
1. Install `PIL`_ or `Pillow`_. (If you're using an ``ImageField`` in Django,
you should have already done this.)
2. ``pip install django-imagekit``
3..
.. _`PIL`:
.. _`Pillow`:
Usage Overview
==============
Specs
-----
^^^^^^^^^^^^^^^^^^^^^^^^
The easiest way to use define an image spec is by using an ImageSpecField on
your model class:
.. code-block:: python:
.. code-block:: python
from django.db import models
from imagekit.models import ProcessedImageField
from imagekit.processors import ResizeToFill.
.. code-block:: python:
.. code-block:: python
source_file = open('/path/to/myimage.jpg', 'rb'):
.. code-block:: python
dest = open('/path/to/dest.jpg', 'wb')
dest.write(result.read())
dest.close()
Using Specs In Templates
^^^^^^^^^^^^^^^^^^^^^^^^
If you have a model with an ImageSpecField or ProcessedImageField, you can
easily use those processed image just as you would a normal image field:
.. code-block:: html
.
.. code-block:: python:
.. code-block:: html
{% load imagekit %}
{% generateimage 'myapp:thumbnail' source=source_file %}
This will output the following HTML:
.. code-block:::
.. code-block:: html
{% load imagekit %}
{% generateimage 'myapp:thumbnail' source=source_file -- alt="A picture of Me" id="mypicture" %}
Not generating HTML image tags? No problem. The tag also functions as an
assignment tag, providing access to the underlying file object:
.. code-block:: html
{% load imagekit %}
{% generateimage 'myapp:thumbnail' source=source_file as th %}
<a href="{{ th.url }}">Click to download a cool {{ th.width }} x {{ th.height }} image!</a>
thumbnail
"""""""""
Because it's such a common use case, ImageKit also provides a "thumbnail"
template tag:
.. code-block:: html
{% load imagekit %}
{% thumbnail '100x50' source_file %}
Like the generateimage tag, the thumbnail tag outputs an <img> tag:
.. code-block:: html
:
.. code-block:: html
{%:
.. code-block:: python.
.. code-block:: python:
.. code-block:: python
class Watermark(object):
def process(self, image):
# Code for adding the watermark goes here.
return image
That's all there is to it! To use your fancy new custom processor, just include
it in your spec's ``processors`` list:
.. code-block:: python.
.. _`PILKit`:
Admin
-----
ImageKit also contains a class named ``imagekit.admin.AdminThumbnail``
for displaying specs (or even regular ImageFields) in the
`Django admin change list`_. AdminThumbnail is used as a property on
Django admin classes:
.. code-block:: python:
.. code-block:: python``.
.. _`Django admin change list`: <irc://irc.freenode.net/imagekit>`_ channel on Freenode.
Contributing
============.
__
__ | https://sources.debian.org/src/python-django-imagekit/4.0.2-2/README.rst/ | CC-MAIN-2020-10 | refinedweb | 410 | 52.36 |
I switched laptops at work and had to recreate the test database for one of my projects. No problem. I dragged the DB table back over into my LINQ designer, but now when I run my code...
testcmsDataContext db = new testcmsDataContext();
IQueryable<cmschange> query = db.cmschanges;
I'm getting a bizarre error.
CS0029: Cannot implicitly convert type 'System.Data.Linq.Table<LINQDAL.CmsChanges.cmschange>' to 'System.Linq.IQueryable<cmschange>'
I've tried working around this, but without success, and not only was this code (and two other apps) working beautifully before, but Table<TEntity> implements IQueryable<TEntity>. There shouldn't be a problem here.
Several blogs and forums pointed out import problems but I'm using System.Linq, System.Data.Linq, and System.Linq.Dynamic.
Did the new regeneration kill your namespaces?
[this is why we source control the generated files too]
Exactly right. I set up SQL Server, created the DB, and upgraded my namespaces to make it easier to transition new iterations from test to production. However, I had so many problems configuring SQL Server (it took a couple weeks, two installs, and a hard drive reformat) that I forgot I'd made that change. The original dbml was in App_Code, so it was still accessible. I rebuilt the production code to run from LINQDAL.CmsChanges, deleted the files in App_Code, and then the test files were no longer confused.
I am having trouble with snippets and settings. I am using Windows 7 & Sublime build 2059.
Snippets (Rails):
In a .html.erb file, I execute the link_to snippet 'lia'<TAB> and I get 'link_to "anoehunoe", :action => "aonseuh"', I expected to get something along the lines of '<%= link_to "anoehunoe", :action => "aonseuh" %>' but most of the erb snippets seem to be missing the erb tags (<% %>). When I look at the snippet I see '${TM_RAILS_TEMPLATE_START_RUBY_EXPR}link_to "${1:link text...}", :action => "${2:index}"${TM_RAILS_TEMPLATE_END_RUBY_EXPR}', I assume TM_RAILS_TEMPLATE_START_RUBY_EXPR and TM_RAILS_TEMPLATE_END_RUBY_EXPR are not resolving, do I need to set those somewhere? I assumed they are set in some Rails syntax files somewhere. I searched everything in 'C:\Users\jdm\AppData\Roaming\Sublime Text 2' but can't find any references to those variables.
Also, when I execute 'if'<TAB> in a .html.erb (Rails) file it puts in a PHP snippet ... what is misconfigured?
<?php if (condition): ?>
<?php endif ?> | http://www.sublimetext.com/forum/viewtopic.php?p=9999 | CC-MAIN-2014-41 | refinedweb | 153 | 59.7 |
Arabic
I illustrated most of the concepts in this blog post in Arabic at the following video
This doesn’t contain all the details in the post but yet will get you the fundamentals you need to proceed with the next parts.
Windows hashes
LM hashes
It was the dominating password storing algorithm on windows till windows XP/windows server 2003.
It’s disabled by default since windows vista/windows server 2008.
LM was a weak hashing algorithm for many reasons; you will figure these reasons out once you know how LM hashing works.
LM hash generation?
Let’s assume that the user’s password is PassWord
1 – All characters will be converted to upper case
PassWord -> PASSWORD
2 – In case the password’s length is less than 14 characters it will be padded with null characters, so its length becomes 14, so the result will be PASSWORD000000
3 – These 14 characters will be split into 2 halves
PASSWOR
D000000
4 – Each half is converted to bits, and after every 7 bits, a parity bit (0) will be added, so the result would be a 64-bit key.
1101000011 -> 11010000011
As a result, we will get two keys from the 2 pre-generated halves after adding these parity bits
5 – Each of these keys is then used to encrypt the string "KGS!@#$%" using the DES algorithm in ECB mode, so that the result would be
PASSWOR = E52CAC67419A9A22
D000000 = 4A3B108F3FA6CB6D
6 – The output of the two halves is then combined, and that makes out LM hash
E52CAC67419A9A224A3B108F3FA6CB6D
You can get the same result using the following python line.
python -c 'from passlib.hash import lmhash;print lmhash.hash("password")'
Disadvantages
As you may already think, this is a very weak algorithm,
Each hash has a lot of possibilities, for example, the hashes of the following passwords
Password1
pAssword1
PASSWORD1
PassWord1 . . . ETC
It will be the same!!!!
Let’s assume a password like passwordpass123
The upper and lowercase combinations will be more than 32000 possibilities, and all of them will have the same hash!
You can give it a try.
import itertools len(map(''.join, itertools.product(*zip("Passwordpass123".upper(), "Passwordpass123".lower()))))
Also, splitting the password into two halves makes it easier, as the attacker will be trying to brute force just a seven-character password!
LM hash accepts only the 95 ASCII characters, but yet all lower case characters are converted to upper case, which makes it only 69 possibilities per character, which makes it just 7.5 trillion possibilities for each half instead of the total of 69^14 for the whole 14 characters.
Rainbow tables already exist containing all these possibilities, so cracking Lan Manager hashes isn’t a problem at all
Moreover, in case that the password is seven characters or less, the attacker doesn’t need to brute force the 2nd half as it has the fixed value of AAD3B435B51404EE
Example
Creating hash for password123 and cracking it.
You will notice that john got me the password “PASSWORD123” in upper case and not “password123”, and yeah, both are just true.
Obviously, the whole LM hashing stuff was based on the fact that no one will reverse it as well as no one will get into the internal network to be in a MITM position to capture it.
As mentioned earlier, LM hashes are disabled by default since Windows Vista + Windows server 2008.
NTLM hash <NTHash>
NTHash AKA NTLM hash is the currently used algorithm for storing passwords on windows systems.
While NET-NTLM is the name of the authentication or challenge/response protocol used between the client and the server.
If you made a hash dump or pass the hash attack before so no doubt you’ve seen NTLM hash already.
You can obtain it via
Dumping credentials from memory using mimikatz
Eg, sekurlsa::logonpasswords
Dumping SAM using
C:\Windows\System32\config\SYSTEM
C:\Windows\System32\config\SAM
Then reading hashes offline via Mimikatz
lsadump::sam /system:SystemBkup.hiv /sam:SamBkup.hiv
And sure, via NTDS, where NTLM hashes are stored in Active Directory environments; you're going to need administrator access over the domain controller, domain admin privileges for example
You can do this either manually or using DCsync within mimikatz as well
NTLM hash generation
Converting a plaintext password into NTLM isn’t complicated, it depends mainly on the MD4 hashing algorithm
1 – The password is converted to Unicode
2 – MD4 is then used to convert it to the NTLM
Just like MD4(UTF-16-LE(password))
3 – Even in case of failing to crack the hash, it can be abused using Pass the hash technique as illustrated later.
Since there are no salts used while generating the hash, cracking NTLM hash can be done either by using pre-generated rainbow tables or using hashcat.
hashcat -m 1000 -a 3 hashes.txt
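As a quick sanity check, you can reproduce an NT hash yourself, mirroring the LM one-liner earlier (these lines are not from the original post; the hashlib variant needs an MD4-capable OpenSSL build):

python -c 'from passlib.hash import nthash;print nthash.hash("password")'
python -c 'import hashlib;print hashlib.new("md4", "password".encode("utf-16le")).hexdigest()'

Both print the same 32-character digest, which is exactly the value you would feed to hashcat or to a pass-the-hash tool.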
Net-NTLMv1
This isn’t used to store passwords, it’s actually a challenge-response protocol used for client/server authentication in order to avoid sending user’s hash over the network.
That’s basically how Net-NTLM authentication works in general.
I will discuss how that protocol works in detail, but all you need to know for now is that NET-NTLMv1 isn’t used anymore by default except for some old versions of windows.
The NET-NTLMv1 looks like username::hostname:response:response:challenge
It can’t be used directly to pass the hash, yet it can be cracked or relayed as I will mention later.
Since the challenge is variable, you can’t use rainbow tables against Net-NTLMv1 hash,
But you can crack it by brute-forcing the password with hashcat:
hashcat -m 5500 -a 3 hashes.txt
This differs from NTLMv1-SSP in which the server challenge is changed at the client-side
NTLMv1 and NTLMv1-SSP are treated differently during cracking or even downgrading, this will be discussed at the NTLM attacks part.
Net-NTLMv2
A lot of improvements were made for v1, this is the version being used nowadays at windows systems.
The authentication steps are the same, except for the challenge-response generation algorithm, and the NTLM challenge length which in this case is variable instead of the fixed 16-bytes number at Net-NTLMv1.
At Net-NTLMv2, more parameters are added by the client, such as a client nonce, the server nonce, a timestamp as well as the username, and they are included in the encryption; that's why you will find the length of Net-NTLMv2 hashes varies from one user to another.
Net-NTLMv2 can’t be used for passing the hash attack, or for offline relay attacks due to the security improvements made.
But yet it still can be relayed or cracked, the process is slower but yet applicable.
I will discuss that later as well.
A Net-NTLMv2 hash looks like username::DOMAIN:ServerChallenge:NTProofStr:blob, with the trailing blob making the length vary from account to account.
It can be cracked using
hashcat -m 5600 hash.txt
Net-NTLM Authentication
In a nutshell
Let’s assume that our client (192.168.18.132) is being used to connect to the windows server 2008 machine (192.168.18.139)
That server isn't domain-joined, which means that the whole authentication process is going to happen between the client and the server without having to contact any other machines, unlike what may happen in the 2nd scenario.
The whole authentication process can be illustrated in the following picture.
Client IP : 192.168.18.132 [Kali linux]
Server IP: 192.168.18.139 [Windows server 2008 non-domain joined]
0 – The user enters his/her username and password
1 – The client initiates a negotiation request with the server, that request includes any information about the client capabilities as well as the Dialect or the protocols that the client supports.
2 – The server picks up the highest dialect and replies through the Negotiation response message then the authentication starts.
3 – The client then negotiates an authentication session with the server to ask for access, this request contains also some information about the client including the NTLM 8 bytes signature (‘N’, ‘T’, ‘L’, ‘M’, ‘S’, ‘S’, ‘P’, ‘\0’).
4 – The server responds to the request by sending an NTLM challenge
5 – The client then encrypts that challenge with his own pre-entered password’s hash and sends his username, challenge and challenge-response back to the server (another data is being sent while using NetNTLM-v2).
6 – The server tries to encrypt the challenge as well using its own copy of the user’s hash which is stored locally on the server in case of local authentication or pass the information to the domain controller in case of domain authentication, comparing it to the challenge-response, if equal then the login is successful.
1-2 : negotiation request/response
launch Wireshark and initiate the negotiation process using the following python lines
from impacket.smbconnection import SMBConnection, SMB_DIALECT
myconnection = SMBConnection("jnkfo","192.168.18.139")
These couple lines represent the 1st two negotiation steps of the previous picture without proceeding with the authentication process.
Using the “smb or smb2” filter
During the negotiation request, you will notice that the client was negotiating over SMB protocol, and yet the server replied using SMB2 and renegotiated again using SMB2!
It’s simply the Dialects.
By inspecting the packet you will find the following
As mentioned earlier, the client is offering the Dialects it supports and the server picks up whatever it wants to use, by default it picks up the one with the highest level of functionality that both client and server supports.
If the best is SMB2 then let it be SMB2.
You can, however, enforce a certain dialect (assuming the server supports it) using
myconnection.negotiateSession(preferredDialect="NT LM 0.12")
The dialect NT LM 0.12 was sent, the server responded back using SMB, and will use the same protocol for the rest of the authentication process.
Needless to say that LM response isn’t supported by default anymore since windows vista/windows server 2008.
3 – Session Setup Request (Type 1 message)
The following line will initiate the authentication process.
myconnection.login("Administrator", "[email protected]")
The “Session Setup Request” packet contains information such as the [‘N’, ‘T’, ‘L’, ‘M’, ‘S’, ‘S’, ‘P’, ‘\0’] signature, negotiation flags indicating the options supported by the client and the NTLM Message Type which must be 1
An interesting Flag is the NTLMSSP_NEGOTIATE_TARGET_INFO flag which will ask the server to send back some useful information as will be seen in step number 4
Another interesting flag is the NEGOTIATE_SIGN which has a great deal with the relay attacks as will be mentioned later.
4 – Session Setup Response (Type 2 message)
At the response, we get back the NTLMSSP signature again.
The message type must be 2 in this case.
Target name and the target info due to the NTLMSSP_NEGOTIATE_TARGET_INFO flag we sent earlier which provides us with some wealthy information about the target!
A good example is getting the domain name of exchange servers externally.
The most important part is the NTLM challenge or nonce.
5 – Session Setup Request (Type 3 message)
Long story short, the client needs to prove that he knows the user’s password, without sending the plaintext password or even the NTLM hash directly over the network.
So instead it goes through a procedure in which it creates NT-hash, uses this to encrypt the server’s challenge, sends this back along with the user name to the server.
That’s how the process works in general.
At NTLMv2, The client hashes the user’s pre-entered plain text password into NTLM using the pre-mentioned algorithm to proceed with the challenge-response generation.
The elements of the NTLMv2 hash are
– The upper-case username
– The domain or target name.
HMAC-MD5 is applied to this combination using the NTLM hash of the user’s password, which makes the NTLMv2 hash.
A blob block is then constructed containing
– Timestamp
– Client nonce (8 bytes)
– Target information block from type 2 message
This blob block is concatenated with the challenge from type 2 message and then encrypted using the NTLMv2 hash as a key via HMAC-MD5 algorithm.
Lastly, this output is concatenated with the previously constructed blob to form the NTLMv2-SSP challenge-response (type 3 message)
so basically the NTLMv2_response = HMAC-MD5(text(challenge + blob), using NTLMv2 as a key)
and the challenge response is NTLMv2_response + blob.
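Putting those steps into a few lines of Python makes the construction easier to follow (a sketch, not taken from this post; it needs an MD4 implementation such as an OpenSSL-backed hashlib or passlib):

import hmac, hashlib

def ntowf_v2(password, user, domain):
    # NTLMv2 hash = HMAC-MD5(key = NT hash, UPPERCASE(username) + domain, in UTF-16-LE)
    nt_hash = hashlib.new('md4', password.encode('utf-16le')).digest()
    return hmac.new(nt_hash, (user.upper() + domain).encode('utf-16le'), hashlib.md5).digest()

def ntlmv2_challenge_response(ntlmv2_hash, server_challenge, blob):
    # NTProofStr = HMAC-MD5(key = NTLMv2 hash, server challenge + blob)
    nt_proof = hmac.new(ntlmv2_hash, server_challenge + blob, hashlib.md5).digest()
    # What actually goes on the wire is the proof followed by the blob itself.
    return nt_proof + blob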
Out of curiosity and just to know the difference between the ntlmv1 and v2, How is NTLMv1 response calculated?!
1 – The NTLM hash of the plaintext password is calculated as pre-mentioned, using the MD4 algorithm, so assuming that the password is [email protected], the NTLM hash will be E19CCF75EE54E06B06A5907AF13CEF42
2 – These 16 bytes are then padded to 21 bytes, so it becomes E19CCF75EE54E06B06A5907AF13CEF420000000000
3 – This value is split into three 7 bytes thirds
0xE19CCF75EE54E0
0x6B06A5907AF13C
0xEF420000000000
4 – These 3 values are used to create three 64 bits DES keys by adding parity bits after every 7 bits as usual
So for the 1st key 0xE19CCF75EE54E0
11100001 10011100 11001111 01110101 11101110 01010100 11100000
8 parity bits will be added so it becomes
11100000 11001110 00110010 11101110 01011110 01110010 01010010 11000000
In Hex : 0xE0CE32EE5E7252C0
Same goes with the other 2 keys
5 – Each of the three keys is then used to encrypt the challenge obtained from Message type 2.
6 – The 3 results are combined to form the 24-byte NTLM response.
So in NTLMv1, there is no client nonce or timestamp being sent to the server, keep that in mind for later.
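For completeness, here is what those six steps look like in Python (again a sketch, not from the original post; it uses PyCryptodome for the DES part):

from Crypto.Cipher import DES

def expand_des_key(key7):
    # Insert a (zero) parity bit after every 7 bits: 7 bytes -> 8-byte DES key.
    bits = ''.join(format(b, '08b') for b in bytearray(key7))
    return bytes(bytearray(int(bits[i:i + 7] + '0', 2) for i in range(0, 56, 7)))

def ntlmv1_response(nt_hash, challenge):
    # nt_hash is the 16-byte NT hash, challenge is the 8-byte server challenge.
    padded = nt_hash + b'\x00' * 5                   # pad to 21 bytes
    thirds = [padded[i:i + 7] for i in (0, 7, 14)]   # three 7-byte chunks
    return b''.join(DES.new(expand_des_key(t), DES.MODE_ECB).encrypt(challenge)
                    for t in thirds)                 # 24-byte response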
6 – Session Setup Response
The server receives type 3 message which contains the challenge-response
The server has its own copy of the user’s NTLM hash, challenge, and all the other information needed to calculate its own challenge-response message.
The server then compares the output it has generated with the output it got from the client.
Needless to say, if the NT-Hash used to encrypt the data on the client side differs from the NT-hash of the user's password stored on the server (the user entered the wrong password), the challenge-response won't be the same as the server's output.
And thus user get ACCESS_DENIED or LOGON_FAILURE message
Unlike if the user entered the correct password, the NT-Hash will be the same, and the encryption (challenge-response) result will be the same on both sides and then the login will succeed.
That’s how the full authentication process happened without directly sending or receiving the NTLM hash or the plaintext password over the network.
NTLM authentication in a windows domain environment
The process is the same as mentioned before except for the fact that domain users credentials are stored on the domain controllers
So the challenge-response validation [Type 3 message] will lead to establishing a Netlogon secure channel with the domain controller where the passwords are saved.
The server will send the domain name, username, challenge, and the challenge-response to the domain controller which will determine if the user has the correct password or not based on the hash saved at the NTDS file (unlike the previous scenario in which the hash was stored locally on the SAM).
So from the server-side, you will find the following 2 extra RPC_NETLOGON messages to and from the Domain controller.
and if everything is ok it will just send the session key back to the server in the RPC_NETLOGON response message.
NTLMSSP
To fully understand that mechanism you can’t go without knowing a few things about NTLMSSP, Will discuss this in brief and dig deeper into it during the attacks part.
From Wikipedia
NTLMSSP (NT LAN Manager (NTLM) Security Support Provider) is a binary messaging protocol used by the Microsoft Security Support Provider Interface (SSPI) to facilitate NTLM challenge-response authentication and to negotiate integrity and confidentiality options. NTLMSSP is used wherever SSPI authentication is used including Server Message Block / CIFS extended security authentication, HTTP Negotiate authentication (e.g. IIS with IWA turned on) and MSRPC services.
The NTLMSSP and NTLM challenge-response protocol have been documented in Microsoft’s Open Protocol Specification.
SSP is a framework provided by Microsoft to handle that whole NTLM authentication and integrity process,
Let’s repeat the previous authentication process in terms of NTLMSSPI
1 – The client gets access to the user’s credentials set via AcquireCredentialsHandle function
2 – The Type 1 message is created by calling InitializeSecurityContext function in order to start the authentication negotiation process which will obtain an authentication token and then the message is forwarded to the server, that message contains the NTLMSSP 8 bytes signature mentioned before.
3 – The server receives the “Type 1 message“, extracts the token and passes it to the AcceptSecurityContext function which will create a local security context representing the client and generate the NTLM challenge and send it back to the client (Type 2 message).
4 – The client extracts the challenge, passes it to InitializeSecurityContext function which creates the Challenge-response (Type 3 message)
5 – The server passes the Type 3 message to the AcceptSecurityContext function which validates if the user authenticated or not as mentioned earlier.
These function/process has nothing to do with the SMB protocol itself, they are related to the NTLMSSP, so they’re called whenever you’re triggering authenticating using NTLMSSP no matter the service you’re calling.
How does NTLMSSP assure integrity?
To assure integrity, SSP applies a Message Authentication Code to the message. This can only be verified by the recipient and prevent the manipulation of the message on the fly (in a MITM attack for example)
The signature is generated using a secret key by the means of symmetric encryption, and that MAC can only be verified by a party possessing the key (The client and the server).
That key generation varies from NTLMv1 to NTLMv2
At NTLMv1 the secret key is generated using MD4(NTHash)
At NTLMv2
1 – The NTLMv2 hash is obtained as mentioned earlier
2 – The NTLMv2 blob is obtained as also mentioned earlier
3 – The server challenge is concatenated with the blob and encrypted with HMAC-MD5 using NTLMv2 hash as a key
4 – That output is encrypted again with HMAC-MD5 using again NTLMv2 hash as a key HMAC-MD5(NTLMv2, OUTPUT_FROM_STEP_3)
And that’s the session key
You'll notice that generating that key requires knowing the NT hash in both cases, whether NTLMv1 or NTLMv2; the only sides owning that key are the client and the server.
The MITM doesn’t own it and so can’t manipulate the message.
This isn't always the case for sure, and it has its own pre-requirements and so its own downsides, which will be discussed in the next parts where we're going to dig deeper into the internals of the authentication/integrity process in order to gain more knowledge on how these features are abused.
Conclusion and references
We’ve discussed the difference between LM, NTHash, NTLMv1 and NTLMv2 hashes.
I went through the NTLM authentication process and made a quick brief about the NTLMSSP’s main functions.
In the next parts, we will dig deeper into how NTLMSSP works and how can we abuse the NTLM authentication mechanism.
If you believe there is any mistake or update that needs to be added, feel free to contact me at Twitter.
References
The NTLM Authentication Protocol and Security Support Provider
Mechanics of User Identification and Authentication: Fundamentals of Identity Management
[MS-NLMP]: NT LAN Manager (NTLM) Authentication Protocol
LM, NTLM, Net-NTLMv2, oh my! | https://blog.redforce.io/windows-authentication-and-attacks-part-1-ntlm/ | CC-MAIN-2020-29 | refinedweb | 3,256 | 52.23 |
Re: (Score:3, Funny)
from future import braces

Um, isn't java code GPL? (Score:4, Informative)
GPL'ed code will save Google from copyright claims, not patent claims. This is the case for the pre-GPLv3 license which Java is under.
Re: (Score:3, Funny)
Welcome to the internet! You must be new here. But don't worry; there are only a few simple ground-rules. First of all, understand that the highest form of argument is to compare your opponent to Hitler. Failing that, rephrase your opponent's position in terms of a car analogy. And most important of all--discredit your opponent in every post by referring to him as a "troll."
To complicate the matter, be
Re: (Score:2)
google used the package from apache harmony and the java.* packages in android are open and free, so what exactly is your point?
"Evil" now means "I don't like it". | http://developers.slashdot.org/story/10/10/28/1424247/oracle-claims-google-directly-copied-our-java-code?sdsrc=rel | CC-MAIN-2014-42 | refinedweb | 155 | 76.42 |
Multithreaded applications provide the illusion that numerous activities are happening at more or less the same time. In C#, the System.Threading namespace provides a number of types that enable multithreaded programming.
System.Threading
Concurrency is one of the key concepts when more than one thread accesses a shared data. Uncontrolled concurrent access to the object may either leave it in an indeterminate state, which leads to runtime exceptions, or the object will behave unexpectedly and generate random, garbage output.
In three occasions, a shared object does not need synchronization.
An object that does not fall into one of these conditions (and most of real objects will not, for God's sake, who wants to have an object that he writes but never reads and vice versa) should to be thoroughly analyzed and properly synchronized if it is to be used, in a multithreaded environment.
Let us assume that you have designed a thread-silly object which is supposed to run in a single-threaded environment. Then all of a sudden, your project specs have changed (remember that customers are -and your boss is- always right :)). Now your poor innocent object has found itself unprotected in a multi-threaded environment.
This article discusses what may happen to a thread-silly object in a multi-threaded environment and proposes a method (e.g. using a synchronized wrapper) for safely using it in this multi-threaded environment without changing its internal structure too much.
Synchronizing an object is done via Monitors and monitoring brings an overhead to the application. So if your object will not be executed in a multi-threaded environment, using thread-unaware objects will be more efficient than their thread-aware counterparts.
Monitor
I would like to expand the concept of thread-awareness a bit:
Most of the time people use thread-aware/thread-safe words interchangeably. However, there is a slight difference between them. Despite my thorough Googling on the topic, I was unable to find a clear written definition. People appear to have their own subjective interpretations. After reading here and there and everywhere for definitions on the subject, here follows my boiled up definition for them: (I am open to and will appreciate any contributions on the terms)).
Let us begin by creating a simple interface:
namespace com.sarmal.articles.en.synchronization
{
using System;
public interface BankAccount
{
void Empty();
void Add(double money);
double Balance {get;}
bool IsSynchronized {get;}
object SyncRoot {get;}
}
}
Not a big surprise huh? Empty() method clears the balance, Add(double) adds money to the bank account, and Balance is the total amount of money currently deposited in the account.
Empty()
Add(double)
Balance
The two other elements that require more attention are IsSynchronized and SyncRoot. IsSynchronized method returns whether the object is safe for multi-threaded access, and SyncRoot is the synchronization root of the object which you can pass to lock statement as a parameter.
IsSynchronized
SyncRoot
lock
lock(acc.SyncRoot) {
... critical code goes here ...
}
The implementation of the interface is not a big issue. There is a private double member variable, Add method adds to it, Empty sets it to zero, and Balance returns what is currently stored in that variable.
double
Add
Empty
One thing that is noteworthy is the Add method:
public virtual void Add(double money) {
double temp = sum;
Thread.Sleep(0);
temp += money;
sum = temp;
}
Note that those four lines of code is equivalent to nothing but sum += money. We have split the statements into lines and added a Thread sleep in between to increase the probability of getting concurrency-related errors. Add(double money) {sum+=money;} can also serve the purpose of this article, however the code presented above leads to a more dramatic and distinct outcome.
sum += money
Thread
Add(double money) {sum+=money;}
Things that take attention in the implementation class AccountImpl are the two overridden Synchronized methods:
AccountImpl
Synchronized
public static BankAccountImpl Synchronized(BankAccountImpl impl) {
return (BankAccountImpl) Wrap(impl);
}
public static BankAccount Synchronized(BankAccount acc) {
return (BankAccount) Wrap(acc);
}
The private method Wrap returns the passed BankAccount object itself if the parameter is synchronized, otherwise it returns a synchronized wrapper class SyncAccount which extends BankAccountImpl.
private
Wrap
BankAccount
SyncAccount
BankAccountImpl
The wrapper class SyncAccount, which is the key point of this discussion, is a private sealed inner class. It stores the reference of the BankAccount as a private member and locks critical portions of the code using the SyncRoot of the member.
private sealed
private sealed class SyncAccount:BankAccountImpl {
private object syncRoot;
private BankAccount bankAccount;
public SyncAccount(BankAccount acc) {
bankAccount = acc;
syncRoot = acc.SyncRoot;
}
public override void Empty() {
lock(syncRoot) {
bankAccount.Empty();
}
}
... truncated ...
public override bool IsSynchronized {get {return true;}}
public override object SyncRoot {get {return syncRoot;}}
}
Test.cs includes the Test class to test the application. If you open it, you will see a commented out code in its main method.
Test
main
acc = new BankAccountImpl();
//acc = BankAccountImpl.Synchronized(acc);
When you build the project and run, it will generate an output similar to the following:
Total balance is expected to be: 120.
Starting Thread-0
Starting Thread-1
Starting Thread-2
Thread-0 entered add.
Thread-0: Balance before add : 0
... truncated ...
Starting Thread-4
Thread-0: Balance after add : 4
Thread-0: Balance before add : 4
Thread-1: Balance after add : 4
Thread-1: Balance before add : 4
Thread-0: Balance after add : 5
Thread-0: Balance before add : 5
Thread-1: Balance after add : 5
Thread-1: Balance before add : 5
... truncated ...
Thread-9: Balance after add : 30
Thread-9 exited add.
Thread-11: Balance after add : 30
Thread-11: Balance before add : 30
Joining Thread-10
Joining Thread-11
Thread-11: Balance after add : 31
Thread-11 exited add.
The current balance is 31.
The outcome will be different for each run but the trend will be the same. Current balance will always be less than the expected balance.
The situation may be seen as somewhat analogous to the good old producer-consumer dilemma (in reverse). At a given instant of time, more than one thread read (i.e. consumed) the shared variable. The threads then increment the value they had read (i.e., not the original value but the snapshot they took) by one and store it back (i.e., produced).
Now let us uncomment the commented out parts, and rebuild the solution.
acc = new BankAccountImpl();
acc = BankAccountImpl.Synchronized(acc);
We will get a synchronized wrapper around the class, and everything will be as expected. Only one thread will be able to access each method at any given time, which will ensure data integrity.
Here is what the outcome will look like after rebuild:
Total balance is expected to be: 120.
Starting Thread-0
Starting Thread-1
Starting Thread-2
Thread-1 entered add.
Thread-1: Balance before add : 0
Thread-1: Balance after add : 1
Thread-1: Balance before add : 1
Starting Thread-3
Thread-1: Balance after add : 2
Thread-1: Balance before add : 2
Thread-1: Balance after add : 3
Thread-1: Balance before add : 3
Thread-0 entered add.
Thread-2 entered add.
Thread-0: Balance before add : 4
Thread-0: Balance after add : 5
Thread-0: Balance before add : 5
Starting Thread-4
Thread-0: Balance after add : 6
Thread-0: Balance before add : 6
Starting Thread-5
Thread-0: Balance after add : 7
Thread-0: Balance before add : 7
... truncated ...
Joining Thread-5
Joining Thread-6
Joining Thread-7
Joining Thread-8
Joining Thread-9
Joining Thread-10
Joining Thread-11
The current balance is 120.
Synchronization is an important factor to preserve data concurrency in a multithreaded environment. No matter in which language you code, whether it is ANSI C or C# or Java or anything else, if there are multiple processes that rush to gain control of a shared resource, careful analysis of the situation is extremely necessary. Real-world multi-threaded scenarios are not as simple as the above Account example. Yet the .NET framework makes the threading and synchronization easy to handle. To be honest, IMHO, the threading capabilities of C# beats Java.
Rather than diving threading stuff in great detail, I preferred to discuss basic issues around a sample application. I spare a detailed examination of System.Threading namespace's methods, advanced issues such as caching, memory models, memory barriers, lazy initialization, and conceptual topics such as deadlocks, atomicity, thread-safety, race conditions, semaphores, mutexes, critical sections etc. etc... to my proceeding articles. Else I would be rather boring and would be consuming too much paper space.. | http://www.codeproject.com/Articles/9654/Synchronization-Basics-and-Concept-of-Using-a-Sync | CC-MAIN-2016-40 | refinedweb | 1,421 | 53.71 |
operator!= (hash_map)
Visual Studio 2013
Tests if the hash_map object on the left side of the operator is not equal to the hash_map object on the right side.
The comparison between hash_map objects is based on a pairwise comparison of their elements. Two hash_maps_map_op_ne.cpp // compile with: /EHsc #include <hash_map> #include <iostream> int main( ) { using namespace std; using namespace stdext; hash_map _maps hm1 and hm2 are not equal." << endl; else cout << "The hash_maps hm1 and hm2 are equal." << endl; if ( hm1 != hm3 ) cout << "The hash_maps hm1 and hm3 are not equal." << endl; else cout << "The hash_maps hm1 and hm3 are equal." << endl; }
The hash_maps hm1 and hm2 are not equal. The hash_maps hm1 and hm3 are equal.
Reference
Show: | http://msdn.microsoft.com/en-us/library/hft6c3dc | CC-MAIN-2014-52 | refinedweb | 119 | 83.56 |
GreenScript module
The GreenScript module help you to manage javascript and CSS dependencies and do minimizing work in the same time.
What’s New for v1.2.5
- Support in-memory cache
- New configuration item:
- greenscript.cache.inmemory
What’s New for v1.2.4
- add “greenscript.url.root” option which is set to “/public” by default
- Intelligent resouce root detect. Suppose your javascript foo.js located at /public/bar/foo.js and your “greenscript.dir.root” set to “/public” as default. you can reference foo.js by either “/public/bar/foo.js” or “/bar/foo.js”, but “foo.js” is not okay, give your “greenscript.dir.js” is set to “javascripts” by default. “foo.js” will be evaluated as “/public/javascripts/foo.js”.
What’s New for v1.2.3
- upgrade YUI compressor version from 2.4.2 to 2.4.6
- Fix bug: 404 error while fetching cached files when change minimize/cache setting dynamically
- Fix bug: loaded logic breaks when minimize is enabled
What’s New for v1.2.2
- !!!Major bug fix for inline dependency declaration feature: refreshing page will cause dependency disorder
- greenscript.conf change detection in dev mode. thanks for the contribution comes from short-at ()
- CDN resource order now kept when minimize enabled
- Configuration controller is secure now
What’s New for v1.2d
- configuration change (COMPATIBILITY BROKEN!): resource dir location shall NOT include resource root dir now.
- Previously: greenscript.dir.js=/public/javascripts
- Now: greenscript.dir.js=javascripts (suppose greenscript.dir.root=/public)
- Previously: greenscript.dir.css=/public/stylesheets
- Now: greenscript.dir.css=stylesheets (suppose greenscript.dir.root=/public)
- Fix bug:
- greenscript now support dependency management in modules (your css/js files in modules, your greenscript.conf file in moduels)
- Support ‘.bundle’ suffix in resource dependency configuration via greenscript.conf
- E.g. js.jq.bundle=
- You can use ‘.bundle’ to define an alias for a resource, or
- you can use ‘.bundle’ to define a group of resources that always been used together
- Support inline dependency declaration (search for ‘inline dependency declaration’ in this document)
What’s New for v1.2c
- Fix bug: dependency management breaks for complicated dependencies
- Support reverse dependency declaration (search for “reverse dependency declaration” in this document)
What’s New for v1.2b
- Support transparent compression. You don’t even invoke #{greenscript.css|js /} tag to get your css and js file compressed
What’s New for v1.2a
- Fix bug: IllegalStateException thrown out when app restart (in DEV mode)
What’s New for v1.2
- Completely rewrite.
- GreenScript core logic detached from Play plugin project
- Clearly defined interface and well documented code comment
- Unit test cases for core logic
- Circular dependence relationship detect
- Unified javascript/css tag syntax
- Support inline javascript/css
- new tag options:
- media: pass media (screen, project, all, etc) to #{greenscript.css /} to specify the css file media target
- browser: pass to #{greenscript.css /} and #{greenscript.js /} to specify which browser it is targeted
What’s New for v1.1c
- greenscript.compress and greenscript.cache now default to true without regarding to the Play mode
- Unused compressed files in “gs” folder get cleaned
- Notice in configuration html page and demo application.conf file about greenscript.compress|cache option
- Fix bug in css.html tag: NPE encountered when trying to output without argument or "load/import"
- Add support to CDN
- Support reload dependency configuration at runtime
What’s New for v1.1
- Many bug fixes
- Completely new Plugin Configurator
- Add Command to enable user copy module tags/templates to user app directory
- More clear configuration settings
- Even more simplified tag syntax
- Support zero configuration
- Document improvement
What’s New for v1.0
- Bug fixes:
- dependency management fail while in ‘dev’ mode
- <a href=‘’>greenscript should use play configuration file
- Enhancements:
- Tag simplified: ‘sm:gsSM’ parameter no longer needed for greenscript.css and greenscript.javascript tag
- Simplified alias for greenscript.javascript tag: greenscript.js
- Use ‘import’ to replace 'load'
Three steps to use GreenScript
- Install the module and enable it in your application.conf file
- you know what I am talking about ...
- Document your javascript/css dependencies in conf/greenscript.conf file
- Check the demo’s greenscript.conf file and you will know what it is
- Use greenscript tag in your template files: #{greenscript.js “myjs1 myjs2 ...” [options] /}
- Yes, this part is a little bit complicated, but not that much. I am sure it won’t be difficult as #{list} tag
Step 2 and 3 are optional. The simplest form of using GreenScript is to add the following line into your application.conf file:
module.greenscript=${play.path}/modules/greenscript-xx
Immediately you have done that, your javascript file and css file will be compressed automatically.
Manual
Configure GreenScript Plugin
# The GreenScript module module.greenscript=${play.path}/modules/greenscript-xx
File locations
This part set the javascript, css and compressed file location in the filesystem, start from your application’s root.
# Default greenscript.dir.js point to /public/javascripts greenscript.dir.js=/public/javascripts # # Default dir.css point to /public/stylesheets greenscript.dir.css=/public/stylesheets # # Default minimized file folder point to /public/gs greenscript.dir.minimized=/public/gs
URL Path
This part set the url path GreenScript used to output javascript, css or compressed files, start from root url.
Usually you will not need to set this part as it will reuse the dir settings, which is comply with Play’s default folder layout and route mapping. However, if you have shortcut set in your application’s route file (as what I did in the demo app), you are encouraged to override defalt setting here:
greenscript.url.js=/js greenscript.url.css=/css ## # IMPORTANT: make sure the mapping does not conflict with # the mapping of greenscript module in your route file. # see <a href="dyna-conf">Configuration at runtime</a> greenscript.url.minimized=/compressed
Note that js and css url is used only when greenscript.miminize set to false, in which case, GreenScript will output links refer the original javascript/css files.
greenscript.url.minimized setting is used only when greenscript.minimize set to true, in which case, GreenScript will output links refer to the compressed(minimized) files
Minimize Settings
# Enable/Disable minimize # Once minimize disabled, GreenScript will output the original javascript/css # files without any processing. However, the order of the files is guaranteed # to follow the dependency graph you have defined in "greenscript.conf" file # # When minimize turned on, GreenScript will merge all javascript/css files # within one HTTP request into a single file. Again the merge order is # guaranteed to follow the dependency graph you have defined in the # "greenscript.conf" file # # Note if you turn off minimize, the rest 2 settings (compress, cache) will # not function whatever the value they are # # By Default minimize is turned on in prod mode and turned off in dev mode greenscript.minimize=false # # Enable/Disable compress # Once compress is enabled, GreenScript will compress the javascript/css files # while doing the merge operation. # # By default compress is turned on in prod mode and turned off in dev mode greenscript.compress=false # # Enable/Disable cache # Once cache is turned on, GreenScript will try best to reuse the processed # file instead of repeat the merge/compress process. # # By default cache is turned on in prod mode and turned off in dev mode greenscript.cache=false # Enable/Disable in-memory cache # Once in-memory cache is turned on, GreenScript will use a memory buffer to # store the minimized resource instead of a temporary file. This feature could # be useful to those apps hosted on clouds without normal File IO, e.g. GAE # This item is by default false greenscript.cache.inmemory=true
Configure javascript/css dependencies
Javascript/css dependencies are documented in a separate configuration file named greenscript.conf, which should be put into the conf dir (the same place with your application.conf). Start from v1.2d, greenscript.conf could be put under conf dir of modules, and these module level greenscript configuration will be merged with application greenscript.conf to define the whole depenedency graph of javascript and css files located in your application and module folders. One limitation to this module level greenscript.conf support is that your javascript and css file must be put in the same directory hierarchy. For example, if you app js/css files are put into ${app.root}/public/javascripts and ${app.root}/public/stylesheets, then all your module you want to use with greenscript must also store their javascripts and css files inside ${module.root}/public/javascripts and ${module.root}/public/stylesheets.
It’s fairly straght forward to document the file dependencies. Let’s say your have four javascript files a.js, b.js, c.js and d.js, the dependency relationship is b.js depends on a.js, c.js depends on both b.js and d.js, then here is the content of your greenscript.conf file:
js.b=a js.c=b,d
The same way applies to css file dependencies. The only difference is css dependancy rule starts with css. while javascript file rule starts with js.. Below is the content of greenscript.conf file of the demo application:
# js.default means the file get loaded always, even no other file depends on it js.default=prototype # Javascript Dependencies js.datepicker=prototype-base-extensions,prototype-date-extensions js.livevalidation=prototype js.pMask=prototype-event-extensions js.prototype-base-extensions=prototype js.prototype-date-extensions=prototype js.prototype-event-extensions=prototype js.dumb_1=prototype # # CSS Dependencies css.color=reset css.form=color,layout css.layout=reset # # Other configuration should go to application.conf
reverse dependency declaration (new in 1.2c)
bc. js.b-=a,c,d
The above line equals to three lines below:
bc. js.a=b
js.c=b
js.d=b
Using tags
Now that your have understand how to configured the plugin and file dependencies, it’s time to see how GreenScript can simplify your life of dealing with javascript/css in your play template files.
The base template: main.html
Normally you should have a main.html (you might call it “base” or other names, but that doesn’t matter) served as a base template for all other templates, and in the "
<link rel="stylesheet" type="text/css" media="screen" href="@{'/public/stylesheets/main.css'}"> #{get 'moreStyles' /} <script src="@{'/public/javascripts/jquery-1.4.2.min.js'}" type="text/javascript" charset="utf-8"></script> #{get 'moreScripts' /}
And here is how it should be when you are using GreenScript:
#{greenscript.css "main", output:'all'/} #{greenscript.js output: 'all'/}
Yes! that’s it. I know you might have some questions, don’t worry. Let me unveil the curtain.
- Where is my “jquery-1.4.2.min.js” ?
- When you put output: 'all' in #{greenscript.js} tag, it will output all unloaded js dependency files as well as the default js file you’ve defined in greenscript.conf. I am sure jquery-1.4.2.min.js will be reached by either of the 2 lookup paths, otherwise, I assume you will not need that file. For perfectionist, here is how to load the file anyway: #{greenscript.js “jquery-1.4.2.min.js”, output: true/}
- How can I get “moreStyles” and “moreScripts”?
- You get them automatically when you have output: true for #{greenscript.css} or output: 'all' for #{greenscript.js}. The assumption is you have told GreenScript that you need them in other places. I will let you know how to do that later at next section.
- Why do you use output for css while loadAll for js?
- loadAll is deprecated now. Both css and js use ‘output: "all"’ to output all inline declared and dependencies that has not output yet
Other templates
The differences of using GreenScript tag in other templates and in the main.html is that ususally you don’t “output” javascript or css files in your other templates, instead, you declare them (for the template usage). Here is a sample (found in ${play.path}/samples-and-tests/booking/app/views/Hotels/book.html) of how to declare javascripts and css when you don’t have GreenScript:
#{set 'moreScripts'} <script src="@{'/public/javascripts/jquery-ui-1.7.2.custom.min.js'}" type="text/javascript" charset="utf-8"></script> #{/set} #{set 'moreStyles'} <link rel="stylesheet" type="text/css" media="screen" href="@{'/public/ui-lightness/jquery-ui-1.7.2.custom.css'}" /> #{/set}
And see how you do with GreenScript available:
#{greenscript.js "jquery-ui-1.7.2.custom.min" /} #{greenscript.css "/public/ui-lightness/jquery-ui-1.7.2.custom" /}
Easy, right? You might noticed that I have put the full path for the css file in this case. This is needed because the file is not in the default stylesheet file folder (configured with greenscript.dir.css, which default to /public/stylesheets).
Inline body
Greenscript Play module support inline body start from v1.2.
#{greenscript.css} dl > dt { font-weight: bold; color: #600; } #{/greenscript.css}
In the above sample, the block that defines dl > dt’s style will be captured by greenscript and moved to your html page header. (Suppose you have “#{greenscript.css output:'all'}” in the header block of your main.html template. By using “output: true” parameter, the following sample will output the block in place rather than moving the enclosed body to the header:
#{greenscript.js output:true} var rule = ruleById('first_name'); rule.add(Validate.Presence) rule = ruleById('last_name'); rule.add(Validate.Presence) rule = ruleById('email'); rule.add(Validate.Email) rule.add(Validate.Presence) $$('input.date').each(function(el){ new Control.DatePicker(el, {icon: '/public/images/calendar.png', locale: 'en_iso8601'}); }); #{/greenscript.js}
Inline Dependency Declaration
There is a long time complaint that greenscript does not guarantee the output sequence of resource (js/css) files match the sequence of declaring those files in tags. For example, #{greenscript.js ‘a b c’/} does not necessarily output or marge javascript ‘a.js’, ‘b.js’ and ‘c.js’ in a sequence that a.js followed by b.js and then c.js. This is because greenscript output is driven by dependencies (which is defined in greenscript.conf), rather than the sequence declared in tag. Actually greenscript cannot and shouldn’t follow the sequence declared in tag at all. The reason is
1. the sequence of tag declaration might conflict with dependencies declared in greenscript.conf
2. it is hard to tell the sequence of tag declaration when the developer declare resources in multiple templates with inheritance relationships
Now (start from v1.2d) greenscript support inline dependency declaration in tags, which basically remove the inconvenience that simple resource file dependencies are also require developer to provide a greenscript.conf file:
#{greenscript.js 'myapp < mylib < jquery-1.5.min'/}
The above javascript declaration also setup the dependencies among the declared javascript resources: myapp.js relies on mylib.js which in turn relies on jquery-1.5.min. Therefore you no longer need a greenscript.conf file to define the dependencies among myapp, mylib and jquery-1.5.min javascript files.
The limitation of inline dependency declaration is you can’t use it across multiple template files. Say you have a javascript A declared in main.html and then you have another javascript B declared in index.html, you can’t use inline dependency declaration to declare the dependency relationship between A and B unless you declare them all in index.html: #{greenscript ‘A > B’/}
Reference
A > B means B relies on A A < B means A relies on B
Media and Browser
GreenScript support media and browser options start from v1.2. Issue tag "
#{greenscript.css ‘print.css’, media: ‘printer’}" to declare a css resource target to “printer” media. Later when you output all css files by "
#{greenscript.css output:‘all’}", one line will be output as:
Declare resource specific to a browser in the following way:
#{greenscript.css ‘ie’, browser: ‘lt IE 8’/}
The corresponding output is:
<!--[if lt IE 8]>
<![endif]-->
h3. Configuration at runtime
This beautiful feature enable app developer to turn on/off minimizing dynamically and could be very helpful when you need to debug your javascript/css. In order to use the feature, you will need to add an entry in your route file to map a url request to the controllers.greenscript.Configurator actions, for example:
# Enable GreenScript dynamic configuration # IMPORTANT: make sure this routine map be different from your # staticDir map to compressed file folder GET /gsconf module:greenscript
Once you have done with that, you can reach the configuration page by typing in the address bar of your favorite browser. The configuration is designed to be self guided and you won’t lost yourself there. Please be noted that runtime configuration will not be flushed to your configuration file. When you restart your app all the configurations you’ve made during last session are lost. Meaning if you want to change a configuration permanently, you must update your application.conf file. See Configuration section for detail.
You can also force GreenScript to reload the dependency configuration from “greenscript.conf” file if you have changed it. Just go to “css/js dependencies” tab and click “reload”. This feature is very friendly to developer, especially in the early stage of javascript involved development.
About Security p. There is no integrated security to access the configuration page. And here is my 2 cents on how to secury your GreenScript dynamic configuration access:
- Option 1: Remove the url mapping entry in your route file in a prod environment
- Option 2: If you are a real hack and reject any manual operations, you will probably implement your own controller extends (or @With) controllers.greenscript.Configurator, and then add security to your controller. You will need to copy the configurator templates to your views folder. Don’t worry, GreenScript provides command to help you with that. I will get there now.
Module command
I’ve just told you that you can use command to copy the greenscript Configurator.configure.html template file to your app folder. Here is how to do it. First make sure you have enabled greenscript in your application.conf file. And then go to the console, enter your app root dir, and type:
play greenscript:cp -t MyGSConfigurator
The template file will be ready in {your-app-root}/app/views/MyGSConfigurator folder. Obviously your controller should be named MyGSConfigurator. It probably should looks like:
package controllers;
import play.mvc.*;
@With({Secure.class, controllers.greenscript.Configurator.class})
@Secure(role=‘developer,admin’)
public class MyGSConfigurator extends Controller {
}
Keep it secret!
Okay, how do you feel about this Plugin? Still not satisfied because you don’t like to type 11 charaters for tag name each time? Well I have a secret weapon to alleviate your pain with that: Once you have enabled greenscript in your conf/application.conf file, go to console, enter your app root, and type:
play greenscript:cp -a gs
Now guess what happened? You are right, it copied the tags from module folder to your app folder: your-app-root/app/views/tag/gs. And now you can use tags in short version: #{gs.js “js1 js2 ...” /} and #{gs.css “css1 ...” /}. What? you are still not satisfied? how come? it’s already shorter than play’s #{script} tag! Okay, here is my nuclear weapon:
play greenscript:cp -a . ... #{js output: 'all'/} #{js "js1 js2 ..." /} .. #{css "css1 css2" /}
How do you expect anything more simpler than this?
Zero configuration
GreenScript plugin now support zero configuration. It means besides enabling it in your application.conf, you don’t need to do any configuration to use it, you don’t even need to create “greenscript.conf” file in your conf dir. But what do you get if you don’t do any configuration? Well basically you can still benefit from GreenScript with zero configuration:
- The tags. You are free to utilize all the knowledges you’ve learned from Using Tags section.
- Minimize/compress. You can also benefit from minimizing/compressing/cache capability of GreenScript.
- Dynamic configuration. You can also use the dynamic configuration controller.
So what do you lost without any configuration?
- Dependency management. Without dependencies infomation defined in greenscript.conf file, you are on your own to take care of js/css file dependencies. When you are declare a javascript or css file in a template, you should also make sure all its dependencies are explicitly declared before that scripts “IN PLACE”! If you failed to do that, you might get a lot of script/css errors in your final rendered html page.
- Dir/URL path bound to play’s default. With zero configration, you need to make sure your dir structure (the public) and route mapping of public dir strictly follow Play’s convention. Otherwise GreenScript won’t be able to locate the javascript/css files.
As an example to demonstrate zero configuration, I put the ${play.path}/samples-and-tests/booking sample in the samples-and-tests dir of greenscript, makes the minimum changes to the templates and application.conf files.
Transparent Compression
GreenScript support Transparent Compression start from v1.2b. With Transparent Compression, even you don’t use greenscript tag, your js file and css file will automatically get compressed even without your attention (in PROD mode).
In conclusion, GreenScript is a flexible and powerful javascript/css management tool for your Play applicaiton development. You can use it in simple way (zero configuration) or in a way sophsicated enough to manage your complicated javascript and css dependencies.
CDN Support
Greenscript support CDN start from version 1.1a.
Configure CDN dependencies
# Note you must escape ':' if it is in the 'key' part, no need to escape if # it's in the 'value' part. This is due to the java.util.Properties treat ':' # as separator between key and value js.http\://ajax.googleapis.com/ajax/libs/scriptaculous/1.8.3/scriptaculous.js=
Load CDN items in tags
#{greenscript.js '' /}
I found there is no javascript and css links at all in my html file rendered out!!
Make sure you have add the following lines in your main.html (or in any other name) template:
#{greenscript.css "css files separated by blank", output:'all'/} #{greenscript.js output:'all'/}
Do I need to surround #{greenscript } tag with #{set ‘moreStyles’} in my other templates?
No, you just use #{greenscript.css ‘...’ /} to declare your css files. With greenscript, you can say ‘byebye’ to ‘moreStyles’ and ‘moreScripts’.
How to use GreenScript? Is it hard to configure?
You can use GreenScript with zero configuration. However, it’s suggested to create “greenscript.conf” file to describe your javascript and css file dependancies. You will love this feature because you just need to declare explicitly used javascript/css files in your templates, leave the dependencies to GreenScript.
I want to debug javascript, can GreenScript output uncompressed version of javascript/css files?
Yes, put “greenscript.minimize=false” in your application.conf file. Actually the setting is turned off by default when you are running app in “dev” mode. An nice feature you can use is dynamic configuration which enable you turn on/off minimizing/compressing without restart your app. See Configuration at runtime section for detail
Why don’t you use GreenScript in the dynamic configuration feature?
Well, I have no idea how you will configure the dir/url path settings, so I have to hard code my javascript/css links in my template. Fortunately it’s not a big work for a single page web app ;-) | http://www.playframework.com/modules/greenscript-1.2.5/home | CC-MAIN-2014-15 | refinedweb | 3,900 | 51.44 |
So, i got this much done so far but im stuck and i cant get my third case statement to work, also for some reason it prints off the calendar like days 8 9 10, etc, if you copy and paste and run it you will see it doesnt line up correctly.
So, just looking for some help, maybe alittle clean up of my code etc.
#include<iostream> #include<iomanip> using namespace std; int main() { int month, days, day, count=1; cout<<"\t\t\t\tCalendar\n"; cout<<"Enter number of month (1 for january, 2 for february...): "; cin>>month; cout<<"Enter day of the month (1 for Sunday, 2 for Monday...): "; cin>>day; if (month ==1){cout<<"\n\t\t\tJanuary\n\n"; days=31;} if (month ==2){cout<<"\n\t\t\tFebruary\n\n"; days=28;} if (month ==3){cout<<"\n\t\t\tMarch\n\n"; days=31;} if (month ==4){cout<<"\n\t\t\tApril\n\n"; days=30;} if (month ==5){cout<<"\n\t\t\tMay\n\n"; days=31;} if (month ==6){cout<<"\n\t\t\tJun\n\n"; days=30;} if (month ==7){cout<<"\n\t\t\tJuly\n\n"; days=31;} if (month ==8){cout<<"\n\t\t\tAugust\n\n"; days=31;} if (month ==9){cout<<"\n\t\t\tSeptember\n\n"; days=30;} if (month ==10){cout<<"\n\t\t\tOctober\n\n"; days=31;} if (month ==11){cout<<"\n\t\t\tNovember\n\n"; days=30;} if (month ==12){cout<<"\n\t\t\tDecember\n\n"; days=31;} cout<<"\nSun\tMon\tTue\tWen\tThu\tFri\tSat\n"; while (count<=days){ switch(day){ case 1: cout<<count<<"\t";break; case 2: cout<<"\t"<<count<<" "; break; case 3: cout<<"\t\t"<<count<<" "; break;}// by the way case 3 does not display correctly at all... count++;} return 0; } | https://www.daniweb.com/programming/software-development/threads/320225/calendar-in-c | CC-MAIN-2018-51 | refinedweb | 305 | 58.15 |
Contents
- 1 Common Mathematical Functions
- 1.1 abs() function
- 1.2 pow() function
- 1.3 round() function
- 1.4 max() and min() function
- 1.5 math.pi and math.e constants
- 1.6 math.ceil() and math.floor() functions
- 1.7 math.fabs() and math.sqrt() functions
- 1.8 math.log() function
- 1.9 math.sin(), math.cos() and math.tan() functions
- 1.10 math.degrees() and math.radians() function
- 2 Formatting Numbers
- 3 format() Function
- 4 Formatting Floating Point Numbers
- 5 Formatting Numbers in Scientific Notation
- 6 Inserting Commas
- 7 Formatting Number as Percentages
- 8 Setting Alignment
- 9 Formatting Integers
In Python, Numbers are of 4 types:
- Integer.
- Floating Point or Real Numbers.
- Complex Numbers.
- Boolean.
Integers or
int for short are the numbers without a decimal point. for example,
100,
77,
-992 are
int but
0.56,
-4.12,
2.0 are not.
Floating point or real or
float are the numbers which have decimal point. For example,
1.2,
0.21,
-99.0 are float but
102,
-8 are not. We can also write float point number using scientific notation. The numbers written in the form
a x 10^b is known as scientific notation. Scientific notation is quite useful for writing very small or very large numbers. For example, float
0.000000123 can be written succinctly in Scientific notation as
1.23 x 10^-7. Python uses special syntax to write numbers in Scientific notation. For example,
0.000000123 can be written as
1.23E-7. The letter
E is called exponent and it doesn’t matter whether you use
e or
E.
Complex numbers are the numbers which we can’t represent on a number line. A Complex number is of the form
a + ib, where
a is the real part and
bi is the imaginary part. For example,
2 + 3i is complex number. Python uses a special syntax for complex numbers too. An integer or float with trailing
j is treated as a complex number in Python, so
10j,
9.12j are complex numbers.
Note that
5j only represents the imaginary part of the complex number. To create a complex number with a real and imaginary part, simply add a number to the imaginary part. For example, complex number
2 + 3i can be written in Python as
2 + 3j.
Boolean type is discussed later in this chapter.
Common Mathematical Functions
Python provides following built-in function to help you accomplish common programming tasks:
abs() function
pow() function
round() function
max() and min() function
Python’s
math module also provides some standard mathematical functions and constants. Recall that, to use the
math module we first need to import it using
import statement as follows:
The following table lists some standard mathematical functions and constants in the
math module.
math.pi and math.e constants
math.ceil() and math.floor() functions
math.fabs() and math.sqrt() functions
math.log() function
math.sin(), math.cos() and math.tan() functions
math.degrees() and math.radians() function
This is just a short list functions and constants in the math module, to view the complete list visit.
Formatting Numbers
Sometimes it is desirable to print the number in a particular format. Consider the following example:
python101/Chapter-05/simple_interest_calculator.py
Output:
Notice how money displayed in the output, it contains
13 digits after the decimal. This is a very common problem when floating point numbers are printed after performing calculations. As the amount is currency, it makes sense to format it to two decimal places. We can easily round the number to 2 decimal places using the
round() function, but the
round() function will not always give the correct answer. Consider the following code:
We want to output
1234.50, not
1234.5. We can fix this problem using the
format() function. Here is a revised version of the above program using
format() method.
python101/Chapter-05/simple_interest_calculator_using_format_function.py
Output:
The
format() function is explained in the next section.
format() Function
The syntax of the
format() function is as follows:
The
value is the data we want to format.
The
format-specifier is a string which determines how to format the value passed to the
format() function.
On success
format() returns a formatted string.
Formatting Floating Point Numbers
To format floating point numbers we use the following format-specifier.
The
width is the minimum number of characters reserved for value and
precision refers to the number of characters after the decimal point. The
width includes digits before and after the decimal and the decimal character itself. The
f character followed by the
precision represents that the
format() function will output the value as floating point number. The character
f is also known as type code or specifier. There are many other specifiers, as we will see.
By default, all types of numbers are right aligned. If the width is greater than the length of the value then numbers are printed right justified with leading spaces determined by subtracting the length of the value from width. On the other hand, if the width is smaller than the length of the value then the length of the width is automatically increased to fit the length of the value and no leading spaces are added
To make everything concrete, let’s take some examples:
Example 1:
Here width is
9 characters long and precision is
2. The length of number
34.712 is
6, but since the precision is
2, the number will be rounded to
2 decimal places. So the actual length of the value is
5. That means the width is greater than the length of the value, as a result, value is right justified with
4 (
9-5=4) leading spaces.
Example 2:
In this case, width is
5 and the actual length of the value is
6 (because the number will be rounded to
2 decimal places). So the width is smaller than the length of the value, as a result, the length of the width is automatically increased to fit the length of the value and no leading spaces are added.
We can also omit the width entirely, in which case it is automatically determined by the length of the value.
The width is commonly used to neatly line up data in columns.
Formatting Numbers in Scientific Notation
To format a number in Scientific Notation just replace type code from
f to
e or
E.
Inserting Commas
Reading large numbers can be difficult to read. We can make them much more readable by separating them by commas (
,). To use comma separator type
, character just after the width of format-specifier or before the precision.
If you just want to print the floating point number with commas(
,) but without applying any formatting do this:
Formatting Number as Percentages
We can use
% type code to format a number as a percentage. When
% is used in the format-specifier, it multiplies the number by
100 and outputs the result as a float followed by a
% sign. We can also specify width and precision as usual.
Setting Alignment
We have already discussed that by default, numbers are printed right justified. For example:
We can change the default alignment using the following two symbols:
Alignment symbol must come before the specified width.
Here we are printing the number left justified, as a result trailing spaces are added instead of leading spaces.
Note that the statement
format(math.pi, ">10.2f") and
format(math.pi, "10.2f") are same as right justified is the default format for printing numbers.
Formatting Integers
We can also use
format() function to format integers. Type codes
d,
b,
o,
x can be used to format in decimal, binary, octal and hexadecimal respectively. Remember that while formatting integers only width is allowed not precision.
1 thought on “Numbers in Python”
Best content I have read till now. | https://overiq.com/python-101/numbers-in-python/ | CC-MAIN-2019-35 | refinedweb | 1,308 | 59.5 |
In this tip, I demonstrate how to take advantage of the validators from the System.ComponentModel.DataAnnotations namespace in an MVC application. You can take advantage of these validators to validate form data before submitting the form data into a database. In my previous tip, I explained how you can take advantage of the validators included with […]
ASP.NET MVC Tip #41 – Create Cascading Dropdown Lists with Ajax […]
ASP […]
ASP.NET MVC Tip #39 – Use the Velocity Distributed Cache
Improve […]
ASP.NET MVC Tip #38 – Simplify LINQ to SQL with Extension Methods
In this tip, Stephen Walther demonstrate how you can create new LINQ to SQL extension methods that enable you to dramatically reduce the amount of code that you are required to write for typical data access scenarios. By taking advantage of LINQ to SQL, you can dramatically reduce the amount of code that you need […]
ASP.NET MVC Tip #37 – Create an Auto-Complete Text Field
In this tip, Stephen Walther demonstrates how you can create an auto-complete text field in an MVC view by taking advantage of the Ajax Control Toolkit. He explains how you can create a custom Ajax Helper that renders the necessary JavaScript. In the previous tip, I demonstrated how you can take advantage of the client […]
ASP.NET MVC Tip #36 – Create a Popup Calendar Helper […]
ASP […]
ASP.NET MVC Tip #35 – Use […]
ASP.NET MVC Tip #33 – Unit Test LINQ to SQL
In this tip, I demonstrate how to unit test the LINQ to SQL DataContext object by creating a Fake DataContext. You can perform standard LINQ to SQL inserts, updates, deletes and LINQ queries against the Fake DataContext. I’ve struggled for the past couple of months with different methods of unit testing MVC controllers that return […] | https://stephenwalther.com/archive/category/10/page/2 | CC-MAIN-2021-31 | refinedweb | 297 | 61.87 |
We offer the Didomi SDK as a hosted JavaScript library that you can directly include on your website with a
<script> tag.
Create a Consent Notice in the Didomi Console and get the script tag from the Embed section. Paste the tag the top of the
<head> section of your HTML pages, before any other script tag.
Important: Embed our tag before any other tag
Make sure to add the tag as close to the opening
<head> tag on your page as possible, before any other tag gets embedded
Keep in mind that the role of our JavaScript SDK is to share consent information with all the other scripts on the page. In order to do so, it MUST be placed before any other tag or the tags from your vendors will not be able to collect consent information from us. Put it as close as possible to the opening
<head> tag. If our SDK gets included after the other tags then the consent information will not be correctly shared and you will not be compliant with the GDPR requirements.
If you are using Content Security Policy for whitelisting content source domains, you have to make sure that you whitelist and to allow the Didomi SDK to operate normally.
You can skip this section if you don't have a React application.
We also provide a React component to simplify the integration of our SDK with React applications. To use it, please follow the steps below or go directly to our React component documentation:
Install the library using npm.
npm install --save @didomi/react
2. Import the module in your app.
import { DidomiSDK } from '@didomi/react';
We recommend instantiating the component as soon as possible: the sooner you instantiate the component, the faster the banner will be displayed or the faster the consents will be shared with your partners and the ads displayed.
3. Instantiate the component in your app
<DidomiSDKiabVersion={2}gdprAppliesGlobally={true}onReady={didomi => console.log('Didomi SDK is loaded and ready', didomi)}onConsentChanged={cwtToken => console.log('A consent has been given/withdrawn', cwtToken)}onNoticeShown={() => console.log('Didomi Notice Shown')}onNoticeHidden={() => console.log('Didomi Notice Hidden')}/>
The SDK will automatically pull the notice configuration from the Didomi Console.
For more information, please check our documentation :
Testing from outside the EU
If your banner is configured to not display to non EU visitors, it might be tricky to configure and test if you are not located in the EU yourself. You can use the
notice.ignoreCountry: true option if you are testing the banner from the US and force it to be shown to make sure it is working properly.
Another option is to add
#didomi:notice.ignoreCountry=true to the URL in your browser bar to force the SDK to ignore the country on the page. Example: If your website is, go to and the notice should be displayed even if you are not in the EU.
Once the SDK has loaded, you can call other functions on it to do consent management, send analytics events, etc. To make sure that you use the SDK when it is ready, you can register a global didomiOnReady array of functions that will get called when the SDK is done loading:
<script type="text/javascript">window.didomiOnReady = window.didomiOnReady || [];window.didomiOnReady.push(function (Didomi) {// Call other functions on the SDK});</script>
onDidomiReady(didomi) {console.log('Didomi Ready');// Call other functions on the SDK}...<DidomiSDK...onReady={this.onDidomiReady.bind(this)}/>
The SDK exposes other events and functions to allow you to interact programmatically with the CMP. Read our Reference section for more information:
As per GDPR, the consent notice collects consents for a specific set of vendors and purposes. You must configure the notice to let it know what vendors are used on your website and it will automatically determine what purposes are required. This can be done from the Didomi Console.
While we interoperate with a lot of vendors through the IAB framework or direct integrations, vendors that do not fall into either of these buckets must be configured through our tag manager or your existing tag manager. Failure to do so will result in vendors not being correctly blocked as needed and you will not be compliant with data privacy regulations.
Read our dedicated section to learn how to configure your vendors.
After the user has given consent or closed the banner, you must given them an easy access to their choices so that they can update them.
You can use the function
Didomi.preferences.show() to open the preferences manager and let the user update her choices. Example:
<a href="javascript:Didomi.preferences.show()">Consent preferences</a>
onDidomiReady(didomi) {this.didomiObject = didomi;}...<DidomiSDK...onReady={this.onDidomiReady.bind(this)}/><button onClick={() => this.didomiObject.preferences.show()}>Consent preferences</button>
We suggest adding this link in your privacy policy or in a header or footer menu on all of your pages.
Didomi supports the IAB Transparency and Consent Framework as well as the IAB CCPA frameworks. Read more in our documentation: | https://developers.didomi.io/cmp/web-sdk/getting-started | CC-MAIN-2021-17 | refinedweb | 839 | 54.02 |
package wow;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Scanner;
public class Wow
{
String question;
String answer;
int correct=0, number;
Wow[] quizBank = new Wow[14];
List<Wow> quizList = Arrays.asList(quizBank);
public static void main(String[] args)
{
Wow bank = new Wow();
bank.bankList();
bank.askQuestion();
} //end main
public void bankList()
{
quizBank[1] = new Wow();
quizBank[1].question = "A version of Windows released in October 15 2001";
quizBank[1].answer = "windows xp";
quizBank[2] = new Wow();
quizBank[2].question = "It is the world's first multi-touch smartphone released in June 29 2007";
quizBank[2].answer = " iphone";
quizBank[3] = new Wow();
quizBank[3].question = "It is the first quad core processor released by intel";
quizBank[3].answer = "core 2 quad";
quizBank[4] = new Wow();
quizBank[4].question = "He is the chief architech of the linux kernel";
quizBank[4].answer = "linus tolvards";
quizBank[5] = new Wow();
quizBank[5].question = "He is the founder of Microsoft";
quizBank[5].answer = "bill gates";
quizBank[6] = new Wow();
quizBank[6].question = "It is the first multitouch tablet relased by apple";
quizBank[6].answer = "ipad";
quizBank[7]= new Wow();
quizBank[7].question = "Its is the latest version and codename of android";
quizBank[7].answer = "4.2.2 jelly bean";
quizBank[8] = new Wow();
quizBank[8].question = "It is the operating system of iphone/ipad";
quizBank[8].answer = "ios";
quizBank[9] = new Wow();
quizBank[9].question = "Its the latest version and codename of mac os x";
quizBank[9].answer = "10.8 mountain lion";
quizBank[10] = new Wow();
quizBank[10].question = "how old is the microsoft corporation";
quizBank[10].answer = "35";
quizBank[11] = new Wow();
quizBank[11].question = "What is the latest version and codename of ubuntu linux";
quizBank[11].answer = "12.10 quantal quetzal";
quizBank[12] = new Wow();
quizBank[12].question = "it is the first computer that has a graphical users interface released by apple in 1984";
quizBank[12].answer = "macintosh";
quizBank[13] = new Wow();
quizBank[13].question = "he is the current ceo of google inc.";
quizBank[13].answer = "larry page";
quizBank[14] = new Wow();
quizBank[14].question = "A version of windows released in 2007";
quizBank[14].answer = "windows vista";
Collections.shuffle(quizList);
}
public void askQuestion()
{
Scanner input = new Scanner(System.in);
System.out.println("****************************** **");
System.out.println(" Welcome to TECH QUIZ");
System.out.println("****************************** **");
for (number=1; number<15; number++)
{
System.out.printf("%d. %s?%n", number, quizBank[number].question);
String entered = input.nextLine();
if (entered.compareTo(quizBank[number].answer)==0)
{
System.out.println("*** Correct! ***");
correct = correct + 1;
}
else {
System.out.println("--- Incorrect! ---");
System.out.println("the correct answer is");
}
}
System.out.println("*******************");
System.out.printf(" Your score is %d/%d%n", correct, number);
System.out.println("*******************");
}
}
how can i get rid of this error?
Exception in thread "main" java.lang.NullPointerException
at wow.Wow.askQuestion(Wow.java:84)
at wow.Wow.main(Wow.java:21)
Java Result: 1
How can i make this program show the correct answer in every question if the inputted answer is incorrect thanks.... | http://www.javaprogrammingforums.com/whats-wrong-my-code/24438-i-need-hep-my-java-program.html | CC-MAIN-2016-36 | refinedweb | 501 | 64.07 |
Where does your built version of panda look for the python executable? Is that where you are running your setup.py from? What I mean is, if you have two versions of python 2.4, one might not know where to look for the direct tree where the other one would. So if you haven’t yet, try running ppython setup.py instead of python setup.py
I’ve never used py2exe, so I can’t help much there.
However, I can say this: the pandac module is implemented quite differently under 1.1 than under 1.0. If you figure out how to solve it under 1.0, you’ll probably have to use a different solution under 1.1. Under 1.1, here’s how it works:
Each DLL such as “libpandaexpress” contains a bunch of python classes and methods. These are self-contained in the sense that you can import them directly into python without any sort of intervening wrapper and it works fine. For example, you can say:
from libpandaexpress import *
and you’ll get a bunch of the panda classes. In fact, I think their intent is to gradually move toward a system where you just import classes directly from the DLLs in the way I just showed.
However, in a few unusual cases, the disney guys are mixing C++ and python classes like this:
- Define the class in C++
- Write most of the methods in C++
- Write a few “extra methods” in python.
If you just import the classes directly from the DLLs, you’ll get the C++ class definition and the C++ methods, but you’ll miss out on the “extra methods.” That’s what the “pandac” directory is for.
The pandac directory contains small files that look like this:
from libpandaexpress import *
define extra method 1
insert extra method 1 into such-and-such class
define extra method 2
insert extra method 2 into such-and-such class
etc, etc.
So basically, 90 percent of the work consists of just importing the class and its methods from the DLLs, and the remaining 10 percent is attaching the python methods to the C++ classes.
I don’t know if that will help you sort this out, I hope it does, though.
I haven’t checked this method but maybe it works:
*)Use packpanda
*)Install it on your computer
*)Start py2exe for this installed game
I have recently tried to make some progress on this, but have only narrowed the problem…
I have installed py2exe into a registered version of python and copied the files over to the site-packages directory in panda3d’s python (although IPKnightly’s method above would have worked just as well). I then created a setup.py the same as IPKnightly with all of the direct packages listed, and tried testing it on a simple one-file panda application. I built the file using ppython and these were the warnings:
The first run of the executable, I got this error:
I remembered that PandaModules looks for the .pyz file in the same directory its located, so I copied PandaModules.pyz into the same directory that the executable was. I figured this was ok since PandaModules.pyz is just a bunch of zipped up python files for faster loading (maybe I am wrong about that).
After a couple more runs I had errors about missing .dll files. These were:
and:
I copied those .dll files to the same directory as the executable as well. I really just wanted to see how far I could get. The last error I got was:
I am assuming this was because I was copying dlls into places they should not have been.
When I moved the enitre directory containing the executable outside of the folder I built it in, I finally got IPKnightly’s error of
I figured out that this has nothing to do with pandac, but rather the fact that the direct/init.py file was not trivial. Here is the code from that file:
import os,sys srcdir1 = os.path.join(__path__[0], 'src') srcdir2 = os.path.join(__path__[0], '..', '..', 'direct', 'src') if (os.path.isdir(srcdir1)): __path__[0] = srcdir1 elif (os.path.isdir(srcdir2)): __path__[0] = srcdir2 else: sys.exit("Cannot find the 'direct' tree")
Basically it looks for a path either with ‘src’ appended to the path name, or two levels up from the current directory and then ‘direct/src’.
Before I had moved my executable file, it was two directories deep from the direct folder in panda, which inadvertently resolved that issue, but reappeared when I moved the location away from that particular directory structure.
So the real question is what is the best way to fix the non-trivial init problem? When byte-compiled, the init.py file needs to be able to look for the direct packages without trying to mangle the path. At leat thats my best guess. The quick solution would be to include the direct package two directories above the distribution executable, but that is really just a hack and I would like to find the ‘correct’ solution.
Any help would be greatly appreciated.
Well, the fancy init.py file is really itself just a hack to allow people to run files out of direct/src/whatever, instead of direct/whatever, which is where the installed files should end up when they are published. I would just copy the Python files into the appropriate places in the publish tree, omitting the intervening “src” directory, and omitting the init.py files.
But isn’t this whole problem already solved with Josh’s packpanda tool?
David
I am using version 1.0.5 which doesn’t have packpanda. I did install version 1.1.0 to try it out on the same file , but got an error:
I figured trying to fix packpanda (the fixes found here and here), and then debug it to build it into version 1.0.5 would be a lot more work than I could handle right now. I was hoping that getting py2exe to work might be easier.
I’ll try to resturcture the direct packages and see how it goes. Any ideas on why there are a few select pandac modules missing? I would think it would be all or nothing. Does moving the PandaModules.pyz file fix that problem?
You mean the dll’s? These have to be loaded by the system, so they have to be found somewhere on sys.path, regardless of where PandaModules.pyz is found. As to why you only got an error from two of them, well, those are the first two loaded; I bet you will get the same error from the rest of them when you get further.
David
I thought the packpanda util just made an installer for the program, but doesnt freeze the actual program into an exe like py2exe ?
Thanks David, I appreciate all of the help.
I was actually referring to the initial warnings from py2exe, but knowing that all of the dll files will be missing will help. I’ll try to find a solution.
Also, as far as I know, packpanda is an installer for .py or .pyc files that also installs panda (or some minimal version of it). It says packpanda was used to create the Airblade.exe found in the software downloads, so that might give an example of a finished packpanda distribution.
In that case, I’d recomend sticking with a combination of ino setup and py2exe. I already compiled a panda3d program with it with panda 1.0.5 and 1.1.0. Using 1.1.0 its much easier, but sadly, I already deleted the test compiles I did…
I just remember that the direct package is abit of a bastard in this, I think I had a second copy in my libarary.zip, but then with all modules in the direct dir, not source. And yes, you need to copy most .dll files from your bin dir to the dist dir.
good luck, if you want me to redo what I did, I’ll have a look.
Edit:
Oh and yeah, I seem to remember that you needed to edit some of the modules, they’re just a few from the pandac bundle though
Thanks Yellow
I restructured the direct tree and unzipped the PandaModules.pyz file using genpycode. This created all of the pandac files which allowed py2exe to search for the dll files to copy over. So that was taken care of. It also deleted the PandaModules.py file as well, which a lot of files import from. So I created my own PandaModules.py file like this:
#PandaModules.py from libpandaModules import * import ConfigConfigureGetConfigConfigShowbase import CIntervalManager import CInterval import LinearEulerIntegrator import CLerpAnimEffectInterval ConfigConfigureGetConfigConfigShowbase = ConfigConfigureGetConfigConfigShowbase.ConfigConfigureGetConfigConfigShowbase CIntervalManager = CIntervalManager.CIntervalManager CInterval = CInterval.CInterval LinearEulerIntegrator = LinearEulerIntegrator.LinearEulerIntegrator CLerpAnimEffectInterval = CLerpAnimEffectInterval.CLerpAnimEffectInterval
I created the extra imports from error messages given by running through it a couple times. I would think there is a file similar to libpandaModules that has these imports, and probably others that I am missing. Anyone know which?
The most recent error I am recieving is:
The corresponding code in ShowBase:
# Lerp stuff needs this event, and it must be generated in # C++, not in Python. throwNewFrame()
Where is throwNewFrame() located? I am guessing It would have been imported into PandaModules at some point, which I would have missed.
Thanks again
Right, throwNewFrame() is a C++ function (defined in showBase.cxx, and called throw_new_frame() there) that should have been imported when you imported the contents of PandaModules. Maybe there’s something missing in the PandaModules.py that you created.
You could try running genpycode -n. This should generate a PandaModules.py and a long list of separate *.py files, instead of zipping them all up into PandaModules.pyz.
David
Many, many thanks. That was just what I needed. It all works now.
Can you post that file you have used for making the exe.
Or can you write a tutorial in the manual for making an exe with py2exe.
Thanks Martin
The setup.py file I use is outdated. I am using py2exe version 0.4.1 so that it is compatible with python 2.2 and Panda1.0.5. If you are using Panda1.1.0, the file will look different, namely the ‘scripts’ argument will be a ‘console’ or ‘windows’ argument. anyway, here it is.
#setup.py from distutils.core import setup from distutils.core import Extension import py2exe setup(name = "pandaRun", scripts = ["pandaAnim.py"], packages = ['direct', 'direct.directbase', 'direct.showbase', 'direct.interval', 'direct.actor', 'direct.gui', 'direct.showbase', 'direct.task', 'direct.controls', 'direct.directnotify', 'direct.directtools', 'direct.directutil', 'direct.fsm', 'direct.ffi', 'direct.particles', 'direct.tkpanels', 'direct.tkwidgets', 'direct.cluster', 'direct.directdevices', 'direct.distributed', 'pandac' ], package_dir = {'direct' : 'C:\\Panda3D-1.0.5\\direct', 'direct.directbase' : 'C:\\Panda3D-1.0.5\\direct\\directbase', 'direct.showbase' : 'C:\\Panda3D-1.0.5\\direct\\showbase', 'direct.interval' : 'C:\\Panda3D-1.0.5\\direct\\interval', 'direct.actor' : 'C:\\Panda3D-1.0.5\\direct\\actor', 'direct.gui' : 'C:\\Panda3D-1.0.5\\direct\\gui', 'direct.showbase' : 'C:\\Panda3D-1.0.5\\direct\\showbase', 'direct.task' : 'C:\\Panda3D-1.0.5\\direct\\task', 'direct.controls' : 'C:\\Panda3D-1.0.5\\direct\\controls', 'direct.directnotify' :'C:\\Panda3D-1.0.5\\direct\\directnotify', 'direct.directtools' : 'C:\\Panda3D-1.0.5\\direct\\directtools', 'direct.directutil' : 'C:\\Panda3D-1.0.5\\direct\\directutil', 'direct.fsm' : 'C:\\Panda3D-1.0.5\\direct\\fsm', 'direct.ffi' : 'C:\\Panda3D-1.0.5\\direct\\ffi', 'direct.particles' : 'C:\\Panda3D-1.0.5\\direct\\particles', 'direct.tkpanels' : 'C:\\Panda3D-1.0.5\\direct\\tkpanels', 'direct.tkwidgets' : 'C:\\Panda3D-1.0.5\\direct\\tkwidgets', 'direct.cluster' : 'C:\\Panda3D-1.0.5\\direct\\cluster', 'direct.directdevices' : 'C:\\Panda3D-1.0.5\\direct\\directdevices', 'direct.distributed' : 'C:\\Panda3D-1.0.5\\direct\\distributed', 'pandac' : 'C:\\Panda3D-1.0.5\\pandac' } )
I’ll see if I have time this weekend to test out 1.1.0 with the newest version of py2exe.
Thanks a lot
Any progress with that? I’m trying to build an .exe with Panda 1.1.0, Python 2.4, and the latest py2exe, without much luck.
I’ve created a setup.py file explicitly listing all the panda modules, and I’m getting the following error when I try to run the .exe:
It appears that it doesn’t like the line:
Of course, it’s only getting there because it’s not finding the direct module files. I’ve been over this thread trying to figure out how to get it to work without much luck. I tried moving the direct modules from direct/src directory to direct and modified the init file, which got me a different error: | https://discourse.panda3d.org/t/topic/872 | CC-MAIN-2022-33 | refinedweb | 2,132 | 58.38 |
Hello everyone!
We are using Xamarin in our Mac application and started to receive random crashes with the following crash stack:
Thread 25 Crashed:: Thread Pool Worker 0 libobjc.A.dylib 0x00007fff7478bd5c objc_release + 28 1 libobjc.A.dylib 0x00007fff7478cc8c (anonymous namespace)::AutoreleasePoolPage::pop(void*) + 726 2 com.apple.CoreFoundation 0x00007fff485feee6 _CFAutoreleasePoolPop + 22 3 com.apple.Foundation 0x00007fff4a9aba5e -[NSAutoreleasePool drain] + 144 4 com.os33 0x000000010db021c3 xamarin_thread_finish(void*) + 147 5 com.os33 0x000000010db01f6c thread_end(_MonoProfiler*, unsigned long) + 28 6 com.os33 0x000000010dd1ff4d mono_profiler_raise_thread_stopped + 77 7 com.os33 0x000000010dd5bd2b mono_thread_detach_internal + 1275 8 com.os33 0x000000010dd62d35 start_wrapper_internal + 693 9 com.os33 0x000000010dd62a57 start_wrapper + 71 10 libsystem_pthread.dylib 0x00007fff75a5a305 _pthread_body + 126 11 libsystem_pthread.dylib 0x00007fff75a5d26f _pthread_start + 70 12 libsystem_pthread.dylib 0x00007fff75a59415 thread_start + 13
or
Thread 23 Crashed:: Thread Pool Worker 0 libobjc.A.dylib 0x00007fff59c5aa29 objc_msgSend + 41 1 libobjc.A.dylib 0x00007fff59c62dd4 objc_object::sidetable_release(bool) + 268 2 libobjc.A.dylib 0x00007fff59c5dc8c (anonymous namespace)::AutoreleasePoolPage::pop(void*) + 726 3 com.apple.CoreFoundation 0x00007fff2dac8ee6 _CFAutoreleasePoolPop + 22 4 com.apple.Foundation 0x00007fff2fe75a5e -[NSAutoreleasePool drain] + 144 5 com.os33 0x00000001006851c3 xamarin_thread_finish(void*) + 147 6 com.os33 0x0000000100684f6c thread_end(_MonoProfiler*, unsigned long) + 28 7 com.os33 0x00000001008a2f4d mono_profiler_raise_thread_stopped + 77 8 com.os33 0x00000001008ded2b mono_thread_detach_internal + 1275 9 com.os33 0x00000001008e5d35 start_wrapper_internal + 693 10 com.os33 0x00000001008e5a57 start_wrapper + 71 11 libsystem_pthread.dylib 0x00007fff5af2b305 _pthread_body + 126 12 libsystem_pthread.dylib 0x00007fff5af2e26f _pthread_start + 70 13 libsystem_pthread.dylib 0x00007fff5af2a415 thread_start + 13
I've tried to use the following trick but it doesn't help much =(
new System.Threading.Thread (() => { while (true) { System.Threading.Thread.Sleep (100); GC.Collect (); } }).Start ();
I can't reproduce this issue but tried to run application with NSZombieEnabled=1 and I was able to get the following error after I caught my crash:
[__NSCFLocalDataTask release]: message sent to deallocated instance 0x7fa7f4da2190 Illegal instruction: 4
Does anybody know what is __NSCFLocalDataTask? I can't find much about this class, looks like it something from NSURLSessionTask but I'm not sure.
I don't think I can use Instruments's Zombie since I was never able to catch this crash while debugging from Visual Studio. Are there any other technics with NSZombieEnabled I can try out?
Looks like you've done some good homework so far. That GC.Collect trick is the first one I pull out when you have random crashes.
Often things that begin with __NSCF the like are internal concrete classes Apple uses to implement public types. A guess that it's NSURLSessionTask related seems reasonable. A few things you could consider doing:
Hello again!
ChrisHamons, thank you for answering!
After enabling of AOT I'm still getting the same crash stack. Also I was able to catch this crash on different versions of Xamarin.Mac (I've seen it on 4.x and 5.2) so I think it doesn't related to Xamarin versions.
Also I found HttpClient Default Handler option in Mac build settings. We used NSUrlSession and I changed it to Managed hoping that this should resolve the crash but after several days I got it again.
I'm not using NSUrlSession directly in our code, only HttpClient. Is it still possible to get crashes related to NSURLSessionTask when Managed option selected? May be CFNetwork option can help?
According to our documentation NSUrlSession is the correct option for a vast majority of projects, so I would stick with that. I don't believe that should be the source of your problems.
The fact that you are seeing it in 4.x and 5.x suggests that you are either hitting a problem in multiple versions of mono or there is a bug in your networking code.
I would try to isolate your networking code in a small sample, either to make debugging easier or to be able to post it to an issue.
Would you be able to post that, so we can see how you are using the API. In some cases crashes are caused by misuse, sometimes minutes after the API in question was invoked. | https://forums.xamarin.com/discussion/150136/crash-on-objc-release-objc-msgsend | CC-MAIN-2021-25 | refinedweb | 660 | 61.22 |
I searched for information but couldn't solve my doubt
this is the code I have and it seems to work because when the images are the same the result in console is NONE
but in case the images are different I don't like modifying the code
I would like the code to do the following
Code: Select all
from PIL import Image from PIL import ImageChops im1 = Image.open("imagenupload.jpg") im2 = Image.open("imagereference.jpg") diff = ImageChops.difference(im2, im1).getbbox() print diff
if ( images are same ) -> flag=1
else ( images are different) -> flag=0
I just want to know if two images are different I don't need to know how they differ, only if they are different.
I've been reading about opencv but I've just been learning python for a while now.
very thanks | https://lb.raspberrypi.org/forums/viewtopic.php?f=32&t=206109 | CC-MAIN-2019-39 | refinedweb | 141 | 54.56 |
Hi,
It’s a big day today, since we are rolling out CLion 2016.1 release. Please, welcome! that generates stubs for virtual member functions from any of base classes and Implement that overrides pure virtual functions from base classes, we’ve added Generate definitions (
Shift+Ctrl+D on Windows/Linux,
⇧⌘D on OS X) which, as you would expect, generate definitions for existing declarations (in the previous versions you could use Implement action to get it). All three actions now put the code in the place the caret is positioned, in case it’s inside the class, or ask for a destination (if several options are available):
Thus CLion 2016.1 makes the code generation behaviour quite simple and straightforward – if you want the function to be generated in a header file – locate the caret in that class in your header file, and if you want it in the source file – execute action there.
‘Generate definitions’ can be called up in three different ways:
- By pressing
Shift+Ctrl+Don Windows/Linux,
⇧⌘Don OS X.
- Under the Generate menu (
Alt+Insertin Windows/Linux,
⌘Non OS X).
- As an intention action (
Alt+Enter).
Read more details in the corresponding blog post. – ‘Mark. You can find the plugin in our repository. below and download CLion for your operating system.
Try CLion 2016.1 now and let us know what you think in the comments section below!
The CLion Team
JetBrains
The Drive to Develop
It’s nice CLion now supports Python and Swift out of the box. I hope CPP-744 will be implemented very soon as well.
We’ll consider it for sure when planning the next release. We’ll share the plans as soon as we get the roadmap for 2016.2 version.
That might be non-trivial to get to work well. But if you can it is _the_ killer feature.
Another feature request is to support ninja as the build tool. I’m really trying to get my red/green/refactor cycle down to the fastest possible.
Thanks, we’ll consider.
Regarding CPP-744, I’d suggest that there are some potential incremental deliveries here. Remote compile and execution but _without_ remote debugging is still a useful feature. I’d go even further and say that remote compilation without even execution would be a useful feature for me.
That’s interesting, thanks a lot
I had one more thought. I don’t really know the complexities from your side, but from where I’m sitting, remote compile is rather similar to distributed compilation where the remote compile is the special case of compiling on only one remote server.
I do understand that there is a big difference between whether the code can compile locally or not, but maybe there is some synergy here?
It would be very cool if CLion would support distributed compilation without needing to have a Phd in compilerology.
Thanks) Interesting thought. We need to consider for sure.
OTOH, for us, it’s the full cross-compile to another architecture, copy over to target, remote run/debug or it’s nothing. Compiling on our weak target machine is a non-starter. Of course, if some are helped by the partial implementation that’s nice for them so it’s certainly better than nothing. Just remember not everyone has a powerful enough remote target to run a full tool chain
I can definitely live without disassembly though (considering the issue of different remote arch)
Also, this is the reason we currently don’t use clion.
Thanks for sharing!
Clion is now my favorite IDE for C++ coding, but not yet for C++ debug.
Please consider GDB debugging speed-up and Makefile/Custom build support for 2016.2!
Thanks, we’ll consider. We’ll publish the approximate roadmap as soon as we are ready. Stay tuned and thank you for your support!
Thanks for the hard work! I’m looking forward to variadic templates and some of the other features.
I noticed that the editor font appearance has changed and now has slightly more weight. I loved the old way that the default font (DejaVu Sans Mono) was displayed in 2.2. Is there any way to go back to that? I’m running elementaryOS Linux.
Thanks a lot! We appreciate your support.
CLion currently uses bundled JDK on Linux, it’s our custom version with the fixes from the JetBrains team. To change it you can call Find Action (Shift+Ctrl+A) and type ‘Switch IDE boot JDK’, then change it in the dialog that appears. Alternatively, you can set CL_JDK env var with your preferred Java version.
Thanks Anastasia. If I switch to a non-bundled JDK what fixes will I be losing?
Mostly it’s font rendering. Some crashes maybe related, but just report to us if you meet any (we’ll check if it’s related).
Ok. I installed the Oracle JDK on my system but when I invoke the Switch IDE boot JDK command it only gives me one option, the ‘bundled’ option. I followed this post, am I missing anything?
Probably it was searched in the wrong place… Feel free to share where it was installed so that we could check.
Bob, we suppose it’s a bug finally. So could you please kindly provide us some extra info to investigate and fix:
– OS version
– version output: call java (that is not seen by the switcher inside CLion) by the absolute path with -version option
Success! Towards the top of clion.sh I added this line and now my fonts are beautiful again. It seems kind of hackish but at least it’s working and I’m happy.
Thanks!
CL_JDK=$JAVA_HOME
It was installed to /usr/local/java/jdk1.8.0_73 and the $JAVA_HOME variable was set to that path. Do I need to set the $JDK_HOME variable also?
I believe no, but let us check.
elementaryOS Freya 0.3 (based on Ubuntu 14.04 LTS)
which java
/usr/bin/java
/usr/bin/java -version
java version “1.8.0_73”
Java(TM) SE Runtime Environment (build 1.8.0_73-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.73-b02, mixed mode)
Thanks. I’ve put this into the tracker:
Feel free to follow the ticket to get updates.
Hi Anastasia,
If at all possible, can you guys decouple the font rendering from the rest of the JDK fixes? I’d love to use the bundled JDK except for the font part (looks really bad for the one I use). I was putting off the upgrade till Bob’s hack helped me (thanks, Bob!).
I’m really not sure this can be done, but please leave it as a feature request here:. The team will consider then. Or at least find some solution via settings.
If you could maybe attach some screenshots of what you dislike about font rendering, it would be perfect. Thanks in advance.
Hi Bob,
I’ve reported this as IDEA-151425, please subscribe, vote & share your experiences
–Yury.
Thanks Yury. Done.
User-defined literals() have been a problem since 2014, and it is still broken now.
User-defined literals and constexpr are still not supported from C++11.
What do you mean by “not supported from c++11”? User-defined literals are used in the STL at least by “std::chrono” ()
CPP-1727 really hurts me as well
CLion currently doesn’t support user defines literals, that’s true. And we know they are used now in std::chrono, which makes us increase the priority of this task. We hope to have them supported soon. Please, follow the request to get the updates.
Amazing!
I hope will soon you will add a cmake support
*qmake
Here is the request:. Comment, follow to get updates, upvote to increase priority
Thank you for the great product! Just want to know whether it is possible to add support for different project model via 3rd party plugins. I think to add support for IAR based projects by myself because i know it will has very low priority for you. Is there a chance i can do it now?
Also, there are some annoying bugs in the C development: Any ETA for them?
Currently it’s not possible since there is no public interface or API for this. We do hope to have it when start working on at least seconf project model/build-system support. With only CMake on board it’s quite unstable to make it public.
We’ll check the issue linked and consider for the upcoming fixes/new release.
Still waiting for better preprocessor support, e.g. CPP-1100.
I have just updated to Clion 2016.1 and I can’t find the cmake toolbar anywhere.
Did I miss something??
Did already fixed the issue.
There was something wrong in my .idea directory.
The result was that no cmake files were generated.
It was just an empty directory.
When I renamed the .idea directory to something else, the problem solved.
Hm. Interesting. In case your project is CMake-based and something goes wrong with CMake in CLion next time you can try Tools | CMake | Reset Cache and Reload
How about View | Toolwindows | CMake?
I have a project that uses Qt and runs in Windows, Linux and OSX.
I use QtCreator to develop in all platforms. In windows I must use microsoft tool chain (I do not use visual studio editor, only compiler, debuger, etc..) because i use some libs that not compile with gcc in windows.
QtCreator is Ok but I was looking for something to speed up the work. I was willing to give CLion a shot, but for what I notice (correct me if I’ wrong) in windows I cannot use microsoft tool chain with CLion, I must use Reshaper for that, and more, in windows I have to start using Visual Studio editor with Reshaper and not Clion.
If this is correct instead of using CLion + QtCreator in all platforms I need a different one in windows (Visual Studio + Reshaper) and the cost will be 199€ x 2.
If this is the case I prefer to stick with QtCreator only.
Yes, we do agree that this is quite inconvenient for you. So we might consider supporting MSVC compiler in CLion:
Then you can probably develop and debug on Linux and build on Windows. It is quite easy with CMake.
Thanks for the idea but when we have platform specific code that links with 3rd party libs this is not really an option. Nevertheless we spend most time programming in linux,
But still I recommend to stick with QtCreator until debugging experience improves in Clion. If you use CMake then you can switch between QtCreator and Clion easily,
Double that! Debugger stability is really the area waiting to be improved.
Nuno: Have you used Maven in the past? If so, you could try Maven NAR plugin and configure it to use msvc toolchain, CLion doesn’t support the plugin but you could use the terminal to compile, test, and install the package in a repository. See some basic archetype here.
I recently updated to CLion 2016.1. I have some problems with Google Test: The macro EXPECT_TRUE() which is used to assert trueness is being marked as red in the code editor with message “Error after macro substitution: Class ‘testing::AsertionResult’ doesn’t have a constructor ‘AssertionResult(bool)'”. The weird thing is that the test compiles and executes succesfully. I was using CLion 1.2 before and never had this problem.
Yes, it’s a known regression –. Sorry for the inconvenience. Hope to fix soon.
Trying to compile QT project with Q_OBJECT macro for signals. Getting that error in 2017.2 EAP version. Is there a work around or something for it maybe?
Do you mean compilation issues? Or IDE’s false-positives?
Anastasia,
Do you want to push Clion into embedded systems and electronic design domain?
Currently it is completely dominated by Eclipse CDT-based solutions that are quite good, but not perfect imo.
I’m using Clion for hardware design and it works reasonably well.
We do plan this for sure.
It will be great if you share some typical issues that maybe need to be fixed especially for this area. Or just some specific feedback.
Embedded market is extremely diverse and there is no chance to create “one size fits all” solution, like JetBrains did for Java/Web world (Java/Web world is very diverse too (dozen of frameworks, database systems, multiple languages etc), but I think embedded is more diverse
).
Eclipse, the current king of embedded has different model: there is a common opensource core that anyone can contribute to and dozen of proprietary IDEs build on top of it. Eclipse-based products are supplied by Intel, TI, Xilinx, Altera, Synopsys, Analog devices, Mentor Graphics and many other companies. I’ve seen couple of Netbeans and QtCreator-based IDEs for embedded, but they are much less common.
But it was not like that 7-10 years ago, at that time most of those companies provided some custom-build IDEs or command-line tools only.
So to fight for this market Jetbrains will need to promote Clion as a better platform for plugin development then Eclipse. You will also need to win hobbyist market: looks like Arduino and Raspberry PI are two most common platforms in this area.
Now speaking about particular bugs:
Cross-compilation, remote debug
Assembly-level edit and debug
You will also need to create gui for embedded debug with CPU registers view, and memory area view (to observe memory-mapped IO)
But IMO debug is not yet there even for native linux projects. Its still slow, gdb-python pretty printers are not rendered correctly. I think you should fix this before implementing more debugger features.
Thanks Roman. We completely agree that the market is diverse and your ideas here are really valuable to us.
We also agree that we need a big work to be done in the debugger area to fit this market (and in general to satisfy our users needs). Especially those tickets you’ve mentioned. And I really hope we’ll be able to handle them in the upcoming releases. Let’s see.
I completely agree and even talked with JetBrains about this. A community / hobbyist edition of CLion that could be used to develop software for Raspberry Pi and Arduino would be great.
When these developers later land in the professional software business they won’t think twice about what tool to use.
I would like to ask where I can find see the roadmap CLion 2016.1?
Just wondering if it would be possible what QT support will be considered soon?
And of course support of UE4 is the same as in Visual Studio when you can create project in UE4 and Studio recognize project file and finally you can write a code?
Do you mean 2016.2? It will be published soon in our blog. Stay tuned.
Here it is:
Thanks for the answer.
And what about the future support UE4 and QT in qmake and cmake?
UE4 can work via CMake. So you can try it. Check this blog post for additional info:
Qt as a library is handled by CLion quite well now.
As for the qmake – we have a feature request – but (repeating our last paragraph from the post) we are currently not ready for implementing additional project model inside CLion, we’ll continue the investigation and some preliminary work in that direction, but it won’t make it to the 2016.2
I read this blog on UE4 but this is not the way out. This should work without any complicated settings, I think you understand. For example in Visual Studio is not required to do.
didn’t know about Quick Documentation feature before this post.
very useful. but not completely clear for me –
I see this shows , in addition to definition, an info from a comments as well , e.g. for STL types,
however it does not do this for my project types/methods/members.
even these are having doxygen formated doc
for project level, quick documentation just shows basic info like a header file, parent type, etc.
am I doing something wrong? maybe need to enable something in settings?
how to fix this?
Doxygen is not yet supported but is planned for 2016.2 and is currently in development.
> it does not do this for my project types/methods/members.
Could you please provide a sample?
just an examples from test project:
1. go to line like this:
std::this_thread::sleep_for(3s);
step on this_thread and press Ctrl+Q
I see something like this :
“…
@namespace std::this_thread
Brief
ISO C++ 2011 entities sub-namespace for thread. 30.3.2 Namespace this_thread.
…”
which was parsed from header:
“…
/** @namespace std::this_thread
* @brief ISO C++ 2011 entities sub-namespace for thread.
* 30.3.2 Namespace this_thread.
*/
namespace this_thread
…”
It looks good.
——————————————–
try the same for something in our project:
usage:
shm_server server;
declaration:
“…
* \class shm_server xxx.hpp “src/xxx.hpp”
* \brief processes incoming data and dump this to files
*
* class works as a wrapper for a worker thread (operator () ) which receives a data via shared memory
* server receives a data chunks from various clients until get empty vector of bytes, which is a flag of end of this client transfer
*
* \warning using std::cerr and std::clog is potentially thread unsafe, has to be replaced */
class shm_server : public base_interview_task {
….”
however Ctrl+Q gives me only:
“…
Declared In: searchinform.hpp namespace interview_tasks::searchinform class shm_server : public
…”
Thanks. I suppose it doesn’t work as expected because of this:. Feel free to upvote and follow to get the updates.
Is there any way to change the display value of custom types in Clion? I have custom containers and it would be very useful to see a custom representation of the stored value.
Do you mean in debugger? For GDB you can use pretty printers – (though some issues are known:)
I am a Clion user in China.When I need update,patch or install file from Jetbrains’ server,downloading speed will be very slow,sometimes can not complete successfully.Could you think about solutions?Such as cdn or something else?
We are currently working on such. Hope to have some in nearest future. Sorry for the inconvenience.
Thanks for your reply.Everything about Clion is good,but network speed causes much problem.
Installed today on windows and it’s unfortunate that only gdb7.8.x is supported, meanwhile, cygwin has only easy support to install gdb7.9x/7.10.x, so right away need to fight the install on windows.
The result is one needs to go through this convoluted install routine as posted on stackoverflow:
Posting this to call your attention to save some future potential users from hours of work to get it running.
Thanks.
yes, that’s true that only 7.8 is supported for now. You can still set 7.9 or 7.10 as a custom used version in the settings and use it, however we are aware of several problems (like this one, or some other related to). We plan to fix them and then we can officially announce other GDB versions support.
Hello
On Ubuntu 16.04 with clion there is no code completion for swift core libraries Foundation
Is it planned?
Thanks
Yes, we do plan this for later:
Awesome, jetbrains never disappoints i love you <3 everywhere i go i talk about your products, big thanks
Thanks for your support!
Is there any plans to support the Swift Package Manager? This would be a HUGE win! I mean really HUGE! Is there an issue I can upvote or something? make a campaign!
It’s not in the nearest plans since we didn’t collect much feedback about Swift plugin usage for now from our users. However feel free to create a feature request:, upvote and follow for the updates.
Clion is now my favorite IDE for C++ coding, but not yet for C++ debug.
Please consider add linking a server to debug.
Such as msvsmon.exe in VS2013.
Thanks.
CLion can do GDB Remote debug, provided that you have gdbserver on a remote machine (). We also plan more comprehensive remote toolchains integration (), but I can’t give you ETA on this. | https://blog.jetbrains.com/clion/2016/03/clion-2016-1-released-better-language-support-and-new-dev-tools/?replytocom=25292 | CC-MAIN-2019-47 | refinedweb | 3,398 | 74.79 |
Ubic::Daemon::PidState - internal object representing process info stored on disk
version 1.48_02
This is considered to be a non-public class. Its interface is subject to change without notice.
Constructor. Does nothing by itself, doesn't read pidfile and doesn't try to create pid dir.
Check if pid dir doesn't exist yet.
Create pid dir.
After tihs method is called,
is_empty() will start to return false value.
Read daemon info from pidfile.
Returns undef if pidfile not found. Throws exceptions when content is invalid.
Acquire piddir lock. Lock will be nonblocking unless 'timeout' parameter is set.
Remove the pidfile from the piddir.
is_empty() will still return false.
This method should be called only after lock is acquired via
lock() method (TODO - check before removing?).
Write guardian pid and guid into the pidfile.. | http://search.cpan.org/~mmcleric/Ubic-1.48_02/lib/Ubic/Daemon/PidState.pm | CC-MAIN-2016-50 | refinedweb | 136 | 71.71 |
Just one.. very morbid thought: does your tool require to be maintained as new GL versions and extensions are added? Is it possible to tweak as follows:
- Same nice command line options as now
- Takes as input a set of GL header files from which it generates the data. The idea is that glext.h (for example) has that functions for an extension foo are surrounded by #define GL_foo/#endif pair.
Though, I wonder about the horror of reused tokens and "dependent" extension functions (for example in an extension, "if GL_foo extension is supported then also the following functions for GL_bar are added: glBarFoo()" ).
At any rate, looking forward to futzing with this. | https://www.opengl.org/discussion_boards/showthread.php/179181-Interest-in-specialized-GL-function-loader/page3?p=1244147 | CC-MAIN-2015-27 | refinedweb | 113 | 61.16 |
This.
<StackPanel>
<Button Content="Click Me"/>
</StackPanel>. XAML processing to create a new instance of the named class when your XAML page is loaded. Each instance is created by calling the default constructor of the underlying class or structure and storing the result. To be usable as an object element in XAML, the class or structure must expose a public default (parameterless) constructor., your application will use something other than a completely default instance of any given object..
<Button Background="Blue" Foreground="Red" Content="This is a button"/> the content of the tag. Generally, the content is an object of the type that the property takes as its value (with the value-setting instance typically specified as another object element). The syntax for the property element itself is <TypeName.Property>. After specifying content, you must close the property element with a closing tag just like any other element (with syntax </TypeName.Property>). For properties where both attribute and property element syntax are supported, the two syntaxes generally have the same result, although subtleties such as whitespace handling can vary slightly between syntaxes. If an attribute syntax is possible, using the attribute syntax is typically more convenient and enables a more compact markup, but that is>
Property element syntax for XAML represents a significant departure from the basic XML interpretation of the markup. To XML, <TypeName.Property> represents another element, with no necessarily implied relationship to a TypeName parent beyond being a child element. In XAML, <TypeName.Property> directly implies that Property is a property of TypeName, being set by the property element contents, and will never be a similarly named but discrete element that happens to have a dot in its name.
Properties as they appear as XAML attributes on a WPF element are often inherited from base classes. For example, in the previous example, the Background property is not an immediately declared property on the Button class, if you were to look at the class definition, reflection results, or the documentation. Instead, Background is inherited from the base Control class.
The class inheritance behavior of WPF XAML elements is another significant departure from the basic XML interpretation of the markup. Class inheritance (particularly when intermediate base classes are abstract) is one reason that the set of XAML elements and their permissible attributes is nearly impossible to represent accurately and completely using the schema types that are typically used for XML programming, such as DTD or XSD format. Also, the "X" in XAML stands for "extensible," and extensibility precludes completeness of any given representation of "what is XAML for WPF" (although maintaining separate xmlns definitions can assist with this issue; xmlns is discussed in a later section).
Markup extensions are a XAML concept. In attribute syntax, curly braces ({ and }) indicate a markup extension usage. This usage directs the XAML processing to escape from the general treatment of attribute values as either a literal string or a directly string-convertible value..
When a markup extension is used to provide an attribute value, the attribute value should instead be provided by the logic within the backing class for the relevant markup extension. The most common markup extensions used in WPF application programming are Binding, used for data binding expressions, and the resource references StaticResource and DynamicResource. By using markup extensions, you can use attribute syntax to provide reference values for properties even if that property does not support an attribute syntax for direct object instantiation, or enable specific behavior that defers the general behavior of the requirement that XAML properties must be filled by values of the property's type.
For instance, the following example sets the value of the Style property using attribute syntax. The Style property takes an instance of the Style class, a reference type that by default could not be specified within>:
<Button Margin="10,20,10,30" Content="Click me"/>
The preceding attribute syntax example is equivalent to the following more verbose syntax example, where the Margin is instead set through property element syntax containing a Thickness object element, and four key properties of Thickness are set as attributes on the new instance:
<Button Content="Click me">
<Button.Margin>
<Thickness Left="10" Top="20" Right="10" Bottom="30"/>
</Button.Margin>
</Button>
Whether to use the typeconverter-enabled syntax or a more verbose equivalent syntax is generally a coding style choice, but the typeconverter-enabled syntax promotes more streamlined markup. (However, there are a limited number of objects where the typeconverter is the only way to set a property to that type, because the type object itself does not have a default constructor. An example is Cursor.)
For more information on how typeconverter-enabled attribute syntax is supported, see TypeConverters and XAML.
XAML specifies a language feature whereby the object element that represents a collection type can be deliberately omitted from markup. When a XAML processor handles page that is nested as a child element of another element is really an element that is one or both of the following cases: a member of an implicit collection property of its parent element, or an element that specifies the value of the XAML content property for the parent element (XAML content properties will be discussed in an upcoming section). In other words, the relationship of parent elements and child elements in a markup page is really a single object at the root, and every object element beneath the root is either a single instance that provides a property value of the parent, or one of the items within a collection that is also a collection-type property value of the parent. many common specifies a language feature whereby any class that can be used as a XAML object element can designate exactly one of its properties to be the XAML content property for instances of the class. When a XAML processor handles.
<StackPanel>
<StackPanel.Children>
<!--<UIElementCollection>-->
<Button>
<Button.Content>
Click Me
</Button.Content>
</Button>
<!--</UIElementCollection>-->
</StackPanel.Children>
</StackPanel>
The StackPanel / Button example has still another variation.
<StackPanel>
>
A class might support a usage as a XAML element in terms of the syntax, but that element will only function properly in an application or page when it is placed in an expected position of an overall content model or element tree. For example, a MenuItem should typically only be placed as a child of a MenuBase derived class such as Menu. Content models for specific elements are documented as part of the remarks on the class pages for controls and other WPF classes that can be used as XAML elements. For some controls that have more complex content models, the content model is documented as a separate conceptual topic. See Content Models. processors and serializers will ignore or drop all nonsignificant whitespace, and will normalize any significant whitespace... The XAML Syntax Terminology topic is also a good place to start if you are considering the XAML usages to enable if you are creating a custom class.
A XAML file must have only one root element, in order to be both a.
<Page
xmlns=""
xmlns:x=""
...
</Page>
The root element also contains the attributes xmlns and xmlns:x. These attributes indicate to a XAML processor which namespaces contain the element definitions for elements that the markup will reference. each page and on the application definition if it is provided in markup. XAML markup style that is difficult to read.
The WPF assemblies are known to contain the types that support the WPF mappings to the default xmlns. Typically, you choose a different prefix, but it is also possible to choose a different xmlns as default and then map WPF to a prefix. For more information about how xmlns namespaces and the namespaces of the backing code in assemblies are related, see XAML Namespaces and Namespace Mapping.
In the previous root element example, the prefix x: was used to map the XAML xmlns. This x: prefix will be used to map the XAML xmlns in the templates for projects, in examples, and in documentation throughout this SDK. The x: prefix/XAML xmlns contain several programming constructs that you will use quite frequently in your XAML. The following is a listing of the most common x: prefix/XAML xmlns programming constructs you will use:
x:Key: Sets a unique key for each resource in a ResourceDictionary. x:Key will probably account for 90% of the x: usages you will see in your application's markup.
x:Class: Specifies the CLR namespace and class name for the class that provides code-behind for a XAML page. You must have such a class to support code-behind, and it is for this reason that you almost always see x: mapped, even if there are no resources.
x:Name: Specifies a run-time object name for the instance that exists in run-time code after an object element is processed. You use x:Name for cases of naming elements where the equivalent WPF framework-level Name property is not supported. This happens in certain animation scenarios.
x:Static: Enables a value reference that gets a static value that is not otherwise a XAML settable property.
x:Type: Constructs a Type reference based on a type name. This is used to specify attributes that take Type, such as Style..::.TargetType, although in many cases the property has native string-to-Type conversion such that the x:Type usage is optional.
There are additional programming constructs in the x: prefix/XAML xmlns, which are not as common. For details, see XAML Namespace (x:) Language Features..="MyNamespace.MyPageCode">
<Button Click="ClickHandler" >Click Me!</Button>
</Page>
namespace MyNamespace
{
public partial class MyPageCode
{
void ClickHandler(object sender, RoutedEventArgs e)
{
Button b = e.Source as Button;
b.Background = Brushes substantial limitations. For details, see Code-Behind and XAML.
When you specify behavior in code-behind, with the handler being based on the delegate for that event. You write the handler in code-behind in a programming language such as Microsoft Visual Basic .NET or C#.
Each WPF event will report event data when the event is raised. Event handlers can access this event data. In the preceding example, the handler obtains the reported event source through the event data, and then sets properties on that source.
A particular event feature that is unique and fundamental to WPF is a routed event. Routed events enable an element to handle an event that was raised by a different element, as long as the elements are connected through an element, see Routed Events Overview.
By default, the object instance that is created by processing an object element does not possess a unique identifier or an inherent object reference that you can use in your code. just references that instance), and your code-behind can reference the named elements to handle run-time interactions within the application., use x:Name instead.);
}
}
Just like a variable, the name for an instance is governed by a concept of scope, so that names can be enforced to be unique within a certain scope that is predictable. The primary markup that defines a page denotes one unique namescope, with the namescope boundary being the root element of that page. However, other markup sources can interact with a page at run time, such as styles or templates within styles, and such markup sources often have their own namescopes that do not necessarily connect with the namescope of the page. For more information on x:Name and namescopes, see Name, x:Name Attribute, or WPF Namescopes. | http://msdn.microsoft.com/en-us/library/ms752059.aspx | crawl-001 | refinedweb | 1,914 | 51.58 |
@ Stephem: here is my build-depends: Vec -any, array -any, base -any, containers -any, mtl -any. You also can find a cabal file on my GitHub: <>@Daniel: In fact, I've inserted that context trying to fix the problem but it affected nothing. I'll remove it. Thank you guys, in advanced, for the collaboration! Edgar On 26 March 2011 21:50, Daniel Fischer <daniel.is.fischer at googlemail.com>wrote: > On Saturday 26 March 2011 21:35:13, Edgar Gomes Araujo wrote: > > Hi Stephen, > > I've have done the following: > > > > {-# LANGUAGE ScopedTypeVariables #-} > > {-# LANGUAGE RankNTypes #-} > > ... > > mbc :: forall a . (SubUnit a)=>[Point] -> SetActiveSubUnits a -> Box -> > > StateMBC a [Unit a] > > mbc p afl box = do > > cleanAFLs > > if (null afl) > > then do > > (unit, afl') <- case build1stUnit plane p1 p2 p of > > Just un -> return (([un], fromList $ getAllSubUnits > > un)::(SubUnit a)=>([Unit a], SetActiveSubUnits a)) > > Remove the context, that's given in the signature: > > return (([un], fromList ...) :: ([Unit a], SetActiveSubUnits a)) > > > _ -> return ([] , empty) > > analyze1stUnit unit afl' > > ..... > > > > > > I hope that is right. Does it? > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: <> | http://www.haskell.org/pipermail/haskell-cafe/2011-March/090489.html | CC-MAIN-2014-49 | refinedweb | 178 | 66.13 |
Hey there,
i've run in this problem as well.Before I wrote Stuff in Kotlin I used Scala and Clojure to get things done, both of them have a more elegant way to deal with multiple NonNull Values for Collection Operations and Lets, then Kotlin has out of the box. So I decided to use the power of ExtensionFunctions and wrote my own Library to make it more elegant and readable.
You can find the library on Github:
The project is still in progress and I need to add a discription to use it but here a litte example to fix your problem.
import io.multifunctions.letNotNull
Quad(userApi.get(userId),
ordersApi.get(userId),
favoritesApi.get(userId),
notesApi.get(userId))
.letNotNull { user, orders, favorites, notes ->
HttpResult.renderPage(user = user,
order = orders,
favorites = favorites,
notes = notes)
}
In this example the lambda returns null in case one of the accessed apis return a null object, the let will return a null as well.
Currently the lib is able to handle 6 parallel objects/collections with the following functions:
I hope this will help you with this problem
Despite possible performance impacts, local vals is the best solution. If the variable is mutable and accessible to outer scopes then the only way to guarantee that it hasn’t changed is making a local copy. Even null checking using ?.let {} for single variables does this:
?.let {}
fun test() {
name?.let { doSomething(it) }
}
compiles to the equivalent java code:
public final void test() {
String var10000 = this.name;
if(this.name != null) {
String var1 = var10000;
this.doSomething(var1);
}
}
(Note: the kotlin compiler does optimise this when it can be sure that the variable won’t change. If name was a local var instead of a property, the compiler wouldn’t make an additional local variable
Swift has a similar problem with null checking multiple vars and solves this by making it easy to declare local values within if statements:
// Swift code
if let name = name, let age = age {
doSth(name, age)
}
A similar syntax in Kotlin could be the following:
if (val name = name, val age = age) {
doSth(name, age)
}
Which would compile to the equivalent java code:
final String name = this.name;
if (name != null) {
final Integer age = this.age;
if (age != null) {
doSomething(name, age);
}
}
The usage of the comma instead of && is to differentiate it from normal if statements in that everything should evaluate to non-null, the || operator is not allowed and do not make sense:
&&
||
if (val name = name || val age == age) { /* Defeats the point of null checking */ }
Like with the ?.let{} case, it would be possible for the compiler to optimise the creation of the local variables, only creating them if required. Also like the it in the ?.let{} block, any values declared in the scope of the of the if statement and it’s following block would be immutable.
?.let{}
it
The syntax could be generalised as follows:
if (val var1 = <expression1> ,val var2 = <expression2>, val varN = <expressionN>) {
// Do something with var1, var2, varN
}
Where <expressionX> is any expression that returns a nullable type
<expressionX>
Which compiles to the equivalent java code:
final Type1 var1 = <expression1>;
if (var1 != null) {
final Type2 var2 = <expression2>;
if (var2 != null) {
final TypeN varN = <expressionN>;
if (varN != null) {
// Do something with var1, var2, varN
}
}
}
It may seem a bit much, but it is almost identical to how multiple consecutive safe null calls are compiled. e.g: foo?.bar?.baz
foo?.bar?.baz
The benefit however is that is that you can use earlier values in later expressions. In the example below p1 and p2 have already been null checked so we can access their age property directly.
fun whoIsOldest(person1: Person?, person2: Person?) {
if (val p1 = person1, val p2 = person2, val age1 = p1.age, age2 = p2.age) {
when {
age1 > age2 -> print("$p1 is oldest")
age1 < age2 -> print("$p2 is oldest")
else -> print("$p1 and $p1 are the same age")
}
} else {
print("Your need two people to compare ages")
}
}
And because the expressions can be anything that returns a nullable type it can also be used with safe casts:
if (val child = person as? Child, val car = child.favouriteToy as? Car) {
car.race()
print("$child is racing their toy $car")
}
A additional improvement that could be made which is also available in swift is allowing boolean expressions alongside the nullable expressions. It’s a fairly common use case to check if something is non-null and then perform an additional check to see if it is appropriate to use.
if (val childAge = person.child?.age, childAge >= 6 && childAge < 18) {
print("$person has to drop their child at school each weekday")
}
Here the use of the comma instead of && reinforces the requirement that every expression needs to evaluate to non-null and true, no || operators are allowed.
Minor performance issues aside, this seems like an ideal solution.
Especially the extra step of being able to have boolean expressions as this covers a huge number of use cases I've seen.
Yes you COULD declare these vars your self, but then they (probably?) couldn't be optimized away. And it feels pretty boilerplatey, which is antithetical to Kotlin it seems.
So tl;dr +1 from me. would LOVE to see this implemented in the language
Which means that the semantics of Kotlin change in these ways:
val childAge = person?.child?.age
Int?
Int
if
null
false
Not that I think these are blocking issues, but the inconsistencies are something to consider.
To clarify I’m not advocating that C style implicit boolean conversions should be added. null should not be treated as false in other circumstances: if (person?.name) {...} shouldn’t be possible.
Rather I’d think of it as not-being-able-to-assign-nullable-to-non-null equals false
if (person?.name) {...}
not-being-able-to-assign-nullable-to-non-null
This again makes the language more inconsistent: Some constructs only work in a limited number of situations, and not in other similar situations. I can, for example, imagine people trying to use this in a while-statement. The (justified) expectation is that if you can use something in situation A, you can also use it in similar situation B. So if people see the expression in the if statement above, it is reasonable to expect them to be able to do this:
while
val hasToBeDroppedOffAtSchoolOnWeekdays = val childAge = person?.child?.age, childAge >= 6 && childAge < 18
We could have a new control structure instead of if().
guard() or something, so it’s clear this is a special case. Much like the for loop’s syntax is somewhat strange and wouldn’t work inside an if()
I don’t see any problem with also allowing while statements to use this syntax, as it’s just another type of conditional flow control. (swift also allows it’s version of this syntax in this situation)
val ride = RollerCoaster()
while (val person = personQueue.getNextPerson(), person.height > MIN_HEIGHT) {
print("$person is riding the roller coaster")
ride.add(person)
}
ride.start()
The equivalent java code would be a bit more complicated though compared to the if version.
There are already Kotlin syntax rules which would indicate that statement wouldn’t be possible. The statement above is pretty similar to following, which won’t compile:
val bar = val foo = "baz" // compiler error
Also assigning T? to a T always requires some kind of extra context for it to work, either by explicitly null checking and relying on the smart cast or using an elvis operator to provide a fallback value. The proposed syntax is just another way to add context to let the assignment work
T?
T
I don’t see it as inconsistent, it’s just the type inference working as it should. Without it, the if statements would be written as follows look like follows:
if (val age: Int = person.age) { ... }
Where it works just like smart casting or the elvis operator, extra context is provided to ensure that age will not be null and type inference can eleminate the need to explicitly specify the non null type.
I’d even argue that this syntax is even easier to understand and more consistent than smart casts.
Its easier to understand because it forces the assignment to be next to the control structure making it easy to see why the assignment is possible. With smart casts there could be 100’s if lines of code between the null check that makes it possible and the actual cast. There’s also two cases where smart casts can occur:
if (foo != null) { foo.bar() }
if (foo == null) return; foo.bar()
At least with the first kind the curly brackets can sometimes provide a scope around where the check might have been performed, but with second kind it could be anywhere between the smart cast and the var’s declaration.
It’s more consistent because to works regardless of where it is. One of the problems showed in the example by the OP was a smart cast not working despite performing a null check. Take for example:
if (foo?.bar != null) { foo.bar.baz() }
The smart cast works inconsistently depending on whether foo is local, a parameter, a var property, a val property, a val property with an custom getter, a val property with an open getter, a package level var, or a package level val. Then multiple this by all the different cases that apply to bar as well.
foo
var
val
On the other hand the following always works, regardless of where it is placed:
if (val bar = foo?.bar) { bar.baz() }
Maybe the inconsistency could cause a problem for a small fraction of people for a moment before they learn why.
But the multiple null check issue will effect every single Kotlin user. So even if the inconsistency is an issue (which I don’t think it is, especially if you create a new keyword that it is only valid inside of) this is still a much greater good, | https://discuss.kotlinlang.org/t/kotlin-null-check-for-multiple-nullable-vars/1946?page=2 | CC-MAIN-2017-39 | refinedweb | 1,668 | 60.45 |
getrlimit(), getrlimit64()
Get the limit on a system resource
Synopsis:
#include <sys/resource.h> int getrlimit( int resource, struct rlimit * rlp ); int getrlimit64( int resource, struct rlimit64 * rlp );
Since:
BlackBerry 10.0.0
Arguments:
- resource
- The resource whose limit you want to get; one of the following:
- RLIMIT_AS
- RLIMIT_CORE
- RLIMIT_CPU
- RLIMIT_DATA
- RLIMIT_FREEMEM
-() and getrlimit64() functions get
- an effective user ID of root can raise a hard limit. Both hard and soft limits can be changed in a single call to setrlimit() subject to the constraints described above. Limits may have an "infinite" value of RLIM_INFINITY. using getrlimit(),.
A limit whose value is greater than RLIM_INFINITY is permitted.
The exec* family of functions also causes resource limits to be saved.:
#include <stdio.h> #include <stdlib.h> #include <sys/resource.h> int main( void ) { struct rlimit curr_limits; if (getrlimit (RLIMIT_NPROC, &curr_limits) == -1) { perror ("The call to getrlimit() failed."); return EXIT_FAILURE; } else { printf ("The current maximum number of processes is %d.\n", (int) curr_limits.rlim_cur); printf ("The hard limit on the number of processes is %d.\n", (int) curr_limits.rlim_max); } return EXIT_SUCCESS; }
Classification:
getrlimit() is POSIX 1003.1 XSI; getrlimit64() is Large-file support
Last modified: 2014-11-17
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | https://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/g/getrlimit.html | CC-MAIN-2019-35 | refinedweb | 212 | 52.05 |
Question
The data on the next page represent the number of pods on a sample of soybean plants for two different plot types. Which plot type do you think is superior? Why?
.png)
Answer to relevant QuestionsThe following data represent the weights (in grams) of a random sample of 50 M&M plain candies. (a) Determine the sample standard deviation weight. Express your answer rounded to three decimal places. (b) On the basis of the ...It is well known that San Diego has milder weather than Chicago, but which city has more deviation from normal temperatures over the course of a month? Use the following data, which represent the deviation from normal high ...In December 2010, the average price of regular unleaded gasoline excluding taxes in the United States was $3.06 per gallon, according to the Energy Information Administration. Assume that the standard deviation price per ...The data set on the left represents the annual rate of return (in percent) of eight randomly sampled bond mutual funds, and the data set on the right represents the annual rate of return (in percent) of eight randomly ...The frequency distribution on the following page represents the age of people living in poverty in 2009 (in thousands). In this frequency distribution, the class widths are not the same for each class. Approximate the mean ...
Post your question | http://www.solutioninn.com/the-data-on-the-next-page-represent-the-number-of | CC-MAIN-2017-13 | refinedweb | 225 | 57.37 |
In this tutorial, I’ll show you how to code a mega menu and add it to your theme.
To follow along with this tutorial, you’ll need the following:
I’m using a third-party theme (ColorMag), so I’m going to create a child theme of it and add my styling there.
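If you haven’t built a child theme before, it boils down to a new theme folder whose style.css header names ColorMag as the parent (Template: colormag) and a functions.php that loads the parent stylesheet before the child one. The sketch below shows that functions.php; it also registers a separate "Mega Menu" location that the rest of the tutorial can target. The handle names, text domain, and mega-menu location are my own illustrative choices, not anything ColorMag itself defines.

```php
<?php
// functions.php for the child theme, e.g. wp-content/themes/colormag-child/
// Assumes the parent theme lives in wp-content/themes/colormag/

// Load the parent stylesheet first, then the child stylesheet on top of it.
function colormag_child_enqueue_styles() {
	wp_enqueue_style(
		'colormag-parent-style',
		get_template_directory_uri() . '/style.css'
	);
	wp_enqueue_style(
		'colormag-child-style',
		get_stylesheet_directory_uri() . '/style.css',
		array( 'colormag-parent-style' )
	);
}
add_action( 'wp_enqueue_scripts', 'colormag_child_enqueue_styles' );

// Register an extra menu location for the mega menu (name is illustrative).
function colormag_child_register_menus() {
	register_nav_menus(
		array(
			'mega-menu' => __( 'Mega Menu', 'colormag-child' ),
		)
	);
}
add_action( 'after_setup_theme', 'colormag_child_register_menus' );
```

With that in place, a template in the child theme can print the menu with wp_nav_menu( array( 'theme_location' => 'mega-menu' ) ), and all of the mega menu styling goes into the child theme's style.css, leaving the parent theme untouched.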
Road signs let us know where we are and point us to where we want to go. Similarly, website menus help users navigate your website. Menus tell users their current location on the website and point them to the information they’re looking for.
A good menu organizes, arranges, and displays content to users in a way that is easy to follow. Menus can be simple or complex depending on the purpose of your website. A well thought-out menu significantly enhances the experience of visitors to your website.
There are many different kinds of menu plugins available for your WordPress website. What type of menu plugin you choose depends on what purpose your website will serve. In this post, I'll show you some of the best WordPress menu plugins for 2019.
A website menu is made up of a collection of links. These links are road signs that make it possible to navigate and interact with a website. They guide you from one page to another and lead you to information contained within the website. They also help visitors buy the products you’re selling or sign up for the services you provide.
A menu is divided into navigation bars. Each bar represents a particular category.
How you name these categories is very important. Navigation bars with concise, clear, descriptive categories allow users to quickly and easily get information about your business, services, products, etc.
Menu placement on your website should be consistent—in the same position on all pages. This makes it easy for users to pay attention and visually process the information you provide.
Finally, a menu should be uncluttered...
You don’t have to build a website menu from scratch! CodeCanyon has many WordPress menu plugins to help you out.
Forget pre-determined layouts and build the mobile menu you want.
Customizing your mobile menu is faster and easier than ever, thanks to TapTap!
TapTap is a versatile, easy-to-customize, mobile-first menu plugin for WordPress that you can use on literally any WordPress site. It blends seamlessly into any WordPress website, yet also allows you to quickly create menus that are uniquely yours. You can preview any changes you make in real time.
TapTap uses the built-in WordPress customization tools and menu builder. This means the plugin is lightweight and allows you to use tools you’re already familiar with. No need to learn a new interface!
Also, TapTap is WordPress Multisite compatible.
Flawless navigation is an absolute necessity, and TapTap is the best tool for the job.
Customer breturick says this about TapTap:
An absolutely awesome, customizable menu plugin that allows you to design beyond the typical nav system standard on most WP themes. It really adds a nice design pop to any website...
Find out if this menu plugin is right for you by viewing the live preview here.
If you want to offer your mobile website visitors a flawless experience that’s tailored specifically for the small screen then Touchy WordPress Mobile Menu is your plugin of choice. Everything in Touchy is designed and built with smartphone usability in mind.
Touchy is tremendously customizable. With just a few clicks you can change the color of any element, alter positioning options, hide any of the menu bar buttons, override button functions, change transparencies, and more, all through the ridiculously easy-to-use real-time Customizer integration.
Here are additional features that make Touchy serve as a complete mobile navigation and header solution on any WordPress theme:
Finally, Touchy also works great on desktop browsers, so if you wish, you can even use it on a full-blown desktop site.
A very satisfied customer, AlexT-WebDesign, says:
Super amazing plugin, very flexible. A good move in modern UI design. Super fast great support. Will be looking at more of your plugins now. Thanks again for awesome plugin.
Check out this live preview to see what this WordPress mobile menu plugin is capable of.
The best plugins enhance the usability of your WordPress website at the page level. WP Floating Menu Pro does just that. It is a two-in-one menu plugin for WordPress, comprising a one-page navigator and a sticky navigation menu.
With this plugin you can add a smart looking page scrolling navigation bar to any WordPress theme or website in just minutes. All you need to do is define the sections on your website and create the one page navigation menu.
Even more interesting, you can configure the navigation menu on the page level.
In fact, you can even use it for your main menu.
Superfly makes navigation much easier and more user-friendly on both desktops and mobiles.
Customer upperlabel says:
If you're looking for a cool menu, this plugin is perfect and best of all very responsive.
Find out if this WordPress menu plugin is right for you by checking out the live preview.
Mega Main Menu is loaded with tons of features to help you build mega menus. For example:
Find out if this WordPress menu plugin is right for you with the live demo.
Whether you are a long-time WordPress guru or a complete newbie, Hero Menu allows you to easily and intuitively create a slick and professional WordPress menu.
It has great UX and UI. It’s easy to use. In a few easy steps, you will have any desired menu up and running within minutes—whether it's a complex Mega Menu rich with features, or a simple drop-down menu.
What's even more interesting, the Mega Menu builder gives you full control over your menu's layout and content.
Forms are needed for almost every website. Therefore, it makes sense that a lot of free and paid services, plugins, and libraries are available to make it easy to create forms. In this post, we will list some tools, plugins and libraries which can be used to create free forms online without any knowledge of JavaScript. Some of these tools are completely free to use while others are free up to a certain usage limit.
We will also list some useful libraries that can facilitate creation of JavaScript based forms if you have basic knowledge of the language. The libraries are free to use so you can use them to create as many forms as you like.
Let's get started with the tools available online to create your own forms.
Bootsnip is a tool that is completely free to use for everyone. You can use it to create as many forms as you want. It generates HTML that you can copy and paste to your own webpage.
The drag-and-drop form builder lets you drag the form components from the left side and drop them on the right side to create the form. Once you have dropped an input component, you can click on it to see a small form and fill out the id, label text, placeholder text and other element-specific information based on the component.
Once you have created your form, it is easy to get the generated HTML by clicking on the "View HTML" button. Give this tool a try if you want to create Bootstrap-based forms.
The Bootsnip website actually also offers a few other tools that you might find useful like a button builder and a page builder.
This jQuery-based form builder is also completely free and offers a very user friendly drag-and-drop method for building forms. You just have to drag different form components from the list on the right and place them on the left side. Once you have done that, you will be able to click on the edit button in the top right corner of the form component to edit all its attributes like placeholder text, whether it's required, and much more.
Another great thing about this tool is that you can create your forms in any of 27 different languages.
Any forms or form templates that you create with this tool can be rendered easily using the formRender plugin. You can find more information about the form creation and render process on the official website.
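If you want to see how the builder and renderer fit together in code, here's a minimal sketch. It assumes the jQuery formBuilder and formRender scripts are loaded, and the #build-area, #render-area, and #save-form element IDs are assumptions for the example, not part of the library.

$(function () {
  // Attach the drag-and-drop builder UI to an empty container.
  var builder = $('#build-area').formBuilder();

  // Later (for example, on a "Save" click), grab the form definition as JSON...
  $('#save-form').on('click', function () {
    var formData = builder.actions.getData('json');

    // ...and render a working form from that definition in another container.
    $('#render-area').formRender({ formData: formData });
  });
});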
pForm is yet another free HTML-based form builder which allows you to create your own forms by following three simple steps.
You begin by choosing a color scheme for your form and then add some input fields. Once you add the input elements to your form, you will be able to set the value of various attributes like the label text, field type and other attributes.
After adding all the desired fields to your form, simply click the Save Form button to download the generated HTML. At this point, you'll also have the option to preview the form before either downloading or going back to edit the form input fields again.
Go ahead and create a form with pForm, it is incredibly easy and you also get to choose your own color scheme for the form.
The next two services in the list offer an easy way to create complex forms, but their free plans offer only very limited features. If you want to create complicated forms, these tools are one way to do it. However, if you have a basic knowledge of HTML and JavaScript, it might be cheaper and better for you to use one of the form plugins from CodeCanyon to create your forms.
JotForm has a free starter plan which allows you to create up to 5 forms and receive 100 monthly submissions. You also get about 1,000 monthly form views. The forms you generate will contain JotForm branding if you use the free starter pack.
If you don't mind the JotForm branding and your usage stays within the limits, this tool is actually very easy to use. Once you have signed up, you can start using the templates to create your own forms. The templates address all kinds of needs from payment and hotel booking forms to course and event registration.
Cognito Forms also offers a form builder with limited features in its free plan. You will get Cognito Forms branding in the forms you create, but you can create as many forms as you like. Form submissions are, however, limited to 500 per month in the free plan. This is still five times the cap put in place by JotForm.
The form builder tool allows you to either create a form from scratch or use built-in templates. In either case, you will probably find it to be the most feature-rich and user-friendly of all the form builder tools listed so far.
Try creating a form using one of the Cognito form templates and see if it addresses all your needs.
If you don't mind writing a little code yourself to create your forms, you can take advantage of all the free JavaScript or jQuery plugins and libraries to create your own forms with the exact feature set you need for a particular project.
Here are some libraries to help you get started.
You can use the jQuery Validation Plugin to write your own custom validation rules and error messages, which can be easily integrated within existing forms. In fact, we also have a tutorial for jQuery based form validation that uses this particular plugin. This should help you jump start the form creation process.
Even if you have already created a basic HTML5 form, you can still use the plugin to add validation with a single line of code. It will greatly improve the user experience for people visiting your website. You have full control over the placement and styling of error messages and the library makes sure that they are displayed in a non-intrusive manner.
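Here's a minimal sketch of what that looks like in practice, assuming jQuery and the validation plugin are loaded and your form has the id signup-form with fields named email and password (the id and field names are assumptions for the example):

$(function () {
  // Attach validation rules and custom error messages to an existing form.
  $('#signup-form').validate({
    rules: {
      email: { required: true, email: true },
      password: { required: true, minlength: 8 }
    },
    messages: {
      email: 'Please enter a valid email address.',
      password: 'Your password must be at least 8 characters long.'
    },
    // Only runs once every rule passes.
    submitHandler: function (form) {
      form.submit();
    }
  });
});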
This library also allows you to integrate input validation in all your forms, but unlike the jQuery Validation Plugin it relies on HTML data attributes instead of JavaScript for validation. Adding validation to a form is as simple as including the data-parsley-validate attribute in the <form> tag.
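If you'd rather bind the library from JavaScript than rely on the data-parsley-validate attribute, a minimal sketch looks like this; the #contact-form id is an assumption, and the individual fields are expected to carry data-parsley-* attributes such as data-parsley-required:

$(function () {
  // Bind Parsley to the form manually; validation then runs on submit
  // using the data-parsley-* attributes declared on each field.
  $('#contact-form').parsley();
});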
You should take a look at some of the examples mentioned on the official website to get a good understanding of the working and features of the library.
Anyone who wants to add an easy-to-use, mobile-friendly, responsive, and lightweight jQuery-based date and time picker will find Pickadate very useful.
It is both touch- and keyboard-friendly. The picker supports over 40 different languages so you will probably be able to let people choose the dates in their native language.
The plugin offers a lot of customization options for both the date picker and the time picker.
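Here's a minimal sketch of how the pickers are typically initialized, assuming the pickadate.js date and time modules are loaded and your inputs use the .datepicker and .timepicker classes (the class names are assumptions):

$(function () {
  // Date picker with month/year dropdowns and a custom display format.
  $('.datepicker').pickadate({
    format: 'dd mmm yyyy',
    selectMonths: true,
    selectYears: true
  });

  // Time picker offering a choice every 30 minutes.
  $('.timepicker').pickatime({
    interval: 30
  });
});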
This weirdly named password strength estimator is actually a great tool to let users know the relative strength of passwords that they want to use when signing up using a form that you created.
It relies on pattern matching against around 30k common passwords, common names and surnames, and patterns like dates or keyboard sequences such as qwertyuiop to determine whether a password is strong enough. It is definitely worth checking out if you want to give your users a friendly way to gauge their password strength.
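This description matches the zxcvbn library; assuming that's the tool in question, a minimal sketch of giving users live feedback might look like the following (the #password and #strength element IDs are assumptions):

var input = document.querySelector('#password');
var meter = document.querySelector('#strength');

input.addEventListener('input', function () {
  // zxcvbn returns a score from 0 (very weak) to 4 (very strong),
  // plus human-readable feedback explaining why a password is weak.
  var result = zxcvbn(input.value);
  meter.textContent = 'Strength: ' + result.score + '/4';
  if (result.feedback.warning) {
    meter.textContent += ' (' + result.feedback.warning + ')';
  }
});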
FilePond is a JavaScript library that allows you to implement file upload functionality in your forms with an impressive set of features. One of the things that you will instantly like about this plugin is its user-friendliness.
It will accept directories, files, blobs, and much more as input. Any uploaded images are optimized, resized, and cropped on the client side to dramatically improve upload speed and reduce server load at the same time.
The list of features does not end here! I strongly suggest that you check this library out if you are looking for a JavaScript library to facilitate user-friendly file upload in your forms.
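Getting a basic FilePond instance running takes only a couple of calls. Here's a minimal sketch, assuming the FilePond script is loaded, the page contains a plain file input, and /upload is a hypothetical endpoint on your own server:

// Turn a plain file input into a FilePond drop area.
const pond = FilePond.create(document.querySelector('input[type="file"]'), {
  allowMultiple: true
});

// Point uploads at your own back end (hypothetical endpoint).
FilePond.setOptions({
  server: '/upload'
});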
In this post, we took a look at some of the popular form building tools, plugins and libraries that can be used to create feature-rich HTML forms in no time.
If you don't want to write any code, the free form builder tools mentioned in the first half of the article will generate the HTML for you. All you have to do is just drag and drop the components that you want to include in your forms. Some dedicated form building services will also give you a lot of other features but they are generally paid and the free version will give you access to limited functionality.
The plugins and libraries mentioned later in the article will be helpful for those people who want to create their own forms with the exact functionality they need. These libraries will just make it easier for them to add the required functionality while giving them a lot of control over the way the forms look and behave.
You can also take a look at different plugins and scripts available on CodeCanyon to help you create all kinds of forms with ease. They are a good option for people who don't want to code everything from scratch but still want control over the forms compared to third party services.
Today, we'll look at how to create quizzes using the WP Quiz plugin for WordPress.
Once you install the WP Quiz plugin, it will add a number of admin links to the left sidebar. In this section, we'll go through each of them briefly.
This takes you to a quiz listing page with all the quizzes created so far.
In this section, you can configure settings that are globally applicable. You can configure different aspects of this plugin, like subscription settings, quiz types, and Google Analytics settings.
Today, we discussed how to create quizzes by using the WP Quiz plugin for WordPress. We went through an in-depth overview of this plugin and implemented a real-world example for demonstration purposes.
From the moment a user lands on your website until they either leave or convert into a customer, a series of steps lead them from one point to another. Buyer personas represent your typical customer, help you address the pain points your customers have, and help you predict the actions specific audiences might take. About 63 percent of marketers use buyer personas when creating content.
From my experience, here are some steps to help improve your user’s journey once you develop your target audience’s unique buyer persona.
The key to any good trip is knowing where you’re starting and where you’ll end up. To write an accurate user journey map, think through the path you want your users to take from the minute they land on your page through conversion and even after for follow-up. Write out each step of the journey so that you don’t miss a single point. The final destination should align with your goal for visitors who land on your page, such as signing up for a newsletter or making a purchase.
The buyer’s journey starts with the awareness stage where the prospect has a problem or sees an opportunity. It then moves to the consideration stage, where the person better understands some possible solutions to the problem. The final step is the decision stage where the consumer decides what their solution is.
The journey may be a bit different for different types of websites. For example, on an e-commerce site, the journey might start with an informational video, move to an FAQ about the product, go to a call to action (CTA), and end with the person buying the product.
One of my clients found customers had some very specific questions when they landed on the homepage of the site. Answering these questions quickly and above the fold allowed me to design a journey map where the client moved quickly from the first stage to the consideration stage. Conversions increased, and the website’s sticky factor improved.
However, an advocacy site may use different steps to qualify volunteers and make sure they align with the organization’s values rather than sell a product. Another thing to consider is when part of the journey occurs non-digitally, such as an initial connection made at a trade show and final sale through the website. The steps will vary depending on the first contact with users and how much information must be provided through your landing page.
The first step in the user journey is your landing page. You may have more than a single landing page on your site, too.
For example, if your analytics show that most of your search engine traffic comes because of a specific piece of content, use the content as part of your landing page and include a CTA at the end of the article.
Another example: If your analytics show that 90 percent of your traffic arrives on a specific page, treat that page like any other landing page.
Figure out the audience members and create a buyer persona for that specific group of site visitors. You must also decide where these visitors are in the journey—if they are already in the consideration stage, your content will present the solution to the problem rather than just defining the problem.
Only 39 percent of marketers create a custom landing page for a specific marketing offer. However, unique pages for each of your offers allow for specific content that drives the buyer through the journey and leaves out content not related to solving the user's problem.
One of the best ways to make your landing page highly attractive from the minute users land on your site is by creating a singular focus that drives the buyer down the path you want them to take. Allow a strong balance of positive and negative space so that attention is drawn where you want it to go. Show users next steps with arrows and clear calls to action. No matter what type of website you manage, you must conduct A/B testing to figure out what changes garner the highest conversion rates from your specific users.
Before you add any content, you need to know the final purpose of the buyer’s journey. By the time your users get to the final leg of the journey, you’ve narrowed down their buying experience and know exactly what they’re looking for, but what is your goal for their actions? The micro-interactions on your site might only have a single purpose, but they should all point toward the final goal or larger purpose of your website, even from the first moment the user finds you.
Place your focus on the final call to action and perfect it as much as possible. What’s the one concern you haven’t yet answered through the micro-interactions that might make the user reluctant to take that final step? How can you effectively answer any concerns the buyer might have? Put yourself in the buyer’s shoes and think about what might keep you from buying or signing up. Then, offer a guarantee or additional information that puts the user at ease.
If your site visitor makes it midway through the journey, they’re definitely interested in what you have to offer. You must now convince them that your product, service, organization, or information is exactly what they’re seeking. Clearly answer any questions the user has.
You can figure out typical questions based on what others have asked via telephone, email, or in forums such as Yelp, the Google Play Store, or the Apple App Store. Study Google’s “People also ask” questions for more ideas. Answer the most popular questions upfront and provide additional options if the user has additional questions. Add live chat for any out-of-the-ordinary questions that need an immediate answer. There are many effective ways of educating site visitors, including video, articles, and guides.
List the places where customers engage on your website and match them to where the buyer is in their journey. Someone who connected with you in a local store might arrive right on the shopping page on your site. If you’ve already vetted a volunteer for your organization, they may just need to go directly to the signup form.
This image from optinmonster.com shows potential results for A/B/n testing, illustrating how it’s possible to see what site contents perform best.
A behavior flow report allows you to address the needs of different types of users and where they came from. Build a map based on the information users expect to see and the typical behavior of each user, but also be aware that 49 percent of users expect instant access to information at all times. As more people buy smartphones and get online with them, people grow more accustomed to searching for an answer and getting immediate results. Devices such as Alexa and Google Home also make it easier than ever to pull up information with little effort. Make the details easy to locate within your navigation and add voice search functions to your site.
Each touchpoint on the journey should have a goal. For example, the goal for the landing page may simply be to get the user to read the information and click on a video link. The goal for the video might be to direct them to a signup form. The signup form has a call to action to fill in and submit the form. Knowing the purpose of each touchpoint allows you to create a singular focus and drive conversions. Remember non-digital experiences as well. If a user sees an advertisement at a bus stop at a time when they need a solution to a problem, do you help them solve that problem as soon as they navigate to your website?
One of my clients placed advertising in local theaters but wasn’t getting the hoped-for results. Simply by adding a specific website address that took moviegoers to a page geared specifically toward that buyer persona, the client saw an increase in site visitors. She also had the advantage of knowing exactly how many moviegoers saw the ad and visited her site because the landing page was specific to that campaign.
A design that’s user-centered drives the buyer smoothly from Point A to Point B to Point C. Test each step along the way and make any tweaks needed to gain the highest conversion rate possible. You may need to change the color or wording of a CTA button, remove or add information, or complete any number of other tasks.
Split testing allows you to make sure each and every point of contact is the best it can be. Track your conversion rates and test frequently to ensure ongoing success.
You might spend countless hours mapping the buyer's journey and planning out each point along the way. However, users don't always enter your website where you think they should, nor do they follow a straight path in their journey. Allow some room for users who bounce around in a different pattern than you expected. With a little attention to the journey, even when buyers take alternate routes, you'll get a better return on your investment than you ever expected.
In this tutorial I’m going to show you how to make a WordPress shopping cart, using the BigCommerce WordPress plugin. We’ll go over the following things (in both video and written format, choose whichever you prefer!):
Let's begin by answering this question: why use BigCommerce with WordPress?
BigCommerce offers a "frontend" just like any other eCommerce service, but thanks to their headless approach you can also choose whichever CMS you'd like to plug your BigCommerce store into. And that gives you the WordPress shopping cart option.
The coupling of BigCommerce and WordPress is made much easier thanks to the plugin, BigCommerce For WordPress, developed by the BigCommerce team. You can download the plugin from wordpress.org and upload it to your WordPress installation, or access and install it from within the plugin page in WP Admin.
Once installed and activated you’ll need to either connect your site to an existing BigCommerce account, or a new one.
For the sake of demonstration, let’s set up a new account. Click Create New Account and the store will guide you through the whole process.
To begin with you'll be asked to enter some standard details: your name, the store's name, location, that sort of thing. Once done, the store will be created behind the scenes and you'll be emailed when it's done.
You’re then given the choice to automatically add new products, when they’re added to your BigCommerce account, to your channel. We’ll select Yes.
At this point you’ll be asked to create a navigation menu for the store. You can give it a name, then select (or not) any of the options you see in the screenshot below:
That’s all there is to setting up a navigation menu. If you visit Appearance > Menus in your WordPress Admin you’ll find your new menu setup and waiting for you, complete with its name and the various endpoints/menu items you chose to include.
You then need to make sure your navigation menu is associated with a location in your WordPress theme. This can be done under the Manage Locations tab and the options available to you will depend entirely on your WordPress theme.
With that done, check your frontend and you’ll see the menu in the desired location, all the required store pages, and some demo products to show you how it will work.
Let’s now add some products of our own.
Having set up your BigCommerce account via the WordPress backend you’ll have received a confirmation email, plus some login details. After you confirm your email address you’ll be asked to set a password, which you can then use to login to the BigCommerce platform.
Once in, you can add products, edit them, delete them, and apply any number of customizations you might want. If you navigate to Products in the sidebar, you’ll see all the demo products you’ll have seen appear in your WordPress website.
This is a great opportunity to play around with what’s been installed, to learn how products are managed. Click on one and you’ll see all its basic information, its description, categories, and so on. Make any change you want, then click Save. The change is automatically reflected on your WordPress site; to check, go back to your WordPress Admin, then BigCommerce > Products. You’ll see that the plugin is constantly syncing and keeping the product catalogue up to date (hit Sync Products if you want to trigger the synchronization process):
Find your product on the frontend, refresh (if the page was already open for some reason) and you’ll see the changes automatically reflected.
Adding products is as simple as you might imagine: add them to the BigCommerce account and they’ll automatically be synced with your WordPress site, making them available for purchase by your customers. Watch the video to see adding products in action.
This tutorial has shown you how easy it is to set up a BigCommerce store coupled with a WordPress website.

Building a browser game from scratch takes a mix of skills, including graphics, sound, game logic, and variants of common web languages.
That’s where HTML5 and JavaScript game templates come in.
Building a game is difficult. With several distinct skill sets nestled into a complex package, it can be difficult to just jump into a project from scratch.
For those new to game programming, using a template can help to fill in those gaps in skills, such as User Interface or graphic design, giving insight into the workings behind a completed game. This is a great learning experience, and helps to get your first couple of projects done without becoming too overwhelming.
Experienced game programmers can find plenty of use from these game templates too. They can help to build the skeleton for a larger game, or act as the base for a new project, cutting down on creating repetitive code for each new game you make.
If you’re looking for more advanced game engines or systems, check out our favorite JavaScript Game Engines for your next project.
For now, let’s take a look at the top 20 of these templates available on CodeCanyon.
Canvas Puzzle was the first HTML5 game on Envato Market, so this seems like a good place to start.
The concept is simple, but can easily be built on to create a fully fledged game.
Compatible with all modern browsers, this HTML5 game template has an Internet Explorer 9 or less fallback while also working great on the most current iPads.
Users can use any image—just drag and drop with Firefox and Chrome—and the game generates any number of different pieces.
Canvas Puzzle is lightweight, simple, clean, and ready for you to customize to your liking.
Will your boy make it past the zombies in Boy vs Zombies?
Give it a try or make it your own version.
Features include:
Boy vs Zombies is ready for you to reskin and modify—without any coding knowledge.
Just like many of the other HTML5 and JavaScript game templates, Indiara and the Skull Gold works on all platforms.
Are you ready to go on an adventure collecting ancient artifacts in caves full of traps?
This fantastic HTML game template includes:
Indiara and the Skull Gold is fully touch and mouse supported, includes social media share buttons, and can be visually customized by swapping out the image files or completely modified with Construct 2.
The Slot Machine: The Fruits is optimized for both mobile and desktop and includes high-quality images that support up to 1500x640 resolution.
This colorful game was developed with:
The Slot Machine The Fruits can even be installed using the CTL Arcade WordPress plugin.
Based loosely on the classic arcade game, this HTML5 game template is ready to be innovated and turned into a brand new game.
Built on HTML5, JavaScript, and CreateJS, this title includes full source code and is ready to be customized.
This combination makes it great for learning all of the pieces that go into a game, for building your own games from, or as a headstart for an app project.
PSD and Adobe Illustrator files are also available separately for easier customization.
Enjoy the magic of HTML5 game templates with The Sorcerer.
This game build was inspired by Zuma gameplay and includes three different progressive levels.
It has been developed with:
The 960x540 resolution scales to fit and can be used with the CTL Arcade WordPress plugin.
Customize The Sorcerer and start making some puzzle magic!
'Tis the season for some fun!
The Game Christmas Furious HTML5 game template brings touch and mouse supported fun to all platforms.
Includes:
Balloons invade the North Pole—can Santa catch all the gifts and avoid the balloons? The Game Christmas Furious HTML5 game template is ready to play or transform into your own creation.
The HTML5 3D BlackJack has:
But the best part is the hi-res 3D graphic style.
This game has been developed with HTML5, JavaScript and CreateJS, and is both ready to play and very customizable.
You can even ante up the HTML5 3D BlackJack with the CTL Arcade WordPress plugin.
Game FlapCat Steampunk is based on similar blockbuster games with its simple design and playability.
Enjoy this touch and mouse compatible game in full 1280x720 resolution.
This cool cat includes:
Game FlapCat Steampunk has a great art style and is ready to play or customize.
Like the HTML5 3D BlackJack, the 3D Roulette HTML5 casino game has a great 3D hi-res look.
It has also been developed using:
Easily modify 3D Roulette by downloading and editing the Photoshop and Illustrator files. It can also be installed directly into WordPress using CTL Arcade WordPress plugin.
Bubble Shooter reminds me a lot of Snood and other games like it.
This simple and addictive game can be installed as-is or modified to your liking.
This HTML5 game template comes in 870x1504 resolution, is fully responsive, and can be easily modified using the Photoshop and Illustrator files.
It has been developed with HTML5, JavaScript and CreateJS, and is both ready to play and can be installed directly into WordPress using the CTL Arcade WordPress plugin.
Woblox is probably the most addictive game in this roundup of HTML5 game templates.
Add your own logo and this puzzler is ready to go.
This game will play on all platforms, but it feels best using a touch screen as you will intuitively slide the blocks into place to set the green block free.
Game includes:
Easily customizable and equally addictive, Woblox is an excellent HTML5 game template.
treze-Edges sounds easy—until you try it!
Simply touch the screen to create edges, but don't let the ball escape.
Features include:
Enjoy treze-Edges or make it your own—what twist will you add?
Don't Crash.
That's it.
It sounds easy, but this mesmerizing HTML5 game template will keep you guessing.
Features include:
And never forget—Don't Crash!
The Arrows 2D Platform Action Engine is ready and waiting for you to create something amazing.
This includes:
The Arrows 2D Platform Action Engine is good enough to stand on its own, but its true heart and soul is in offering you the tools to build your own game.
Become the next fruit ninja with the highly customizable Katana Fruits template.
This HTML5 game template was developed with the following code chops:
You can edit the look and feel with the included Photoshop and Illustrator files and install it directly into WordPress using the CTL Arcade WordPress plugin.
Ready, set, run!
This fast-running panda needs to be guided over obstacles to collect prizes and finish each of the over 20 levels.
This template includes the Construct 2 files and features:
Show some pixel love with Panda Love.
Want to add some jumpshots to your upcoming project? Ultimate Swish gives you the framework to do just that.
Ultimate Swish provides a short, addicting game cycle that could be expanded into its own full game, or customized to fit within your current project.
The game is based on HTML5, JavaScript, and CreateJS, with all source code included, so it's easy to see how it works and start customizing in no time!
If you have a need for speed, Formula Racing is what you want.
Fully responsive and ready for any screen size, this game has been built with Construct 2.
Features include:
Formula Racing is fully customizable and ready for you to race away with something new.
Balloon Fight is a charming and addicting retro style game built with Construct 2.
Pop the balloons of the other cats—and you win!
Features include:
Balloon Fight is easy to control, but difficult to master.
Some of these templates are just that—templates—while others border on complete game concepts or flexible game engines. Whether you're finding something for your website or learning how to flesh out your own idea, you can clearly see how diverse HTML5 game templates can be.
You might also find an Envato Tuts+ code tutorial on game design or game mechanics helpful, or even an HTML5 tutorial to get you started on coding your own game.
Have you used one of these templates to build your own game? Let us know how it went, and drop us a line in the comments below!

Allowing people to book online will encourage more bookings, give your customers more control, and free up your time.
In this tutorial, I'll show you how to use a plugin to add bookings to your website.
I'm going to imagine my site is for a real estate agent, and add agents and appointment types accordingly. But the steps I follow should apply with some tweaks to whatever kind of business you're running.
I'll be using the Bookly Pro plugin for this tutorial, which is one of the premium booking plugins on CodeCanyon. I've chosen this plugin because it has a wide range of features, and even some add-ons should you need more. It's also compatible with Gutenberg, so it will let you use blocks to add your booking form to your site.
There is a free version of this plugin available in the WordPress plugin directory with fewer features; I've chosen to use the premium version so I can add unlimited staff members and Google Calendar integration.
The first step is to install and activate the plugin. When you download the plugin and unpack the zip file, you'll find two more zip files inside it. The one you need to upload to the plugins screen in your admin is called bookly-addon-pro.zip.
Installing the plugin will install two plugins to your site: the core (free) Bookly plugin and the add-on plugin with the premium features.
The first step is to configure settings for the plugin.
Go to Bookly > Settings to access the settings screen.
Work through the other settings screens, editing these as necessary:
Once you've configured the plugin settings, it's time to start adding staff.
The next step is to add staff members.
You can access this either by clicking Bookly in your admin menu and clicking the Add Staff Members button, or by going to Bookly > Staff Members in the admin menu.
This takes you to the Staff Members screen. Add a new staff member by clicking the icon at the top right of the (currently empty) list of staff members. If you want, you can add categories for staff members, which you can later use to make certain appointment types available for certain staff categories.
Once you've added a staff member, you can add a photo, contact information and Google calendar sync (which we'll come to shortly). Make sure to add an email address at the very least so the staff member is notified when an appointment's made for them.
You can also list the number of hours the individual has available for appointments each day, if this is different from their working hours.
Once you've added your staff member, scroll down and click Save. Each staff member also has a tab for the services they provide, their schedule and their days off. Configure these so you know appointments won't be booked for staff members at the wrong times.
Once you've added all your staff members to the system, it's time to add services. Services allow customers to choose the appointment type they need.
Here you provide more information about the service, set its price and define how long it takes.
There are three ways to define which services are provided by which staff:
Which of these works for you will depend on the nature of your business. In my real estate agency, valuations are provided by Valuers and viewings are provided by Viewings Agents, so I'll use the categories method when setting up my services.
One of my favorite features of Bookly is the way you can edit almost every aspect of the booking form, and preview it before you publish it on your site. To do this, go to Bookly > Appearance.
Here you can edit the fields in the various sections of the form and you can also change the color. I'm going to amend the color and make a few tweaks to the text.
To amend the text, click on the underlined text and type what you want to replace it with. Then click the tick to save it.
Once you've made all the changes you want to, click the Save button at the bottom of the form to save your changes.
The Bookly Pro plugin is a complex plugin with lots of additional settings I haven't needed to use for my site. Here's an overview of some that you might need to use:
There are also a number of screens for managing your bookings which we'll explore shortly.
Once you're happy with the look of your form, it's time to publish it to your site. To do this, you create a page then add a Bookly block to it.
Create your page for bookings, then add a new block and select Bookly when choosing the block type. Choose the Bookly - Booking Form block.
In the block settings, you can configure default values for category, service and staff member, and specify which fields you want to include. Configure this how you want it then click Publish to publish your page.
Now you can view your page to see your booking form in place:
Customers can now use your form to make appointments with your team members.
Once you've got your booking form set up on your site, it's important to be able to manage appointments.
Bookly gives you a number of screens to do this:
The way you use these screens will depend on the way you manage your business. You can also see the messages the system has sent; these will also be sent by email as long as you add staff email addresses.
And of course with Google calendar integration, my staff members can view their appointments in their calendar alongside other events.
However you manage your appointments, the Bookly Pro plugin has a range of features that you can use to create an effective booking system.
Try installing this plugin on your site and save yourself time making appointments over the phone or by email!
So, along with your website you need to create a community of customers and engage with them. This is the best strategy to grow your business. Online communities have created and continue to create enormous value for businesses.
In this post, I'll show you some of the best WordPress community plugins available on CodeCanyon. But before we do, let’s talk a bit about online communities.
An online community is
a network of people who communicate with one another and with an organization through interactive tools such as e-mail, discussion boards and chat systems. — Business Dictionary
Online communities come in different forms:
Communities are hosted on a variety of platforms. Each community serves a specific purpose. Before you decide what platform you will use to start and host your community, here are some things you should consider:
Because they are integral to the success of your business, here’s why it makes sense to create an online community as part of your business growth strategy:
You will be able to reach a very large number of people, which means you will have access to high quality leads that you can market to cheaply and convert to reliable repeat customers. You won’t have to rely on expensive traditional advertising.
Having an online community means that you will be challenged to provide information that is of value to your users. This will help you build a loyal following, which means longevity.
Through the community, you'll be able to constantly engage with your customers. You can post questions about what changes your customers want to see and get immediate feedback about what they think of a new product or service. This level of engagement keeps them coming back.
Online communities need tools to facilitate direct engagement between users and your content, between users and the business, and between the users themselves.
Some tools needed to facilitate engagement in your online community are:
Ultimately, the kinds of tools you need depend on the purpose you want the online community to serve.
Millions of websites are now powered by WordPress.
WordPress is built specifically for content. It started as a blogging tool, but was later revamped into a fully-fledged content management system. It is highly adaptable and customizable and can be customized by adding themes and plugins to get desired results.
The fact that it's easy to customize makes it an ideal choice for building your online community.
WordPress is a content management system that has user management built into it. This takes care of your user registration and login tools.
WordPress also comes with a built-in posting and commenting mechanism. This lets users engage with content you make available, questions you have for them regarding your product and service, and each other.
The basics of hosting an online community are already there with WordPress. And thanks to WordPress' flexibility, any other feature you need can be added with a plugin.
You don’t need to create a community platform from scratch. There are plugins for that.
A plugin is a piece of software that allows you to add a specific function on your website. It saves you the time and headache of creating that particular functionality from scratch.
A community plugin is a piece of software you add to your WordPress website to help you create your own social network or online community.
There are plenty of community plugins available to help you out. When choosing a plugin to start and host your online community, here are some things to consider:
Now that we know what to look for, let's review some of the best community plugins on CodeCanyon.
With UserPro, you can do just about anything you need to build your WordPress Community website. Here are some things you can do:
User ACleverCat says this about UserPro:
I've been using this plugin for years. Works great and looks professional on the front-end. Plus, really good customer support...
Youzer focuses on creating beautiful community and user profiles. It offers a lot of tools to enable users to customize their profile and community pages.
Some of its features include:
It’s also fully compatible with the Buddypress platform.
It's fast, lightweight and beautiful, with excellent support. Great plugin.
For subscription-based communities that give access to content and other services depending on membership level, Ultimate Membership Pro is your plugin of choice.
Ultimate Membership Pro allows you to:
Here’s a look at some of its features:
An excellent plugin, it adapts perfectly thanks to all its editing possibilities... as well as editing the fields I need. Keep in that way!
Private Content is a powerful yet easy way to turn your WordPress site into a true multilevel membership platform.
It comes with complete users management, a modern form framework, and a unique engine to restrict any part of your website. All this without needing coding skills!
Some features that make Private Content awesome include:
Private Content is also integrated with Google Analytics to provide precise reports: each page view, login, logout, and registration will be recorded. You will be able to know exactly what users see. All in real time!
Private Content is a developer-friendly plugin and can be easily extended and customized by developers.
Customer Fibeus says this about PrivateContent:
Amazing! The ease of use. The flexibility. The attention to detail. Thank you for the amazing plugin, making my life just so much easier. Great work. An absolute steal at this price for user specific content!
Make membership building process as simple as possible with the ARMember plugin.
ARMember is a one stop solution to all your membership needs, including member management, payment tracking, drip content and more. It is so easy to set up that within minutes you will have your own membership site up and running!
You can do so much more, for example:
Members are more comfortable signing up if they are being offered convenient and popular payment gateways. ARMember membership plugin for WordPress comes with some of the most popular gateways like Stripe, PayPal, Bank Transfer, 2Checkout, and Authorize.net.
If you are looking for an all-in-one membership plugin for your WordPress website with amazing and fast customer support then I highly recommend AR Membership.
Let’s face it, sometimes plugins can overwhelm you with too many features that you’ll never use. WP Membership strips this down to the basics and provides you with few beautiful, well-made templates for your membership site.
Some templates you can choose from include:
Simplicity is another great feature of WP Membership. You don't need to configure a lot of stuff on your website: just install it and use it. The plugin will create all necessary pages, email templates, and settings on installation.
Other great features include:
User djanastasios says:
I'm very happy with this plugin. It works perfectly, is easy to install, and the support is responsive and fast!!!
The User Profiles Made Easy plugin does just what it says—it makes user profiles easy. It offers a front-end profile, login, and registration plugin for your membership-based WordPress site.
Features include:
The User Profiles Made Easy WordPress plugin is one of the best ways to build an online membership directory.
A great plugin, but greater support!! Very fast in answering emails and fixing problems....
Here is a nightmare customer service situation. You have written notes down in different computer files, napkins, and notebooks, and you don’t have all the details in one place. You forget an important follow-up appointment. You don’t call at the time you promised. Not good for the longevity of your business, right?
As a freelancer or a small business, your growth and continuity depend on the way you build trust with customers and potential customers. Boosting customer satisfaction requires you to be on top of small details. CRM and project management systems help you do just that.
Think of a neighborhood corner store. The store serves local needs. The owner knows the customers who shop there and what they frequently buy. She knows what kind of tools they like, and if they’re out of stock, she suggests something similar. How’s that for personalized customer experience? She even knows about the personal life of her customers—family, work, sports teams they like. This is an example of customer relationship management on a very neighborhood brick-and-mortar scale.
You may not have that kind of interaction with your online customers, but you can still build and maintain personalized business relationships with them. You can know their names, phone numbers, and addresses. You can track products and services they’ve purchased. You can see what products and services they’re interested in based on their search history. You can suggest more products and services based on their inquiries, searches, or purchases. You can prioritize business leads. You can automate repetitive processes.
But how does a business build and maintain relationships with its customers? How does it identify and fulfill their needs? How does it identify and follow up on business leads? Don’t forget that in this day and age, customers want a personalized experience.
Here is where a CRM system comes in: to help you with the day-to-day management of your customer relationships, sales, and marketing.
CRM stands for Customer Relationship Management. A CRM system can help you:
The end result of this is customer satisfaction!
A CRM allows you to prioritize your leads and get started with turning them into customers—that way, you maximize your business opportunities. By automating day-to-day tasks, you are freed up to focus on the most important business functions, making you efficient and productive. Finally, happy customers means return customers.
Even if your business is small, it could be worthwhile starting with a CRM. As your business grows, a CRM system grows with you.
So let’s have a look at some of the best CRM and project management plugins available on CodeCanyon.
Freelancers and small businesses can streamline their project management, from the first estimate to the final billing and everything in between, with Freelance Cockpit 3.
This CRM and project management PHP script is a small investment with huge payoffs.
Here's a quick look at some of the included features:
As a freelancer, this is top of my list of possible solutions to organize and execute projects.
Freelance Cockpit 3 brings all the important information together, so you can focus more on what you do best.
This CRM and project management solution is more than just a way to organize your tasks and easily send out invoices. Perfex is about customer relationships.
Provide your client with more than just a good product; offer them the kind of customer relationship they won't be able to find anywhere else.
You'll find lots of helpful features in this system. Perfex's niche is to offer top-notch tools for your customers.
While this CRM and project management app serves freelancers well, it adds another layer that small businesses with multiple team members will find extremely helpful. Ekushey Project Manager CRM introduces a full-featured project space that makes collaborating much more efficient:
Additional features include:
I especially like the intuitive client and profile overviews and the ease of adding people to projects.
Ekushey is a robust CRM and project management tool that's useful for freelancers and powerful enough for businesses with many team members.
Here is why I like RISE: its clean and easy-to-navigate dashboard. This simplifies management of clients, projects, and team members.
Its features include the ability to:
Not only that, but clients can pay you directly through RISE via PayPal or Stripe.
Installation is very simple, and you can install the updates from within the app. But don’t take my word for how great it is—go to the demo and try for yourself!
Customer jlamping has this to say about RISE:
"This is a very well thought out app and works great. Awesome setup and well documented. 5 stars!!!"
Managing a school is a Herculean project, with many staff and constantly changing students. Luckily, there is the Ora School Suite. This is school management at your fingertips, 24/7 on the web and on mobile devices. Literally everything to do with managing a school is included.
From the dashboard, you can manage:
Just take a look at some of the screens from the dashboard. In this screen, you can see the invoices panel of the dashboard, with other panels showing news and messages to be answered.
And here is a look at the enquiries dashboard, which lets you manage and respond to any incoming leads or questions.
Give it a try for yourself in the live demo!
Here is what customer haroldnicky says:
"... it deserves 5 stars. Honest this the best school management web application I have today, everything is well done. The pages open quickly. In short, a great job."
Ciuis makes CRM and project management easy. It has a very simple, accessible design that makes it a breeze to use.
You can easily:
Customer WaldiePienaar says this about Ciuis CRM:
"Being a coder myself I have build my own CRM before, but this is just on another level. If you are thinking of getting a CRM you have come to the right place."
TITAN has many powerful features designed to allow you to manage unlimited projects, teams of users, important tasks, and so much more.
Some features include:
Converting leads to clients has never been easier, since Titan lets you build custom forms to send to clients and receive valuable feedback. Their responses will then be stored in the Titan Project Management System!
If you need a comprehensive, all-in-one solution that brings together CRM, human resources management, and project management, the Ultimate Project Manager CRM Pro is the best choice.
Project components include:
Human resources management components include:
All this comes with a very complete client management package!
Other great features include a powerful file manager and a knowledge base section where you can curate articles that are useful to your clients.
In addition, it supports eight different payment gateways, including PayPal, Stripe, Authorize, Mollie, Braintree, CCAvenue, and more.
Customer BuckyMadrox has this to say:
"Powerful CRM is a mostly all-in-one solution for managing our firm and projects. This product is specifically designed to support the complete project lifecycle of professional services firms."
WORKSUITE can be accessed on laptop, mobile, or tablet. Its responsive design ensures clear data visibility on all types of devices.
Some features of this system include:
Customer parys has this to say:
"A very well-made application. High level of functionality and very professionally written code. Definitely too low price for such a powerful tool. I highly recommend it!!!"
A good CRM and project management system pays off in a huge way: you keep your customers happy and coming back.
For more CRM and project management PHP scripts, take a look at Envato Market or at our other posts here on Envato Tuts+.
Help desk agent work is currently evaluated by KPIs, such as how many tickets agents resolve or how fast they resolve them. There is an industry trend toward broadening collaboration between agents, because it will help customers get their issues resolved more quickly. But if an agent is taking time to help another agent or multiple agents are working on the same problem, the metrics that enterprises use to judge help desk agents will all get worse.
So, changing the way agents work requires changing the way their work is evaluated. I went to Norway to run three design thinking sessions to do generative research, looking to discover how work could be evaluated when collaboration increased.
I met with a defense contracting company, a police department, and an information technology company.
With all three, I ran into unexpected cultural issues that had a profound impact both on the sessions themselves and on my findings.
Asking people to think outside the box or come up with wild ideas conflicted with two Norwegian cultural norms: selvbeherskelse and janteloven.
Let me explain.
Lots of people told me that it’s hard to get Norwegians to “defrost.” Design thinking sessions need people to feel free to come up with weird ideas and be creative. Selvbeherskelse, or self control, is highly valued, and I was asking people to step outside of their comfort zones. Acquaintances joked that the only thing that would help would be if I brought booze to my sessions!
The second cultural speed bump, janteloven, not only impacted the session itself, it also showed me that everything I thought about how work is evaluated and judged is actually dependent on my own cultural frame of reference.
Norwegians are taught not to stand out and not to single anyone else out either. Imagine an annual work review where everyone is told the same thing: "Your work is fine."
There are no superstars. There are no under-performers. Everyone is good enough. It is also culturally unacceptable to call attention to your own good work.
Norwegians call this janteloven, or the “Law of Jante… a code of conduct known in Nordic countries that portrays doing things out of the ordinary, being personally ambitious, or not conforming, as unworthy and inappropriate.”
The law was originally formulated in a satirical novel, A Fugitive Crosses His Tracks, by Aksel Sandemose, and can be summed up with “You are not to think you’re anyone special or that you’re better than us.”
Workers who are obviously trying to get ahead or appear better than others are breaking societal norms. It is uncommon to acknowledge that anyone is better than anyone else. Subsequently, layoffs and firing people are almost non-existent in Scandinavian corporations, and it sounded like annual work reviews don’t happen as they do in the United States.
This cultural norm also impacted the session. No one wanted to contribute an idea that might stand out in our session. But, more importantly, it called into question the entire premise for my visit.
Remember, I was there to find out how work could be evaluated in the future.
But I’m in a culture where…everyone is just good enough.
Curiously, despite this cultural norm, participants wanted to be superstars or wanted their poor performing peers to be called out. They felt it was unfair that they worked so hard and got no more reward than their “lazy” peers.
What they longed for was societal pressure to make people feel they had to contribute. Sloppy work from peers creates rework. Solving the same problems over and over again because the knowledge wasn't being captured in reusable form was irritating to every single person in the room. Some fantasized about creative punishments and social shaming for those not pulling their own weight.
Further, individual after individual mentioned that others did not want to collaborate because these other people wanted to be “special superheroes.” (But of course, no one in the workshop admitted to being that guy!)
There is pressure to conform, but emotionally, people still want to be better than others and want to be recognized for their work. They know some of their peers are cutting corners or are only picking up the easiest tasks and leaving the hard tickets for other people.
As a UX designer, it was humbling to see, first hand, how much my own cultural blinders limit my thinking about what people need and want in our software. If I could do it all again, I’d have taken more heed of the warnings about defrosting! And I will never again think that monocultural discovery work tells the whole story.
When I returned to the United States, I spent a lot of time trying to unpack my experiences and figure out how I could do things differently. I happened to speak about it with a friend who is involved in the acting improv community. Improv is a form of theater where the plot and how characters interact with each other are made up in the moment. Good improv requires people to loosen up and get out of their comfort zone. My experience sounded familiar to her, and she told me about improv warmup techniques.
Although I couldn’t bring booze, next time I’ll draw on improv techniques to get people to warm up, and I’ll keep a warm-up exercise in my pocket. I’d hope that even the ‘frosty’ Norwegians would be drawn in by some of these techniques.
As for not figuring out how work should be evaluated in the future….well…that’s part of why we do research. When we don’t get the results we expect, it’s an opportunity for creative solutions to problems we didn’t even know we had. Isn’t that why we go out into the field in the first place?
There are plugins that will turn your main navigation menu into a burger menu in WordPress (including those on Code Canyon), but what if you want to code one into your own theme?
In this tutorial, you'll learn how.
To follow along with this tutorial, you'll need a WordPress installation and a theme of your own that you can edit.
="#">☰< -->
So the burger symbol (which is shown using the
☰ HTML code) is in the header, and the navigation menu is below it.
Now to add some styling for the toggle-nav element.
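Something along these lines works (the colors and sizes are illustrative):

.toggle-nav,
.toggle-nav:link,
.toggle-nav:visited,
.toggle-nav:hover,
.toggle-nav:active {
    display: block;
    color: #fff;
    font-size: 2em;
    text-decoration: none;
    border: none;
}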
This adds color and sizing to the icon and also overrides any styling for links, whatever state they're in.
Here's what the burger icon looks like on mobile now:
That's the burger icon styled. Now for the navigation menu.
The navigation menu itself needs to be styled on mobile. Inside your media query, add this CSS:
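A sketch of the kind of rules involved (use whatever selector your theme gives the menu; the values are illustrative):

.main-navigation ul {
    display: none;      /* hide the menu until the burger is used */
    position: absolute; /* position the menu just below the header */
    top: 100%;
    left: 0;
    width: 100%;
    background: #fff;
}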
The key aspects of that code are that the menu is hidden by default and positioned absolutely so it can sit just below the header. Note that the tweaks you'll need to make to your menu may be different, and will depend on how it's styled on desktop. The important bits are hiding the menu and the positioning.
To learn more about WordPress menus, check out some of our other posts:
WordPress Multisite is a powerful tool for developers who need to host multiple WordPress sites on the same server. Learn how to get the most out of it in our new course, Complete Guide to WordPress Multisite.
Adding a membership system is a great way to bring in revenue for your business. But turning your WordPress site into a membership site can be quite complex. Thankfully, CodeCanyon has a library full of membership plugins that let you add this membership functionality to your site without you having to do any coding.
To help guide you to the perfect membership plugin for your website, I compiled a list of the most feature-rich membership plugins on the market.
Features include:
Also, this membership plugin will allow you to restrict any content on your website for guests by the type of post, page, URL, or for the entire site.
Try the live preview of this plugin to see if it is right for you.
Notable features include:
You can quickly test the functionality of this membership plugin and see if it's right for you with the live demo.
This WordPress membership plugin features:
Other notable features of Private Content include customizable themes, unlimited lightboxes, and multi-language support.
See how this membership plugin can work for you by viewing the live preview.
Here are some of the main features:
Here are a few of the notable features offered by the membership plugin:
ARMember uses the most popular payment gateways on the web, so your customers will feel safe signing up on your membership site.
This WordPress membership plugin has many other features that were not mentioned here, so make sure to try the live preview to find out everything the plugin has to offer.
UserPro is a membership plugin focused on helping you build a community. The stylish front-end user profiles, customizable login and registration forms, and user badge and achievement features help you create a unique community experience that your users will love.
Here is what you can expect to see from the plugin:
User Imtovi says:
"It is the most useful and powerful WordPress user profile solution I have seen! Support is so great! Markus has just resolved my issue in a few minutes! Thank you!"
Find out how this membership WordPress plugin can help build your community by viewing the live preview.
Here are a few of the many features that this plugin has:
With no hidden costs and access to all of its features from the one-time purchase, you will have the most feature-rich learning management system plugin at your fingertips to help you build your business.
To gain a better understanding of how this plugin functions, view the live preview.
If none of these plugins quite meet your needs, be sure to check out some of the many other WordPress membership plugins available on CodeCanyon. If you purchased any of the above plugins, please let us know which one you decided to go with!
In this tutorial, you'll learn how to manage data in a Firebase Firestore database.
For some background on Firebase and instructions on how to get your Firestore database set up, please read my earlier Introduction to Firebase post.
When working with Firestore, data, documents, and collections are the key concepts we need to understand.
Data can be any of the following types:
Documents contain data items in a set of key-value pairs. Firebase Firestore is optimised for storing large collections of documents. A document is a lightweight record with fields which map to values. Each document is identified by a name.
For example, we could describe a
Country document as follows:
Country Name: "Canada" Capital: "Ottawa"
Here,
Country is the document which contains two fields:
Name and
Capital.
Collections contain sets of documents. Documents live in collections, which are simply containers for documents. For example, you could have a
Countries collection to contain your various countries, each represented by a document:
Countries
    Country
        Name: "India"
        Capital: "New Delhi"
    Country
        Name: "England"
        Capital: "London"
In short, data is part of a document, which belongs to a collection.
We'll learn more about data, documents and collections by building a sample app.
To start, you'll need to create a new Xcode project and set it up in the Firebase Firestore portal.
First, create a project in Xcode and name it ToDoApp. Then, create a corresponding project in Firebase and also name it ToDoApp.
Next, register your app with a bundle identifier.
Firebase will create a config file for you to include in your project. Download the config file and add it to the root of your Xcode project as shown.
Here's a screenshot of my project folder structure with the GoogleService config file highlighted.
Next, you need to install CocoaPods for Firebase. CocoaPods make it easy to install and manage external dependencies such as third-party libraries or frameworks in your Xcode projects. If you don't have a Podfile already, follow the instructions to create one. Then add the Firebase core services to your Podfile and run
pod install.
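A minimal Podfile for this setup might look like the following (the pod names follow the standard Firebase iOS install; adjust the platform version and target name to your project):

platform :ios, '12.0'
use_frameworks!

target 'ToDoApp' do
  # Firebase core plus the Firestore database SDK
  pod 'Firebase/Core'
  pod 'Firebase/Firestore'
end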
Next, the Firebase setup wizard will give you some code to add to your
AppDelegate. Copy the highlighted lines below into your AppDelegate file.
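The exact snippet comes from the wizard, but the essential part is a call to FirebaseApp.configure() when the app launches, roughly:

import UIKit
import Firebase

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {

    var window: UIWindow?

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Connect the app to your Firebase project using the downloaded config file.
        FirebaseApp.configure()
        return true
    }
}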
When you've completed all these steps, the project setup screen in Firebase will look like this:
To complete the installation, you need to get your app to communicate with the Firebase server. Simply run your Xcode project and go back to the project setup screen in Firebase, and you should see something like this:
Now that we are done with the initial setup, let's start handling data in our app. Each Task document will store the following field:
task_details: text describing the task
We also need a collection, which we'll name
tasks, and it will contain these
Task documents. This is how the structure will look:
tasks
    Task
        task_details
    Task
        task_details
To represent this data, let's create a simple task model class in TaskModel.swift.
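A minimal sketch of the model (the property names mirror the task_id and task_details fields used in the rest of the tutorial):

import Foundation

class TaskModel {
    var taskId: String
    var taskDetails: String

    init(taskId: String, taskDetails: String) {
        self.taskId = taskId
        self.taskDetails = taskDetails
    }
}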
We will be displaying our tasks in a simple table view.
First, let's create a ToDoListViewController.swift class. Then, add
ToDoListViewController in Main.storyboard and connect it to ToDoListViewController.swift. Add a
UITableView to
ToDoListViewController in the storyboard and implement delegates.
Now we'll create ListService.swift to hold the code to retrieve task data. Start by importing
FirebaseFirestore. Then create a Firestore instance with
Firestore.firestore(). ListService.swift should look as follows:
import Foundation
import FirebaseFirestore

class ListService {
    let db = Firestore.firestore()
}
Next, we'll add the
completeList method in the
ListService class. We'll get all the data and documents from the
tasks collection by using the
getDocuments() method from Firebase. This will return either data or an error.
func completeList(completion: @escaping (Bool, [TaskModel]) -> ()) {
    db.collection("tasks").getDocuments() { (querySnapshot, err) in

    }
}
In this code,
querySnapshot contains the returned data. It will be nil if there is no collection with the name
tasks.
Here is the rest of the completeList function: it retrieves the task data and calls the completion callback with any retrieved data.
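A sketch of the finished method (it maps each document's task_details field into a TaskModel, using the document id as the task id):

func completeList(completion: @escaping (Bool, [TaskModel]) -> ()) {
    db.collection("tasks").getDocuments() { (querySnapshot, err) in
        var tasks = [TaskModel]()
        if let err = err {
            print("Error getting documents: \(err)")
            completion(false, tasks)
        } else {
            for document in querySnapshot!.documents {
                let data = document.data()
                let details = data["task_details"] as? String ?? ""
                tasks.append(TaskModel(taskId: document.documentID, taskDetails: details))
            }
            completion(true, tasks)
        }
    }
}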
Let's try it out! In ToDoListViewController, keep a ListService instance and call completeList when the view loads:

class ToDoListViewController: UIViewController {
    let taskService = ListService()

    override func viewDidLoad() {
        super.viewDidLoad()
        getAllTasks()
    }

    func getAllTasks() {
        taskService.completeList(completion: { (status, tasks) in
            print(status)
        })
    }
}
There should be no errors, but
querySnapshot will be nil since we do not have any documents in our collection yet. In the next section, we'll add some data and then try calling the
completeList method again.
Let's create an
addToList method in
ListService.
func addToList(taskDescription: String, completion: @escaping (Bool) -> ()) {
    completion(false)
}
We'll use the passed-in task description to create a
taskData instance and add it to the
tasks collection. To do this, we will use the
addDocument method from Firebase Firestore:
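A sketch of the finished method (the taskData keys follow the model above; the document id handling is described below):

func addToList(taskDescription: String, completion: @escaping (Bool) -> ()) {
    let taskData: [String: Any] = [
        "task_details": taskDescription,
        "task_id": "" // placeholder for now, see the note below
    ]
    var ref: DocumentReference? = nil
    ref = db.collection("tasks").addDocument(data: taskData) { err in
        if let err = err {
            print("Error adding document: \(err)")
            completion(false)
        } else {
            print("Document added with ID: \(ref!.documentID)")
            completion(true)
        }
    }
}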
If there is no error, we'll print the document id and return true in our completion block.
task_id will be the document id. For now, we'll just send the empty string, but eventually we'll update the
task_id with the correct document id.
Let's add some test data by calling
addToList from
ToDoListViewController. After running the below code, go back to the console and you'll see that an entry was added. Now, running
getAllTasks() should return
true.
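One way to wire this up from ToDoListViewController (it reuses the taskService instance from earlier, and the task text matches the "Buy Groceries" entry we delete below):

override func viewDidLoad() {
    super.viewDidLoad()
    addTask(taskDescription: "Buy Groceries")
}

func addTask(taskDescription: String) {
    taskService.addToList(taskDescription: taskDescription, completion: { (status) in
        print(status)
    })
}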
Of course, in a real app, you'd want to create a user interface for adding tasks!
Finally, let's see how we can delete a document in Firestore. Go back to the Firebase Firestore console and make a note of the document id value for "Buy Groceries". We will be deleting the document based on the document id.
Now, add a
deleteFromList method to our
ListService as shown below.
func deleteFromList(taskId: String, completion: @escaping (Bool) -> ()) {
    db.collection("tasks").document(taskId).delete() { err in
        if let err = err {
            print("Error removing document: \(err)")
            completion(false)
        } else {
            completion(true)
        }
    }
}
Let's call
deleteFromList from
ToDoListViewController. We'll pass it the document id of "Buy Groceries", which we copied from the Firebase console:

override func viewDidLoad() {
    super.viewDidLoad()
    deleteTask(taskId: "ddiqw8bcnalkfhcavr")
}

func deleteTask(taskId: String) {
    taskService.deleteFromList(taskId: taskId, completion: { (status) in
        print(status)
    })
}
Running the above code, you should see
status as
true. If we go to the Firebase Firestore console, we should see that the "Buy Groceries" task has been deleted.
To make this app more functional, you could implement table view delegate methods to display the tasks in your database, a
UITextField to add new tasks, and a swipe gesture to delete tasks. Try it out!
If you have any questions, let me know in the comments below.
This MP3 player is incredibly powerful. The plugin contains a customizable and responsive MP3 player that can be added to any webpage and that runs on all major browsers.
Most web players only allow you to listen to audio when you are on a specific page, but MP3 sticky player is a pop-up player that opens in a separate window, allowing you to browse the web while listening.
And it's not just for MP3s—this player can display images and YouTube videos.
Here are a few of this plugin's great features:
User sely says:
"This is by far the most powerful MP3 player available on CodeCanyon or elsewhere. Great support too. Had multiple questions and was quickly helped. Highly recommended!"
View the video tutorial of this powerhouse video and audio player to see if it is a fit for your site.
This audio player will make a great addition to your website if you need to present audio to your users. Not only does it play .mp3 files, but it plays .ogg files as well. With the customizable design, you will be able to fit this WordPress audio player into any theme.
Here are some of the notable features of this plugin:
Here is what user tampatechllc has to say about the Audio Player PRO:
"Great support team! They checked on my issue very quickly. Also great music player plugin! After trying at least 4 or 5 others, this was the only one that had more features than the other plugin features combined. Keep up the great work!"
Check out this video of the WordPress audio player plugin in action!
Here is what you can expect from this audio player plugin:
User herringla has this to say about the WavePlayer:
"There are just too many good things about this plugin. Basically, if you're using anything else to play your music files on your WordPress website you're missing out!"
View the audio waveform integration in the live demo.
Here are the main features of this WordPress video player:
Satisfied customer ADWheeler has this to say about the video plugin:
"All the way around this is a perfect plugin. Rik is GREAT with support as well. Got back to me within the hour. The world needs more of this! Do yourself a favor, get this one. This IS the plugin you've been looking for!"
Try it for yourself with the live preview.
Here are a few of the most notable features of this plugin:
Here is what user TheWebFix has to say about the plugin.
"Very customizable, and the author is always willing to assist with any issues or questions. Love the flexibility and work put into this plugin!"
To try the plugin for yourself, check out the live preview!
Universal Video Player features include:
User Dohmtee has this to say about the plugin:
"Customer service was able to help me configure and provide me guidance to what I wanted the player to function. My clients are very satisfied! Thumbs up to all at LambertGroup."
View the Universal Video Player in action by trying the live demo.
YouTubeR not only allows you to upload videos on YouTube from your website, but it also allows you to create video galleries. The modern interface of this plugin, playlist templates, and infinite scroll for playlists make this a plugin worth purchasing.
Here is what you can expect from YouTubeR:
Here is what user StudioPassionlab has to say about the plugin:
"I want to express my appreciation for the YouTubeR plugin by Maxiolab for the versatility of the plugin. Especially for the excellent support received from the author to help me solve problems with the themes used. Thanks Maxiolab!"
Find out what this WordPress video plugin can add to your website by trying out the live preview.
The WordPress plugins described above are some of the best-selling video and audio plugins on CodeCanyon. To know which plugins you should consider purchasing, take a look at all the live demos and features to see which ones will fit in with your site.
If none of these video and audio WordPress plugins seem right for your website's needs, be sure to check out some of the many others available on CodeCanyon.
It's important to make the signup process as seamless as possible. Customers will likely shy away from a popup that contains many fields.
Be sure to match your popup with the theme of your site. This will help build credibility and trust in your brand.
A popup that offers an incentive such as free shipping, promo discount or free content is more likely to convert as opposed to a popup that has no offer.
Ensure you plan the content you provide to your subscribers once you have them on board. It's also important to give timely and interesting information. Don't over-focus on selling to your subscribers.
The first popup you make will not likely be the jackpot that gets you the subscribers you want. Be prepared to experiment with different popups to determine which popup is converting the most.
We'll start by downloading the plugin from CodeCanyon. If you don't have an account yet, create one first, and then you can purchase the plugin and start creating your popups.
So what makes ConvertPlus the best option for creating newsletters in WordPress?
ConvertPlus comes with over 100 ready-made templates which allow you to create your popups in minutes.
You can embed opt-in forms anywhere on the website with just a simple shortcode. This can be inside a post, outside a post, or just below your header.
ConvertPlus allows you to set the kind of device on which each popup will appear. This will ensure your popups are mobile-friendly by letting you create mobile-specific versions.
ConvertPlus is structured to deliver speed and high performance. This has a positive effect on the conversion rate.
ConvertPlus comes with a powerful, easy-to-use form builder which allows you to design high-quality popups in minutes. It also supports unlimited fields, analytics graphs, and third-party sync.
The ConvertPlus newsletter popup plugin allows you to do unlimited real-time tests to find which kinds of popup work best for your audience.
ConvertPlus also gives you a detailed graphical analysis of how your popups are performing. This includes the number of clicks, unique views, and conversion metrics. This will enable you to make informed decisions.
ConvertPlus integrates with most of the major email marketing providers. It also comes with export functionality that allows you to download email data collected from subscribers. Integrations include MailChimp, HubSpot, and others.
There are several steps required to make your popups live:
Let's look at each of these steps in more detail.
ConvertPlus provides various ways to build popups. These include:
You can also choose before post, after post, within a post, or widget box placements.
The last part of the newsletter is to handle the behavior of the popup. We achieve this by setting triggers. Some triggers include:
It’s amazing how sometimes small changes can make you very happy.
This week I was looking at how DragonFruit does its entry point magic, and realized I had a great use case for the same kind of thing.
Some of my oldest code that’s still in regular use is
ApplicationChooser – a simple tool for demos at conferences and user groups. Basically it allows you to write multiple classes with
Main methods in a single project, and when you run it, it allows you to choose which one you actually want to run.
Until today, as well as installing the NuGet package, you had to create a
Program.cs that called
ApplicationChooser.Run directly, and then explicitly set that as the entry point. But no more! That can all be done via build-time targets, so now it’s very simple, and there’s no extraneous code to explain away. Here’s a quick walkthrough for anyone who would like to adopt it for their own demos.
Create a console project
It’s just a vanilla console app…
$ mkdir FunkyDemo $ cd FunkyDemo $ dotnet new console
Add the package, and remove the previous Program.cs
You don’t have to remove
Program.cs, but you probably should. Or you could use that as an entry point if you really want.
$ dotnet add package JonSkeet.DemoUtil $ rm Program.cs
Add demo code
For example, add two files,
Demo1.cs and
Demo2.cs
// Demo1.cs using System; namespace FunkyDemo { class Demo1 { static void Main() => Console.WriteLine("Simple example without description"); } } // Demo2.cs using System; using System.ComponentModel; using System.Threading.Tasks; namespace FunkyDemo { // Optional description to display in the menu [Description("Second demo")] class Demo2 { // Async entry points are supported, // as well as the original command line args static async Task Main(string[] args) { foreach (var arg in args) { Console.WriteLine(arg); await Task.Delay(500); } } } }
Run!
$ dotnet run -- abc def ghi 0: Demo1 1: [Second demo] Demo2 Entry point to run (or hit return to quit)?
Conclusion
That’s all there is to it – it’s simple in scope, implementation and usage. Nothing earth-shattering, for sure – but if you give lots of demos with console applications, as I do, it makes life a lot simpler than having huge numbers of separate projects. | https://codeblog.jonskeet.uk/category/speaking/ | CC-MAIN-2020-24 | refinedweb | 382 | 66.74 |
Autoscaling Azure – Virtual Machines
A topic that comes up very regularly in conversations about Azure infrastructure as a service (IaaS) virtual machines is how autoscaling works.
The short version is that you have to pre-provision virtual machines, and autoscale turns them on or off according to the rules you specify. One of those rules might be queue length, enabling you to build a highly scalable solution that provides cloud elasticity.
Autoscaling Virtual Machines
Let’s look at autoscaling virtual machines. To autoscale a virtual machine, you need to pre-provision the number of VMs and add them to an availability set. Using the Azure Management Portal to create the VM, I choose a Windows Server 2012 R2 Datacenter image and provide the name, size, and credentials.
The next page allows me to specify the cloud service, region or VNet, storage account, and an availability set. If I don’t already have an availability set, I can create one. I already created one called “AVSet”, so I add the new VM to the existing availability set.
Finally, add the extensions required for your VM and click OK to create the VM. Make sure to enable the VM Agent; we'll use that later.
You can see that I’ve created 5 virtual machines.
I’ve forgotten to place VM2 in the availability set. No problem, I can go to its configuration and add it to the availability set.
This is the benefit of autoscaling and the cloud. I might use the virtual machine for a stateless web application where it’s unlikely that I need all 5 virtual machines running constantly. If I were running this on-premises, I would typically just leave them running, consuming resources that I don’t actually utilize (overprovisioned). I can reduce my cost by running them in the cloud and only utilize the resources that I need when I need them. Autoscale for virtual machines simply turns some of the VMs on or off depending on rules that I specify.
To show this, let’s configure autoscale for my availability set. Once VM5 is in the Running state, I go to the Cloud Services tool in the portal and then navigate to my cloud service’s dashboard. On the dashboard I will see a section for autoscale status:
It says that it can save up to 60% by configuring autoscale. Click the link to configure autoscale. This is the most typical demo that you’ll see, scaling by CPU. In this screenshot, I’ve configured autoscale to start at 1 instance. The target CPU range is between 60% and 80%. If it exceeds that range, then we’ll scale up 2 more instances and then wait 20 minutes for the next action. If the target is less than that range, we’ll scale down by 1 instance and wait 20 minutes.
Easy enough to understand. A lesser known but incredibly cool pattern is scaling by queues. In a previous post, I wrote about Solving a throttling problem with Azure where I used a queue-centric work pattern. Notice the Scale by Metric option provides Queue as an option:
That means we can scale based on how many messages are waiting in the queue. If the messages are increasing, then our application is not able to process them fast enough, thus we need more capacity. Once the number of messages levels off, we don’t need the additional capacity, so we can turn the VMs off until they are needed again.
I changed my autoscale settings to use a Service Bus queue, scaling up by 1 instance every 5 minutes and down by 1 instance every 5 minutes.
After we let the virtual machines run for a while, we can see that all but one of them were turned off due to our autoscale rules.
Just a Little Code
Virtual machines need something to do, so we’ll create a simple solution that sends and receives messages on a Service Bus queue. On my laptop, I have an application called “Sender.exe” that sends messages to a queue. Each virtual machine has an application on it that I’ve written called Receiver.exe that simply receives messages from a queue. We will have up to 5 receivers working simultaneously as competing consumers of the queue.
The Sender application sends a message to the queue once every second.
Sender
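A minimal sketch of the Sender (the queue name and connection string setting mirror the Receiver below; the message contents are illustrative):

using Microsoft.ServiceBus.Messaging;
using Microsoft.WindowsAzure;
using System;

namespace Sender
{
    class Program
    {
        static void Main(string[] args)
        {
            string connectionString =
                CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");

            QueueClient client =
                QueueClient.CreateFromConnectionString(connectionString, "myqueue");

            int messageNumber = 0;
            while (true)
            {
                client.Send(new BrokeredMessage("Message " + messageNumber));
                Console.WriteLine("Sent {0}", messageNumber);
                messageNumber++;

                // Sleep for 1 second
                System.Threading.Thread.Sleep(TimeSpan.FromSeconds(1));
            }
        }
    }
}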
The Receiver application reads messages from the queue once every 3 seconds. The idea is that the sender will send the messages faster than 1 machine can handle, which will let us observe how autoscale works.
Receiver
- using Microsoft.ServiceBus.Messaging;
- using Microsoft.WindowsAzure;
- using System;
- using System.Collections.Generic;
- using System.Linq;
- using System.Text;
-
- namespace Receiver
- {
- class Program
- {
- static void Main(string[] args)
- {
- string connectionString =
- CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");
-
- QueueClient client =
- QueueClient.CreateFromConnectionString(connectionString, "myqueue");
-
-
- while (true)
- {
- var message = client.Receive();
- if(null != message)
- {
- Console.WriteLine("Received {0} : {1}",
- message.MessageId,
- message.GetBody<string>());
-
- message.Complete();
- }
-
- //Sleep for 3 seconds
- System.Threading.Thread.Sleep(TimeSpan.FromSeconds(3));
- }
- }
- }
- }
I built the Receiver.exe application in Visual Studio then copied all of the files in the bin/debug folder to the c:\temp folder on each virtual machine.
Running Startup Tasks with Autoscale
As each virtual machine is started, I want the Receiver.exe code to execute upon startup. I could go into each machine and set a group policy to assign computer startup scripts, but since we are working with Azure, we have the ability to use the custom script extension which will run each time the machine is started. When I created the virtual machine earlier, I enabled the Azure VM Agent on each virtual machine, so we can use the custom script extension.
We need to upload a PowerShell script to be used as a startup task to execute the Receiver.exe code that is already sitting on the computer. The code for the script is stupid simple:
Startup.ps1
- Set-Location "C:\temp"
- .\Receiver.exe
This script is uploaded from my local machine to Azure blob storage as a block blob using the following commands:
Upload block blob
- $context = New-AzureStorageContext -StorageAccountName "kirkestorage" -StorageAccountKey "QiCZBIREDACTEDuYcqemWtwhTLlw=="
- Set-AzureStorageBlobContent -Blob "startup.ps1" -Container "myscripts" -File "c:\temp\startup.ps1" -Context $context -Force
I then set the custom script extension on each virtual machine.
AzureVMCustomScriptExtension
- $vms = Get-AzureVM -ServiceName "kirkeautoscaledemo"
- foreach($vm in $vms)
- {
- Set-AzureVMCustomScriptExtension -VM $vm -StorageAccountName "kirkestorage" -StorageAccountKey "QiCZBIREDACTEDuYcqemWtwhTLlw==" –ContainerName "myscripts" –FileName "startup.ps1"
- $vm | Update-AzureVM
- }
Once updated, I can see that the Receiver.exe is running on the one running virtual machine:
Testing It Out
The next step is to fire up my Sender and start sending messages to it. The only problem is that I haven’t done a good job in providing any way to see what is going on, how many messages are in the queue. One simple way to do this is to use the Service Bus Explorer tool, a free download. Simply enter the connection string for your Service Bus queue and you will be able to connect to see how many messages are in the queue. I can send a few messages, then stop the sender. Refresh the queue, and the number of messages decreases once every 3 seconds.
OK, so our queue receiver is working. Now let’s see if it makes Autoscale work. I’ll fire up the Sender and let it run for a while.
The number of messages in the queue continues to grow…
And after a few minutes one of the virtual machines is automatically started.
I check and make sure that Receiver.exe is executing:
Waiting a while (I lost track of time, guessing 30 minutes or so), you can see that all of the VMs are now running because the number of incoming messages outpaced the ability of our virtual machines to process the messages.
Once there are around 650 messages in queue, I turn the queue sender off. The number of messages starts to drop quickly. Since we are draining messages out of the queue, we should be able to observe autoscale shutting things down. About 5 minutes after the number of queue messages drained to zero, I saw the following:
Go back to the dashboard for the cloud service, and once autoscale shuts down the remaining virtual machines (all but one, just like we defined) you see the following:
Monitoring
I just showed how to execute code when the machine is started, but is there any way to see in the logs when an autoscale operation occurs? You bet! Go to the Management Services tool in the management portal:
Go to the Operation Logs tab, and take a look at the various ExecuteRoleSetOperation entries.
Click on the details for one.
Operation Log Entry
- <SubscriptionOperation xmlns=""
- xmlns:
- <OperationId>REDACTED</OperationId>
- <OperationObjectId>/REDACTED/services/hostedservices/KirkEAutoscaleDemo/deployments/VM1/Roles/Operations</OperationObjectId>
- <OperationName>ExecuteRoleSetOperation</OperationName>
- <OperationParameter>
- <d2p1:Name>deploymentName</d2p1:Name>
- <d2p1:Value>VM1</d2p1:Value>
- </OperationParameter>
- <OperationParameter>
- <d2p1:Name>roleSetOperation</d2p1:Name>
- <d2p1:Value><?xml version="1.0" encoding="utf-16"?>
- <z:anyType xmlns:i=""
- xmlns:d1p1=""
- i:type="d1p1:ShutdownRolesOperation"
- xmlns:
- <d1p1:OperationType>ShutdownRolesOperation</d1p1:OperationType>
- <d1p1:Roles>
- <d1p1:Name>VM3</d1p1:Name>
- </d1p1:Roles>
- <d1p1:PostShutdownAction>StoppedDeallocated</d1p1:PostShutdownAction>
- </z:anyType>
- <OperationStatus>
- <Status>Succeeded</Status>
- <HttpStatusCode>200</HttpStatusCode>
- </OperationStatus>
- <OperationStartedTime>2015-02-20T20:47:54Z</OperationStartedTime>
- <OperationCompletedTime>2015-02-20T20:48:38Z</OperationCompletedTime>
- <OperationKind>ShutdownRolesOperation</OperationKind>
- </SubscriptionOperation>
Notice on line 26 that the operation type is “ShutdownRolesOperation”, and on line 28 the role name is VM3. That entry occurred after VM3 was automatically shut down.
Summary
This post showed a demonstration of Autoscale in Azure turning virtual machines on and off according to the number of messages in a queue. This pattern can be hugely valuable to build scalable solutions while taking advantage of the elasticity of cloud resources. You only pay for what you use, and it’s in your best interest to design solutions to take advantage of that and avoid over-provisioning resources.
For More Information
Solving a throttling problem with Azure
Automating VM Customization tasks using Custom Script Extension | https://docs.microsoft.com/en-us/archive/blogs/kaevans/autoscaling-azurevirtual-machines | CC-MAIN-2021-49 | refinedweb | 1,678 | 55.44 |
How to convert a Taylor polynomial to a power series?
With Maple I can write
g := 2/(1+x+sqrt((1+x)*(1-3*x))); t := taylor(g,x=0,6); coeffs(convert(t,polynom));
and get
1, 1, 1, 3, 6
Trying to do the same with Sage I tried
var('x') g = 2/(1+x+sqrt((1+x)*(1-3*x))) taylor(g, x, 0, n)
and get
NotImplementedError Wrong arguments passed to taylor. See taylor? for more details.
I could not find the details I am missing by typing 'taylor?'. Then I tried
g = 2/(1+x+sqrt((1+x)*(1-3*x))) def T(g, n): return taylor(g, x, 0, n) T(g, 5)
and got
6*x^5 + 3*x^4 + x^3 + x^2 + O(0) + 1
which is almost what I want (although I fail to understand this 'workaround').
But when I tried next to convert this Taylor polynomial to a power series
g = 2/(1+x+sqrt((1+x)*(1-3*x))) def T(g, n): return taylor(g, x, 0, n) w = T(g, 5) R.<x> = QQ[[]] R(w).polynomial().padded_list(5)
I got the error
TypeError: unable to convert O(0) to a rational
The question: How can I convert the Taylor polynomial of 2/(1+x+sqrt((1+x)(1-3x))) to a power series and then extract the coefficients?
Solution ??: With the help of the answer of calc314 below (but note that I am not using 'series') the best solution so far seems to be:
var('x') n = 5 g = 2/(1+x+sqrt((1+x)*(1-3*x))) p = taylor(g, x, 0, n).truncate() print p, p.parent() x = PowerSeriesRing(QQ,'x').gen() R.<x> = QQ[[]] P = R(p) print P, P.parent() P.padded_list(n)
which gives
6*x^5 + 3*x^4 + x^3 + x^2 + 1 Symbolic Ring 1 + x^2 + x^3 + 3*x^4 + 6*x^5 Power Series Ring in x over Rational Field [1, 0, 1, 1, 3]
Two minutes later I wanted to wrap things in a function, making 'n' and 'g' parameters.
def GF(g, n): x = SR.var('x') p = taylor(g, x, 0, n).truncate() print p, p.parent() x = PowerSeriesRing(QQ,'x').gen() R.<x> = QQ[[]] P = R(p) print P, P.parent() return P.padded_list(n)
Now what do you think
gf = 2/(1+x+sqrt((1+x)*(1-3*x))) print GF(gf, 5)
gives?
TypeError: unable to convert O(x^20) to a rational
Round 3, but only small progress:
tmonteil writes in his answer below: "the lines x = SR.var('x') and x = PowerSeriesRing(QQ,'x').gen() have no effect on the rest of the computation, and could be safely removed".
This does not work for me: if I do not keep the line x = SR.var('x') I get "UnboundLocalError: local variable 'x' referenced before assignment". But the line "x = PowerSeriesRing(QQ,'x').gen()" can be skipped. So I have now
def GF(g, n): x = SR.var('x') p = taylor(g, x, 0 ...
With respect to conversion to nonsymbolic series, see
By the way I am using SageMathCloud which uses sage-6.3.beta6.
I updated my answer to make the use of g.variables()[0] more explicit regarding your round 3.
Thanks tmonteil. But when I write p = taylor(g, g.variables()[0], 0, n).truncate() I get: 'sage.rings.power_series_poly.PowerSeries_poly' object has no attribute 'variables'. I give up now and think that rws in his comment above is right: there is a defect somewhere.
I do not see any defect. Please read my answer below for a detailed explanation. You got this answer because, at the time you type g.variables()[0] , g is not a symbolic expression but a power series. You should understand that when you define g = 2/(1+x+sqrt((1+x)*(1-3*x))), the nature of g (symbolic expression, power series,...) depends on the nature of x (symbolic expression, power series,...) at the same time. Please do not hesitate to ask if something is still not clear. | https://ask.sagemath.org/question/24777/how-to-convert-a-taylor-polynomial-to-a-power-series/ | CC-MAIN-2018-13 | refinedweb | 692 | 76.42 |
DayLong Class
Definition
Date Block - Long Day Format. When the object is serialized out as xml, its qualified name is w:dayLong.
public class DayLong : DocumentFormat.OpenXml.Wordprocessing.EmptyType
type DayLong = class inherit EmptyType
Public Class DayLong Inherits EmptyType
- Inheritance
Remarks
[ISO/IEC 29500-1 1st Edition]
dayLong (Date Block - Long Day Format)
This element specifies the presence of a date block at the current location in the run content. A date block is a non-editable region of text which shall display the current date filtered through the specified date picture (see following paragraphs) . [Note: The date block is a legacy construct used for compatibility with older word processors, and should not be produced unless it was consumed while reading a document – it is recommended that the DATE field is used in its place. end note]
A date block shall be displayed using the primary editing language of the host application, regardless of the languages specified in the parent run’s lang property (§17.3.2.20).
The long day format date block shall use a date picture of DDDD, retrieving the long day format for the primary editing language.
[Example: Consider a WordprocessingML run with the following run content:
<w:r> <w:t xml:space="preserve">This is a long date: </w:t> <w:dayLong /> </w:r>
This run specifies that a long day format date block must be placed after the text string literal This is a long date: in the document. Assuming that the host application’s primary editing language is French (Canada) and today’s date is 2006-04-12, this run would be displayed as follows:
This is a long date: mercredi
end example]
[Note: The W3C XML Schema definition of this element’s content model (CT_Empty) is located in §A.1. end note]
© ISO/IEC 29500:2008.
Update 2019-01-14: Phoenix 1.4 ships with Webpack by default, therefore making the setup much more straightforward than before. The long-overdue rewrite has been finished, and has also been made up-to-date following recent updates to Phoenix. The sample repo has also been updated.
I've been playing around with Elixir a lot lately. Recently a friend showed me this blog post by the Discord engineering team about how they could scale their platform through the power of Elixir, and after reading it I was convinced to give it a try. If you're about to learn the language, and you came from Node.js like me, I suggest you go watch this introductory video.
If Ruby has Rails, and PHP has Laravel, then Elixir has Phoenix. If you've ever used Rails before, you'll feel right at home. It has the bare essentials of your typical web framework, although it has some neat additional features like Channels, which makes building web apps with sockets much easier.
My ideal stack for a web app usually includes a React frontend. So naturally, I wanted to know how I could build a Phoenix app with a React frontend. Unfortunately, setting up React with Phoenix isn't as straightforward as many people think. Almost every guide that I came across on the internet only goes as far as rendering a single React component and doesn't cover essential things like routing and API fetching. It took me a while, but finally, I found a setup that Actually Works™.
So if you're like me and have been wondering how the heck do you actually get it to work, I'm going to show you how. Hopefully this will answer this question once and for all.
TL;DR
If reading's not your thing, I have prepared the end result of this guide here. Once you're all set up, you should have a working Phoenix setup with the following stack:
- Elixir (^1.7.4)
- Node.js (^10.15.0)
- npm (^6.4.1)
- Phoenix (^1.4.0)
- React (^16.7.0)
- TypeScript (^3.0.0)
- Webpack (^4.0.0)
Getting started
In this guide, I will assume that you already have Elixir, Phoenix, and Node.js installed. If you haven't already, open the links above in a new tab and do it. Don't worry, I'll wait.
We're also going to use Phoenix 1.4, the latest version available at the time of writing.
The boilerplate
We're going to set up a new Phoenix project, complete with the build environment we're going to use.
As of version 1.4, Phoenix ships with Webpack by default. By running the following command we'll have a Phoenix setup with built-in support for JS bundling.
$ mix phx.new phoenix_react_playground
When you're asked if you want to fetch and install dependencies as well, answer No. We'll get to it later.
By default, the
package.json file, the Webpack config, and the
.babelrc file are located in the
assets/ folder instead of the project root. This is not ideal, since it can trip up IDEs like Visual Studio Code. So let's move them to the project root instead.
$ cd phoenix_react_playground $ mv assets/package.json . $ mv assets/webpack.config.js . $ mv assets/.babelrc .
This means we'll need to change some of the defaults provided by Phoenix:
.gitignore
@@ -26,7 +26,7 @@ phoenix_react_playground-*.tar npm-debug.log # The directory NPM downloads your dependencies sources to. -/assets/node_modules/ +node_modules/ # Since we are building assets from assets/, # we ignore priv/static. You may want to comment
package.json
@@ -6,8 +6,8 @@ "watch": "webpack --mode development --watch" }, "dependencies": { - "phoenix": "file:../deps/phoenix", - "phoenix_html": "file:../deps/phoenix_html" + "phoenix": "file:deps/phoenix", + "phoenix_html": "file:deps/phoenix_html" }, "devDependencies": { "@babel/core": "^7.0.0", @@ -18,7 +18,7 @@ "mini-css-extract-plugin": "^0.4.0", "optimize-css-assets-webpack-plugin": "^4.0.0", "uglifyjs-webpack-plugin": "^1.2.4", - "webpack": "4.4.0", - "webpack-cli": "^2.0.10" + "webpack": "4.28.4", + "webpack-cli": "^3.2.1" } }
webpack.config.js
@@ -13,11 +13,11 @@ module.exports = (env, options) => ({ ] }, entry: { - './js/app.js': ['./js/app.js'].concat(glob.sync('./vendor/**/*.js')) + app: './assets/js/app.js' }, output: { filename: 'app.js', - path: path.resolve(__dirname, '../priv/static/js') + path: path.resolve(__dirname, 'priv/static/js') }, module: { rules: [ @@ -36,6 +36,10 @@ module.exports = (env, options) => ({ }, plugins: [ new MiniCssExtractPlugin({ filename: '../css/app.css' }), - new CopyWebpackPlugin([{ from: 'static/', to: '../' }]) - ] + new CopyWebpackPlugin([{ from: 'assets/static/', to: '../' }]) + ], + resolve: { + // Add '.ts' and '.tsx' as resolvable extensions. + extensions: ['.ts', '.tsx', '.js', '.jsx', '.json'] + } });
The above Webpack configuration works for the ideal Phoenix setup of placing unbundled assets on the
assets/ folder. We need to make sure that Phoenix correctly runs the Webpack command as our watcher. To do so, modify
config/dev.exs as follows:
- watchers: [] + watchers: [ + {"node", [ + "node_modules/webpack/bin/webpack.js", + "--watch-stdin", + "--colors" + ]} + ]
To make sure everything works, run the following commands:
$ mix deps.get $ npm install
Does everything work? Good! Next, we'll set up our TypeScript environment.
First, we'll install the TypeScript + React preset for Babel, and put it into our
.babelrc.
$ yarn add --dev @babel/preset-react @babel/preset-typescript @babel/plugin-proposal-class-properties @babel/plugin-proposal-object-rest-spread typescript
@@ -1,5 +1,10 @@ { - "presets": [ - "@babel/preset-env" - ] -} + "presets": [ + "@babel/preset-env", + "@babel/preset-react", + "@babel/preset-typescript" + ], + "plugins": [ + "@babel/plugin-proposal-class-properties", + "@babel/plugin-proposal-object-rest-spread" + ] +}
Then, we'll create a standard
tsconfig.json file and fill it up with the following.
{ "compilerOptions": { "allowJs": true, "allowSyntheticDefaultImports": true, "esModuleInterop": true, "isolatedModules": true, "lib": ["dom", "esnext"], "jsx": "preserve", "target": "es2016", "module": "esnext", "moduleResolution": "node", "preserveConstEnums": true, "removeComments": false, "sourceMap": true, "strict": true }, "include": ["./**/*.ts", "./**/*.tsx"] }
And finally, modify our Webpack config so that the
babel-loader accepts JS and TS files. Don't forget to change the extension of your Webpack entry file too!
@@ -13,7 +13,7 @@ module.exports = (env, options) => ({ ] }, entry: { - app: './assets/js/app.js' + app: './assets/js/app.tsx' }, output: { filename: 'app.js', @@ -22,7 +22,7 @@ module.exports = (env, options) => ({ module: { rules: [ { - test: /\.js$/, + test: /\.(js|jsx|ts|tsx)$/, exclude: /node_modules/, use: { loader: 'babel-loader'
Once you've got your boilerplate set up, your Phoenix project's folder structure should now look like this.
phoenix_react_playground/ ├── assets/ │ ├── js/ │ │ ├── ... │ │ └── app.tsx │ ├── scss/ │ │ ├── ... │ │ └── app.scss │ └── static/ │ ├── images/ │ │ └── ... │ ├── favicon.ico │ └── robots.txt ├── config/ │ └── ... ├── lib/ │ └── ... ├── priv/ │ └── ... ├── test/ │ └── ... ├── .gitignore ├── mix.exs ├── package.json ├── README.md ├── tsconfig.json └── webpack.config.js
Setting up React
Let's now hook up React with Phoenix the right way. First, of course, we'll need to install React.
$ yarn add react react-dom react-router-dom $ yarn add --dev @types/react @types/react-dom @types/react-router-dom
Then, we can set up our base React boilerplate. In our assets folder, rename
app.js to
app.tsx, and rewrite the file as follows.
assets/js/app.tsx
import '../css/app.css' import 'phoenix_html' import * as React from 'react' import * as ReactDOM from 'react-dom' import Root from './Root' // This code starts up the React app when it runs in a browser. It sets up the routing // configuration and injects the app into a DOM element. ReactDOM.render(<Root />, document.getElementById('react-app'))
assets/js/Root.tsx
import * as React from 'react' import { BrowserRouter, Route, Switch } from 'react-router-dom' import Header from './components/Header' import HomePage from './pages' export default class Root extends React.Component { public render(): JSX.Element { return ( <> <Header /> <BrowserRouter> <Switch> <Route exact path="/" component={HomePage} /> </Switch> </BrowserRouter> </> ) } }
assets/js/components/Header.tsx
import * as React from 'react' const Header: React.FC = () => ( <header> <section className="container"> <nav role="navigation"> <ul> <li> <a href=" Started</a> </li> </ul> </nav> <a href=" className="phx-logo"> <img src="/images/phoenix.png" alt="Phoenix Framework Logo" /> </a> </section> </header> ) export default Header
assets/js/components/Main.tsx
import * as React from 'react' const Main: React.FC = ({ children }) => ( <main role="main" className="container"> {children} </main> ) export default Main
assets/js/pages/index.tsx
import * as React from 'react' import { RouteComponentProps } from 'react-router-dom' import Main from '../components/Main' const HomePage: React.FC<RouteComponentProps> = () => <Main>HomePage</Main> export default HomePage
That should do it.
Now, open our project's
router.ex folder, and modify our routes in the
"/"scope as follows.
- get "/", PageController, :index + get "/*path", PageController, :index
Then, modify our template files so that it properly loads up our React code. In the base layout template, we can replace everything inside the
<body> tag with our script.
templates/layout/app.html.eex
<body> <%= render @view_module, @view_template, assigns %> <script type="text/javascript" src="<%= Routes.static_path(@conn, "/js/app.js") %>"></script> </body>
And now the Index page template. Be sure you set the
id attribute to the one you set as the application entry point specified on
app.tsx.
templates/page/index.html.eex
<div id="react-app"></div>
Sanity check
Now we're going to check if everything works. Run
mix deps.get and
npm install once again just to make sure, then run
mix ecto.setup to build our database (if we have any set up). Then run
mix phx.server, wait until the Webpack process is complete, then head over to
localhost:4000.
If it works and you can see your webpage loading up, congratulations! Let's move on to the fancy part.
Creating additional pages with
react-router
Now that we have our basic Phoenix server running, let's create several examples of the nifty things you could do with React. The most common example that people make when demonstrating the capabilities of React is a Counter app.
First, we're going add the Counter route to our
Root.tsx file.
import * as React from 'react' import { BrowserRouter, Route, Switch } from 'react-router-dom' import Header from './components/Header' import HomePage from './pages' +import CounterPage from './pages/counter' export default class Root extends React.Component { public render(): JSX.Element { return ( <> <Header /> <BrowserRouter> <Switch> <Route exact path="/" component={HomePage} /> + <Route path="/counter" component={CounterPage} /> </Switch> </BrowserRouter> </> ) } }
Then, we'll add the
Counter component.
assets/js/pages/counter.tsx
import * as React from 'react' import { Link } from 'react-router-dom' import Main from '../components/Main' // Interface for the Counter component state interface CounterState { currentCount: number } const initialState = { currentCount: 0 } export default class CounterPage extends React.Component<{}, CounterState> { constructor(props: {}) { super(props) // Set the initial state of the component in a constructor. this.state = initialState } public render(): JSX.Element { return ( <Main> <h1>Counter</h1> <p>The Counter is the simplest example of what you can do with a React component.</p> <p> Current count: <strong>{this.state.currentCount}</strong> </p> {/* We apply an onClick event to these buttons to their corresponding functions */} <button className="button" onClick={this.incrementCounter}> Increment counter </button>{' '} <button className="button button-outline" onClick={this.decrementCounter}> Decrement counter </button>{' '} <button className="button button-clear" onClick={this.resetCounter}> Reset counter </button> <br /> <br /> <p> <Link to="/">Back to home</Link> </p> </Main> ) } private incrementCounter = () => { this.setState({ currentCount: this.state.currentCount + 1 }) } private decrementCounter = () => { this.setState({ currentCount: this.state.currentCount - 1 }) } private resetCounter = () => { this.setState({ currentCount: 0 }) } }
Now go to
localhost:4000/counter and test your creation. If it works, we can continue to the next part.
Fetching APIs - a painless example
As mentioned earlier, almost every React + Phoenix tutorial that I ever found on the internet only went as far as rendering a single React component. They never seem to explain how to make both React and Phoenix properly so that they could communicate with each other. Hopefully this will explain everything.
Before you start, please please please make sure that on
router.ex, you have the
"/api" scope declared on top of the
/*path declaration. Seriously. I spent a whole week figuring why my API routes aren't working, and then only recently realised that I had the routing declarations the other way around.
router.ex
# ... scope "/api", PhoenixReactPlaygroundWeb do pipe_through :api # ...your API endpoints end # ... scope "/", PhoenixReactPlaygroundWeb do pipe_through :browser # Use the default browser stack # This route declaration MUST be below everything else! Else, it will # override the rest of the routes, even the `/api` routes we've set above. get "/*path", PageController, :index end
When we have them all set, create a new context for our sample data.
$ mix phx.gen.json Example Language languages name:string proverb:string
router.ex
scope "/api", PhoenixReactPlaygroundWeb do pipe_through :api + resources "/languages", LanguageController, except: [:new, :edit] end
You can also create a database seed to pre-populate the data beforehand. More information on how to do that is available on this Elixir Casts course.
Time for another sanity check! Run the Phoenix server and go to
localhost:4000/api/languages. If everything works correctly, you should see either a blank or populated JSON (depending on whether you preloaded the database first or not).
If everything works well, we can now proceed to our component.
Root.tsx
import * as React from 'react' import { BrowserRouter, Route, Switch } from 'react-router-dom' import Header from './components/Header' import HomePage from './pages' import CounterPage from './pages/counter' +import FetchDataPage from './pages/fetch-data' export default class Root extends React.Component { public render(): JSX.Element { return ( <> <Header /> <BrowserRouter> <Switch> <Route exact path="/" component={HomePage} /> <Route path="/counter" component={CounterPage} /> + <Route path="/fetch-data" component={FetchDataPage} /> </Switch> </BrowserRouter> </> ) } }
pages/fetch-data.tsx
import * as React from 'react';
import { Link } from 'react-router-dom';
import Main from '../components/Main';

// The interface for our API response
interface ApiResponse {
  data: Language[];
}

// The interface for our Language model.
interface Language {
  id: number;
  name: string;
  proverb: string;
}

interface FetchDataExampleState {
  languages: Language[];
  loading: boolean;
}

export default class FetchDataPage extends React.Component<{}, FetchDataExampleState> {
  constructor(props: {}) {
    super(props);

    this.state = { languages: [], loading: true };

    // Get the data from our API.
    fetch('/api/languages')
      .then(response => response.json() as Promise<ApiResponse>)
      .then(data => {
        this.setState({ languages: data.data, loading: false });
      });
  }

  private static renderLanguagesTable(languages: Language[]) {
    return (
      <table>
        <thead>
          <tr>
            <th>Language</th>
            <th>Example proverb</th>
          </tr>
        </thead>
        <tbody>
          {languages.map(language => (
            <tr key={language.id}>
              <td>{language.name}</td>
              <td>{language.proverb}</td>
            </tr>
          ))}
        </tbody>
      </table>
    );
  }

  public render(): JSX.Element {
    const content = this.state.loading ? (
      <p>
        <em>Loading...</em>
      </p>
    ) : (
      FetchDataPage.renderLanguagesTable(this.state.languages)
    );

    return (
      <Main>
        <h1>Fetch Data</h1>
        <p>This component demonstrates fetching data from the Phoenix API endpoint.</p>
        {content}
        <br />
        <br />
        <p>
          <Link to="/">Back to home</Link>
        </p>
      </Main>
    );
  }
}
All good! Now go to
localhost:4000/fetch-data and give it a try.
The result
If you're still here, congratulations, your setup is complete! Run
mix phx.server again and go through everything. If everything works, double congratulations!
You can now use this knowledge to build your next React + Phoenix application. The end result of this guide is available here for everyone to try out.
Good luck! Feel free to tweet at me if you have any questions.
Thanks to ~selsky for their help on proofreading this post! | https://resir014.xyz/posts/2017/08/09/a-phoenix-react-initial-setup-that-actually-works | CC-MAIN-2022-21 | refinedweb | 2,495 | 52.46 |
#include <wx/debugrpt.h>
wxDebugReport is used to generate a debug report, containing information about the program current state.
It is usually used from wxApp::OnFatalException() as shown in the Debug Reporter Sample.
A wxDebugReport object contains one or more files. A few of them can be created by the class itself but more can be created from the outside and then added to the report. Also note that several virtual functions may be overridden to further customize the class behaviour.
Once a report is fully assembled, it can simply be left in the temporary directory so that the user can email it to the developers (in which case you should still use wxDebugReportCompress to compress it in a single file) or uploaded to a Web server using wxDebugReportUpload (setting up the Web server to accept uploads is your responsibility, of course). Other handlers, for example for automatically emailing the report, can be defined as well but are not currently included in wxWidgets.
A typical usage example:
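For instance, a handler run from wxApp::OnFatalException() might look like this (a sketch following the pattern described below):

void MyApp::OnFatalException()
{
    wxDebugReportCompress report;

    // Add the standard files: the XML context and, under Win32, a minidump.
    report.AddAll(wxDebugReport::Context_Exception);

    // Let the user review (and remove) files before the report is processed.
    if ( wxDebugReportPreviewStd().Show(report) )
        report.Process();
}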
This enum is used for functions that report either the current state or the state during the last (fatal) exception.
Adds all available information to the report.
Currently this includes a text (XML) file describing the process context and, under Win32, a minidump file.
Add an XML file containing the current or exception context and the stack trace.
The same as calling AddContext(Context_Current).
The same as calling AddDump(Context_Current).
Adds the minidump file to the debug report.
Minidumps are only available under recent Win32 versions (
dbghlp32.dll can be installed under older systems to make minidumps available).
The same as calling AddContext(Context_Exception).
The same as calling AddDump(Context_Exception).
Add another file to the report.
If filename is an absolute path, it is copied to a file in the debug report directory with the same name. Otherwise the file will be searched in the temporary directory returned by GetDirectory().
The argument description only exists to be displayed to the user in the report summary shown by wxDebugReportPreview.
This function may be overridden to add arbitrary custom context to the XML context file created by AddContext().
By default, it does nothing.
This function may be overridden to modify the contents of the exception tag in the XML context file.
This function may be overridden to modify the contents of the modules tag in the XML context file.
This function may be overridden to modify the contents of the system tag in the XML context file.
Retrieves the name (relative to GetDirectory()) and the description of the file with the given index.
If n is greater than or equal to the number of files, then false is returned.
Gets the current number of files in this report.
Gets the name used as a base name for various files, by default wxApp::GetAppName() is used.
Returns true if the object was successfully initialized.
If this method returns false the report can't be used.
Processes this report: the base class simply notifies the user that the report has been generated.
This is usually not enough – instead you should override this method to do something more useful to you.
Removes the file from report: this is used by wxDebugReportPreview to allow the user to remove files potentially containing private information from the report.
Resets the directory name we use.
The object can't be used any more after this as it becomes uninitialized and invalid. | https://docs.wxwidgets.org/3.1.2/classwx_debug_report.html | CC-MAIN-2019-09 | refinedweb | 568 | 56.55 |
Is there a way to make an array of tokens from StringTokenizer? If so, how would I use it in a for or while loop to increase the size of the array.
could you give an example of what you want to do, sorry im a bit confused
A kram a day keeps the doctor......guessing
I am trying to read in 2 lines from a text file. The first line is the numbers of the dogs who came in 1st-4th place. The second line is the number of each dog and its breed(i.e. 24 Collie 9 Beagle). I want to know the easiest way to compare the numbers on the first line to the number and breed on the second line and output the rank and breed in order from 1st-4th. I was thinking an array of tokens gotten from using StringTokenizer might be a shorter way of doing this.
the String class has a split() method which will return an array of strings equal to the number of strings in the line separated by a specified delimiter.
ie, if you read in the line of the text file into a variable called "line" you could do:
stringArray = line.split(" ");
where stringArray is an array of strings. The only problem I can see is that the number of elements in the stringArray may have to be calculated first. Because once you define an array, you cannot change its size.
so I would firstly count the number of spaces in the line, then create the stringArray with the number of spaces as the size. Then use the split() method.
Once you have the array defined, do the same with the second line, and then compare elements in both arrays.
does this help
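If it helps, here is a rough, untested sketch of that split-and-compare approach (the sample data is made up to match the earlier description; in the real program the two strings would be read from the file):
// Rough sketch of the split()-and-compare approach described above.
public class DogResults {
    public static void main(String[] args) {
        String rankLine = "9 12 7 2";                           // dog numbers, 1st to 4th place
        String breedLine = "2 Collie 12 Pitbull 9 Mutt 7 Wolf"; // number/breed pairs

        String[] ranks = rankLine.split(" ");
        String[] numBreed = breedLine.split(" ");

        // For each place, scan the number/breed pairs for the matching dog number.
        for (int place = 0; place < ranks.length; place++) {
            for (int i = 0; i < numBreed.length; i += 2) {
                if (numBreed[i].equals(ranks[place])) {
                    System.out.println((place + 1) + ". " + numBreed[i + 1]);
                }
            }
        }
    }
}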
This is the easiest solution I could think of, there is no checking on valid input, if you dont have the dogs and rank line in sync, it just dies...
import java.util.*;
class Dog implements Comparator {
public int number=-1;
public String breed=null;
public int rank=-1;
public Dog () {}
public Dog (int number, String breed) {
this.number=number;
this.breed=breed;
}
public void setRank(int rank) {
this.rank=rank;
}
public String toString() {
return "rank: "+rank+"\tnumber: "+number+"\tbreed: "+breed;
}
public int compare(Object o1, Object o2) {
Dog dog1=(Dog)o1;
Dog dog2=(Dog)o2;
return dog1.rank-dog2.rank;
}
public boolean equals(Object obj) {
Dog aDog=(Dog)obj;
return aDog.rank==this.rank;
}
}
public class DogRun {
public static String [] lines={ // a mock file...
"9 12 7 2",
"2 Collie 12 Pitbull 9 Mutt 7 Wolf"
};
public static void main(String[] args) {
DogRun dogRun1 = new DogRun(lines);
}
public DogRun(String [] lines) {
ArrayList dogList=new ArrayList();
Hashtable ht=new Hashtable();
String [] ranks=lines[0].split(" ");
String [] numBreed=lines[1].split(" ");
// make dogs
for (int i=0; i<numBreed.length; i+=2) {
Integer number=new Integer(numBreed[i]);
String breed=numBreed[i+1];
Dog aDog=new Dog(number.intValue(),breed);
ht.put(number,aDog);
dogList.add(aDog);
}
// rank dogs
for (int i=0; i<ranks.length; i++) {
Integer number=new Integer(ranks[i]);
Dog aDog=(Dog)ht.get(number);
aDog.setRank(i+1);
}
// sort ranked dogs
java.util.Collections.sort(dogList,new Dog());
// print results
for (int i=0; i<dogList.size(); i++) {
System.out.println(dogList.get(i));
}
}
}
eschew obfuscation
“The noblest pleasure is the joy of understanding.” -Leonardo Da Vinci
IN THIS CHAPTER
This book is about Silverlight solutions. In this chapter we go behind the scenes and see what is meant by the Silverlight plug-in, what the components of a Silverlight application are, and what the role of the .NET Framework is.
This chapter is meant to give you a quick overview and set a foundation for the rest of the chapters. Let's begin.
Figure 11.1 XAML is the core of a Silverlight application.
Figure 11.2 The plug-in contains the necessary components to render Silverlight applications.
The Silverlight plug-in resides in the browser and is activated when you navigate to a web page that has the embedded code for a Silverlight application.
Figure 11.3 The CLR is launched, which then extracts XAML and other assemblies.
A Silverlight application is hosted on a web page using an HTML object tag. We discuss how this works in the next section, “Silverlight Host”.
Microsoft Visual Studio and Expression Blend both allow you to create Silverlight applications. These tools essentially create a XAML Application (.XAP) package, referenced by a link in an object tag.
<object data="data:application/x-silverlight-2," type="application/x-silverlight-2">
<param name="source" value="ClientBin/HelloWorld.xap" />
<param name="onError" value="onSilverlightError" />
<param name="background" value="white" />
<param name="minRuntimeVersion" value="4.0.50826.0" />
<param name="autoUpgrade" value="true" />
Figure 11.4 Silverlight and MediaPlayer control are available in the Toolbox.
<asp:Silverlight
<asp:MediaPlayer:
Figure 11.5 The Obj/Debug Folder contains the intermediate files.
The Obj/Debug folder stores objects and intermediate files before they are linked together.
Bin/Debug is the folder where the compiled files are stored; the files created as a result of the build process are described below.
To deploy your Silverlight application, you upload the TestPage.html and HelloWorld.XAP files to a web server. The HelloWorld.XAP file has all the needed files. The HelloWorld application lists the following in its .XAP manifest (AppManifest.xaml):
<Deployment xmlns="http://schemas.microsoft.com/client/2007/deployment"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
  <Deployment.Parts>
    <AssemblyPart x:Name="HelloWorld" Source="HelloWorld.dll" />
  </Deployment.Parts>
</Deployment>
Let’s talk about each of these to have a better understanding of the embedded code..
<Grid x:Name="LayoutRoot">
  <TextBlock Margin="120" FontSize="20">Hello World!</TextBlock>
</Grid>
Figure 11.8 shows the two namespaces with common elements.
Figure 11.8 The namespace contains the element definitions.
Silverlight XAML is also a powerful part of the application and is treated as a .NET class by the Common Language Runtime (CLR). There are three aspects to the application part of XAML:
During the build process, an intermediate partial class file with the extension .g.cs is generated for every XAML file. The roles of Page.g.cs include the following:
internal System.Windows.Controls.Grid LayoutRoot;
internal System.Windows.Controls.TextBlock TextHello;
System.Windows.Application.LoadComponent(this, new System.Uri("/HelloWorld;component/MainPage.xaml", System.UriKind.Relative));
These features are analogous to the CLR used for .NET applications. We will see more details on these in examples. The next part is the .NET Framework libraries available for Silverlight.
The .NET Framework libraries supported by Silverlight include the base class libraries, generics, collections, regular expressions, LINQ, and threading.
The base class library in Silverlight is meant to provide all the basic features of .NET applications, like input/output (IO) operations, text, regular expressions, collections, and generics. Here is the list of namespaces supported by the base class libraries:
The next set covers the .NET libraries related to XML, JSON data, and LINQ.
The communication foundation relates to data communication on the Web. It is meant to support AJAX technologies, web services, and the RSS and Atom feed formats.
Silverlight provides additional functionality for creating rich and interactive applications:
Figure 11.10 gives an overview of the .NET library available in Microsoft Silverlight.
Figure 11.10 Silverlight comprises the best of the .Net Framework.
Let’s start with two simple programs.
The first example application has already been discussed in the chapter, and the second example gives you a taste of event-driven Silverlight programming.
<Grid x:Name="LayoutRoot">
  <TextBlock FontSize="24" Margin="100">Hello World !</TextBlock>
</Grid>
<Application xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             x:Class="HelloWorld.App">
  <Application.Resources>
  </Application.Resources>
</Application>
<UserControl x:Class="HelloWorld.Page"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
  <Grid x:Name="LayoutRoot">
    <TextBlock Margin="120" FontSize="20">Hello World!</TextBlock>
  </Grid>
</UserControl>
using System;
using System.Windows.Controls;
namespace HelloWorld {
public partial class Page : UserControl {
public Page() { InitializeComponent();}
}
}.
<UserControl x:
<canvas x:
<textblock x:
<textblock x:
</textblock></textblock></canvas>
</UserControl>
I hope you find this useful. I would be delighted to hear your comments. Take care - Rajesh Lal.
As detailed in the preliminary release of qml.v1 for.
Before diving into the new, let’s first have a quick look at how a Go application using OpenGL might look like with qml.v0. This is an excerpt from the old painting example:
import (
"gopkg.in/qml.v0"
"gopkg.in/qml.v0/gl"
)
func (r *GoRect) Paint(p *qml.Painter) {
width := gl.Float(r.Int("width"))
height := gl.Float(r.Int("height"))
gl.Enable(gl.BLEND)
// ...
}
The example imports both the qml and the gl packages, and then defines a Paint method that makes use of the GL types, functions, and constants from the gl package. It looks quite reasonable, but there are a few relevant shortcomings.
One major issue in the current API is that it offers no means to tell even at a basic level what version of the OpenGL API is being coded against, because the available functions and constants are the complete set extracted from the gl.h header. For example, OpenGL 2.0 has GL_ALPHA and GL_ALPHA4/8/12/16 constants, but OpenGL ES 2.0 has only GL_ALPHA. This simplistic choice was a good start, but comes with a number of undesired side effects:
- Many trivial errors that should be compile errors fail at runtime instead
- When the code does work, the developer is not sure about which API version it is targeting
- Symbols for unsupported API versions may not be available for linking, even if unused.
So this is the stage for the improvements that are happening. Before detailing the solution, let’s have a look at the new painting example in qml.v1, that makes use of the improved API:
import (
"gopkg.in/qml.v1"
"gopkg.in/qml.v1/gl/2.0"
)
func (r *GoRect) Paint(p *qml.Painter) {
gl := GL.API(p)
width := float32(r.Int("width"))
height := float32(r.Int("height"))
gl.Enable(GL.BLEND)
// ...
}
With the new API, rather than importing a generic gl package, a version-specific gl/2.0 package is imported under the name GL. That choice of package name allows preserving familiar OpenGL terms for both the functions and the constants (gl.Enable and GL.BLEND, for example). Inside the new Paint method, the gl value obtained from GL.API holds only the functions that are defined for the specific OpenGL API version imported, and the constants in the GL package are also constrained to those available in the given version. Any improper references become build time errors.
To support all the various OpenGL versions and profiles, there are 23 independent packages right now. These packages are of course not being hand-built. Instead, they are generated all at once by a tool that gathers information from various sources. The process can be tersely described as:
- A ragel-based parser processes Qt’s qopenglfunctions_*.h header files to collect version-specific functions
- The Khronos OpenGL Registry XML is parsed to collect version-specific constants
- A number of tweaks defined in the tool’s code is applied to the state
- Packages are generated by feeding the state to text templates qml package now leverages Qt for resolving all the GL function entry points and the linking against available libraries.
Going back to the example, it also demonstrates another improvement that comes with the new API: plain types that do not carry further meaning such as gl.Float and gl.Int were replaced by their native counterparts, float32 and int32. Richer types such as Enum were preserved, and as suggested by David Crawshaw some new types were also introduced to represent entities such as programs, shaders, and buffers. The custom types are all available under the common gl/glbase package that all version-specific packages make use.
Documentation is also being imported for the generated functions (see MultMatrixd, for example). For now the documentation is being imported manually, but the final process will likely consist of some automation and some manual polishing.
Function polishing
The standard C OpenGL API can often be translated automatically (see BindBuffer or BlendColor), but in other cases the function prototype has to be tweaked to become friendly to Go. The translation tool already has good support for defining most of these tweaks independently from the rest of the machinery. For example, there is logic that changes the ShaderSource function from its standard form into something convenient in Go.
Other cases may be much simpler. The MultMatrixd tweak, for instance, simply ensures that the parameter has the proper length, and injects the documentation:
name: "MultMatrixd",
before: `
if len(m) != 16 {
panic("parameter m must have length 16 for the 4x4 matrix")
}
`,
doc: `
multiplies the current matrix with the provided matrix.
...
`,
and as an even simpler example, CreateProgram is tweaked so that it returns a glbase.Program instead of the default uint32.
name: "CreateProgram",
result: "glbase.Program",
If you’d like to help somehow, or just ask questions or report your experience with the new API, please join us in the project mailing list.
I have just read through the patterns section of Floyd's EJB Design Patterns book. I have been using Entity Beans (and Session Beans) for about one year doing a messaging system. I am also involved in producing a component that provides user administration features through JSPs. The JSPs deal with domain objects like person, company, role, address, contact details etc. Rightly or wrongly, I want the JSPs to deal with these domain objects as plain old java objects because it makes them more readable and meaningful. So, here is the problem I have with using EB in this context.
For every domain object:
* I require a home interface
* I require a remote interface
* I require an implementation class
* If I am using the Business Interface pattern, I require a business interface
* I require a standard deployment descriptor
* I require an app server specific deployment descriptor
* I require a session bean with methods to get the data my JSPs require
* I require a DTO
* I require a DTO factory, or several of them to match the methods on my Session Bean.
If I have 40 domain objects (which is what I have and it ain't a big system), this is a helluva lot of classes and files. Even with XDoclet or the most capable IDE, its still a helluva lot of work.
Many of the patterns in the book are, in my view, aimed at improving EB performance and compensating for their other deficiencies (like the mixing of remote capabilities and persistence as has been discussed already on this site). So it seems to me we are jumping through hoops to use EB as nothing more than a mechanism to keep SQL out of our code (load from DB using EB, then convert them to DTOs).
I also want to keep SQL out of my code and have as much transparent persistence as possible, but I don't want to have to do all of the above. In fact, all of the above indicate to me that EB is broken, or at best, badly maimed and that we should be looking for something better.
When I look at ObjectRelationalBridge or JDO I cannot help but lean towards them. I have plain old java objects and transparent persistence and DTOs all in one. I should still use the Session Facade of course, but this time its purely for layering purposes instead of compensating for the poor performance of my persistence layer.
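To make the comparison concrete, here is a rough sketch of what that plain-object route looks like (the class and field names are invented for illustration, and the persistence calls are the standard javax.jdo ones; the exact setup depends on the JDO or ORB implementation):
// A plain old Java domain object - no home/remote interfaces, no deployment descriptors.
public class Person {
    private String name;
    private String email;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
}

// Persisting it transparently with JDO (sketch only):
// PersistenceManager pm = pmf.getPersistenceManager();
// pm.currentTransaction().begin();
// Person p = new Person();
// p.setName("Mike");
// pm.makePersistent(p);
// pm.currentTransaction().commit();
// pm.close();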
I have no idea how ORB or JDO will perform or scale in relation to EB, but its clear my development time is less, my code is clearer and my maintenance effort is lower. Counting the effort required to add or remove an attribute to all of the files listed above depresses me.
This is a contentious issue. I have several colleagues at the company I am working for barking at me that I must use EB because it is proven, scalable (and since they are architects they further clarify that they are scalable horizontally and vertically ;-) ), performant and standard. Well they are also a pain in the butt, and I'd like to see if there is a better way.....?
EJB design: Why I hate Entity Beans
Why I hate Entity Beans (66 messages)
- Posted by: Mike Hogan
- Posted on: May 27 2002 04:09 EDT
Why I hate Entity Beans
You know, i'm in the same disposition as you are, but I'm not really working for a company. I have a site that I'd like to build that has around 56 tables. It's a pretty big site where maybe some of it could be put into small xml files or something and maybe then it'll go down to 40-45 but still - that's a ton of work.
- Posted by: Ken Egervari
- Posted on: May 27 2002 06:54 EDT
- in response to Mike Hogan
I also have little money, so I don't have $5000 Canadian to get Rational XDE and have all my problems solved. If I did, I wouldn't be using Orion or MySQL. The thing that burns me the most is that the DD files, the EJBs (the EJB itself and the localhome and local interfaces), the DTOs, the DTO factories, etc. can *all* be generated through the SQL creation logic with maybe a few design decisions specified to make sure it comes out okay. To me, the spec should have emphasized that all this be generated by the app server or something, because it's kind of ridiculous all the monkey work that is required. I mean, after you have the design on paper, the work is very tedious and it's not rewarding, regardless of how scalable, performant, etc. it is :/
I don't really have an answer to your question other than that I am with you =) JDO sounds like a sensible solution to the problem, but they will never put it into j2ee directly, so that makes it really tedious and just more load of stuff to learn and I dunno if the ROI on the learning and doing is really paying off. It's really too bad.
Why I hate Entity Beans
Middlegen is a free, open sourced tool that reads database metadata and generates Entity Beans OR JDOs that can access the same database. If you generate Entity Beans, you can post process them with XDoclet (which is also open source and free).
- Posted by: Aslak Hellesøy
- Posted on: May 31 2002 09:44 EDT
- in response to Ken Egervari
Tools like this minimises the development overhead dramatically. If you can have 90% of your code (including deployment descriptors) generated for you, I don't see the problem why there are so many files involved. A large number of files is not a problem as long as you don't have to write them yourself.
Those of you who want to try out Middlegen/XDoclet are encouraged to try the CVS versions. Both tools have evolved dramatically since their last releases.
Aslak
Why I hate Entity Beans
-Just a comment:
- Posted by: Aslak Hellesøy
- Posted on: May 31 2002 11:00 EDT
- in response to Aslak Hellesøy
It strikes me how alienated people are to code generators. The tools I mentioned in my previous post aren't the only ones available of course. There are heaps of both commercial and open source tools that virtually remove the problems you are complaining about in this thread: Complexity.
A car is a complicated piece of machinery. Still, a lot of people use them. Guess why? -Because they don't have to build them themselves! Somebody else does. Only crazy people would build their own car.
For more info about this, just read this thread:
Metadata-based code generator. It has a lot of useful comments and links to other good tools.
Aslak
Why I hate Entity Beans
Aslak,
- Posted by: Claudio Morgia
- Posted on: May 31 2002 11:43 EDT
- in response to Aslak Hellesøy
I agree with you. The fact that EJBs are internally complex and are composed of multiple files shouldn't be important.
It seems to me that people often think about EJB in terms of performance, losing focus on another important aspect: speed of development.
EJB containers have been created factoring out all the most common solutions from existing applications/frameworks/solutions and transforming them into common services.
They are obviously complex things but developers can be shielded from the container's complexity by any good IDE.
It's my opinion that EJBs aren't perfect and don't fit in the same way for every scenario, but they're a good approach to the architectural design of complex systems.
Again, EJBs can be a good solution to reduce development costs, factoring out common solutions and focusing on the real business logic.
Claudio
Why I hate Entity Beans
The power of EJB is the services that the container can provide to the developer: a concurrency service, a transaction service, a consistent container-based framework. It doesn't mean your classes/interfaces are simpler than JSP/Servlets.
- Posted by: Marco Wang
- Posted on: May 27 2002 13:08 EDT
- in response to Mike Hogan
We are developing a CORBA-based system; we have to handle a lot of thread concurrency issues by ourselves, we have to invent our own transaction-like mechanism (because not all ORBs support OTS), we have to develop our own plugin framework, we have to write code to implement our configuration interface...
If given the chance to determine which one I want to use next time, I will vote for EJB/Application Server. We still have pain with EJB, but less than with CORBA.
Why I hate Entity Beans
You say that your system is not big but that you have 40 domain objects. I would suggest trying to make the granularity of your components coarser by using one of the patterns posted here on TSS.
- Posted by: Colin Cassidy
- Posted on: May 28 2002 05:29 EDT
- in response to Mike Hogan
EJB components do carry a development and performance overhead - but in return provide deployment time encapsulation and a number of enterprise level services. Choose a level of granularity that is too fine, however, and this overhead becomes disproportionately high.
If there is no natural partitioning of your domain objects at any level, then (as you suggest) consider exposing them at the service level via a session bean interface. Whichever technique you adopt, you can use JDO - I am not a great fan of CMP except in trivial systems.
Colin.
Why I hate Entity Beans
Colin,
- Posted by: Mike Hogan
- Posted on: May 28 2002 06:05 EDT
- in response to Colin Cassidy
I hear what you are saying about making the EBs coarser, but the 40 objects I am talking about constitute the domain model we think is right. If EB cannot persist the domain model we think is right, we think EB is wrong. That view might be a little fundamentalist (I hope I am not on George Bush's hit list), but it does suggest the existence of something better. JDO seems promising, but is it yet mature and proven? Also I have not yet used it. I'd love to hear from anybody who has.
JDO
Mike,
- Posted by: Tom Davies
- Posted on: May 29 2002 01:31 EDT
- in response to Mike Hogan
I've been using Kodo JDO () for a couple of months. I find it very easy to use, especially if you let it produce your schema for you, and find that it integrates well with EJBs (I'm using jBoss).
I don't see any problem with JDO being 'outside J2EE'. With JCA JDO can participate in transactions just as happily as other persistence techniques.
I haven't yet tested performance/scalability, but there is no fundamental reason that I can see that would make this worse than entity beans. Kodo has a cache which can span JVMs, which you can use if your application has sole access to your DB. Yes, you are dependent on your (JDO) vendor's cache scheme, but at least this doesn't lock you into an app server vendor, as CMP can.
I recommend JDO.
Tom
Why I hate Entity Beans
My group is moving away from EJB for many of the reasons you discussed. Quite simply, for most projects, EJB's are like swatting a fly with a pickup truck. Too much of everything. We _do_ use some EJB's, but there is usually a specific need associated with EJB's that causes us to choose them in an instance (like we need/want container lifecycle management and callbacks, or remote etc).
- Posted by: Jason McKerr
- Posted on: May 28 2002 12:11 EDT
- in response to Mike Hogan
JDO looks nice, but none of the implementations seem to be "there" yet, except maybe Castor (which isn't JDO compliant). Object Relational Bridge is really nice, and is in the process of implementing JDO.
I have just started using Object Bridge for one of my projects, and so far its really nice. It gives me far greater flexibility than EJB, and it's a lot less overhead. The downside to this is that transactions and security are not code independant, but that's not that big a deal. How many people really write one EJB and then use it in different declarative environments (ejb-jar.xml) for security and transactions? We never really need to.
I've been talking to other J2EE architects and developers and I keep seeing this same little mental cycle among these people. First they start out and say, "Wow, EJB is the greatest thing ever." Then they do some large project(s) and the thinking becomes, "Man, this is a lot of development, maintenance, and overhead." Finally a lot of these people are saying, "Unless you _REALLY_ need it for technical reasons, skip EJB and do something else." Is there another step? Maybe I just haven't gone through the whole cycle...
Maybe part of my problem is that I haven't worked with EJB 2.0 yet.
-Newt
Why I hate Entity Beans
Oy...I'm just starting my first really major EJB project at a new company, and somehow I missed that whole "EJBs are the greatest thing ever" and landed right in the other views of "wow that's a lot of work" and "huh maybe this is overkill".
- Posted by: Kirk Israel
- Posted on: May 28 2002 13:16 EDT
- in response to Jason McKerr
Why I hate Entity Beans
OK, Entity beans aren't that bad, in terms of complexity, compared to what I've found implemented here in my new job!!! At my last job I was designing a scheduling system that had 20 entity beans in it, spoke XML over HTTP to a graphical client, and had to be generalist in its nature so users could customise what sort of scheduling applications it was being used in. So, it was fairly complex.
- Posted by: scot mcphee
- Posted on: May 28 2002 22:06 EDT
- in response to Jason McKerr
But, compared to what I've found in my new job, it's nothing. There are three session beans, and a whole lot of ugly 'model' classes that the database access object uses reflection on in order to get the field names (i.e. the model class's fields have to exactly match the database table's), with the table name plus some other things, like a single allowable foreign key lookup, being stored in a properties file. So much is determined at run-time in the introspection, it's insane. Just to add a piece of functionality that would normally take me 4 hours takes me 4 DAYS because of the amount of time in code inspection. Even to rewrite it (what I'm doing now) to get rid of the most insane parts of the design, it's the most hideous thing imaginable. Some 'model' classes model 5 different tables (I discovered this morning after much detailed code inspection). And did I mention that there are over 200 classes to contain 10 domain objects and about 6 real business methods!!!
My predecessor spent the last six months putting out the major fires and getting rid of as many of the bottlenecks as possible. He had inherited this design off some half-arsed consultants who over-engineered the whole package and obviously never read any GoF patterns book let alone a J2EE one. They employed visual basic programmers to write the thing -- it's a mess! Needless to say the consultancy company is no longer in business.
Anyway the point I'm trying to make is that if they had at least written a poorly-designed EJB entity system, I'd still know how and where to look for the things I need to know about the system, what to do in order to re-engineer it, and where the likely perf. problems are.
So please don't complain to me about the complexity of EJB. ;-)
regs
scot.
Why I hate Entity Beans
Scott,
- Posted by: Jason McKerr
- Posted on: May 28 2002 23:13 EDT
- in response to scot mcphee
I'm not sure that your example is a particularly good one. All of us have probably been stuck in projects like yours.
The question at hand isn't whether or not EJB is good compared a really badly designed system like the one you're working with. The question is one of some standard methodology for developing these type of systems. Even with EJB, systems like the one you are referring to can happen, as they can with just about any development convention.
My question to you is: Have you worked with EJB, JDO, big JDBC hand coded projects, and Object Relational mapping?
If you _have_ worked with at least a few of these, would you still choose EJB as your standard for development.
Of course, a requirement like distributed would make the choice obvious. pretend like you're not looking at a model that is that specific.
To be honest, if you haven't worked with at least a couple of the alternatives, I don't think you're necessarily qualified to make a judgement. That would be like saying "I hate running a mile" even though I can't run a mile. Only once you've done it do you have the mental "choice" to say, "I hate running a mile."
I've worked with all of them, although less JDO than the others. And I'm beginning to move away from EJB towards JDO/ORM. These tools are easy to use and can provide lots of value just as EJB's can. What's more is that I'm finding that a lot of our peers are beginning to feel the same way.
-Newt
Why I hate Entity Beans
I haven't worked with a JDO project, however, the others yes. I've done reasonably large scale financial systems using hand-rolled JDBC, for example. I think the current project may have been generated or based on the generated code from some sort of O-R tool, there are a couple of 'automatically generated' comments at the top of a few of the map classes (by what it doesn't say).
- Posted by: scot mcphee
- Posted on: May 29 2002 03:44 EDT
- in response to Jason McKerr
If I had to do anything that had a web based front end, I certainly would consider EJB. Probably, a mix of EJB entity for major objects and session beans fronting JDBC access for dependant objects and large row number read-only queries.
There are perfectly good EJB tools available in all the major IDEs which make working with EJB system as easy as any O-R mapping tool.
The thing is, there are two questions to answer: First what's the best architecture for the system? Second, what's the most productive architecture to code to? I am not here to say "EJB is good in all circumstances" or "EJB is best" but on the other hand when I hear "EJB is bad" I can only point out, it's much more hassle dealing with BAD DESIGN IN GENERAL than in any particular architecture in particular. What works is good and works well is better and what works and is easy to maintain is best. I have tended to observe that it all starts with good, clean design in the first instance, and that makes the biggest impact on maintainability of the code.
regs
scot.
Why I hate Entity Beans
Fair enough. I didn't say that EJB is bad, I just see a lot of people (myself included) moving away from it. I still like it for doing some jobs, as I said above, but it's no longer my standard convention.
- Posted by: Jason McKerr
- Posted on: May 29 2002 11:57 EDT
- in response to scot mcphee
Well said though, and true enough.
-Newt
Why I hate Entity Beans
Newt et al,
- Posted by: Chi Isirimah
- Posted on: June 04 2002 00:44 EDT
- in response to Jason McKerr
You guys should try WebObjects, it is a lot more advanced than all this EJB stuff. EJB in my opinion is very primitive and has too many classes to access a single database table. I just started working with EJB, and it's not that it is too complex, but as most of you have rightly pointed out, there is too much maintaining to do over a minor change. It surprises me how better technologies don't get noticed.
So as I pointed out, you might want to consider playing with it and you will see why EJB is overkill. The only problem with WebObjects is that its market share is poor, but if you tried it, you would dump EJB in a heartbeat. Give it a shot.
Chi.
Why I hate Entity Beans
While it is right that WebObjects is a lot smoother in development and much less complex, at deployment time pure J2EE app servers are more powerful (e.g. granularity in deployment is not by instance).
- Posted by: bob farmer
- Posted on: June 06 2002 11:43 EDT
- in response to Chi Isirimah
However, WebObjects has a OR-tool, that is quite decent: EOF ... what I would like to know is, if somebody has ever tried to integrate EOF in EJB?
Why I hate Entity Beans
Yes, it's called GemStone/S.
- Posted by: Jeffrey Panici
- Posted on: June 07 2002 16:17 EDT
- in response to Jason McKerr
Cheers,
Jeff Panici
Why I hate Entity Beans
I've been working with EJBs for over 3 years now. Needless to say, all the points Mike made has been a pain in the neck for me, too.
- Posted by: Ferhat SAVCI
- Posted on: May 29 2002 06:40 EDT
- in response to Mike Hogan
On our last project, we have used IBM's WebSphere Studio Application Developer tool and now I am convinced that the cause of our problems had been incomplete, incompetent tools. Developing EJBs is a cumbersome business without the right tools. There's no denying that EJBs (both in theory and practice) are not straightforward; but a good tool can hide all the complexity, letting you focus on your development.
Why I hate Entity Beans
So now we are faced with the choice between forking out for an IDE that compensates for EBs lackings OR forking out for a commercial JDO implementation. Ain't life grand ;-)
- Posted by: Mike Hogan
- Posted on: May 29 2002 12:10 EDT
- in response to Ferhat SAVCI
Why I hate Entity Beans
I disagree... proper usage is the key to ... using EB's for every sing domain object is an overkill... some things (which donot require frequent transactional updates) donot qualify as a "value-for-time" EB.. i.e. u'd be wasting time if u implemented it as an EB. EB's when used properly provide good performance and reliability. The systems have lesser critical bugs. but this does not illustrate the fact that Using EB's correctly is no casual deal... u need to plan and architect... and i would suggest not testing this on a primere project... do sand box tests...learn form your mistakes here or on small projects... this is not an article...just a post... so i will not delve into what "using properly" means.. but i'll be happy to delve in an email conversation... if need be... and for "Lot of code files"... all i can say is packages and proper tools... (together's my fav) deal with these very effectively. On my current project we have EJB's numbering close to about 100..120... 6 developers.. some 600 odd screens... but I donot see the developers overwhelmed... a key reason is the effective usage of patterns.
- Posted by: Aditya Anand
- Posted on: May 29 2002 17:20 EDT
- in response to Mike Hogan
Cheers
Adi
adiand@usa.net
Why I hate Entity Beans
Hey, wait a minute!
- Posted by: Web Master
- Posted on: May 29 2002 17:45 EDT
- in response to Mike Hogan
As in many "I hate EJB" forums everybody seems to miss the whole point on EJB (and, of course, say that EJB=Entity EJB)
The last project I was involved in was somehow a typical WebStore. It had 65 fine-grained Entity Beans for Domain Object representation, 1 Stateful Session Bean (Shopping Cart) and 1 Stateless Session Bean modeling the only critical business process that needed to be TRANSACTIONAL
(that is, sending the order to the warehouse). Of course, we used EJB 1.1, so we didn't have local interfaces.
How much time did it take me to develop all those beans? Make your guesses, but if you're informed you'll guess "less than 4 hours". I used a home-brew code generator (2 hours of coding). In 1 hour I wrote the definitions of the beans (properties files, 0 Java code, less than 10 lines in each) and the code generator created all the classes, deployment descriptors and DB scripts I needed. Half an hour customizing some descriptors and half an hour running the DB script, deploying the beans and testing the stuff.
Now that we put aside all the code complexities of EJB, we are left with only 3 complaints:
1.- It doesn't map very well to a relational database (this is called "Object-Relational Impedance Mismatch" in Ed Roman book)
2.- The Network Roundtrip for RMI calls makes multiple finegrained calls expensive
3.- BMP sucks
For 1) one little known trick is that you can define the Datasource your ENTITY BEANS will be using, (on EJB1.1 this was done on a app-server specific basic. I readed that EJB2.0 has a pluggable persistance mechanism. I'll check the spec ASAIC) so you can use CocoBase, toplink or a JDO implementation. You'll need a JDBC driver anyway, but it's worth a shot. (I hear of people using Cocobase with Orion App Server).
OR, you can use an app server that has a powerfull O/R mapping built-in (like orion)
For 2), with local interfaces that's solved. And most robust app server detect if the call is a local one and don't use RMI anyway.
For 3) I agree. Simply don't use BMP. The only reason I can see for BMP is complex queries in findBy methods. Use the "JDBC for reading" pattern and voila, no more BMP.
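For anyone who hasn't seen it, a rough sketch of the "JDBC for reading" idea: read-only listings go straight through JDBC inside a session bean method and come back as plain DTOs, while entity beans stay in the picture for transactional updates. Table, column and class names below are invented for illustration:
// Inside a stateless session bean: read-only listing via plain JDBC.
public java.util.List getPersonSummaries() throws java.sql.SQLException {
    java.util.List results = new java.util.ArrayList();
    java.sql.Connection con = dataSource.getConnection(); // container-managed DataSource
    try {
        java.sql.PreparedStatement ps =
            con.prepareStatement("SELECT id, name, email FROM person");
        java.sql.ResultSet rs = ps.executeQuery();
        while (rs.next()) {
            // Plain serializable DTO - no entity bean is touched for reads.
            results.add(new PersonDTO(rs.getInt(1), rs.getString(2), rs.getString(3)));
        }
        rs.close();
        ps.close();
    } finally {
        con.close(); // always give the connection back to the pool
    }
    return results;
}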
Now, EJB technology is not a "silver bullet". It eases the burden of resource pooling, explicit transaction demarcation, persistence and multi-threading. Used properly, it lets you engineer a system in less time. Misused, it will only give you FUD and frustration.
And, yes, I have used JDBC, EJB, O/R mapping tools, CORBA and early version of JDO.
This was quite a rant... But I hate to see EJB beaten to death without mercy...
PD: I didn't read all of the post before writing this one.
Scot mcphee already said some stuff that are very true.
Why I hate Entity Beans
Having use all of the above (except JDO, which, like most, I've only read about since its just become standard), I can say, absolutely, without a doubt...It all depends on the problem you're trying to solve ;)
- Posted by: David Churchville
- Posted on: May 29 2002 21:05 EDT
- in response to Web Master
If you have a non-transactional, read-mostly system, its *usually* overkill to use Entity EJBs. Session EJBs are almost always a good idea if your system might be used by different clients. I don't use stateful session beans for anything, your mileage may vary. Generally, though this is because Web apps work pretty well with session state or by storing state in a database.
If you have a Web-only, will never be anything else kind of system, I like using plain old Java class "business delegate" interfaces along with Data Access Objects that contain the SQL and JDBC. You can use these with app server DataSources to get pooling, etc.
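As a rough illustration of that layering (all names invented), the web tier only ever sees the delegate interface, and the DAO keeps the SQL in one place:
// Business delegate: a plain Java interface the web tier codes against.
public interface PersonService {
    java.util.List findPeopleByCompany(int companyId);
}

// Data Access Object: plain JDBC behind the delegate implementation.
public class PersonDao {
    private final javax.sql.DataSource ds;

    public PersonDao(javax.sql.DataSource ds) { this.ds = ds; }

    public java.util.List findByCompany(int companyId) {
        // ... JDBC query against the app server DataSource goes here ...
        return new java.util.ArrayList();
    }
}

// Later, the PersonService implementation can delegate to a session bean
// (and that bean to entity beans) without the JSPs noticing the change.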
ORM tools are pretty good if you *need* everything to be an object. I find that for most query applications, this isn't the case, and is in fact often harmful. Its OK to return result sets (or RowSets, Tables, etc.) from a query if you have fairly dynamic reporting or display needs (custom user sorting, searching, etc.).
Summary: Start with straight layering (Business Delegate/Facade and DAOs), then if needed, insert session beans (behind the delegate), and finally entity beans (in the session beans). By layering, you're never coded into a corner.
As mentioned earlier, you can always just do code generation ;) There are at least a half dozen products that can take a DB schema and crank out all the layers for you. Use them.
Why I hate Entity Beans
- Posted by: Nick Minutello
- Posted on: May 29 2002 21:11 EDT
- in response to Mike Hogan
>>Maybe part of my problem is that I haven't worked with EJB 2.0 yet.
I think this is a primary part of the problem. CMP2.0 is not just an optimisation - its a quite large change. You definitely DONT want to be pissing about with CMP1.1 any more. Most complaints (CMP is slow, for example) comes down to people who have used CMP1.1 and dont recognise what CMP2 means.
I will agree, however, that there is a large barrier to entry when it comes to using EJB - entity beans in particular. However, there are many "barriers" in EJB that are there for good reasons.
Barrier 1: Cost. Most decent containers are not cheap. (I havent checked JBoss in the last 3 months to see how their container has improved). It would obviously be better without this barrier ;-)
Barrier 2: Complexity. There are quite a few classes and deployment descriptors to write.
But lets look at the counter-arguments
Cost: Its true that the high-end containers are expensive. However, the nice part of using a standard approach is that you can migrate up as you can afford/require it - its standard.
Complexity: Too many classes. This the most common one I hear.
Interface, Home, Remote... :
The truth is that you always want to code to an interface when it comes to your domain model. Your domain model will be with you for a long time - and lots of things will depend on it. You need to be able to change implementation - so a Interface and Factory (remote/local and home) are just good practice if you expect your system to have any lifetime.
DTO's etc:
Well, you need some kind of object for data transfer - but it's important to remember that DTO's are use-case oriented. You need to decouple the DTO representation from the domain model (otherwise all your client systems become inter-dependent, via the domain model)
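A tiny, invented example of what "use-case oriented" means here - the DTO carries only what one screen needs, and is a separate class from the domain interface:
// Domain interface (lives with the model; may have many fields and behaviour).
public interface Person {
    String getName();
    String getEmail();
    Company getCompany();   // assumes a Company domain interface exists
    // ... many more accessors and business methods ...
}

// Use-case oriented DTO for, say, a "person list" screen - flat and serializable.
public class PersonListEntryDTO implements java.io.Serializable {
    private final String name;
    private final String companyName;

    public PersonListEntryDTO(String name, String companyName) {
        this.name = name;
        this.companyName = companyName;
    }

    public String getName() { return name; }
    public String getCompanyName() { return companyName; }
}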
JDO significantly lowers the barrier to entry - when it comes to O-R persistance. However, keep in mind that for most serious projects, these complexities will exist no matter whether it is JDO or Entity Beans. JDO just gives you more rope to hang yourself in my mind (not enforcing the use of interfaces). JDO is without doubt a good competitor to JDBC (you get a graph of objects rather than a flat ResultSet) - but it is the component model of EJB that is interesting.
Ultimately, keep in mind that a standards based system (be it Entity Beans or JDO) will be many times easier to maintain than some home-brewed or obscure persistance "framework". At the very least, if you place a job ad for "EJB experience" or "JDO experience" in 2 yrs time, you will get some people who know it (even in 4-5 years time). If it is home-brewed, you can be sure that you wont have anyone who knows the "framework" after just 2 years - and then it will be unmaintainable (no matter what the metrics are on the number of classes).
Even in 10 years, you will still find books for EJB and JDO...
There is always room for improvement - but no strikingly "better way" has shown itself recently, in my mind.
Cheers,
Nick
Why I hate Entity Beans
- Posted by: Daniel Lang
- Posted on: May 30 2002 02:52 EDT
- in response to Nick Minutello
It really scares me when people say that EJB 2.0 is a vast improvement on 1.1. I've used only 2.0, and it is a hard slog to get anything done.
Take a few examples:
(a.) Relationships.
These need to be defined in:
1. ejb-jar.xml
2. server specific xml
3. bean
4. local/remote interface
5. home interface (create methods)
6. Value Objects maybe too
So change a relationship on your DB, and you've got some work to do.
(b.) Finders
1. ejb-jar.xml
2. home interface
Every time anybody wants to add a finder, they need access to the ejb-jar.xml file. What does a finder have to do with deployment?
?
I really wonder if the benefits of EJB are worth it.
Daniel.
Why I hate Entity Beans
- Posted by: Nick Minutello
- Posted on: May 30 2002 03:51 EDT
- in response to Daniel Lang
>>I really wonder if the benefits of EJB are worth it.
Compared to..?
Why I hate Entity Beans
Daniel,
- Posted by: Web Master
- Posted on: May 30 2002 09:44 EDT
- in response to Daniel Lang
First remember, as has been pointed out lots of times, that EJB is not ONLY entity beans, and that entity beans are not ONLY a way to persist. They represent your Domain Objects; it's just that these objects know how to persist themselves, and it just happens that they support transactions, can be pooled for resource efficiency, etc., etc.
Let's examine all your examples:
>Relationships.
>These need to be defined in:
>1. ejb-jar.xml
>2. server specific xml
>3. bean
>4. local/remote interface
>5. home interface (create methods)
>6. Value Objects maybe too
>So change a relationship on your DB, and you've got some >work to do.)
(Free advertisement: Check the Ed Roman book, available free for download in the resource section. Page 325)
>(b.) Finders
>1. ejb-jar.xml
>2. home interface
>Every time anybody wants to add a finder, they need access
>to the ejb-jar.xml file. What does a finder have to do
>with deployment?
You declare the signature of the method in the factory (where it belongs). Now, instead of implementing it in the bean you declare it in the ejb-jar.xml, because it's easier to debug and modify. I prefer to modify an XML file and redeploy rather than recompile a class and redeploy, but that's just me.
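A rough sketch of that split, with invented names - the finder's signature sits in the home interface, and the matching EJB-QL sits in ejb-jar.xml:
// Local home interface: only the finder signature is declared in code.
public interface PersonLocalHome extends javax.ejb.EJBLocalHome {
    PersonLocal create(Integer id, String name) throws javax.ejb.CreateException;
    java.util.Collection findByCompany(Integer companyId) throws javax.ejb.FinderException;
}

// ejb-jar.xml (sketch of the matching query element, not a complete descriptor):
// <query>
//   <query-method>
//     <method-name>findByCompany</method-name>
//     <method-params><method-param>java.lang.Integer</method-param></method-params>
//   </query-method>
//   <ejb-ql>SELECT OBJECT(p) FROM Person p WHERE p.companyId = ?1</ejb-ql>
// </query>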
?
Ok, let's suppose you want to use JDBC:
try {
    Class.forName(MyJDBCDriver);
} catch (ClassNotFoundException e) {
    // there is no MyJDBCDriver
}
Connection con = null;
try {
    con = DriverManager.getConnection(myURL, myLogin, myPassword);
} catch (SQLException e) {
    // Couldn't connect
}
Statement sta = null;
try {
    try {
        sta = con.createStatement();
    } catch (SQLException e) {
        // can't create statement
        con.close(); // beware of leaking connections!!!
        return;
    }
    try {
        sta.executeUpdate("INSERT INTO yourtable(<list-of-field-names>) VALUES(<list-of-field-values>)");
    } catch (SQLException e) {
        con.close(); // beware of leaking connections!!!
        // can't create the stuff
        return;
    }
    try {
        sta.close();
    } catch (SQLException e) {
        // cannot close statement
    }
    con.close();
} catch (SQLException e) {
    // Cannot close connection
}
That's WAY a lot more code. Of course, you can optimize it encapsulating the connection and statement creation, and ALSO you can encapsulate the Home adquisition or the create method..
Why I hate Entity Beans
- Posted by: Daniel Lang
- Posted on: May 31 2002 02:59 EDT
- in response to Web Master
Rafael,
Well I can see that you have read the books on EJB... I get the feeling though that you haven't quite gotten around to actually using Entity beans yourself.
> First remember, as it has pointed out lots of times, that
> EJB are not ONLY entity beans, and that entity beans are > not....
Read the title of the thread. All my examples were of Entity beans, except one which is general to all EJBs.
a) Relationships
>)
Um (stunned silence), I'll make it a bit clearer for you. I want to add a new CMP (yes that's CMP) bi-directional relationship, using existing keys in the DB. (ie the keys already exist in the DB and in the BEAN, but the relationship isn't defined yet).
Lets see what I have to do:
1. ejb-jar.xml
- Add relationship tags
- remove CMP fields
2. server specific xml
- Add the relationship tags
- remove CMP fields
- (* Note I've only used Weblogic, so this may differ)
3. bean
- Add Relationship getters & setters in both beans.
- Remove the CMP field getters & setters.
- Remove the CMP fields from create.
4. local/remote interface
- Remove CMP field getters & setters
5. home interface (create methods)
- Remove CMP fields from create
6. Value Objects
- Remove CMP fields from both VOs
So that's 1 ejb-jar.xml + 1 weblogic-jar.xml + 2 beans + 2 home interfaces + 2 local/remote interfaces + 2 Value Objects = 10 files to modify. Feel free to recalculate that for me.
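For anyone counting along, step 3 above boils down to adding abstract CMR accessors like these in both bean classes (bean and field names are invented), on top of the descriptor changes:
// In PersonBean (abstract CMP 2.0 entity bean class):
public abstract CompanyLocal getCompany();             // CMR field
public abstract void setCompany(CompanyLocal company);

// In CompanyBean, the other side of the bi-directional relationship:
public abstract java.util.Collection getPersons();     // collection-valued CMR field
public abstract void setPersons(java.util.Collection persons);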
b) Finders:
I won't bother arguing much here, this is a personal thing. I just can't see what a finder has to do with deployment... But that's just me.
c)
> That's WAY a lot more code. Of course, you can optimize
> it encapsulating the connection and statement creation,
> and ALSO you can encapsulate the Home adquisition or the
> create method.
Very good. JDBC requires more coding than CMP entity beans. (Pretty scary if it didn't). I didn't write a comparison - I just said that Entity beans are verbose. I don't know what is better. I just know that Entity Beans are not what I'm looking for.
>.
I agree with everything you say in this paragraph (see, we can be friends). Maybe apply it to your own writing :-)
Cheers,
Daniel.
Why I hate Entity Beans
Daniel,
- Posted by: Web Master
- Posted on: May 31 2002 16:44 EDT
- in response to Daniel Lang
First of all, I want to apologize for the high tone in my "FLAME POST", but I'm getting tired of people that say "EJB sucks" just because they screw it the first time and instead of looking for the whys they just blame EJB ( I had to remake the whole design 3 times until I started to get a grip on the concepts). You're of the second category, those who emit informed opinions. Now, to the post :)
a) Relationships: oops, my bad. I didn't see the code-cleaning needed. So, I will eat the flames.
b) Finders: Yes, this is a debate as productive as the one on "where the { should be: end of line or new line?"
c) Yep, EJBs are verbose, but it's not EJB's fault, it's Java's fault. All those try/catch blocks are needed for recovering from unexpected situations, and Java enforces it. We both exaggerated the examples a little; we could surround the whole code with a single try and multiple catches (after all, all the instructions can be considered a functional block). Much more readable. AFAIK, JDO is at least as verbose as EJB, and JDBC is WAY more verbose.
I'm curious when you say that "I just know that Entity Beans are not what I'm looking for". Well, I'm curious about what you're looking for, 'cause perhaps a framework over JDBC/EJB/JDO is the answer. Can you email me (soronthar at flashmail dot com) for further discussion?
And yes, we can be friends (even if you hate EJB ;))
Rafael
Why I hate Entity Beans
- Posted by: Nick Minutello
- Posted on: May 31 2002 22:09 EDT
- in response to Daniel Lang
Why do you feel you need to remove the CMP fields?
Why I hate Entity Beans
Rafael,
- Posted by: Sean Broadley
- Posted on: June 04 2002 23:14 EDT
- in response to Web Master
Maybe I don't get it. People keeping saying this, and I can't see how it can be true.
"Entity beans represent your Domain Objects, is just that these objects... ...can be pooled for resource efficiency"
What makes you think pooling of Entity beans gives.
Have I missed something?
(Now, EJB containers doing resource pooling of database connections, JMS Sessions, etc *is* an advantage - but that's a different matter).
Sean
Why I hate Entity Beans
What makes you think pooling of Entity beans gives
- Posted by: Web Master
- Posted on: June 05 2002 14:17 EDT
- in response to Sean Broadley
If your point is that creating n+1 objects has the same cost as creating n when n is large, point granted. If not, I missed your point entirely.
There are two things to consider: Garbage Collection and Heap Space. If you have a request for 1000 entity "objects" with 20 fields stored in a collection, you will have 20000 objects created that will not be collected by the GC until you finish with your collection and the collection is collected itself. Also, the time to collect them is high (depending on the GC strategy your JVM is using). If you use a pool, you will have only (objects-in-pool * 21) objects created, and the instance variables of each instance will be collected as soon as you finish using that instance (well, not exactly, but a good approximation).
Why I hate Entity Beans[ Go to top ]
Hi,
- Posted by: Sean Broadley
- Posted on: June 10 2002 21:32 EDT
- in response to Web Master
Our main point of disagreement is this:
. "
That's just not my experience when I profile J2EE apps. My belief is that the cost of saving and loading to/from a database is huge compared to the cost of keeping a few extra objects in memory. That's why in-memory caching of objects is generally considered a good thing.
Sean
Why I hate Entity Beans[ Go to top ]
Sean,
- Posted by: Web Master
- Posted on: June 11 2002 14:00 EDT
- in response to Sean Broadley
I didn't make my point clear. Let me clarify the statement
"
When I say "call and execution overhead" I mean the cost to invoke the method (prepare the call and execute the call by the JVM), not the execution of the method per se.
I totally agree with you that the execution of those calls can be very expensive, but if you implement them so they don't touch the database unless needed (for example, using a "dirty flag" checked in the ejbStore method to decide whether you need to persist or not), then the cost of invoking them is negligible compared to "the time to GC or to the improvement in the Heap use".
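A minimal sketch of that dirty-flag idea (illustrative only - the Account name, the balance field and the persistRow() helper are assumptions, not code from this thread), written as a plain class so it stands alone:
<pre>
public class AccountBean {
    private double balance;
    private boolean dirty;                 // set whenever persistent state changes

    public void setBalance(double newBalance) {
        this.balance = newBalance;
        this.dirty = true;                 // remember that ejbStore must write this out
    }

    // container callback: skip the database write entirely when nothing changed
    public void ejbStore() {
        if (!dirty) {
            return;                        // only the invocation cost remains; no db round trip
        }
        persistRow();
        dirty = false;
    }

    private void persistRow() {
        // placeholder for the actual UPDATE (JDBC in BMP, or the CMP engine's job)
    }
}
</pre>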
Besides that, I cannot agree more with your two latest posts.
Rafael
Why I hate Entity Beans[ Go to top ]
The Oracle Business Components for Java J2EE framework provides a robust and proven solution to some of the original concerns raised by the original poster of this thread. Over time we're adding support for entity beans for developers who are convinced that's the way they want to go, but most of our users do benchmarks and choose the simpler choice of lightweight entity classes today.
- Posted by: Steve Muench
- Posted on: June 13 2002 22:46 EDT
- in response to Web Master
We let you deploy an EJB Session Bean facade with these lightweight entity classes inside it, or you can also use the Session Facade as a simple java class, too.
Hundreds of web application developers in our Oracle Applications division use our framework.
Simplifying J2EE and EJB Development with BC4J
Why I hate Entity Beans[ Go to top ]
- Posted by: Nick Minutello
- Posted on: June 14 2002 23:14 EDT
- in response to Steve Muench
... And you can also use Oracle Business Components on Sybase, Ingres, DB2, SQL Server.......
Stick to standards, is my advice - be it JDO or Entity Beans.
Why I hate Entity Beans[ Go to top ]
Nick,
- Posted by: Mike Hogan
- Posted on: May 30 2002 04:30 EDT
- in response to Nick Minutello
I don't see the rationale in fronting your domain model with interfaces, since there will be a one-to-one mapping between the interface and the implementation. When I speak about domain objects I mean dumb nouns with getters and setters. If an interface is required to front domain classes, why are they not required to front DTOs?
As I see it, there should be very little difference between DTOs and domain objects. Ideally they should be the same. I can see that some use cases would require custom DTOs. But to say that presentation layer components need to be insulated from domain objects is debatable in my view (they are not insulated from DTOs).
If you are writing a presentation layer that sells books, the presentation layer is going to have to deal with books. De-typing books into a hashmap to avoid this coupling has as many disadvantages as advantages.
Another fallout of having the domain model and DTOs differ is the introduction of DTOFactories (and the DTOs themselves). So it is true that EB will require more classes, even if an IDE generates all the EBs for you.
I have finally finished Floyd's EJB design patterns book and he has almost an entire chapter devoted to JDO. Take a look at it and see how clean the code is. DTO and EB become one, DTO Factories disappear, and business, remote and home interfaces all disappear. His view is that making EBs into components is what reduced their usefulness, and that seems right to me. All I want is persistent DTOs.
Why I hate Entity Beans[ Go to top ]
- Posted by: Nick Minutello
- Posted on: May 30 2002 09:47 EDT
- in response to Mike Hogan
>>All I want is persistent DTOs.
In most non-trivial cases, your persistent data model can never be the same as your DTO data model (we will ignore custom DTOs; I am using Floyd's terminology here).
If they are the same, and your persistent data model encapsulates (persists) complex relationships between objects, then when you pass your persistent domain object as a DTO, the serialization will end up dragging your whole database over to the client. OR, you will have to make a break somewhere. Where you break these links/relationships will depend on your particular use-case. Given that you also don't want DTOs with null fields (where you have to do lots of checking before you access a field), you have to have your DTOs different from your persistent data model, no?
>>If a interface is required to front domain classes, why
>>are they not required to front DTOs
They can if you want. However, DTOs are usually not re-used as much as your domain objects, as they are use-case oriented.
>>De-typing books into a hashmap to
I agree. I would never advocate this. (Did I imply this?)
I don't dispute the apparent simplicity of JDO - but I believe that some of the perceived "overhead" of CMP is actually good for you.
It all depends on what your system is, what business it is in and how long you expect it to last for.
Cheers,
Nick
Why I hate Entity Beans[ Go to top ]
After reading all the comments on why EJBs (or rather CMPs) are useless, maybe we can take a vote on which of the following all the developers have used so far:
- Posted by: Anand B N
- Posted on: May 30 2002 10:21 EDT
- in response to Nick Minutello
1. SessionBean - EntityBean - Database
2. SessionBean - JDO Impl - Database
3. SessionBean - JDBC Utility class - Database
4. SessionBean - BMP - Database
5. Servlet - JDBC Utility class - Database[no EJB :oP]
I think we all get the picture: the Session Bean is the one thing people never crib about, and I think it has its advantages beyond any other technology. As regards Entity Beans, I think CMP 2.0, though it gives a lot of flexibility, has complicated the design of the Entity Bean layer over my RDBMS. That's where JDO scored its victory over EB.
As for me, my vote any day goes to No. 2.
Why I hate Entity Beans[ Go to top ]
1. SesionBean-EntityBean-Database
- Posted by: scot mcphee
- Posted on: June 03 2002 20:36 EDT
- in response to Anand B N
> 2. SessionBean - JDO Impl - Database
> 3. SessionBean - JDBC Utility class - Database
> 4. SessionBean - BMP - Database
> 5. Servlet - JDBC Utility class - Database[no EJB :oP]
I have done Systems in 1, 3, & 5. System 1 was the most complex. Actually it went Browser based client application (non-java) -> XML -> Servlet -> DTO -> SB -> EB -> DB.
I only used 3 & 5 for much smaller systems.
For the system 1 I also used (for a specialised client)
Java client -> DTO -> SB -> EB -> DB.
regs
scot.
Why I hate Entity Beans[ Go to top ]
Nick,
- Posted by: Mike Hogan
- Posted on: May 30 2002 11:25 EDT
- in response to Nick Minutello
I guess we will just have to agree to differ on whether DTOs and domain objects should be one and the same.
>If they are the same, and your persistant data model
>encapsulates (persists) complex relationships between
>objects, then when you pass your persistant domain object
>as a DTO, the serialization will end up dragging over
>your whole database to the client.
I have not coded this, but from what I can see about JDO, you can surf the object graph as you please, detach it from the transaction (makeTransient()) and return it from a session bean without pulling the db with you.
Additionally, ObjectRelationalBridge and JDO implementations allow you to do lazy materialization, which will load the data for a referenced object only once you navigate the reference. This works only in the context of a txn of course.
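A rough sketch of that approach using the javax.jdo API (the Book class, its fields and the query filter are assumptions invented for illustration; Book would still need JDO metadata and enhancement to be persistence-capable):
<pre>
import javax.jdo.PersistenceManager;
import javax.jdo.Query;
import java.util.Collection;

class Book {                  // assumed persistence-capable class (needs JDO metadata/enhancement)
    String title;
    String author;
}

public class BookFinder {
    // intended to run inside a session bean method
    public Collection findByAuthor(PersistenceManager pm, String author) {
        Query query = pm.newQuery(Book.class, "author == a");
        query.declareParameters("String a");
        Collection books = (Collection) query.execute(author);
        pm.makeTransientAll(books);   // detach the results from the transaction
        return books;                 // plain objects now; serialization won't drag the datastore along
    }
}
</pre>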
>>De-typing books into a hashmap to
>I agree. I would never advocate this. (Did I imply this?)
No, you didn't imply this, but it is a pattern in the book that helps reduce coupling between the presentation layer and the domain model.
Cheers,
Mike.
Why I hate Entity Beans[ Go to top ]
I read this article about BC4J
- Posted by: Carlos Perez
- Posted on: May 30 2002 18:04 EDT
- in response to Mike Hogan
and I'm wondering if this makes much better sense than the current way of building EJB applications.
Any opinions?
Why I hate Entity Beans[ Go to top ]
- Posted by: Nick Minutello
- Posted on: May 31 2002 22:26 EDT
- in response to Mike Hogan
>>I have not coded this, but from what I can see about JDO,
>>you can surf the object graph as you please, detach it from
>>the transaction (makeTransient()) and return it from a
>>session bean without pulling the db with you.
Not quite true. Detaching it from the transaction doesn't break the object references that the Java serialisation engine will follow. For example, if you have a persisted linked list, trying to return just the head of the LL will pull the whole LL over the network. The only way to break it is to make a copy - and set some references to null - OR mark some fields as "transient". Unfortunately, marking it transient will also direct the JDO engine not to persist it...
Therefore, the only option is to clone and set fields to null. This means that on the client, you now don't ever quite know which references you can follow in your "domain" DTO. In order to know, you have to dig around the DTO builder... check if (ref!=null){..} and hope for the best (bad).
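A small sketch of that clone-and-null work-around (the linked-list node and DTO names are assumptions made up for the example, not code from the thread):
<pre>
class OrderNode {                          // simplified persistent linked-list node
    String id;
    OrderNode next;
}

class OrderNodeDTO implements java.io.Serializable {
    final String id;
    final OrderNodeDTO next;               // deliberately null: the chain is cut here

    OrderNodeDTO(String id, OrderNodeDTO next) {
        this.id = id;
        this.next = next;
    }

    // copy only the head; serialization stops at the null reference instead of
    // following the link and dragging the whole list over the network
    static OrderNodeDTO headOnly(OrderNode head) {
        return new OrderNodeDTO(head.id, null);
    }
}
</pre>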
>>Additionally, ObjectRelationalBridge and JDO
>>implementations allow you to do lazy materialization, which
>>will load the data for a referenced object only once you
>>navigate the reference. This works only in the context of a
>>txn of course.
I am not sure that lazy loading (a database "serialisation" optimisation) has any relationship to serialization in general. I don't think that doing a read outside of a transaction will change anything (unless I am completely mistaken). If the reference exists, the serialization engine will follow it (unless it's marked transient, or unless you do your own serialization - ... but forget about that).
I have a growing dislike for the term DataTransferObject (DTO). I think I prefer calling them a ViewObject. Because that is what they are - a "view" onto the domain.
My experience has been that your ViewObjects (DTOs) are naturally and necessarily different from your domain. (Though, I will admit I didn't realise this straight away.)
-Nick
Why I hate Entity Beans[ Go to top ]
Nick,
- Posted by: Mike Hogan
- Posted on: June 03 2002 04:18 EDT
- in response to Nick Minutello
>For example, if you have a persisted linked list, trying to return
>just the head of the LL will pull the whole LL over the network.
>Therefore, the only option is to clone and set fields to null.
Yes, I see your point on this. But cloning/copying is required regardless of whether you use EB or JDO. It's just that in the case of JDO you use the clone() method, instead of a DTOFactory in the EB case. Setting nulls is certainly bothersome. Would it be useful if the following feature were supported in JDO 2.0:
* Surf as you please across an object graph in your session bean.
* Make a call to "makeTransientRecursive()", that will cut out the subset of the graph you have surfed and make that subset available as an independent object graph.
>This means that on the client, you now dont ever quite know what
>references you can follow in your "domain" DTO. In order to know,
>you have to dig around the DTO builder... check if (ref!=null){..}
>and hope for the best(bad).
Again, is this not also true in the EB case? The DTO models I use (which were prompted by others, I see) are object models in their own right, and only subsets of them are populated to meet the requirements of the current use case. You can still surf into a null reference.
>I am not sure that lazy loading (a database "serialisation" optimisation)
>has any relationship to serialization in general.
It doesn't. I figured it could enable a nice mechanism to mark the boundary of a subset of a persistent object graph that could then be detached from the persistent graph and serialized back through a bean interface. We should chat with the JDO guys about this.
>My experience has been that your ViewObjects (DTO's) are naturally
>and necessarily different from your domain. (though, I will admit
>I didnt realise this straight away)
I'm not trying to be a smart arse here, but if you didn't realise it straight away, how can it be natural or necessary?
I have been giving a lot of thought to what you say, though, and I am coming around somewhat to the following view (getting into this runs the risk of moving away from the title of the thread):
Providing a layer of ViewObjects that are different to Domain Objects acts as a data contract between the service layer and the layers that depend on it. Not adding this contract means my front end layers are directly dependent on my domain objects (and maybe even my database schema). That brings the argument down to whether domain objects are different animals to view objects - in content, I mean. Let's go through the pros and cons step by step.
Assume I have a very simple domain object, a book. It has a title, number of pages and an author name.
If I want to:
* Add an attribute, say ISBN. I can add it to the domain class. If I use a ViewObject, I will also add it to that (because presumably I am using it so that front end layers can use it - or is this the presumption you are questioning?). So ViewObject and Domain Object are still the same.
* The same applies to removing an attribute.
* Changing an attribute type or name does show an advantage in separating View and Domain objects, as I can just change the mapping code and leave intact everything above the service layer, which uses the ViewObjects.
* The real power comes if I want to refactor the domain object into two classes - say Book and Author. Then I can just change the mapping code and again keep those layers above the service layer intact.
* The inverse also applies, if I want to change an attribute in or refactor a view object, I can do it without affecting the domain objects.
Does this tie in with what you are talking about? In general, when are View Objects different to Domain Objects?
The conflict in my mind is that the introduction of ViewObjects seems to break MVC. What is the model when I have domain objects and view objects? Alternatively, are the ViewObjects part of the M, the V or the C?
After an OO analysis of a domain, I have nouns (the domain objects) and verbs (the actions to be performed on the nouns). The nouns constitute the model, the verbs constitute the controller and work on the nouns, and the views constitute the application interface and also work on the nouns. It is the last clause of this sentence that is causing me problems when it comes to fitting ViewObjects into the picture.
What do you think?
Cheers,
Mike.
Why I hate Entity Beans[ Go to top ]
A quick PS to this. I can see that the objects in my persistence layer might be different to those exposed by my service layer if I am inheriting a mucky legacy datamodel. Then I see ViewObjects being different to Domain Objects, because the domain objects are already botched.
- Posted by: Mike Hogan
- Posted on: June 03 2002 04:44 EDT
- in response to Mike Hogan
But, in the absence of this circumstance, the question still remains: when and why would my carefully designed domain model not meet the needs of my presentation layer?
Why I hate Entity Beans[ Go to top ]
"when and why would my carefully designed domain model not meet the needs of my presentation layer? "
- Posted by: David Churchville
- Posted on: June 03 2002 21:01 EDT
- in response to Mike Hogan
One word: "joins".
Domain models for many of us tend to look suspiciously like relational models. This means that we model individual objects as tables so that we can administer them.
Later, when you want to do more advanced navigation/reporting/displaying, you tend to be at the mercy of the end user/customer's presentation needs, which often require "rows" of information from several related "tables" or "objects".
So what we're really talking about is "query" ViewObjects for the most part.
At this point, you start wondering why you modeled everything as objects, since you need a custom object for each new customer query of the same 4 or 5 tables (or you can make one big fat mega-object that might or might not have its fields populated). How is this easier or better?
Answer: It isn't. At some point, the benefits of a strongly typed object model for query results starts to break down, and you decide to use a "RowSet" or equivalent technique. You'll see evidence of this as "AttributeAccess pattern", "JDBC for querying", etc., mentioned elsewhere.
The problem is that the relational model (and SQL) was PURPOSE-BUILT to handle unexpected query situations across different applications without writing custom code (or creating new "objects"). Once you start hard-coding an object model on top of this, you're stuck. The best we can do in the object world is to use EJBQL (or OQL, etc.) and some sort of Map to hold result attributes.
I believe that domain objects make sense for transactional (CRUD) operations. For queries (other than the simple "get a collection of Object X"), you're probably better off with a generic Map interface like:
class ResultRow {
    private final java.util.Map attributes = new java.util.HashMap();
    public Object get(String attribute) { return attributes.get(attribute); }
    public void set(String attribute, Object value) { attributes.put(attribute, value); }
}
Why I hate Entity Beans[ Go to top ]
- Posted by: Nick Minutello
- Posted on: June 03 2002 17:38 EDT
- in response to Mike Hogan
>>But cloning/copying is required regardless of whether you
>>use EB or JDO
That's my point. Creating DTOs is a constant - regardless of EB or JDO (or direct JDBC). However, your original post complained about creating DTOs as a "complexity" unique to Entity Beans.
>>The DTO models use are object models in their own right and
>>only subsets of them are populated to meet the requirements
>>of the current usecase. You can still surf into a null reference
This is only in the case of bad design, I feel. You should never have a design where this is possible.
>>I'm not trying to be a smart arse here, but if you didn't
>>realise it straight away, how can it be natural or
>>necessary?
No offense taken ;-)
It's easy to explain: I didn't fully understand the problem in the beginning. Once I understood, it appeared obvious (it's usually the case with hindsight ;-)
>>Does this tie in with what you are talking about? In
>>general, when are View Objects different to Domain Objects?
In general, in your example, you hit all the relevant nails on the head.
Sometimes your View object will match your domain object - but keep in mind that this is coincidental and transient.
If you want to display a list of (book title + author), then your View Object will have the relevant information (Title + Author). If your domain object happens to have the same representation, then fine. However, as you point out, you are free to refactor your domain model and create separate Book and Author objects. Likewise, you are free to refactor your View Object to contain price as well...
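As a tiny sketch of that example (names invented for illustration): the view object carries exactly what the screen needs, so the domain side can later split Book and Author, or add price, without the client noticing - only the mapping code changes.
<pre>
class Author {
    String name;
}

class Book {
    String title;
    Author author;
}

class BookSummaryView implements java.io.Serializable {   // a "view" onto the domain
    final String title;
    final String authorName;

    BookSummaryView(String title, String authorName) {
        this.title = title;
        this.authorName = authorName;
    }

    // the only place that knows how the domain model is currently shaped
    static BookSummaryView from(Book book) {
        return new BookSummaryView(book.title, book.author.name);
    }
}
</pre>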
>>The conflict in my mind is that the introduction of
>>ViewObjects seems to break MVC. What is the model when I
>>have domain objects and view objects
In so far as MVC is concerned, this is a layered MVC.
You can consider the View objects as being the View to the domain (the controller is your business logic). However, your application will see the View objects as the Model (the controller is the presentation logic in this case).
>>when and why would my carefully designed domain model not
>>meet the needs of my presentation layer?
You can safely assume that no matter how much care and design you put into your domain model, it will not meet all the needs of your presentation layer. This is a given. The needs of your presentation layer will be guaranteed to change.
You should assume that the "mucky legacy datamodel" you refer to in your PS may be your own after 2 months ;-) (despite your best efforts and abilities)
Why I hate Entity Beans[ Go to top ]
I don't know if this is any help but...
- Posted by: Jerry Kiely
- Posted on: May 30 2002 11:08 EDT
- in response to Mike Hogan
I don't necessarily agree with most of the issues that have been brought up in the thread, but I did start looking for an alternative when I discovered issues more to do with application servers, etc.
I came up with a pattern (based on an article I read in DDJ over a year ago, but with much improvement on my part IMHO) that solves most of the EB headaches. It involves a single Generic BMP Bean (which implements the AttributeAccess pattern and manages DB activities), a Generic BMP Bean Primary Key (which manages the details specific to the bean instance, the main workhorse of the pattern), the Home and Remote Interfaces, and Sub Classes of the Generic BMP Bean Primary Key class for every domain object I require.
The current system I am working on (A complex GIS system containing many enormous tables) contains but one EB, and many Primary Keys, like the one below:
<pre>
public class SpecificBeanPK extends GenericBMPBeanPK
{
private static final String TABLE_NAME = "TABLE_NAME";
private static final String[] FIELD_NAMES = {
"FIRST_FIELD",
"SECOND_FIELD",
"THIRD_FIELD",
"FOURTH_FIELD"
};
// denotes that the key contains a
// single / non-composite key field, and
// that its name is "FIRST_FIELD"
private static final int PK_FIELD_COUNT = 1;
public SpecificBeanPK(Object[] id)
{
super(TABLE_NAME, FIELD_NAMES, PK_FIELD_COUNT, id);
}
public SpecificBeanPK()
{
super(TABLE_NAME, FIELD_NAMES, PK_FIELD_COUNT);
}
} // Yes, That's it!
</pre>
I can now have multiple views of the same table (two different bean instances looking at different fields of the same table). The deployment descriptor has the Generic BMP Bean entered multiple times (each with different method transaction attributes, etc.) and each with its own JNDI name.
Once you don't mind dealing with AttributeAccess methods and HashMaps (and why would you mind?), it works like a dream. Of course the 'session facade' is maintained.
Early reports are that it is quick as bejaysus! Very quick indeed!
For the next version I think I will extend it for complex beans involving joins across tables.
J.K.
Why I hate Entity Beans[ Go to top ]
Another company out there had a similar idea (i.e. Macadamian Syndeo); it's a very curious idea. A generic entity bean seems to be a very appealing idea. Oracle's BC4J has an idea that looks like a cross between the standard and a completely generic bean.
- Posted by: Carlos Perez
- Posted on: May 31 2002 07:04 EDT
- in response to Jerry Kiely
My take is that the standard EJB approach is fundamentally flawed. Many J2EE design patterns seem to be needed as duct tape. Now there's this proposal to use JDO; this has the potential to give back to the programmer things like polymorphism. Unfortunately, the whole transaction model of JDO is just plain wacky and the weak support for relational databases is just plain unrealistic. So, just maybe, Macadamian or Oracle may be on a track that does make sense.
If we look at the historical motivations for EJB, we can see that people were evolving the Transaction Monitor idea, that is, of managed and pooled resources, particularly database connections. Somewhere along the journey, someone threw the idea of Business Objects into the mix. It may be because, back then, there was this IBM project called San Francisco that has similarities w/ the EJB idea. Well, San Francisco had grand ambitions; unfortunately it was a complete dog and failed miserably in the marketplace. Today we have this J2EE spec that has a solid foundation on TM technology; unfortunately people mistakenly think it provides a means towards the idea of Business Objects (see Oliver Sims for reference). The word "beans" in EJB is what throws people off.
EJB is extremely weak when it comes to this "beans" idea; in fact there isn't much of a relationship between an EJB and a Java Bean. Until the 2.0 spec there wasn't even a decent way to "link" 2 Entity beans together. The model doesn't even support inheritance or polymorphism. I can't do a query like "give me all vehicles" and get a collection of cars, trucks, planes etc. Any hope of doing something really complex using the standard proposal is a losing battle.
Clearly, one has to build his domain model like he would do it in Java. Then build Entity Bean facades to gain the transaction monitoring capabilities of EJB. Thinking that EJB = Business Objects is one way to shoot yourself in the foot. Now I don't know if there's an alternative out there, but Oracle and Macadamian may in fact have something promising.
Why I hate Entity Beans[ Go to top ]
Carlos,
- Posted by: Mike Hogan
- Posted on: May 31 2002 07:24 EDT
- in response to Carlos Perez
> unfortunately the whole transaction model of JDO is just plain wacky and the weak support for relational databases is just plain unrealistic
Please explain:
* What is wrong with the JDO transaction model
* Given that JDO cares not whether you are dealing with an RDB, ODB or flatfile (which is what I assume you mean by saying support for RDB is weak), why is this a negative?
Cheers,
Mike.
Why I hate Entity Beans[ Go to top ]
Mike,
- Posted by: Carlos Perez
- Posted on: May 31 2002 08:41 EDT
- in response to Mike Hogan
* I think the declarative transactions of EJB make more sense; if I can get that while still using JDO then it'll be good. Specifically, regarding JDO, say I've got this object that I grab from a db within a transaction, then I commit that transaction: can I still have access to that object afterwards, and then reuse it in another transaction?
* I've got a client that displays a table view coming from the server. The table view consists of some fields coming from 2 different objects, and the view may contain thousands of entries. Now, I don't know if you can join 2 classes in JDO or get a subset of the fields of that joined class; I'm assuming you can't. So in JDO it's transparent; unfortunately you'll need to grab all fields of both objects to be able to display the table - a big performance hit. The whole idea of 100% transparent persistence is completely dubious; in fact Sun in the early days of Java was working on a project for orthogonal persistent Java. What happened to it? It simply didn't work out too well. It's a nice concept; unfortunately a pure OO model doesn't cut the mustard.
Why I hate Entity Beans[ Go to top ]
Carlos,
- Posted by: Mike Hogan
- Posted on: May 31 2002 08:59 EDT
- in response to Carlos Perez
>say I've got this object that a grab from a db within a
>transaction, then I commit that transaction, can I still
>have access to that object after,
Yes.
>and then reuse it in another transaction?
Yes
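A sketch, under stated assumptions, of what those two answers look like with the javax.jdo API; the Book class is invented, and whether fields remain readable after commit in a real implementation depends on settings such as RetainValues/NontransactionalRead.
<pre>
import javax.jdo.PersistenceManager;
import javax.jdo.Transaction;

class Book {                                   // assumed persistence-capable class
    private String title;
    public void setTitle(String title) { this.title = title; }
}

public class ReuseDemo {
    public static void reuse(PersistenceManager pm, Object bookId) {
        Transaction tx = pm.currentTransaction();

        tx.begin();
        Book book = (Book) pm.getObjectById(bookId, true);   // loaded in transaction 1
        tx.commit();                                          // the reference stays usable

        tx.begin();                                           // same object, new transaction
        book.setTitle("Second edition");
        tx.commit();
    }
}
</pre>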
>unfortunately you'll need to grab all fields of both
>objects to be able to display the table, big performance
>hit.
(I'm not sure if I understand you, but...) In JDO you still have to specify which db columns are mapped into your object, so it's entirely possible to have an object mapped to a subset of a table or view. Is this what you meant?
Now, I have never coded JDO, but I have read about it, looked at the sample code (same for ObjectRelationalBridge) and used the ODMG api a lot, so I am not an expert and am trying to be as careful as I can about what I say. What I wanted to do was learn from other people who have used it before.
Cheers,
Mike.
Why I hate Entity Beans[ Go to top ]
Mike,
- Posted by: Carlos Perez
- Posted on: May 31 2002 09:10 EDT
- in response to Mike Hogan
Well you can't map 2 different kinds of objects to overlapping subsets of the same table. You're going to have problems w/ object identity clashes or even worse inconsistent data.
I'll be honest, I haven't worked with JDO, but I've worked with ODMG. So I'm basing my opinions on what I know about ODMG (after all, JDO is just a "cleaner" version of ODMG). My experience with ODMG hasn't been pleasant.
Also, I'm currently working with ObjectRelationalBridge (OJB), but I'm sticking with the core layers. I just think the ODMG and JDO api's are completely hokey and aren't based on solid foundations.
Carlos
Why I hate Entity Beans[ Go to top ]
Carlos,
- Posted by: Mike Hogan
- Posted on: June 03 2002 03:46 EDT
- in response to Carlos Perez
>Well you can't map 2 different kinds of objects to
>overlapping subsets of the same table. You're going to
>have problems w/ object identity clashes or even worse
>inconsistent data
But won't this only be a problem if the overlapping objects execute in the same transaction? This kind of behaviour will happen no matter what kind of O/R layer you are using. Is this what you mean?
Why I hate Entity Beans[ Go to top ]
- Posted by: Nick Minutello
- Posted on: May 31 2002 22:06 EDT
- in response to Carlos Perez
Carlos,
I found it easier to understand the value of JDO when I compared it to JDBC. I feel that JDO is much more a competitor for JDBC than Entity Beans (though, obviously it reduces the value proposition of entity beans)
JDO doesn't provide declarative transactions, in the same way that JDBC doesn't. However, a JDO implementation will be JTA-aware - so that if it is used from a session bean or from a BMP bean, the transaction will be declared on the EB/SB...
For the record, I don't believe in transparent persistence (just the same as I don't believe in location transparency when it comes to distributed computing) - so I kind of have some doubts as to the objectives of JDO...
-Nick
Why I hate Entity Beans[ Go to top ]
If you want an exhaustive list of problems with EJB and Entity Beans, we wrung the towel dry.
- Posted by: Robin Sharp
- Posted on: May 31 2002 06:56 EDT
- in response to Mike Hogan
We realised most of these issues when writing a code generator - called JGenerator. There are demos on our web site that show how it works.
The bit I (and most others I've talked to) find upsetting about EJBs is that I actually like the general thrust of the client APIs: Session, Home, findBys(), etc. I find the unnecessary requirement to implement all the server details along with the APIs intellectually unfathomable.
JGenerator takes these API's and can generate In-Memory, File, Jdbc, JDO and EJB implementations under the covers. So it shows you that it is not difficult to de-couple bean architectural interfaces (Sessions and Homes) from the actual implementation. This means you can mix and match your implementation for different entities depending on your specific requirements.
One size does not fit all - never has, never will.
Why I hate Entity Beans[ Go to top ]
You may want to take a look at my JDO-related article in JavaWorld:
- Posted by: Jacek N/A
- Posted on: June 06 2002 09:24 EDT
- in response to Mike Hogan
One of the paragraphs compares JDO and CMP also...
-- Jacek
Why I hate Entity Beans[ Go to top ]
If you are using the IBM WebSphere Studio Application Developer tool, then developing EJBs is the same as developing Java Beans. And BTW - testing is even easier, since it generates a dynamic interactive test client for you. That is really cool stuff. You don't have to mess w/ many interfaces, DDs, etc. It is all done under the covers, and that IS what the EJB spec assumes. I'm amazed that people do J2EE development by hand.
- Posted by: Roman Kharkovski
- Posted on: June 06 2002 10:39 EDT
- in response to Mike Hogan
The J2EE spec was designed in a way so that development can be facilitated by tools (automatic generation of artifacts instead of tons of monkey work), and those who use Notepad (or vi or SlickEdit and the like) waste their time and money.
Why I hate Entity Beans[ Go to top ]
You mention the burden of creating and maintaining so many support files for entity beans. Have you tried EJBGen? It is a new EJB generation tool that requires you to maintain only a single file and all other files are generated.
- Posted by: Jon Wynett
- Posted on: June 06 2002 13:13 EDT
- in response to Mike Hogan
It makes beans extremely simple. Here's the link:
Why I hate Entity Beans[ Go to top ]
The system I'm currently working on is an extremely large, complex enterprise application. It has a web front end, and uses nearly 200 stateless session beans. These beans use custom classes similar to JDO to access the database. All of these use the Value Object J2EE pattern to cut down on method invocations. It does mostly viewing of data with some modification.
- Posted by: Michael Hussey
- Posted on: June 06 2002 17:05 EDT
- in response to Jon Wynett
Unfortunately, a small number of these objects are fetched very frequently, causing us scalability problems due to heavy db usage. The database CPU usage was low, but the network latency and the resulting garbage generation were the source of our scalability problems. Using JProbe we discovered that most of the garbage that we were generating came from the Oracle JDBC driver! All that marshalling/unmarshalling of fields was a killer.
We got dramatic improvements in performance and scalability by using BMP entity beans to front those select objects that were fetched very frequently. Since the entity beans hold cached values, we avoided the massive garbage generation and network overhead for these particular objects.
This also gives our customers the ability to adjust the cache size, etc, of these beans, via a standard mechanism, the deployment descriptor. We use JBoss and Weblogic which both have settings to tell the container it owns the data. The container then can keep the bean in memory and avoid an ejbLoad.
So I'm not sure how one would solve this without entity beans. They provide the caching and have transaction knowledge to know when to refetch. Does someone know a different solution to this problem? I think this is an important case for entity beans.
Don't say, don't get those objects so often! It would make our APIs very cumbersome because we'd have to have nearly double the number of methods...one method which takes an id of a referenced object, and one method that takes a value object of the referenced object.
Data Object Caching[ Go to top ]
Hi Michael,
- Posted by: Sean Broadley
- Posted on: June 10 2002 19:58 EDT
- in response to Michael Hussey
Our experience in profiling our J2EE apps has also been that it's the persistence layer (including DB) that eats most of the time. JDBC and O/R mapping are expensive. Like you, we've also found that improving our design to touch the db less makes the biggest difference to performance. So I think you've pointed out a major design issue: a transaction-aware cache is very important, and is an important desirable feature that entity beans have.
But... you said that you use custom classes "similar to JDO" to access the database. I'd expect a commercial JDO implementation to include such caching, just as entity beans do. Of course, such caches may not be distributed - but if your transactions are 'sticky' (ie a single transaction stays in one app server instance) that shouldn't be an issue.
Could someone with better knowledge of JDO please either confirm or correct that expectation of mine?
Do the non-JDO O/R tool vendors (hi, Ward) also want to comment on the caching issue?
Sean
PS: 200 session beans!!! Good God!! Our projects typically involve 10 to 15 j2ee-developer-years, plus the analysts, testers, managers, users on-site, etc, and we're usually closer to 20 or so components, ie 20 or so session beans. Just how big is this "extremely large" Behemoth of yours?
Why I hate Entity Beans[ Go to top ]
I have been reading this post with interest. I will admit I don't have very much experience in dealing with technologies that mimic the database in an object model. We are converting off of a system that uses a mix of BEA Tuxedo and Oracle Pro*C for creating "services of business logic." I actually like this idea of units of work - forgetting about using C, of course. One idea we are proposing at work is using stateless session beans that have SQLJ code in them for our business logic. That way we will have an easier time developing by using Java rather than C, have the benefits of using an application server (JMS, resource pooling) and yet still be able to code our business logic in a manner we are used to: "Plain Old SQL". The system we deal with has hundreds of tables and sometimes we do triple or quad table joins. What would we gain by going to an Entity EJB system other than a training nightmare for 70+ people? Please respond here or email me personally at cdog1977 at hotmail dot com - I would be interested in getting the opinions of people with greater experience than mine.
- Posted by: Carl Collin
- Posted on: June 18 2002 01:33 EDT
- in response to Mike Hogan
Why I hate Entity Beans[ Go to top ]
"SQLJ code in them for our business logic"
- Posted by: Mark N
- Posted on: June 18 2002 07:18 EDT
- in response to Carl Collin
You code your business logic in SQL?
You may not want to use EJBs for everything - or at all. You may want to mix and match.
Why I hate Entity Beans[ Go to top ]
- Posted by: Nick Minutello
- Posted on: June 18 2002 21:55 EDT
- in response to Carl Collin
Here I refer to CMP generically - to mean CMP Entity Beans or JDO.
Depending on your complexity, CMP may not be possible at all for *some* of your cases. The question is: how much of your database access is *that* complex (the multi-multi-table joins you speak of) that CMP could not work? Pick the solution that solves the majority of your problem (rather than the one which solves the 5%).
The mistake some people make is that they feel there is a binary decision to be made: CMP or JDBC. You can use both. I always start with CMP until it doesn't fit - but know your container. Not too many people really know what a good CMP container can do for you.
In many cases, you can get by with the Entity Bean CMP engine your appserver gives you (WebLogic 7's is reasonably feature rich). In some cases, you can improve the performance/flexibility of CMP simply by adding a good persistence manager (such as TopLink - now owned by Oracle - or Cocobase, ObjectFrontier etc.) and you won't have to change any code.
In some of these cases, the performance gains that you get from using these products - the caching, tuned updates / batched updates, lazy loading, eager loading - are not so easy to get when using "Plain Old SQL". Caching especially so.
The real benefit in this case is that optimising your database access can be as simple as tweaking some settings. Your hardened database programmers will be the best people to appreciate what the tweaking will give them. And when you do this optimisation, you can do so with a fair degree of confidence that you will have ZERO debugging to do (definitely NOT the case with roll-your-own persistence).
Hard-core SQL-ers have a hard time accepting CMP solutions. Just as C++ programmers found it uncomfortable letting go of the reins on memory management when moving to Java, so too do many old-school "database programmers" find it hard to trust the generation of mindless SQL to a piece of software.
I would say that your best bet is to slip in slowly. A training course for 70+ developers sounds like a disaster waiting to happen.
Take aside a small team of about 3-4 willing and able developers (a mix of experience and open-mindedness is important) and let them explore the boundaries of using Entity Beans or JDO. Build on the success of the pilot to spread the change. Start with a good CMP 2.0 implementation - WebLogic or Borland AppServer (WebSphere is still on EJB 1.1 until at least August/September). Don't waste your time on CMP 1.1.
-Nick
Why I hate Entity Beans[ Go to top ]
Hey Nick, don't forget JBoss in your recommendation to Carl. JBoss 3.0 has a CMP 2.0 engine and its free and open-source.
- Posted by: Bill Burke
- Posted on: June 20 2002 13:19 EDT
- in response to Nick Minutello
BTW Carl, you could get training for your 70+ developers and consulting and support from JBossGroup for about the same price you would pay for your Weblogic or Websphere or Borland licenses and support.
Why I hate Entity Beans[ Go to top ]
- Posted by: Nick Minutello
- Posted on: June 20 2002 20:28 EDT
- in response to Bill Burke
No, and JBoss too ;-). I haven't had time to check up on JBoss lately. Their 3.0 seemed to take forever to come out - and there was some question as to whether they had nailed their CMP 2.0.
If you do have a look at JBoss, make sure that you shell out for their documentation - you will struggle without it.
-Nick
Why I hate Entity Beans[ Go to top ]
Just one last notion. Some are saying that JDO will never get into J2EE and that this will be an obstacle to its adoption. And then some say that JDO complements JDBC more than it does Entity Beans. So maybe JDO will be made part of J2SE? That would be kinda nice.
- Posted by: Mike Hogan
- Posted on: June 24 2002 09:17 EDT
- in response to Nick Minutello | http://www.theserverside.com/discussions/thread.tss?thread_id=13664 | CC-MAIN-2014-15 | refinedweb | 15,431 | 67.89 |
On Tue, Aug 7, 2012 at 12:51 AM, Dean Herington <heringtonlacey at mindspring.com> wrote:
> At 4:30 PM -0700 8/5/12, Matthew wrote:
>> On Sun, Aug 5, 2012 at 12:32 AM, Henk-Jan van Tuyl <hjgtuyl at chello.nl> wrote:
>>>:
>>>
>>>
>>>
>>
>> Thanks for the response. The one problem I have with this is that it
>> will not be at all obvious which test case (or cases!) failed.
>>
>> That said, maybe I could do something similar, with a Writer? A passed
>> test writes "", but a failed one writes a test-specific failure
>> message. Then the test itself uses this string as the assert message.
>
> Let HUnit tell you about the failing test cases. Here's one way to do it.
>
> import Test.HUnit
> import Data.Char (isDigit)
>
> data Suit = Spades | Hearts | Diamonds | Clubs
>   deriving (Show, Read, Eq, Ord)
> type Rank = Int -- 2 .. 14 (jack=11, queen=12, king=13, ace=14)
> type Card = (Suit, Rank)
>
> parseCard :: String -> Maybe Card
> parseCard [rankChar, suitChar] = do suit <- suitFrom suitChar; rank <- rankFrom rankChar; return (suit, rank)
> parseCard _ = Nothing
>
> suitFrom char = lookup char [('S', Spades), ('H', Hearts), ('D', Diamonds), ('C', Clubs)]
>
> rankFrom dig | isDigit dig = let v = read [dig] in if v >= 2 then Just v else Nothing
> rankFrom char = lookup char [('T', 10), ('J', 11), ('Q', 12), ('K', 13), ('A', 14)]
>
> makeTest :: (String, Maybe Card) -> Test
> makeTest (string, result) = string ~: result ~=? parseCard string
>
> tests = [("TH", Just (Hearts, 10)), ("XH", Nothing)]
>
> main = (runTestTT . TestList . map makeTest) tests

...it seems so obvious now. This is *exactly* what I was looking for; clearly I was over-thinking this. Thanks, Dean!

> Dean | http://www.haskell.org/pipermail/haskell-cafe/2012-August/102738.html | CC-MAIN-2013-48 | refinedweb | 267 | 81.02
RL-ARM User's Guide (MDK v4)
#include <RTL.h>
#include <rl_usb.h>
int32_t USBD_CDC_ACM_DataRead (
uint8_t *buf, /* Buffer to where data will be stored */
int32_t len ); /* Maximum number of bytes to be read */
The function USBD_CDC_ACM_DataRead reads data that was received over the Virtual COM Port from the intermediate buffer and stores it in buf. The parameter buf is a pointer to the location where the data will be stored. The parameter len specifies the maximum number of bytes to be read.
The function is part of the USB Device Function Driver layer of
the RL-USB Device Software Stack.
Return value: the number of bytes read.
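A short usage sketch, not taken from the manual: a task that polls the virtual COM port and echoes received bytes back. The buffer size, the companion USBD_CDC_ACM_DataSend call, and the os_dly_wait delay are assumptions about a typical RL-ARM setup.
#include <RTL.h>
#include <rl_usb.h>

__task void cdc_echo_task (void) {
  uint8_t buf[64];
  int32_t n;

  for (;;) {
    n = USBD_CDC_ACM_DataRead (buf, sizeof (buf));  /* read up to 64 bytes that have arrived   */
    if (n > 0) {
      USBD_CDC_ACM_DataSend (buf, n);               /* echo them back (assumed companion API)  */
    }
    os_dly_wait (1);                                /* let other RTX tasks run                  */
  }
}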
Hello,
This is my first post here, though I have been lurking on this site for a while. I have been using Microsoft Visual Studio 2012 Express edition for the past two weeks, so I'm still very new to it. I tried to compile AngelScript from source using the provided project for the VC++ 2012 edition, and I got these 5 errors (there used to be 10, but 5 of them were IntelliSense errors and those are gone now):
Error 1: error C3861: 'InitializeCriticalSection': identifier not found - as_thread.cpp, line 334, col 1, project angelscript
Error 2: error C3861: 'CreateSemaphore': identifier not found - as_thread.cpp, line 384, col 1, project angelscript
Error 3: error C3861: 'InitializeCriticalSection': identifier not found - as_thread.cpp, line 386, col 1, project angelscript
Error 4: error C3861: 'WaitForSingleObject': identifier not found - as_thread.cpp, line 413, col 1, project angelscript
Error 5: error C3861: 'WaitForSingleObject': identifier not found - as_thread.cpp, line 437, col 1, project angelscript
The errors are coming from the file "as_thread.cpp", but from my Google searches and my quick search of the GameDev forum, I haven't found any solutions that remove those errors so that AngelScript compiles successfully. I was hoping someone on these forums might have potential solutions or workarounds. I have tried adding
#include <synchapi.h>
to the code in the file "as_criticalsection.h", since it is running on Windows 8, and InitializeCriticalSection is supposed to be in that header file according to MSDN, but the errors persist. | http://www.gamedev.net/topic/639085-initializecriticalsection-identifier-not-found-error/?forceDownload=1&_k=880ea6a14ea49e853634fbdc5015a024 | CC-MAIN-2016-40 | refinedweb | 239 | 53.61 |