- How about a general repository for us to share filters? I'm thinking of something along the lines of the way Google offers widgets and gadgets and whatnot for iGoogle and Desktop. I've created some and gone out and found some out there that did things I didn't even realize were possible; why re-invent the wheel?
- I find the breadcrumb navigation somewhat lacking. There's a lot of times where I'm digging down deep to the keyword level, and I want to go back just one step, and I end up back about four levels, or worse yet, where I started.
- Much more help on setting up to track other types of campaigns (MSN and Yahoo PPC, online campaigns, offline campaigns)
- For those of us who are working with clients (as opposed to just our own accounts) it would be really great to be able to set up some kind of a demo account with some demo stats (or have one available) to show to clients. To be honest, they'd rather have me show them the benefits step by step with an account that looks like their traffic and their sample products than sit through your presentation. I can't show one client another's data, and they just don't seem to get how useful and important this information is unless they can see it and click on it and I can explain it in terms of their business.
Ok, that's for starters. I have more.
*forms?
Sorry, I may be slow today - but I'll need a bit more information to go on. ;)
AWA
Hope there can be bulk edit for location formats* in latitudes and longitudes.
I don't know if this is still the case, but it would be very helpful to make the information about why ads are disapproved, which can be found in the 'Tools > Disapproved Ads' section, directly available from the campaign.
[edited by: Seb7 at 8:20 am (utc) on May 13, 2009]
I like the CPA bids, but on some campaigns I would prefer a CPC bid (without losing the conversion optimisation).
............
Looking at where Google is putting the ads highlights areas where ads are given tens of thousands of impressions without a single click, and areas where I've had thousands of clicks without a single conversion.
Google's approach seems to be to slow down campaigns which don't produce a good CTR, but Google could easily optimise the campaign to produce better results themselves.
Would be nice to stop wasting ad impressions on URLs where it has become very obvious they're not going to produce a result, and increase impressions where high results are being returned.
[edited by: Seb7 at 8:40 am (utc) on May 13, 2009]
It's like we have campaigns specific to match type that share exactly the same ad groups and keywords and target the same geo-locations (blue-widgets-phrase/blue-widgets-exact).
Now we have the location list:
AA,-aaa AA,-bbb ...
We have to put them in one by one in the custom targeting option and check them manually in every campaign's settings.
Is that clearer?
Aside to whitneycia: when I asked '*forms?' above, I had totally missed your initial post in which you had the typo 'froms' in the last sentence. Which explains why I had no idea what you meant in your follow-on post in which you just said '*forms'. I thought that was a stand-alone post. ;)
Anyway, the Advertiser Feedback Report goes out in about 6 hours - and additional feedback will happily be included from anyone who cares to post it here.
Now, let's start on next week's report. :)
it would be great if we could exclude keywords/phrases within urls:
Having negative keywords such as games, photoshop, lyrics, sport etc, does not exclude them from webpages with those words in their urls in the content network.
same for tld's: for example .cn etc..
how about a "whitelist" of domains and webpages for an Inclusion list?
When running a Placement Report, I keep checking domains for placement quality that I had already checked in the past.
An option to exclude my whitelisted domains in the placement report would help out in the tedious weeding process of those thousand urls.
Same for whitelisting just a directory or specific webpage(s) but blocking the rest of the domain
Large informational sites such as about.com, encyclopedia-type sites and patent gatherers can be excellent to advertise on, but it would be easier to whitelist just that one directory, instead of blocking 32 directories for that site. Furthermore, not every site has their content organised in subdirectories or subdomains -> so I could just whitelist the 10 most interesting informational pages/patents etc..
When creating a new report, one can manually select campaigns you want to estimate from a list, but the list table is so narrow that I can not figure out which is the campaign I want (as a result of organized long campaign names :P). Please widen the table or add some function like mouseover to display the full name.
Thanks a lot.
So it would be possible to follow your account's Quality Score.
The report goes out late Thursday evening - so there's still plenty of time for more. :)
Best,
- In Conversion Optimizer, when setting the CPA, it shows me all the deleted ad groups and sets a minimum CPA; if I try to remove it I get an error message.
- When I click on "Dismiss This Message" I would like it to actually dismiss the message. I can't get rid of messages saying "Upcoming changes for creating Click-to-Play Video Ads" in my video campaigns.
This doesn't seem to work in the new interface if there are bids or urls attached.
Further, the old edit keywords box would let you see all the existing keywords and paste right over them. So if I have a list of 953 keywords (and I do), each with unique bids and landing pages, and I changed the bids, I could view them all, select them all, paste the new list, and be done. In the new interface, I have to first delete the existing keyword entries, and I can only do that 50 at a time (or edit them manually one at a time, YUCK!), and then pasting the updated ones in, with keyword-level bids and urls specified, doesn't look to be possible.
Help, the old method was vastly superior to what I see now.
In fact, in the new interface it's too much work to do things this way. I paused it.
Could you please consider making it like the old one? Or let us delete more than 50 at a time and provide a way to upload a file (or better yet, paste away like we used to be able to do!)?
a "whitelist" of domains and webpages for an Inclusion list on Campaign or maybe even Account level:
Add to that the possibility of giving weighted percentages per whitelisted domain or url.
So for my content network based advertising bids, the actual $-bids will be increased or lowered per domain/url:
white-widget.com 150%
black-widget.com 125%
purple-widget.com 50%
similar to the % bid adjustment mode in the time-of-day-based Ad scheduling.
kill it or at least give us the option somewhere to exclude today from the graphs.
but when i come back, the second metric is still selected, but it isn't visible on the graph.
to get it to show again, i have to select a different second metric (which i then switch back to my original choice).
[edited by: RhinoFish at 3:01 pm (utc) on May 21, 2009]
There are still five or six hours before I hit 'Send' on this week's Advertiser Feedback Report - and I'm happy to include late-breaking additions. I'll check back a time or two before it goes out.
By the way, last week's report was #300. Basically sent weekly for the past six years - with a great deal of the (verbatim) feedback coming directly from this excellent forum.
Thanks WebmasterWorld!
AWA
===================
PS: an example of advertiser feedback at work (including your feedback) was posted about by engine, in this thread:
Google AdWords Enhanced Search Query Reports [webmasterworld.com...]
I'll check back one more time just before sending.
And thanks again for all the feedback this week. Much appreciated.
We have a campaign that targets France and the campaign targets all 41 languages. We do this because there are always some people whose settings are different from the host country's. For example, English is a common search setting all over the world, so there is a decent amount of traffic to be had by targeting English as well as the native language of the targeted country.
A real example. I'm targeting France and Spain, and my targeted languages are English, French, and Spanish. When I hold my cursor over the tool, it gives me the diagnostics results for "keyword" in "Paris, France" with language setting "English." Not very useful. What it should do is run the diagnostics with French. This is a more useful diagnostic.
[edited by: Soze at 7:18 am (utc) on May 22, 2009]
* For the Networks view, the text for the sites in list should be clickable links allowing the advertiser to easily open the site in another window to investigate it.
* The feature from the previous interface allowing one to look at campaign and adgroup results by either search or content network should be added to the new interface.
* There is a scrolling bug that causes several lines to be passed by when scrolling down. (In Firefox; other browsers not tested.)
* The graph function has a bug causing it to display metrics other than what was specified. It only happens when opening the page, not when changing metrics.
i REALLY need that feature, please put it back.
not having it seems to be causing these other problems.
right now i'm clicking back to the old interface and pasting into that old edit box and that works fine. and my keywords i just processed have appended keyword urls and keywords bids.
when i switch back to the new interface i see several things wrong with my entered, existing keywords...
(1) the count is wrong (because the ad group negative keywords disappear in the new interface)
(2) if i filter my existing keywords by match type, negative isn't a choice. i do realize that negatives also have exact/phrase/broad types to them, but that doesn't show them either. further, forgetting the negative keywords in this ad group for a second, if i filter non-neg keywords by either exact, phrase or broad match type, it finds no keywords. perhaps it chokes on the appended keyword urls and keyword bids (which i'm begging you to put back into the edit box process, and to make the interface understand the appended data information).
however, the filtering on the non-negative keywords by match type remains an issue though.
and did i mention i miss the ability to paste all ad group keywords at once, with appended keyword urls and keyword bids? :-)
[edited by: RhinoFish at 7:47 pm (utc) on May 22, 2009]
It feels like the data window is sooooo small (and we're primarily in the interface to access data!).
I hope the analytical folks within G do a comparison of the old interface, versus the new, and look at the 2D data area that contains data, as well as volume of data presented. An above the fold analysis, as well as full page, should be an interesting, compelling, additional way to judge the new interface's efficacy.
when the new interface shows you the edit box for the new ad, the dest url it shows has converted characters like "&" to "&amp;", which confuses the user. if G wants to internally convert urls in this way, that's cool of course. but showing us the url that's been modified will lead to questions and mistakes, in my opinion. so either force url character conversion upon entry or hide it all in the background - but be consistent about it please.
this is true in both the new and old interface. suggest during the copy process, you add an option to also copy these things. at the campaign level, users are thinking about broad conceptual issues that do very likely apply to all campaigns. a campaign level neg or exclusion might, for example, be a competitor's trademark or site that you've chosen to avoid in all things. have the analysts look at the nature of the c-level negs and exclusions - i think G should make them copy over too, by default.
1) filter the sites it finds using our entered campaign level exclusions. it takes time to inspect each site and when i've made choices about types of sites and then am previewing placements and up comes some near naked women smoking from a bong on the home page of a suggested placement domain, i really feel like my time has been robbed.
2) when looking at the list of suggested placements, when there are subplacements and you drill down, it jumps you out of your list and when you return (after inspecting the subsections of a particular domain), your place in the higher level list is lost. please make that work better - i often avoid the valuable, better targeted subsections because it's a real pain to view/preview them.
3) when you click a site to go preview it, a google page opens with very little information on it, then you click through to the actual domain from there to preview the site. either ditch that intermediate step (it's a time robber also) so we can jump directly to the suggested site -OR- add more valuable information to that intermediate g-hosted webpage. some ideas for more data there would be:
+title & meta keyword tag data from the site's home page (to make my previewing more effective, often a domain name doesn't tell the story)
+thumbnail screenshot of the site's home page (visual recognition can speed reviews, but make it load fast)
+average load time [so i can avoid the sites that get suggested that take forever to load (and this should flag your click fraud department, i mean, what human beings browse on sites that take more than 30 seconds to load? boot them from your network!)]
+demographics of the suggested site's visitors - sex, age, maps of where their visitors are from
+some kind of smart pricing indicator that tells me about the overall conversion quality of a site's traffic. I know you don't want to fork over a number - it may not be relevant to what i'm advertising also, but assign a tag to indicate that suggested sites fall into the upper third of content sites regarding aggregate G-tracked conversions so I know it's a quality source of content. And hey, since we're doing this, put an option in there for me to bump up from my default placement bid... yeah, i'm thinking of both of our needs here G. :-)
4) space is tight in this screen, consider ditching the mini-window that the placement tool is now framed within, in the new interface.
If you read this far, send me a fridge man, i felt jipped this past christmas. :-)
c4fcc7f
ede5bf3
@@ -3,6 +3,7 @@
 import ast
 import fnmatch
 import hashlib
+import itertools
 import json
 import logging
 import os
@@ -6608,8 +6609,17 @@
                 builds.append(binfo)
                 seen_pkg[binfo['name']] = 1
         else:
-            tagged = session.listTagged(args[0])
+            # find all pkg's builds in tag
+            pkgs = set([koji.parse_NVR(nvr)['name'] for nvr in args[1:]])
+            tagged = []
+            with session.multicall() as m:
+                for pkg in pkgs:
+                    tagged.append(m.listTagged(args[0], package=pkg))
+            # flatten
+            tagged = list(itertools.chain(*[t.result for t in tagged]))
             idx = dict([(b['nvr'], b) for b in tagged])
+
+            # check exact builds
             builds = []
             for nvr in args[1:]:
                 binfo = idx.get(nvr)
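For readers skimming the diff, the batching-and-flatten pattern it adds can be sketched standalone. Note that `FakeResult` below is only a stand-in for the deferred result objects a koji multicall yields (the real client returns its own wrapper type), and the NVR values are made up:

```python
import itertools

# Stand-in for the deferred result a multicall hands back: the real
# koji client resolves each call lazily and exposes the value on .result.
class FakeResult:
    def __init__(self, result):
        self.result = result

# One batch of tagged-build dicts per package, as listTagged(tag, package=pkg)
# would return them.
batches = [
    FakeResult([{'nvr': 'foo-1.0-1'}, {'nvr': 'foo-1.1-1'}]),
    FakeResult([{'nvr': 'bar-2.0-1'}]),
]

# Flatten the per-package lists into one list, then index builds by NVR,
# mirroring the two lines the diff adds after the multicall block.
tagged = list(itertools.chain(*[b.result for b in batches]))
idx = dict([(b['nvr'], b) for b in tagged])

print(sorted(idx))  # → ['bar-2.0-1', 'foo-1.0-1', 'foo-1.1-1']
```

The point of the batching is to send one round trip to the hub for all the `listTagged` calls instead of one call per package.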
Fixes:
:thumbsup:
rebased onto ede5bf3
2 new commits added
fix chain iteration
Don't use listTagged(tag, *) for untag-build
works for me.
pretty please pagure-ci rebuild
@tkopecek what's missing here? Can we target it for 1.22?
Yes, it is scheduled for 1.22 (see #2037, version are linked to issues, so it is not visible in PRs)
Well this is certainly better, but I find I'm frustrated with the api limitations here. We're still querying irrelevant data, just less of it. Unfortunately, I don't see any way that is much better without changing the hub. We could call listTags for each build instead, but that is also full of irrelevant data. I guess we could use queryHistory calls, but that seems obscure.
Anyway... :thumbsup:
Metadata Update from @tkopecek:
- Pull-request tagged with: testing-ready
Commit ef0730f fixes this pull-request
Pull-Request has been merged by tkopecek
Fixes:
I found that there is still a lack of documentation for the "USDZ Converter", so I might start some kind of thread talking about it. So far I've found that developer 'Ash' out there has been really helpful, and we are still discovering USDZ features, gotchas and limitations along the way. There is this USD Interest group at Google Groups that talks more specifically about USD.
A bit of information about me:
I am an independent developer, mostly using Blender 3D and Xcode only. I have not touched Unity or Unreal. I have old Maya and Houdini versions, but installing USD plugins seems non-trivial. I did manage to get some USD tools on my Mac, but it is not optimal since I do not have NVidia graphics cards. However, USDZ Converter and the intermediate USDA file it generates are pretty helpful.
A couple of interesting things:
- Make sure your 3D mesh has a UV layout; without UV, the PBR material will simply display black on iOS devices, although it might show OK on macOS when using 3D or AR Quick Look.
- USDZ makes PBR materials and shader linking quite easy
- When a mesh is too complex or the USDZ is too big, or an animation has too many objects, the USDZ will not display on iOS devices. It might work OK on macOS. There is no warning about exceeding RAM on iOS.
- Watch out for mapping; do not use 8K image textures for now.
- Animation seems to be supported via Alembic export, and works for Transform only. No vertex animation, no bones, no blendshapes yet.
- USDZ can be easily imported into Xcode for 3D SceneKit. I tested it using a basic AR app.
- There is this Model I/O framework inside Xcode that can be used to unpack USDZ, it seems, but I have not gone that far.
- Procedural animation can be embedded into USDZ; check the USD example of the Spinning Top animation. It's pretty cool for turntables.
I am currently investigating:
- Materials that are not PBR but still able to display color: is this possible? Often I just need a material with a simple color.
- Can we have Lights inside USDZ?
Re: Let's discuss USDZ Converter in details...YojimboMaster Aug 9, 2018 1:39 AM (in response to YojimboMaster)
Also I am wondering whether we can:
- Make Double Sided per Mesh
- Make Subdivision per mesh
- Combining USDZ via USDA
Re: Let's discuss USDZ Converter in details...YojimboMaster Aug 9, 2018 1:42 AM (in response to YojimboMaster)
First of all, to use "USDZ Converter" ensure you have the latest Xcode 10 and the Command Line Tools installed, so you can run this inside Terminal:
xcrun usdz_converter <obj, alembic, usda you want to convert> <usdz
-specularColor r g b Floating point values 0.0 ... 1.0
-useSpecularWorkflow i 0 for false, non-zero for true
(*) Specify infield only with -v (Verbose) to display group information.
(*) '#' in the first character position of a line in a command file interprets the line as a comment.
Re: Let's discuss USDZ Converter in details...YojimboMaster Aug 9, 2018 1:58 AM (in response to YojimboMaster)
usdz_converter has an interesting feature:
-f filePath Read commands from a file.
I am new to this hyphen-flag command style, using it to "override" materials and assign shader maps. But it's really handy after all.
We can apparently supply multi-line commands like this (I am using Sublime Text); we don't even need to worry about using a backslash to continue lines. This is cleaner and easier to work with.
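For illustration, a hypothetical command file for the -f option might look like the following. The flag names are taken from the converter's help text quoted in this thread; the group name and texture paths are placeholders:

```
# lines starting with '#' are treated as comments
-g Mesh
-color_map textures/albedo.png
-roughness_map textures/roughness.png
```

You would then pass the file with something like `-f overrides.txt` instead of typing the flags inline.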
Re: Let's discuss USDZ Converter in details...YojimboMaster Aug 9, 2018 2:04 AM (in response to YojimboMaster)
PBR Shader Map texture can have "Alpha" I tested it works.
But I'm not sure how to enable "Double Sided". Also I wonder what these below are for, because they seem to have no effect: (color_default, normal_default, emissive_default, etc.)
I am using XCode 10 Beta 5 and waiting for Beta 6.
Re: Let's discuss USDZ Converter in details...JeremyRaven Oct 10, 2018 8:54 PM (in response to YojimboMaster)
Thanks, nice info. I'm following this thread.
Some more stuff from Stack Overflow in case it's needed.
Re: Let's discuss USDZ Converter in details...DeanWray Dec 6, 2018 7:34 AM (in response to YojimboMaster)
Wondering about multi-UV support; this is the main issue for me currently with USDZ. I have a SCN pipeline now with an OSX tool I created that allows for 8 UV maps per mesh... for optimal realtime assets this is essential. I wondered if you had attempted it yet?
Re: Let's discuss USDZ Converter in details...benkoch Oct 17, 2018 8:06 AM (in response to YojimboMaster)
Hi guys, I'm struggling to get the converter working. Whenever I convert, for example, an obj file, the converter tells me that it succeeded, but the created usdz file is not showing up. What am I missing?
Re: Let's discuss USDZ Converter in details...JaydenIrwin Oct 30, 2018 6:26 PM (in response to YojimboMaster)
Has the converter changed in Xcode 10? I can't add textures anymore. It says the converter is now version 1.008.
xcrun usdz_converter obj_path usdz_path -g Mesh -color_map texture_path -roughness_map roughness_path
Re: Let's discuss USDZ Converter in details...Wixted Nov 26, 2018 2:49 PM (in response to YojimboMaster)
Hi Yojimbo,
We are having an issue with a USDZ file displaying on older devices. You can view the file here:. It's the "Michael Murphy" sofa. It's 33k poly with 2k textures yet still doesn't display on older devices. It was made using Maya and substance painter.
Does anyone know why this will not work?
Thanks,
Allen
Re: Let's discuss USDZ Converter in details...funnest Dec 3, 2018 2:39 PM (in response to Wixted)
Wixted, could you post more specifics of the problem you're having? (And/or file a bug)
Which devices? Which OS version? What happens?
FWIW, the Michael Murphy sofa loaded ok for me on an iPhone 7 with iOS 12.
Re: Let's discuss USDZ Converter in details...dddaaaaaavvvveeee Nov 28, 2018 4:59 PM (in response to YojimboMaster)
Great to find some discussion on USDZ!
I've found AR Quicklook to be fantastic, I've been testing it out in Safari using the rel="ar" attribute.
It feels a bit unstable still... about 5% of the time it won't load the object and displays this message "Preview Universal Scene Description (Mobile)". If you cancel out and reload, works fine. Have tried with different complexities, doesn't seem to make much difference. At first I thought maybe the webserver was causing issues, but have replicated this using local files on the device.
Anyone else had this issue? I'd be keen to know if it is something I can fix.
Other bits of information I've discovered through testing that people might find interesting:
- AR Quicklook has a draw distance, so if you move an object too far away or make it huge, it will clip.
- It also automatically 'grounds' the object, so if you want something to float, you need to add a small object below to hack this feature.
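For anyone reproducing this setup: the rel="ar" trigger mentioned above is an anchor wrapping a single image, per Apple's documented AR Quick Look pattern; the file paths here are placeholders:

```html
<!-- AR Quick Look trigger: Safari on iOS opens the usdz in AR
     instead of navigating to it; paths are placeholders. -->
<a rel="ar" href="/models/chair.usdz">
  <img src="/models/chair-preview.jpg" alt="Chair preview">
</a>
```

On non-supporting browsers the link simply downloads or navigates to the file, so a fallback page is worth considering.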
Re: Let's discuss USDZ Converter in details...zAppledot Apr 4, 2019 3:18 PM (in response to dddaaaaaavvvveeee)
Has anyone found a way to adjust the clipping plane range inside AR Quicklook? I am running into the clipping plane with the large objects that I am testing with. If anyone has any advice on this it would be appreciated. Thanks!
Re: Let's discuss USDZ Converter in details...bmguerreiro Jun 11, 2019 6:33 AM (in response to dddaaaaaavvvveeee)
Has anyone found more information about what causes the "Preview Universal Scene Description (Mobile)" error?
This is still happening on the newest iOS 12.3.1.
Can someone please help me with this? You can reach me at bruno@epigraph.us.
Thank you!
Re: Let's discuss USDZ Converter in details...scanta Jan 18, 2019 4:30 AM (in response to YojimboMaster)
Hi Guys,
The USDZ format supported by Apple is really cool. For Quick Look we only need to provide the file and the work is done. I succeeded in showing the model with animation in usdz format. The only drawback is that the file size is too heavy. Could someone help me reduce the size of the usdz file? The fbx or dae format is quite small (i.e. 2-3 MB) while the usdz file is 60-100 MB.
Please help me bagoria2011@gmail.com.
Re: Let's discuss USDZ Converter in details...MaxShining Jan 30, 2019 2:26 AM (in response to YojimboMaster)
Hey Yojimbo,
I was wondering if you could shed more light on the results regarding using materials other than the standard PBR one.
I only have an ambient texture (applied through the -color_map command), but once converted to USDZ, all objects look way too dark.
You mentioned you use the -f feature to access a file and "override" the materials.
Maybe that solved the issue for you, but I could not find any information on this anywhere..
Best regards,
André
Re: Let's discuss USDZ Converter in details...Sergio Borges - 3D Work Mar 21, 2019 1:35 PM (in response to YojimboMaster)
I'm trying to set the materials through -m <nameOfTheColor> <path> but it is not working; my line of code is like this:
xcrun usdz_converter /Users/mac013dw/Desktop/Teste\ cor/export/testeobj.obj /Users/mac013dw/Desktop/Teste\ cor/export/testeobjtestecor.usdz -v -m /testeobj/Materials/Jeep_Cor_229_31_31_0 -color_map /Users/mac013dw/Desktop/Teste\ cor/export/vermelho.jpg -m /testeobj/Materials/Jeep_Cor_221_124_15_0 -color_map /Users/mac013dw/Desktop/Teste\ cor/export/verde.jpg -m /testeobj/Materials/Jeep_Cor_115_148_64_0 -color_map /Users/mac013dw/Desktop/Teste\ cor/export/laranja.jpg
can someone help me with that?
Re: Let's discuss USDZ Converter in details...Sergio Borges - 3D Work Mar 22, 2019 5:14 AM (in response to Sergio Borges - 3D Work)
Something that I found out: using the code I posted above, when I preview the usdz file on the PC it appears without the colors I assigned to each material, but on the iPhone it shows correctly.
Re: Let's discuss USDZ Converter in details...julian3003 Jun 3, 2019 11:51 PM (in response to YojimboMaster)
I am also wondering if there is an option to disable the 3D tab in the Quick Look view. I want the user to jump directly to AR mode (scanning the environment) without previously inspecting the object in 3D.
Does anybody have previous experience with this?
Cheers,
Julian
Re: Let's discuss USDZ Converter in details...jlv Jun 9, 2019 5:09 AM (in response to YojimboMaster)
Thanks for the info, this will come in handy once I get it up and running. I'm a 3D artist but so far I can't get usdzconvert to run at all.
In terminal I cd to the usdzconvert folder and then type "usdzconvert -h". I get:
"-bash: usdzconvert: command not found"
edit, now I get:
Error: failed to import pxr module. Please add path to USD Python bindings to your PYTHONPATH.
edit2:
ok changed the shell to zsh, whatever that is, now it seems to work. I think Apple should make this process A LOT easier. Also why is there no way to simply export to USDZ from, say, 3ds max?
So it works now. But when I try to convert my FBX model, it says that my file has an unsupported file extension. I think I'll leave USDZ alone for now until it's more usable. pity.
I mean if you can't export FBX, what other file format could you use that supports embedded textures, animations and materials?
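As a hedged aside on the "add path to USD Python bindings to your PYTHONPATH" error above: one way to satisfy it from inside Python rather than in the shell profile; the bindings path below is a placeholder for wherever the usdpython release was unpacked:

```python
import sys

# Placeholder location: wherever the usdpython download keeps its
# precompiled USD Python bindings (USD/lib/python in the release).
usd_bindings = "/path/to/usdpython/USD/lib/python"

# Prepend so these bindings win over anything else on the path.
if usd_bindings not in sys.path:
    sys.path.insert(0, usd_bindings)

# After this, `from pxr import Usd` should resolve, provided the
# bindings were actually built for your OS and Python version
# (see the "invalid ELF header" report later in this thread for
# what happens when they were not).
```

Exporting `PYTHONPATH` in the shell achieves the same thing without touching the scripts.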
Re: Let's discuss USDZ Converter in details...gchiste Jun 12, 2019 7:55 PM (in response to jlv)
The usdz Tools found at, currently support .obj and .gltf
Re: Let's discuss USDZ Converter in details...Giln Jun 19, 2019 1:56 PM (in response to gchiste)
Hello
In WWDC 2019 Session 602 some slides say usdzconvert supports other formats like fbx, abc etc...
When will we be able to see that version?
Also tried exporting a SCN file with animations from Xcode 11 Beta 2, but nothing happens. No exported file, no error, no logs...
Anyone got better success?
Thanks
Re: Let's discuss USDZ Converter in details...funnest Jun 21, 2019 9:21 AM (in response to Giln)
It's available from -- see the 'usdz Tools' link at the bottom of the page.
Re: Let's discuss USDZ Converter in details...glitche123 Jul 13, 2019 5:44 AM (in response to gchiste)
Are they supported on other platforms windows/linux?
I am running into some usdARKitChecker issues when trying to create a usdz on Windows; the USDZ gets created, but the textures do not apply and it appears as a plain white mesh, just like a USD.
Re: Let's discuss USDZ Converter in details...jackhhchan Jul 16, 2019 9:38 PM (in response to glitche123)
Hi Glitche123,
were you able to get usdzconvert to run on Linux?
I keep getting this error...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/xxx/Downloads/usdpython/usdpython/usdzconvert/usdUtils.py", line 12, in <module>
from pxr import *
File "/home/xxx/Downloads/usdpython/usdpython/USD/lib/python/pxr/Tf/__init__.py", line 85, in <module>
from . import _tf
ImportError: /home/xxx/Downloads/usdpython/usdpython/USD/lib/python/pxr/Tf/_tf.so: invalid ELF header
I've tried to run this on ubuntu 18.04.2 LTS, and a docker instance of ubuntu 14.04.6 LTS.
Edit:
I've tried it on Windows as well with the same error. How did you get it to import so that usdARKitChecker runs?
Thanks!
Re: Let's discuss USDZ Converter in details...glitche123 Aug 1, 2019 1:12 AM (in response to jackhhchan)
Sorry for the late response.
What i did was
- Build USD on Windows
- Set environment variables accordingly.
- Run usdz_converter
Re: Let's discuss USDZ Converter in details...VForsmann Sep 26, 2019 2:57 PM (in response to jackhhchan)
Hey jackhhchan!
I got the same error, having made a major mistake. The USD pipeline used () is precompiled for macOS. So you have to build USD on your Linux OS (or Windows...) first (see README.md in the above GitHub repo). Copy the created USDPython/lib and replace USD/lib. After that you will be able to use usdzconvert on your OS.
Better late than never. :-)
08 September 2008 16:22 [Source: ICIS news]
LONDON (ICIS news)--UK acrylics producer Lucite has hired Deutsche Bank and Merrill Lynch to examine options of a possible sale or initial public offering (IPO) amid media reports over two $2bn (€1.4bn) takeover bids, it said on Monday.
Earlier the Financial Times reported that Lucite had received offers of around $2bn (€1.4bn) from both bidders.
The potential sale comes as Lucite prepares to launch a new 120,000 tonne/year methyl methacrylate (MMA) facility in
“We were clear at the time of the refinancing that, once Alpha was proven, we would examine the options of either an IPO or sale at the end of 2008 or 2009,” UK-headquartered Lucite said in a statement.
“We are very pleased with the progress made on Alpha. The project remains on plan and the plant will be operational by the end of 2008,” it said.
Initially developed by ICI, the process uses ethylene, carbon monoxide and methanol as raw materials instead of conventional materials such as acetone, hydrocyanic acid and isobutylene, saving 40% in construction costs, according to Lucite.
Lucite attempted an IPO or sale in 2006, but the process stalled because shareholders believed the offers on the table undervalued the business.
Lucite is 85% owned by private equity group Charterhouse, with
($1 = €0.70)
*Reminiscence solution in Clear category for I Love Python! by JimmyCarlos
def i_love_python():
    message = """
    I'm not great with words, so I'll try to keep this short.
    Programming came into my life at a very dark time after losing something very dear to me.
    It was a website called Project Euler that got me hooked.
    It has maths problems that are very challenging, and not knowing any programming,
    I used Excel to create tables with thousands of squares over 5MB large.
    I eventually started trying Lua after it was recommended as a good beginner's language.
    I studied hard, practised examples and did questions - but got frustrated over the fact that Lua,
    as well as most other programming languages can't accurately add long numbers.
    *
    I brought up this in real life to a friend who was an IT School Teacher who mentioned about Python.
    He sent me the 82 page workbook he uses to teach his teenage computer science class,
    and I start working away at it with any free time I had.
    The more Python I did and the more I practised, the more I fell in love with the language.
    Everything is so logical (with the exception of 0-based indexing!!!) and easy to use,
    that it became the only language I used.
    After finishing the book, I started looking online for more challenges to practise my coding,
    and checkIO is one of my favourites. What really sets checkIO apart from the others is the community,
    who are full of fantastic people.
    *
    All in all, I love Python!
    """
    print("".join(c for c in message if c not in {"\n", "\t"}).replace("*", "\n\n"))
    return message[-19:-5]
Aug. 7, 2018
Forum
Price
Global Activity
Jobs
ClassRoom Manager
Leaderboard
Coding games
Python programming for beginners | https://py.checkio.org/mission/i-love-python/publications/JimmyCarlos/python-3/reminiscence/share/e8751d4df144a743cd53576f3ab2843c/ | CC-MAIN-2019-51 | refinedweb | 298 | 72.36 |
BSP Porting¶
Introduction¶
The Apache Mynewt core repo contains support for several different boards. For each supported board, there is a Board
Support Package (BSP) package in the
hw/bsp directory. If there isn’t a BSP package for your hardware, then you will need to make one
yourself. This document describes the process of creating a BSP package from scratch.
While creating your BSP package, the following documents will probably come in handy:
The datasheet for the MCU you have chosen.
The schematic of your board.
The information on the CPU core within your MCU if it is not included in your MCU documentation.
This document is applicable to any hardware, but it will often make reference to a specific board as an example. Our example BSP has the following properties:
Name:
hw/bsp/myboard
MCU: Nordic nRF52
Download the BSP package template¶
We start by downloading a BSP package template. This template will serve as a good starting point for our new BSP.
Execute the
newt pkg new command, as below:
$ newt pkg new -t bsp hw/bsp/myboard Download package template for package type bsp. Package successfuly installed into /home/me/myproj/hw/bsp/myboard.
Our new package has the following file structure:
$ tree hw/bsp/myboard hw/bsp/myboard ├── README.md ├── boot-myboard.ld ├── bsp.yml ├── include │ └── myboard │ └── bsp.h ├── myboard.ld ├── myboard_debug.sh ├── myboard_download.sh ├── pkg.yml ├── src │ ├── hal_bsp.c │ └── sbrk.c └── syscfg.yml 3 directories, 11 files
We will be adding to this package throughout the remainder of this document. See Appendix A: BSP files for a full list of files typically found in a BSP package.
Create a set of Mynewt targets¶
We’ll need two targets to test our BSP as we go:
Boot loader
Application
A minimal application is best, since we are just interested in getting the BSP up and running. A good app for our purposes is blinky.
We create our targets with the following set of newt commands:
newt target create boot-myboard && newt target set boot-myboard app=@mcuboot/boot/mynewt \ bsp=hw/bsp/myboard \ build_profile=optimized newt target create blinky-myboard && newt target set blinky-myboard app=apps/blinky \ bsp=hw/bsp/myboard \ build_profile=debug
Which generates the following output:
Target targets/boot-myboard successfully created Target targets/boot-myboard successfully set target.app to @mcuboot/boot/mynewt Target targets/boot-myboard successfully set target.bsp to hw/bsp/myboard Target targets/boot-myboard successfully set target.build_profile to debug Target targets/blinky-myboard successfully created Target targets/blinky-myboard successfully set target.app to apps/blinky Target targets/blinky-myboard successfully set target.bsp to hw/bsp/myboard Target targets/blinky-myboard successfully set target.build_profile to debug
Fill in the
bsp.yml file¶
The template
hw/bsp/myboard/bsp.yml file is missing some values that need to be added. It also assumes certain
information that may not be appropriate for your BSP. We need to get this file into a usable state.
Missing fields are indicated by the presence of
XXX markers. Here are the first several lines of our
bsp.yml
file where all the incomplete fields are located:
bsp.arch: # XXX <MCU-architecture> bsp.compiler: # XXX <compiler-package> bsp.linkerscript: - 'hw/bsp/myboard/myboard.ld' # - XXX mcu-linker-script bsp.linkerscript.BOOT_LOADER.OVERWRITE: - 'hw/bsp/myboard/myboard/boot-myboard.ld' # - XXX mcu-linker-script
So we need to specify the following:
MCU architecture
Compiler package
MCU linker script
Our example BSP uses an nRF52 MCU, which implements the
cortex_m4 architecture. We use this information to fill in
the incomplete fields:
bsp.arch: cortex_m4 bsp.compiler: '@apache-mynewt-core/compiler/arm-none-eabi-m4' bsp.linkerscript: - 'hw/bsp/myboard/myboard.ld' - '@apache-mynewt-core/hw/mcu/nordic/nrf52xxx/nrf52.ld' bsp.linkerscript.BOOT_LOADER.OVERWRITE: - 'hw/bsp/myboard/boot-myboard.ld' - '@apache-mynewt-core/hw/mcu/nordic/nrf52xxx/nrf52.ld'
Naturally, these values must be adjusted accordingly for other MCU types.
Flash map¶
At the bottom of the
bsp.yml file is the flash map. The flash map partitions the BSP’s flash memory into sections
called areas. Flash areas are further categorized into two types: 1) system areas, and 2) user areas. These two area
types are defined below.
System areas
Used by Mynewt core components.
BSP support is mandatory in most cases.
Use reserved names.
User areas
Used by application code and supplementary libraries.
Identified by user-assigned names.
Have unique user-assigned numeric identifiers for access by C code.
The flash map in the template
bsp.yml file is suitable for an MCU with 512kB of internal flash. You may need to
adjust the area offsets and sizes if your BSP does not have 512kB of internal flash.
The system flash areas are briefly described below:
Add the MCU dependency to
pkg.yml¶
A package’s dependencies are listed in its
pkg.yml file. A BSP package always depends on its corresponding MCU
package, so let’s add that dependency to our BSP now. The
pkg.deps section of our
hw/bsp/myboard/pkg.yml file
currently looks like this:
pkg.deps: # - XXX <MCU-package> - '@apache-mynewt-core/kernel/os' - '@apache-mynewt-core/libc/baselibc'
Continuing with our example nRF52 BSP, we replace the marked line as follows:
pkg.deps: - '@apache-mynewt-core/hw/mcu/nordic/nrf52xxx' - '@apache-mynewt-core/kernel/os' - '@apache-mynewt-core/libc/baselibc'
Again, the particulars depend on the MCU that your BSP uses.
Check the BSP linker scripts¶
Linker scripts are a key component of the BSP package. They specify how code and data are arranged in the MCU’s memory. Our BSP package contains two linker scripts:
First, we will deal with the application linker script. You may have noticed that the
bsp.linkerscript item in
bsp.yml actually specifies two linker scripts:
BSP linker script (
hw/bsp/myboard.ld)
MCU linker script (
@apache-mynewt-core/hw/mcu/nordic/nrf52xxx/nrf52.ld)
Both linker scripts get used in combination when you build a Mynewt image. Typically, all the complexity is isolated to the MCU linker script, while the BSP linker script just contains minimal size and offset information. This makes the job of creating a BSPpackage much simpler.
Our
myboard.ld file has the following contents:
MEMORY { FLASH (rx) : ORIGIN = 0x00008000, LENGTH = 0x3a000 RAM (rwx) : ORIGIN = 0x20000000, LENGTH = 0x10000 } /* This linker script is used for images and thus contains an image header */ _imghdr_size = 0x20;
Our task is to ensure the offset (
ORIGIN) and size (
LENGTH) values are correct for the
FLASH and
RAM
regions. Note that the
FLASH region does not specify the board’s entire internal flash; it only describes the area
of the flash dedicated to containing the running Mynewt image. The bounds of the
FLASH region should match those of
the
FLASH_AREA_IMAGE_0 area in the BSP’s flash map.
The
_imghdr_size is always
0x20, so it can remain unchanged.
The second linker script,
boot-myboard.ld, is quite similar to the first. The important difference is the
FLASH
region: it describes the area of flash which contains the boot loader rather than an image. The bounds of this region
should match those of the
FLASH_AREA_BOOTLOADER area in the BSP’s flash map. For more information about the Mynewt
boot loader, see this page.
Copy the download and debug scripts¶
The newt command line tool uses a set of scripts to load and run Mynewt images. It is the BSP package that provides these scripts.
As with the linker scripts, most of the work done by the download and debug scripts is isolated to the MCU package. The
BSP scripts are quite simple, and you can likely get away with just copying them from another BSP. The template
myboard_debug.sh script indicates which BSP to copy from:
#!/bin/sh # This script attaches a gdb session to a Mynewt image running on your BSP. # If your BSP uses JLink, a good example script to copy is: # repos/apache-mynewt-core/hw/bsp/nordic_pca10040/nordic_pca10040_debug.sh # # If your BSP uses OpenOCD, a good example script to copy is: # repos/apache-mynewt-core/hw/bsp/rb-nano2/rb-nano2_debug.sh
Our example Nordic nRF52 BSP uses JLink, so we will copy the Nordic PCA10040 (nRF52 DK) BSP’s scripts:
cp repos/apache-mynewt-core/hw/bsp/nordic_pca10040/nordic_pca10040_debug.sh hw/bsp/myboard/myboard_debug.sh cp repos/apache-mynewt-core/hw/bsp/nordic_pca10040/nordic_pca10040_download.sh hw/bsp/myboard/myboard_download.sh
Fill in BSP functions and defines¶
There are a few particulars missing from the BSP’s C code. These areas are marked with
XXX comments to make them
easier to spot. The missing pieces are summarized in the table below:
For our nRF52 BSP, we modify these files as follows:
src/hal_bsp.c:
#include "mcu/nrf52_hal.h"
const struct hal_flash * hal_bsp_flash_dev(uint8_t id) { switch (id) { case 0: /* MCU internal flash. */ return &nrf52k_flash_dev; default: /* External flash. Assume not present in this BSP. */ return NULL; } }
include/bsp/bsp.h:
#define RAM_SIZE 0x10000 /* Put additional BSP definitions here. */ #define LED_BLINK_PIN 17
Add startup code¶
Now we need to add the BSP’s assembly startup code. Among other things, this is the code that gets executed immediately on power up, before the Mynewt OS is running. This code must perform a few basic tasks:
Assign labels to memory region boundaries.
Define some interrupt request handlers.
Define the
Reset_Handlerfunction, which:
Zeroes the
.bsssection.
Copies static data from the image to the
.datasection.
Starts the Mynewt OS.
This file is named according to the following pattern:
hw/bsp/myboard/src/arch/<ARCH>/gcc_startup_<MCU>.s
The best approach for creating this file is to copy from other BSPs. If there is another BSP that uses the same MCU, you might be able to use most or all of its startup file.
For our example BSP, we’ll just copy the Nordic PCA10040 (nRF52 DK) BSP’s startup code:
$ mkdir -p hw/bsp/myboard/src/arch/cortex_m4 $ cp repos/apache-mynewt-core/hw/bsp/nordic_pca10040/src/arch/cortex_m4/gcc_startup_nrf52.s hw/bsp/myboard/src/arch/cortex_m4/
Satisfy MCU requirements¶
The MCU package probably requires some build-time configuration. Typically, it is the BSP which provides this configuration. Completing this step will likely involve some trial and error as each unmet requirement gets reported as a build error.
Our example nRF52 BSP requires the following changes:
Macro indicating MCU type. We add this to our BSP’s
pkg.ymlfile:
pkg.cflags: - '-DNRF52'
Enable exactly one low-frequency timer setting in our BSP’s
syscfg.ymlfile. This is required by the nRF51 and nRF52 MCU packages:
# Settings this BSP overrides. syscfg.vals: XTAL_32768: 1
Test it¶
Now it’s finally time to test the BSP package. Build and load your boot and blinky targets as follows:
$ newt build boot-myboard $ newt load boot-myboard $ newt run blinky-myboard 0
If everything is correct, the blinky app should successfully build, and you should be presented with a gdb prompt. Type
c <enter> (continue) to see your board’s LED blink.
Appendix A: BSP files¶
The table below lists the files required by all BSP packages. The naming scheme assumes a BSP called “myboard”.
A BSP can also contain the following optional files: | https://mynewt.apache.org/latest/os/core_os/porting/port_bsp.html | CC-MAIN-2022-05 | refinedweb | 1,866 | 50.33 |
React Internationalization – How To
First of all, let's define some vocabulary. "Internationalization" is a long word, with at least two widely used abbreviations: "intl" and "i18n". "Localization" can be shortened to "l10n".
Internationalization can be generally broken down into the following challenges:
- detecting the user’s locale;
- translating UI elements, titles and hints;
- serving locale-specific content such as dates, currencies and numbers.
> Note: In this article, I am going to focus only on the front-end part. We'll develop a simple universal React application with full internationalization support.
Let’s use my boilerplate repository as a starting point. Here we have the Express web server for server-side rendering, webpack for building client-side JavaScript, Babel for translating modern JavaScript to ES5, and React for the UI implementation. We’ll use better-npm-run to write OS-agnostic scripts, nodemon to run a web server in the development environment and webpack-dev-server to serve assets.
Our entry point to the server application is `server.js`. Here, we are loading Babel and babel-polyfill to write the rest of the server code in modern JavaScript. Server-side business logic is implemented in `src/server.jsx`. Here, we are setting up an Express web server, which listens on port `3001`. For rendering, we are using a very simple component from `components/App.jsx`, which is also the entry point of the universal part of the application.
Our entry point to the client-side JavaScript is `src/client.jsx`. Here, we mount the root component `components/App.jsx` to the placeholder `react-view` in the HTML markup provided by the Express web server.
So, clone the repository, run `npm install`, and execute nodemon and webpack-dev-server in two console tabs simultaneously.
In the first console tab:
git clone
cd smashing-react-i18n
npm install
npm run nodemon
And in the second console tab:
cd smashing-react-i18n
npm run webpack-devserver
A website should become available at `localhost:3001`. Open your favorite browser and try it out.
We are ready to roll!
1. Detecting The User’s Locale
There are two possible solutions to this requirement. For some reason, most popular websites, including Skype's and the NBA's, use Geo IP to find the user's location and, based on that, to guess the user's language. This approach is not only expensive in terms of implementation, but also not really accurate. Nowadays, people travel a lot, which means that a location doesn't necessarily represent the user's desired locale. Instead, we'll use the second solution and process the HTTP header `Accept-Language` on the server side and extract the user's language preferences based on their system's language settings. This header is sent by every modern browser within a page request.
Accept-Language Request Header
The `Accept-Language` request header provides the set of natural languages that are preferred as a response to the request, each optionally weighted with a quality value. For example, `Accept-Language: da, en-gb;q=0.8, en;q=0.7` means: "I prefer Danish, but British English or other kinds of English will also do."
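To make the quality-value ranking concrete, here is a minimal, hand-rolled sketch of the negotiation that a library such as the `accept-language` package (used below) performs for us. It is illustrative only: it handles just exact and primary-subtag matches, and `negotiateLocale` is a name invented for this sketch.

```javascript
// Toy Accept-Language negotiation: parse the header, rank entries by their
// q-value (default 1) and return the first tag we support, falling back
// to a default locale. Real libraries handle many more edge cases.
function negotiateLocale(header, supported, fallback) {
  const ranked = (header || '')
    .split(',')
    .map((part) => {
      const [tag, q] = part.trim().split(';q=');
      return { tag: tag.toLowerCase(), q: q ? parseFloat(q) : 1 };
    })
    .filter(({ tag }) => tag.length > 0)
    .sort((a, b) => b.q - a.q);

  const match = ranked.find(({ tag }) =>
    supported.includes(tag) || supported.includes(tag.split('-')[0]));

  if (!match) {
    return fallback;
  }
  return supported.includes(match.tag) ? match.tag : match.tag.split('-')[0];
}

negotiateLocale('da, en-gb;q=0.8, en;q=0.7', ['en', 'ru'], 'en'); // → 'en'
```

With the header from the example above and `['en', 'ru']` as the supported locales, the Danish entry is skipped and `en-gb` falls back to its primary subtag, `en`.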
(It is worth mentioning that this method is still imperfect. For example, a user might visit your website from an Internet cafe or a public computer. To resolve this, always implement a widget with which the user can change the language intuitively and that they can easily locate within a few seconds.)
Implementing Detection Of User’s Locale
Here is a code example for a Node.js Express web server. We are using the `accept-language` package, which extracts locales from HTTP headers and finds the most relevant among the ones supported by your website. If none are found, then you'd fall back to the website's default locale. For returning users, we will check the cookie's value instead.
Let’s start by installing the packages:
npm install --save accept-language
npm install --save cookie-parser js-cookie
And in `src/server.jsx`, we'd have this:

import cookieParser from 'cookie-parser';
import acceptLanguage from 'accept-language';

acceptLanguage.languages(['en', 'ru']);

const app = express();
app.use(cookieParser());

function detectLocale(req) {
  const cookieLocale = req.cookies.locale;

  return acceptLanguage.get(cookieLocale || req.headers['accept-language']) || 'en';
}

…

app.use((req, res) => {
  const locale = detectLocale(req);
  const componentHTML = ReactDom.renderToString(<App />);

  // Remember the locale for one year (Express expects maxAge in milliseconds).
  res.cookie('locale', locale, { maxAge: 365 * 24 * 3600 * 1000 });

  return res.end(renderHTML(componentHTML));
});
Here, we are importing the `accept-language` package and setting up English and Russian locales as supported. We are also implementing the `detectLocale` function, which fetches a locale value from a cookie; if none is found, then the HTTP `Accept-Language` header is processed. Finally, we are falling back to the default locale (`en` in our example). After the request is processed, we add the HTTP header `Set-Cookie` for the detected locale in the response. This value will be used for all subsequent requests.
2. Translating UI Elements, Titles And Hints
I am going to use the React Intl package for this task. It is the most popular and battle-tested i18n library for React apps. However, all such libraries use the same approach: they provide "higher-order components" (a design pattern from functional programming, widely used in React) which inject internationalization functions for handling messages, dates, numbers and currencies via React's context features.
First, we have to set up the internationalization provider. To do so, we will slightly change the `src/server.jsx` and `src/client.jsx` files.
npm install --save react-intl
Here is `src/server.jsx`:

import { IntlProvider } from 'react-intl';

…

--- const componentHTML = ReactDom.renderToString(<App />);
const componentHTML = ReactDom.renderToString(
  <IntlProvider locale={locale}>
    <App />
  </IntlProvider>
);

…
And here is `src/client.jsx`:

import { IntlProvider } from 'react-intl';
import Cookie from 'js-cookie';

const locale = Cookie.get('locale') || 'en';

…

--- ReactDOM.render(<App />, document.getElementById('react-view'));
ReactDOM.render(
  <IntlProvider locale={locale}>
    <App />
  </IntlProvider>,
  document.getElementById('react-view')
);
So, now all `IntlProvider` child components will have access to internationalization functions. Let's add some translated text to our application and a button to change the locale (for testing purposes). We have two options: either the `FormattedMessage` component or the `formatMessage` function. The difference is that the component will be wrapped in a `span` tag, which is fine for text but not suitable for HTML attribute values such as `alt` and `title`. Let's try them both!
Here is our `src/components/App.jsx` file:

import { FormattedMessage } from 'react-intl';

…

--- <h1>Hello World!</h1>
<h1><FormattedMessage
  id="app.hello_world"
  defaultMessage="Hello World!"
  description="Hello world header greeting"
/></h1>
Please note that the `id` attribute should be unique for the whole application, so it makes sense to develop some rules for naming your messages. I prefer to follow the format `componentName.someUniqueIdWithInComponent`. The `defaultMessage` value will be used for your application's default locale, and the `description` attribute gives some context to the translator.
Restart nodemon and refresh the page in your browser. You should still see the "Hello World" message. But if you open the page in the developer tools, you will see that the text is now inside `span` tags. In this case, it isn't an issue, but sometimes we would prefer to get just the text, without any additional tags. To do so, we need direct access to the internationalization object provided by React Intl.
Let's go back to `src/components/App.jsx`:

--- import { FormattedMessage } from 'react-intl';
import { FormattedMessage, intlShape, injectIntl, defineMessages } from 'react-intl';

const propTypes = {
  intl: intlShape.isRequired,
};

const messages = defineMessages({
  helloWorld2: {
    id: 'app.hello_world2',
    defaultMessage: 'Hello World 2!',
  },
});

--- export default class extends Component {
class App extends Component {
  render() {
    return (
      <div className="App">
        <h1>
          <FormattedMessage
            id="app.hello_world"
            defaultMessage="Hello World!"
            description="Hello world header greeting"
          />
        </h1>
        <h1>{this.props.intl.formatMessage(messages.helloWorld2)}</h1>
      </div>
    );
  }
}

App.propTypes = propTypes;

export default injectIntl(App);
We've had to write a lot more code. First, we had to use `injectIntl`, which wraps our app component and injects the `intl` object. To get the translated message, we had to call the `formatMessage` method and pass a `message` object as a parameter. This `message` object must have unique `id` and `defaultMessage` attributes. We use `defineMessages` from React Intl to define such objects.
The best thing about React Intl is its ecosystem. Let's add babel-plugin-react-intl to our project, which will extract `FormattedMessage`s from our components and build a translation dictionary. We will pass this dictionary to the translators, who won't need any programming skills to do their job.
npm install --save-dev babel-plugin-react-intl
Here is `.babelrc`:

{
  "presets": [
    "es2015",
    "react",
    "stage-0"
  ],
  "env": {
    "development": {
      "plugins": [
        ["react-intl", {
          "messagesDir": "./build/messages/"
        }]
      ]
    }
  }
}
Restart nodemon and you should see that a `build/messages` folder has been created in the project's root, with some folders and files inside that mirror your JavaScript project's directory structure. We need to merge all of these files into one JSON file. Feel free to use my script. Save it as `scripts/translate.js`.
Now, we need to add a new script to `package.json`:

"scripts": {
  …
  "build:langs": "babel scripts/translate.js | node",
  …
}
Let’s try it out!
npm run build:langs
You should see an `en.json` file in the `build/lang` folder with the following content:

{
  "app.hello_world": "Hello World!",
  "app.hello_world2": "Hello World 2!"
}
It works! Now comes the interesting part. On the server side, we can load all translations into memory and serve each request accordingly. For the client side, however, this approach is not applicable. Instead, we will send the JSON file with the translations once, and the client will automatically apply the provided text to all of our components, so the client gets only what it needs.
Let's copy the output to the `public/assets` folder and also provide some translations.
ln -s ../../build/lang/en.json public/assets/en.json
Note: If you are a Windows user, symlinks are not available to you, which means you have to copy the file manually every time you rebuild your translations:

cp build/lang/en.json public/assets/en.json
In `public/assets/ru.json`, we need the following:

{
  "app.hello_world": "Привет мир!",
  "app.hello_world2": "Привет мир 2!"
}
Now we need to adjust the server and client code.
For the server side, our `src/server.jsx` file should look like this:

--- import { IntlProvider } from 'react-intl';
import { addLocaleData, IntlProvider } from 'react-intl';
import fs from 'fs';
import path from 'path';
import en from 'react-intl/locale-data/en';
import ru from 'react-intl/locale-data/ru';

addLocaleData([...ru, ...en]);

const messages = {};
const localeData = {};

['en', 'ru'].forEach((locale) => {
  localeData[locale] = fs.readFileSync(path.join(__dirname, `../node_modules/react-intl/locale-data/${locale}.js`)).toString();
  messages[locale] = require(`../public/assets/${locale}.json`);
});

--- function renderHTML(componentHTML) {
function renderHTML(componentHTML, locale) {

…

<script type="application/javascript" src="${assetUrl}/public/assets/bundle.js"></script>
<script type="application/javascript">${localeData[locale]}</script>

…

--- <IntlProvider locale={locale}>
<IntlProvider locale={locale} messages={messages[locale]}>

…

--- return res.end(renderHTML(componentHTML));
return res.end(renderHTML(componentHTML, locale));
Here we are doing the following:
- caching messages and locale-specific JavaScript for the currency, `DateTime` and `Number` formatting during startup (to ensure good performance);
- extending the `renderHTML` method so that we can insert locale-specific JavaScript into the generated HTML markup;
- providing the translated messages to `IntlProvider` (all of those messages are now available to child components).
For the client side, first we need to install a library to perform AJAX requests. I prefer to use isomorphic-fetch because we will very likely also need to request data from third-party APIs, and isomorphic-fetch can do that very well in both client and server environments.
npm install --save isomorphic-fetch
Here is `src/client.jsx`:

--- import { IntlProvider } from 'react-intl';
import { addLocaleData, IntlProvider } from 'react-intl';
import fetch from 'isomorphic-fetch';

const locale = Cookie.get('locale') || 'en';

fetch(`/public/assets/${locale}.json`)
  .then((res) => {
    if (res.status >= 400) {
      throw new Error('Bad response from server');
    }

    return res.json();
  })
  .then((localeData) => {
    addLocaleData(window.ReactIntlLocaleData[locale]);

    ReactDOM.render(
--- <IntlProvider locale={locale}>
      <IntlProvider locale={locale} messages={localeData}>
        …
    );
  }).catch((error) => {
    console.error(error);
  });
We also need to tweak `src/server.jsx` so that Express serves the translation JSON files for us. Note that in production, you would use something like nginx instead.
app.use(cookieParser());
app.use('/public/assets', express.static('public/assets'));
After the JavaScript is initialized, `client.jsx` will grab the locale from the cookie and request the JSON file with the translations. Afterwards, our single-page application will work as before.
Time to check that everything works fine in the browser. Open the “Network” tab in the developer tools, and check that JSON has been successfully fetched by our client.
To finish this part, let's add a simple widget to change the locale, in `src/components/LocaleButton.jsx`:

import React, { Component, PropTypes } from 'react';
import Cookie from 'js-cookie';

const propTypes = {
  locale: PropTypes.string.isRequired,
};

class LocaleButton extends Component {
  constructor() {
    super();
    this.handleClick = this.handleClick.bind(this);
  }

  handleClick() {
    Cookie.set('locale', this.props.locale === 'en' ? 'ru' : 'en');
    window.location.reload();
  }

  render() {
    return <button onClick={this.handleClick}>{this.props.locale === 'en' ? 'Russian' : 'English'}</button>;
  }
}

LocaleButton.propTypes = propTypes;

export default LocaleButton;
Add the following to `src/components/App.jsx`:

import LocaleButton from './LocaleButton';

…

<h1>{this.props.intl.formatMessage(messages.helloWorld2)}</h1>
<LocaleButton locale={this.props.intl.locale} />
Note that once the user changes their locale, we’ll reload the page to ensure that the new JSON file with the translations is fetched.
High time to test! OK, so we’ve learned how to detect the user’s locale and how to show translated messages. Before moving to the last part, let’s discuss two other important topics.
Pluralization And Templates
In English, most words take one of two possible forms: "one apple," "many apples." In other languages, things are a lot more complicated. Russian, for example, has four different plural forms. Fortunately, React Intl helps us handle pluralization accordingly. It also supports templates, so you can provide variables that will be inserted into the template during rendering. Here's how it works.
In `src/components/App.jsx`, we have the following:

const messages = defineMessages({
  counting: {
    id: 'app.counting',
    defaultMessage: 'I need to buy {count, number} {count, plural, one {apple} other {apples}}',
  },
…

<LocaleButton locale={this.props.intl.locale} />
<div>{this.props.intl.formatMessage(messages.counting, { count: 1 })}</div>
<div>{this.props.intl.formatMessage(messages.counting, { count: 2 })}</div>
<div>{this.props.intl.formatMessage(messages.counting, { count: 5 })}</div>
Here, we are defining a template with the variable
count. We will print either “1 apple” if
count is equal to
1, 21, etc. or “2 apples” otherwise. We have to pass all variables within
formatMessage’s
values option.
Let’s rebuild our translation file and add the Russian translations to check that we can provide more than two variants for languages other than English.
npm run build:langs
Here is our
public/assets/ru.json file:
{ … "app.counting": "Мне нужно купить {count, number} {count, plural, one {яблоко} few {яблока} many {яблок}}" }
All use cases are covered now. Let’s move forward!
3. Serving Locale-Specific Content Such As Dates, Currencies And Numbers
Your data will be represented differently depending on the locale. For example, Russian would show
500,00 $ and
10.12.2016, whereas US English would show
$500.00 and
12/10/2016.
React Intl provides React components for such kinds of data and also for the relative rendering of time, which will automatically be updated each 10 seconds if you do not override the default value.
Add this to
src/components/App.jsx:
--- import { FormattedMessage, intlShape, injectIntl, defineMessages } from 'react-intl'; import { FormattedDate, FormattedRelative, FormattedNumber, FormattedMessage, intlShape, injectIntl, defineMessages, } from 'react-intl'; … <div>{this.props.intl.formatMessage(messages.counting, { count: 5 })}</div> <div><FormattedDate value={Date.now()} /></div> <div><FormattedNumber value="1000" currency="USD" currencyDisplay="symbol" style="currency" /></div> <div><FormattedRelative value={Date.now()} /></div>
Refresh the browser and check the page. You’ll need to wait for 10 seconds to see that the
FormattedRelative component has been updated.
You’ll find a lot more examples in the official wiki.
Cool, right? Well, now we might face another problem, which affects universal rendering.
On average, two seconds will elapse between when the server provides markup to the client and the client initializes client-side JavaScript. This means that all
DateTimes rendered on the page might have different values on the server and client sides, which, by definition, breaks universal rendering. To resolve this, React Intl provides a special attribute,
initialNow. This provides a server timestamp that will initially be used by client-side JavaScript as a timestamp; this way, the server and client checksums will be equal. After all components have been mounted, they will use the browser’s current timestamp, and everything will work properly. So, this trick is used only to initialize client-side JavaScript, in order to preserve universal rendering.
Here is
src/server.jsx:
--- function renderHTML(componentHTML, locale) { function renderHTML(componentHTML, locale, initialNow) { return ` <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Hello React</title> </head> <body> <div id="react-view">${componentHTML}</div> <script type="application/javascript" src="${assetUrl}/public/assets/bundle.js"></script> <script type="application/javascript">${localeData[locale]}</script> <script type="application/javascript">window.INITIAL_NOW=${JSON.stringify(initialNow)}</script> </body> </html> `; } const initialNow = Date.now(); const componentHTML = ReactDom.renderToString( --- <IntlProvider locale={locale} messages={messages[locale]}> <IntlProvider initialNow={initialNow} locale={locale} messages={messages[locale]}> <App /> </IntlProvider> ); res.cookie('locale', locale, { maxAge: (new Date() * 0.001) + (365 * 24 * 3600) }); --- return res.end(renderHTML(componentHTML, locale)); return res.end(renderHTML(componentHTML, locale, initialNow));
And here is
src/client.jsx:
--- <IntlProvider locale={locale} messages={localeData}> <IntlProvider initialNow={parseInt(window.INITIAL_NOW, 10)} locale={locale} messages={localeData}>
Restart nodemon, and the issue will almost be gone! It might persist because we are using
Date.now(), instead of some timestamp provided by the database. To make the example more realistic, in
app.jsx replace
Date.now() with a recent timestamp, like
1480187019228.
(You might face another issue when the server is not able to render the
DateTime in the proper format, which will also break universal rendering. This is because version 4 of Node.js is not built with Intl support by default. To resolve this, follow one of the solutions described in the official wiki.)
4. A Problem
It sounds too good to be true so far, doesn’t it? We as front-end developers always have to be very cautious about anything, given the variety of browsers and platforms. React Intl uses the native Intl browser API for handling the
DateTime and
Number formats. Despite the fact that it was introduced in 2012, it is still not supported by all modern browsers. Even Safari supports it partially only since iOS 10. Here is the whole table from CanIUse for reference.
This means that if you are willing to cover a minority of browsers that don’t support the Intl API natively, then you’ll need a polyfill. Thankfully, there is one, Intl.js. It might sound like a perfect solution once again, but from my experience, it has its own drawbacks. First of all, you’ll need to add it to the JavaScript bundle, and it is quite heavy. You’ll also want to deliver the polyfill only to browsers that don’t support the Intl API natively, to reduce your bundle size. All of these techniques are well known, and you might find them, along with how to do it with webpack, in Intl.js’ documentation. However, the biggest issue is that Intl.js is not 100% accurate, which means that the
DataTime and
Number representations might differ between the server and client, which will break server-side rendering once again. Please refer to the relevant GitHub issue for more details.
I’ve come up with another solution, which certainly has its own drawbacks, but it works fine for me. I implemented a very shallow polyfill, which has only one piece of functionality. While it is certainly unusable for many cases, it adds only 2 KB to the bundle’s size, so there is not even any need to implement dynamic code-loading for outdated browsers, which makes the overall solution simpler. Feel free to fork and extend it if you think this approach would work for you.
Conclusion
Well, now you might feel that things are becoming too complicated, and you might be tempted to implement everything yourself. I did that once; I wouldn’t recommend it. Eventually, you will arrive at the same ideas behind React Intl’s implementation, or, worse, you might think there are not many options to make certain things better or to do things differently.
You might think you can solve the Intl API support issue by relying on Moment.js instead (I won’t mention other libraries with the same functionality because they are either unsupported or unusable). Fortunately, I tried that, so I can save you a lot of time. I’ve learned that Moment.js is a monolith and very heavy, so while it might work for some folks, I wouldn’t recommend it.
Developing your own polyfill doesn’t sound great because you will surely have to fight with bugs and support the solution for quite some time. The bottom line is that there is no perfect solution at the moment, so choose the one that suits you best.
(If you feel lost at some point or something doesn’t work as expected, check the “solution” branch of my repository.)
Hopefully, this article has given you all of the knowledge needed to build an internationalized React front-end application. You should now know how to detect the user’s locale, save it in the cookie, let the user change their locale, translate the user interface, and render currencies,
DateTimes and
Numbers in the appropriate formats! You should also now be aware of some traps and issues you might face, so choose the option that fits your requirements, bundle-size budget and number of languages to support.
Further Reading on SmashingMag:
- Why You Should Consider React Native For Your Mobile App
- How To Scale React Applications
- Building Your First iOS App With JavaScript
| https://www.smashingmagazine.com/2017/01/internationalizing-react-apps/ | CC-MAIN-2022-33 | refinedweb | 3,740 | 50.33 |
When you
#include iostream, you get the object
std::cout
That is, you get the object
cout, which is in the std namespace. You can use an object in the std namespace either directly:
std::cout << "Something";
or by stating that you want to use that particular object like this:
using std::cout; cout << "Something";
or that you want to use everything from std namespace (this one is generally a bad idea, and if you do it in a header file someone will be very angry)
using namespace std; cout << "Something";
The object named cout that is part of the namespace std is declared in iostream (or something included from iostream, or maybe in iostream it's simply declared with
extern), and that declaration contains the information that cout is part of the std namesapce.
cout will be defined in one of the files that came with your compiler (for example, in the libstdc++), but exactly where and how is up to whoever wrote it. The standard doesn't enforce where it is defined.
Edited 2 Years Ago by Moschops
If you don't know what namespaces are then you need to read a good tutorial because namespaces are fundamental to c++ language. Namespaces were invented to keep from getting name conflicts from one file to the next, that was a big problem in C language. For example, file A.h might declare a struct named foo, and file B.h might declare another structure with the same name. If both files are included in the same *.cpp file then there is a name clash. Namespace helps prevent that problem.
All (or most) standard C++ header files are under the namespace called "std". So when you write
using namespace std; you are telling the compiler to use everything in that namespace regardess of where it comes from. In small pograms like you might write in school that is not a big problem, but in large professional c++ program it can become a problem. There are a couple alternate ways of coding it, such as
using std::cin; or just writing
std::cin << "Hello World\n";
Where is it written? If you follow up the chain of library files, you'll come to:
yvals.h
Which has the line:
#define _STD_BEGIN namespace std{
Most every library file, after its #include statements, leads off with:
_STD_BEGIN
(at least that's where it exists in MS Visual C++)
Edited 2 Years Ago by vmanes
thnks vmanes.
my teacher told following format for a namespace--
namespace abc { int a; variables }
but in yvals.h all i could find is
#define _STD_BEGIN namespace std { #define _STD_END }
where is cout ?
sorry i just started c++ and got confused with namespace std and iostream.
cout is defined in <iostream> header file. As a beginner, don't try to read iostream header file -- it's a big mess. Just be aware that cin and cout are declared in iostream. If you want to do file io then use fstream, ifstream and ofstream classes. Trying to follow the header files can ruin your brain.
Edited 2 Years Ago by Ancient Dragon
As a beginner, the easiest thing is to just always put
using namespace std;
after your program's #include statements.
Like Nike says, "Just | https://www.daniweb.com/programming/software-development/threads/475336/namespace-and-header-file | CC-MAIN-2016-50 | refinedweb | 546 | 77.67 |
You may have noticed when running an application you need to provide explicit permissions for everything that runs. This can be cumbersome and we want development to be easy (at least easier, right?). Lets learn a better way!
Setup
If you haven’t already, you’ll need to install deno. For instructions, check out my getting started guide.
Now that you have deno installed, we will create a simple project and learn how to output environmental variables and look into what we can do to make the permissions part easier.
In your project directory, create a new file called
index.ts.
Let’s create a simple line of code in it:
console.log('Hello World');
Adding some functionality
That’s simple enough but wouldn’t it be cool if we could have it spit out a “Hello [username]”? (Your username from terminal)
From terminal you can use
whoami or
echo $USER and you will see the output is your username. (Works on Mac only).
We can use a built in
Deno.env feature that has a
get, a
set, and a
toObject() function. We are going to use the
.get which will retrieve the value of an environmental variable (Returns undefined if that key doesn’t exist).
We want to get our USER variable – that will look like this:
console.log(`hello ${Deno.env.get('USER')}`); //note the backticks and the template literals
What happens if we run this?
In terminal run
deno run index.ts
You will get an error:
$ deno run index.ts error: Uncaught PermissionDenied: access to environment variables, run again with the --allow-env flag at unwrapResponse ($deno$/ops/dispatch_json.ts:42:11) at Object.sendSync ($deno$/ops/dispatch_json.ts:69:10) at Object.getEnv as get at
Now this is nothing new, we kind of went over this in the security section of my future of the backend development with javascript article.
So we can just give it permissions right?
This is a good thing though because our programs shouldn’t be able to out of the box reach into our environment and pull (potentially sensitive) data without our permissions.
To avoid the error we’d need to run it as
deno run --allow-env.
What if we don’t want to be bothered with
--allow-env every time we want to run our project? We have some alternatives!
You could use
deno run --allow-all index.ts and that will run or we can do the shorter version
deno run -A index.ts. Now this will allow anything to run without specific permissions – not necessarily a good thing because it is removing some potential warnings you’d likely want to know about if you downloaded this code from a random imported URL on the internet.
This is a key benefit of deno so disabling it isn’t recommended. We really want to give it the least amount of privilege necessary to perform it’s intended task and we only want to allow access to the specific services that are necessary – no more (sounds redundant but isn’t).
We don’t want network access or the ability to load plugins and the like.
There are some approaches we can take to make our permissions a bit easier.
Setting permissions
Let’s assume we have done a lot more work on this project and we are ready to make this thing go live and easy to use.
We could create some scripts in a bash shell that will run that program, but you’d need several versions for caching, running, testing, and for different operating systems.
Deno has a way to do this built in. All you need to use is
deno install --allow-env index.ts. Deno will compile our file with permissions.
Developmentwhich is the name of the directory it’s in.
If we
cat that file, you can see it created a bash script like below:
Now we can open that file directly and will have access to it without providing any specific permissions or having to manually create a bunch of bash scripts!
That’s cool but how is writing out the full path better?
Use
deno help install and you will see if you scroll up some options to change the command’s name from the directory it’s in.
To change the executable name, use -n/--name: ex. deno install --allow-net --allow-read -n exampleName
We will want to run
deno install --allow-env -n hello index.ts. Bear in mind that all the flags have to go before the target file (index.ts) – order matters.
We will then need to export the path provided as you can see in the gif below.
After doing that,
hello will be a valid command in terminal and we won’t even need to provide it the permissions!
This is useful for a production option but what about development? For that, we will need something more like a task runner. More on that in the next section.
Development (Task Runner)
Task runners will be more and more useful as we develop heavier and heavier programs that require different configurations and permissions. This is a role that is filled by npm and packages.json in node. This is similar to “Make”, “Ant”, “Rake”, & “MSBuild”.
Deno has a third party module called “drake” which is a make like task runner for deno.
Navigate in your browser to
If you copy the example into a new file we will call
Drake.ts.
In the import statement within the curly brackets add
, sh. Now this will allow us to run commands from your shell script.
Within the task function we want to use a
sh() function and pass in our
deno run ---allow-env index.ts shell command and you will need to make this asyncronous. It should look something like this:
import { desc, run, task, sh } from "[email protected]/mod.ts"; desc("Minimal Drake task"); task("hello", [], async function() { console.log("Hello From Drake!"); //Make this say what ever you like! await sh('deno run --allow-env index.ts'); }); run()
Now we want to run drake with “all” permissions and what script to run (in this case
hello) because drake is already passing in all the permissions in line 6 that we need!
Run
deno run -A Drake.ts hello
Drake has a full API for scripting your tasks, you’ll want to dig into. These scripts can do anything deno can do and with the correct permissions!
In my next article we will look at Third Party Modules and Deno Tools.
If you found this article helpful, give me a shout on twitter @drewlearns2 or if you find any errors, feel free to highlight them and mash that R button the right side of the screen. | http://drewlearns.com/2020/07/04/deno-permissions-project/ | CC-MAIN-2021-04 | refinedweb | 1,132 | 73.88 |
Pure on MS Windows
Introduction
This page describes how to install and run Pure on a Windows machine. Known issues are listed and, where possible, solutions are given. More information about running and using Pure can also be found in the documentation; in particular, check the Running Pure on Windows and Using PurePad sections for further details.
Installing Pure
The easiest way to get started with Pure on Windows is by using the "one-click" installer (pure-x.y.msi), provided in the download area. Please note that this is a x86 (32 bit) version of the program, but it will also work fine on all recent 64 bit Windows versions (Windows 7, 8 and 10 have all been tested). The package already includes many addon packages. Some of the addon packages are also distributed separately for Windows, please check the Addons wiki page for details.
For the installation directory, a path without spaces should be chosen, especially when Pure is used together with other command-line tools (see the sections Using the Batch Compiler and Using MSYS).
Running Pure
The same features and command-line options are available as on the other platforms.
The following issues are already known:
GNU readline bug for non-US keyboards:
There is a bug in the readline library (see also this discussion thread), used by the Pure interpreter, which sometimes makes alternate keys on non-US keyboards unusable. If you are observing problems when entering special characters like []{}\~, you might need to disable readline by calling pure with the --noediting option:
> pure --noediting
Alternatively, readline can also be disabled during compilation of Pure.
interactive help command:
The default installation of Pure looks for the w3m web browser to show the HTML documentation, which usually is not part of a typical Windows setup. You can easily switch to another web browser just by setting the PURE_HELP environment variable. You might need to surround the path to the web browser with "" (no escaping like \" is necessary) if the path includes special characters like spaces. The current setting can easily be checked by calling the set command in cmd.exe (the Windows command shell):

> set PURE_HELP
PURE_HELP="C:\Program Files\Mozilla Firefox\firefox.exe"
Using the Batch Compiler
If the batch compiler of Pure (pure -c) is to be used, additional tools are needed:
- MinGW's C-compiler (gcc & g++)
- LLVM toolchain
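Once both are installed (as described below), a quick sanity check that the tools are reachable from the shell can look like the following sketch. It is written for the MSYS sh or any POSIX shell; the LLVM tool names used here (llc, opt) are the common ones and may differ between versions, so adjust them as needed:

```shell
# Check that the compilers and LLVM tools needed by `pure -c` are on the PATH.
check_tools() {
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "$tool: found"
    else
      echo "$tool: MISSING - extend your PATH"
    fi
  done
}

check_tools gcc g++ llc opt
```

If anything is reported missing, add the corresponding bin directory to the PATH before invoking the batch compiler.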
For Windows, both tools are available as binary distributions, so there is no need to compile them in the first place:
Installing MinGW
The one-click Windows installer works fine, but spaces in the installation path should be avoided. The current binary release (MinGW-5.1.6) still contains some old versions of the included tools, e.g. gcc-3.5.4, and some include files are outdated, but it works fine with Pure, so no further manual updates are needed. A manual update of the MinGW environment is usually done by downloading and extracting the respective packages from the MinGW download site (where the binary distribution can also be found):
Note: The one-click installer also adds an update function, which usually overrides any manually added tools, header-files, etc.
Installing LLVM
A binary distribution is available as a zip file. Simply extract the binaries to a path (without spaces, again) and add it to the search path. Binaries for Windows can be found here (choose Mingw32/x86 as architecture).
Compiling Pure
Compilation of Pure is only needed if different compilation options from the msi-version are desired, e.g. no readline support for the interpreter, or if Pure has to be installed into the MSYS environment (see below). In this case, you might wish to have a look at Jiri Spitz' instructions. For your convenience, there's also a little package with required mingw libraries, dlls and headers.
Using MSYS
MSYS is a kind of "add-on" to MinGW which provides a Unix- (and Linux-) style environment (an sh tool with a wrapper to map Unix-style path names to the Windows world). Installation instructions and the correct setup of paths, tools, etc. are also described here.
Known issues:
Matching pathnames for pure-gen:
MSYS mounts the Windows drive letters, e.g. C: and D:, as root dirs /c and /d, so one can access C:\MinGW also by /c/MinGW or /C/MinGW (MSYS is case-sensitive in contrast to Windows, where C:\mingw would also work). Please be aware that internally MSYS converts /c/MinGW back to c:/MinGW. This can lead to undesired effects, for instance when using pure-gen with a pattern-matching option for the target symbols:

pure-gen -s '/c/myproject/myinclude.h:' /c/myproject/myinclude.h

Here the path /c/myproject/myinclude.h is internally converted to c:/myproject/myinclude.h and does not match the given pattern anymore. The result would be an empty Pure and C wrapper file. Better stick with the original path style of Windows:

pure-gen -s 'c:/myproject/myinclude.h:' c:/myproject/myinclude.h

The drawback is that the colon in c:/ sometimes raises further trouble when dealing with other Unix tools...
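To see why the pattern fails to match, it helps to picture the rewrite MSYS performs on drive-prefixed paths. The following POSIX sh fragment is only a rough illustration of that translation; msys_to_win is a hypothetical helper name, and the real MSYS conversion handles more cases (mount points, UNC paths, etc.):

```shell
# Rough sketch of MSYS's internal path rewrite: /c/foo -> c:/foo.
# msys_to_win is a made-up name for illustration only.
msys_to_win() {
  case "$1" in
    /[A-Za-z]/*)
      drive=${1#/}; drive=${drive%%/*}   # extract the drive letter
      rest=${1#/?/}                      # drop the leading /c/ part
      printf '%s:/%s\n' "$drive" "$rest" ;;
    *)
      printf '%s\n' "$1" ;;              # leave other paths untouched
  esac
}

msys_to_win /c/myproject/myinclude.h   # -> c:/myproject/myinclude.h
```

The rewritten form no longer begins with /c/, so a literal -s pattern written in MSYS style can never match it.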
Fixing install target for "Linux" Makefiles:
Makefiles for the Pure addons often contain a section to guess the installation path:
# Try to guess the installation prefix (this needs GNU make):
prefix = $(patsubst %/bin/pure,%,$(shell which pure 2>/dev/null))
ifeq ($(strip $(prefix)),)
# Fall back to /usr/local.
prefix = /usr/local
endif
This fails when using MSYS together with the MSI-package of Pure, which does not have the usual (Unix/Linux)-style path structure, e.g. prefix/bin/pure. This can be fixed by calling
make prefix=*whereverPureIsLocated*. A more convenient method is to add a clause covering this case to the Makefile:
# Try to guess the host system type.
host = $(shell ./config.guess)

# Try to guess the installation prefix (this needs GNU make):
prefix = $(patsubst %/bin/pure,%,$(shell which pure 2>/dev/null))
ifeq ($(strip $(prefix)),)
ifneq "$(findstring -mingw,$(host))" ""
# Windows: there might be an MSI installation:
prefix = $(patsubst %/pure,%,$(shell which pure 2>/dev/null))
else
# Fall back to /usr/local.
prefix = /usr/local
endif
endif
Of course, here host must be defined beforehand.
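The effect of the patsubst-based prefix guess can be mimicked in plain sh, which makes it easy to see why the MSI layout defeats the first Makefile. The concrete paths below are made up for illustration:

```shell
# Plain-sh analogue of $(patsubst %/bin/pure,%,...): strip a /bin/pure suffix.
guess_prefix() {
  printf '%s\n' "${1%/bin/pure}"
}

guess_prefix /usr/local/bin/pure   # -> /usr/local (the guess succeeds)

# A hypothetical MSI layout installs pure without a bin/ level, so the
# suffix does not match and the string comes back unchanged, i.e. the
# guess fails:
guess_prefix c:/Pure/pure          # -> c:/Pure/pure

# That is why the extended Makefile strips just /pure on Windows instead:
path=c:/Pure/pure
printf '%s\n' "${path%/pure}"      # -> c:/Pure
```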
Simply do it level by level, using the
next-pointers of the current level to go through the current level and set the
next-pointers of the next level.
I say "real" O(1) space because of the many recursive solutions ignoring that recursion management needs space.
def connect(self, root): while root and root.left: next = root.left while root: root.left.next = root.right root.right.next = root.next and root.next.left root = root.next root = next
Hello, is this pseudocode? Could you please provide a Java version of this solution to make it more understandable? Thanks
@wxl163530 No, it's not pseudocode. It's Python.
@wxl163530 Here's a pretty direct translation to Java:
public void connect(TreeLinkNode root) { while (root != null && root.left != null) { TreeLinkNode next = root.left; while (root != null) { root.left.next = root.right; root.right.next = root.next == null ? null : root.next.left; root = root.next; } root = next; } }
Though this line for
root.right.next might be clearer:
if (root.next != null) root.right.next = root.next.left;
@StefanPochmann Thanks for you help!
@StefanPochmann Hi, thank you so much for sharing. I have a question, should we consider the case that 'root.left = null' ? thank you
@happykimi Not sure what you mean. I am considering that case. Did you overlook my
and root.left?
@StefanPochmann, I was wondering how this line below work
root.right.next = root.next and root.next.left
You use an
and operator to let
root.right.next either equals to
root.next or
root.next.left. This looks very confusing to me. I checked some other usage of
and such as
1 and None,
1 and 2,
10 and 3, without surprise the result is
Nothing,
2,
3 respectively. It always output the latter one of the operator. So why in the case of this question it can output the former instead of the latter. Many thanks in advance!
@StefanPochmann I'm not sure I understand what you mean. Some external link to other resource would be welcome as well.
@bdeng3 Like
0 and 1. And the straight-forward resource is the Python documentation.
Looks like your connection to LeetCode Discuss was lost, please wait while we try to reconnect. | https://discuss.leetcode.com/topic/16547/7-lines-iterative-real-o-1-space | CC-MAIN-2017-43 | refinedweb | 373 | 79.56 |
A brief guide to Python programming best practices
As a software developer, we should always strive to maintain good code quality by following best practices or standards. This is to ensure that the code delivered to solve a problem is readable and maintainable in the long run. To quote Guido van Rossum, “Code is read much more often than it is written.” which pretty much sums up a software developer’s day! 😄
Here, I will try to summarize some of the practices which I have learnt & followed while working with Python programming language
One of the utmost important thing in any programming language is naming. From naming variable, methods and pretty much everything, It sometimes takes a good amount of time just to come up with a good name!
Naming conventions (PEP-8)
- Variables, functions, methods, packages, modules: this_is_a_variable (Snake-case)
- Classes and exceptions: CapWords (Pascal case)
- Protected methods and internal functions: _single_leading_underscore
- Private methods: __double_leading_underscore
- Constants: CAPS_WITH_UNDERSCORES (All caps)
General best practices
- Maintain all dependencies in requirement.txt file for the lambda
- Maintain all constants in constants.py file for the lambda
- Null check before accessing values and Checking against None (very important). For e.g.
bad practice:
if data["state"] == "fixed":
good practice:
if data and data["state"] == "fixed":
4. Method annotation with expected return type to enforce types. For e.g.
def circumference(radius: float) -> float:
5. Do not use hard-coded strings.use constants instead.
bad practice:
"google.com" in link:
good practice:
google_link in link:
6. Avoid using messy nested if-else. Use switcher instead if possible.
7. Use constants or enums in-place of magic numbers.
8. Never Ignore the Exceptions you catch! Errors should never pass silently.
bad practice:
catch Exception as ex:
pass
9. Cache/Store frequently used values.
10. Use conditional checks to check against whitelist of conditions.
11. Document __init__ methods in the doc-string for the class.
12. Use getters and setters instead of directly accessing globals and privates
>>> class C:
… @property
… def x(self):
… return self.__x
… @x.setter
… def x(self, value):
… self.__x = value
13. Use String interpolation or String Templating library to avoid security issues.
14. use ‘in’ and == cautiously for string comparisons.
15. For Reading and Writing to a File, Use With Statement Context Managers as it manages the closing the file handle for you.
These are just some of the practices and there are so much more, if you want to learn more then I recommend to go through the PEP-8 styling guide mentioned in the good reads below. Hope you find this article helpful. Thanks for reading! 😃
Some other good reads -
- Always sanitize your input to prevent sql injection in your python code
- Zen of Python | https://tauseeef.medium.com/a-brief-guide-to-python-programming-best-practices-791c8f41d61?source=user_profile---------3------------------------------- | CC-MAIN-2022-05 | refinedweb | 455 | 67.45 |
.
I had one of the first Raspberry PI lying around, so that was to be my base.
I ordered a couple of things from AliExpress :
- A relay with 5v on the switching side, and up to 220v on the hot side.
- A ultrasonic sonar
- Wifi dongle
- 1k and 470ohm resistors
- some cable
All links to these products can be found on the bottom of this page.
So, lets look at the hardware side first.
The ultrasonic sensor outputs 5v on the echo pin, so this needs to be taken down to around 3.3v, so we don’t blow out the RPI. This is done using a 470k and 1Ohm resistor, connected as shown in the diagram. The 470 connects to the echo wire, and the 1Ohm goes between the GND and echo wire.
The relay will be connected to the garage door opener motor, which has inputs for connecting a physical opener inside the garage. The button just closes a loop, and this is what the relay will be doing aswell. When RPI receives a signal it will send a GPIO command switching the relay on and off. This will close the wire loop, and open/close the door.
Besides having to drop the voltage alittle, the rest should be fairly easy. The diagram also described which GPIO pins where used.
The RPI as loaded with the latest Raspbian, and booted.
If you plan on just using ssh and not connecting a screen for setup, add a empty file called “ssh” to the boot sector. This activates the ssh-daemon.
The wifi dongle was detected nicely, and the board was connected to the wifi network.
The software side is currently running python scripts for the GPIO communication, and PHP/MQTT for connecting it to the real world. I will not go into details on the PHP/MQTT part, but should there be any questions please ask.
The python scripts are fairly straight forward. I had some problems making the relay work, sending hi and low commands, so i ended up just switching modes on it, which works just fine for me.
So, here are the code for the relay :
import RPi.GPIO as GPIO
import time
import sys
GPIO.setwarnings(False)
#pin = int(sys.argv[1])
pin = 18
GPIO.setmode(GPIO.BOARD)
GPIO.setup(pin,GPIO.OUT)
time.sleep(1)
GPIO.setup(pin,GPIO.IN);
GPIO.cleanup
print 'OK'
Sonar code :
import RPi.GPIO as GPIO #Import GPIO library
import time #Import time library
import sys
GPIO.setwarnings(False)
GPIO.setmode(GPIO.BOARD)
TRIG = 16 #int(sys.argv[1]) #Associate pin 23 to TRIG
ECHO = 15 #int(sys.argv[2]) #Associate pin 24 to ECHO
GPIO.setup(TRIG,GPIO.OUT)
GPIO.setup(ECHO,GPIO.IN)
GPIO.output(TRIG, False)
time.sleep(2)
GPIO.output(TRIG, True)
time.sleep(0.00001) #Delay of 0.00001 seconds
GPIO.output(TRIG, False)
while GPIO.input(ECHO)==0: #Check whether the ECHO is LOW
pulse_start = time.time() #Saves the last known time of LOW pulse
while GPIO.input(ECHO)==1:
pulse_end = time.time()
pulse_duration = pulse_end – pulse_start #Get pulse duration
distance = pulse_duration * 17150 #Multiply pulse duration by 17150 to get distance
distance = round(distance, 2) #Round
if distance > 2 and distance < 400: #Check whether the distance is within range
print distance
else:
print “OutOfRange”
This was all packaged into a small IP65 housing ( which i had to modify to fit the 1.gen RPI ). Had i gone with a 2. gen or newer with a microsd i would’nt had to modify the case any. Since its sitting in a dry place i did’nt care to much.
After using the opener for a few days i’m very satisfied with how it works. Its been rock stable, doing what its made for. I will do a writeup on how the opener is using MQTT to communicate with the real world.
Next IOT project in line is making a self sustained temperature and humidity monitor with a tiny footprint. It will be based on the small arduino compatible board called ESP8266.
Ultrasonic sensor :
Relay :
WifiDongle : | https://www.woxholt.no/diy-garage-door-opener/ | CC-MAIN-2019-43 | refinedweb | 681 | 75.81 |
fputwc()
Write a wide character to a stream
Synopsis:
#include <wchar.h> wint_t fputwc( wchar_t wc, FILE * fp );
Since:
BlackBerry 10.0.0
Arguments:
- wc
- The wide character you want to write.
- fp
- The stream you want to write the character to.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The fputwc() function writes the wide character specified by wc, cast as (wint_t)(wchar_t), to the stream specified by fp.
Returns:
The wide character written, cast as (wint_t)(wchar_t), or WEOF if an error occurred ( errno is set).
If wc exceeds the valid wide-character range, the value returned is the wide character written, not wc.
Errors:
- EAGAIN
- The O_NONBLOCK flag is set for fp and would have been blocked by this operation.
- EBADF
- The stream specified: 2014-06-24
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | https://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/f/fputwc.html | CC-MAIN-2019-35 | refinedweb | 159 | 68.26 |
jQuery on() Method
Example
Attach a click event to the <p> element:
alert("The paragraph was clicked.");
});
Definition and Usage
The on() method attaches one or more event handlers for the selected elements and child elements.
As of jQuery version 1.7, the on() method is the new replacement for the bind(), live() and delegate() methods. This method brings a lot of consistency to the API, and we recommend that you use this method, as it simplifies the jQuery code base.
Note: Event handlers attached using the on() method will work for both current and FUTURE elements (like a new element created by a script).
Tip: To remove event handlers, use the off() method.
Tip: To attach an event that only runs once and then removes itself, use the one() method.
Syntax
Try it Yourself - Examples
Attach multiple events
How to attach multiple events to an element.
Attach multiple event handlers using
the map parameter
How to attach multiple event handlers to the selected elements using the map parameter.
Attach a custom event on an element
How to attach a customized namespace event on an element.
Pass along data to the function
How to pass along data to the function.
Add event
handlers for future elements
Show that the on() method also works for elements not yet created.
Remove an event handler
How to remove an event handler using the off() method. | https://www.w3schools.com/jquERy/event_on.asp | CC-MAIN-2021-49 | refinedweb | 232 | 63.29 |
Revision history for Parse-RecDescent 1.00 Mon Aug 11 13:17:13 1997 - original version 1.01 Mon Sep 8 18:04:14 EST 1997 - changed "quotemeta" to "quotemeta $_" in Text::Balanced to workaround bug in Perl 5.002 and 5.003 1.10 Tue Sep 30 14:51:49 EST 1997 - fixed fatal bug in tracing code - added m<delim>...<delim> format for regex tokens - added support for trailing modifiers (/.../gimsox) in regex tokens - added $thisline variable 1.20 Thu Oct 2 11:46:57 EST 1997 - fixed handling of trailing regex modifiers (now no whitespace allowed before between last delimiter and first modifier) - added trace diagnostic for failure/success of actions (demystifies failures caused by an action returning undef) - added context for "Matched rule..." trace - added a test so that an integer value (N>1) in the $::RD_TRACE variable now truncates (to N characters) all contexts reported in any trace - added "start-up" actions: actions appearing before the first rule were previously an error. They are now executed (once) at the start of the parser's namespace. 1.21 Sat Oct 4 17:12:23 EST 1997 - modified truncation of context in trace diagnostics (successful matches now report first N/2 and last N/2 chars, instead of first N) - fixed incorrect version number in Balanced.pm 1.22 Tue Oct 7 11:53:27 EST 1997 - fixed lurking trace problem with certain pathological regexes - fixed bug in generation of special namespaces (this was serious as it prevented the use of more than one alternation in a parser, as well as preventing the use of more than one parser in a script! 1.23 Fri Oct 17 10:15:22 EST 1997 - fixed error message generation for <error?:msg> directives - fixed error message handling of empty productions - fixed handling of multi-line start-up actions - removed spurious debugging message for implicit subrule generation - changed naming scheme for alternations (pseudo-rule's name now describes location of alternation exactly) - added support for repetition specifiers on alternations. 
- Text::Balanced::extract_.... altered to honour the context in which they are called (see Balanced.pod for details). 1.24 - fixed minor problem in tracing code (context string now correctly reported for actions) - added explicit namespace declaration at beginning of generated code, to ensure that any "start code" is declared in the appropriate namespace. - fixed left recursion check on empty productions - added $::RD_AUTOSTUB flag and associated autostubbing behaviour (see new section - "Autostubbing" - in RecDescent.pod) - eliminated hierarchical precedence between $::RD_HINT and $::RD_TRACE. Enabling tracing now does _not_ automatically turn on hinting (although error and warning messages are still automatically enabled). - fixed bug in Text::Balanced. Division now correctly handled in code blocks. 1.25 Mon Feb 09 12:19:14 EST 1998 - Resynchronized numbering schemes for RecDescent and Balanced. 1.26 Wed Feb 25 13:52:15 EST 1998 - Fixed bug (and inefficiency) in <resync:pattern> directive. - Improved checking of regexes within grammars - Added subrule arguments (major change to internal grammar parser) - Added <matchrule:...> directive - started work on Compile() option (not complete yet - do not use!) - Made generated code "use strict" - Fixed bug which incorrectly warned against items following a <error?> directive. - Improved $thisline (added assignment and resync) - Fixed expectation messages for subrules - Rearranged tar file to co-operate with CPAN.pm 1.30 Fri May 22 05:52:06 1998 - Added <rulevar> directive - Added culling of productions starting with <reject> or <rulevar> - Cleaned up and improved format (and speed) of tracing code - Added warning levels - Optimized generation of token separator checking code. 
- Fixed bug encountered when parsing a literal string - Added $::RD_AUTOACTION to simplify standard actions at the end of each production 1.31 Fri May 22 06:11:26 1998 - Fixed bug in naming archive file 1.33 Fri May 22 06:15:26 1998 1.35 Wed Jun 24 09:57:02 1998 - Removed "foreach my $var ( @list )" constructs, which were biting users with perl 5.003 and earlier. - Fixed bug calling &Parse::RecDescent::toksepcode instead of &Parse::RecDescent::Rule::toksepcode - Changed grammar so that colons in rule definitions must appear on the same line as the rule name (as documented). Added an explicit error message when this is not the case. - Added $thiscolumn, which indicates the current column at any point in the parse. - Added $thisoffset, which indicates the absolute position in the original text string at any point in the parse. - Added $prevline and $prevcolumn, which indicate line and column of the last char of the last successfully matched item. - Added @itempos which provides: $itempos[$n]{offset}{from} $itempos[$n]{offset}{to} $itempos[$n]{line}{from} $itempos[$n]{line}{to} $itempos[$n]{column}{from} $itempos[$n]{column}{to} corresponding to each $item[$n]. See new documentation. - Several trivial lexical changes to make xemacs happy 1.41 Mon Aug 10 14:52:53 1998 - Enhanced POD in response to user feedback - Fixed subtle bug in Text::Balanced::extract_codeblock. It only bit when '(?)' appeared in implicit subrules - Added ability to pass args to the start-rule. 1.42 ???? - Added a test.pl - Modified behaviour of repetitions, so that the results of repeated subrules which succeed but don't consume are preserved (at least up to the minimal number of repetitions) - Fixed bug: @itempos now not incorrectly reset if grammar contained alternations - Fixed bug: Embedded unmatched '}' in regex tokens now works correctly - Miscellaneous tweaks to RecDescent.pod (e.g. 
updated meta-grammar) 1.43 Sat Aug 15 06:43:46 1998 - Resychronized Balanced.pm versions 1.50 Thu Aug 27 09:29:31 1998 - Changed <rulevar:...> parser to use extract_codeblock, so as to handle embedded '>' chars (e.g. <rulevar: $tmp = $self->{tmp}> ) - Added <defer:...> to allow deferred actions which are only executed if they are part of a rule that eventually succeeds. (see the new section under "Directives" in RecDescent.pod) - Fixed matching interpolated literals (was broken when literal contained pattern metacharacters) 1.51 Thu Aug 27 16:25:08 1998 - Maintenance release, rectifying bad soft links in the 1.50 distributions 1.60 Wed Oct 21 09:44:15 1998 [Never released] 1.61 Wed Oct 21 11:06:19 1998 - Added <token:...> directive for supporting (future) token-stream parsing (see pod) - Added feature that data is consumed if passed as a reference (see pod) - Fixed bug in autogenerated errors: now ignores directives - Modified behaviour of <defer> directive so that deferred actions only executed if total parse succeeds (i.e. returns a defined value) - Made error messages "anti-deferred". That is, only those errors invoked in paths that eventually caused a parse to fail are printed - see documentation. - Miscellaneous fixes for Text::Balanced subroutines - Made private namespaces inherit Parse::RecDescent namespace (leads to more intuitive behaviour when calling methods of $thisparser) - *** NON-BACKWARDS COMPATIBLE CHANGE! *** Changed the behaviour of token separator specification. Now uses <skip:...> directive. See pod for new details. 1.62 Wed Dec 9 11:26:29 1998 - Reinstated missing $prevoffset variable - Corrected a possible bug with autoactions (thanks Mitchell) - *** IMPORTANT CHANGE *** $::RD_WARN now initialized 3 by default. Serious but non-fatal errors are automatically reported, unless you explicitly undefine $::RD_WARN. 
- Fixed bug in AUTOLOADing non-method subs defined in package Parse::RecDescent (thanks Mario) 1.63 Thu Mar 25 09:13:21 1999 - Rewrote documentation to replace the concept of a token separator with that of a token prefix. - Fixed obscure bug in replacement of rules containing implicit subrules (alternations). Thanks Craig. 1.64 Sun Mar 28 05:44:14 1999 - Synchronized with Text::Balanced version - Fixed obscure bug in the treatment of escaped backslashes in literal tokens. Thanks Matthew. 1.65 Wed May 19 12:35:05 1999 - Added <leftop:...> and <rightop:...> directives - Added level 2 warning and autoreject for lone <error?> directive in a production. 1.66 Fri Jul 2 13:35:06 1999 - Improved error message when an action fails to parse (Thanks Tuomas). - Allowed predefined subroutines in package Parse::RecDescent to be used as rules in grammars - Changed error report on bad regexes to level 3 warning, since compile-time interpolation failure may falsely invalidate regexes that would work at run-time. 1.70 Fri Oct 8 14:15:36 1999 - Clarified use of "eofile" idiom in POD file Clarified meaning of "free-form" in description of grammars Fixed <resync> examples, which were invalidated by earlier change in semantics of <error>. (Thanks Knut). - Added grammar precompiler (see documentation) - Tweaked message for <reject> optimization. - Fixed bug when using '@' as a terminal (thanks Abigail) - Fixed nasty bug when $return set to zero - Added <score> and <autoscore> directives (see documentation) 1.77 Mon Nov 22 06:11:32 1999 - IMPORTANT: Now requires 5.005 or better. - Added <perl_quotelike>, <perl_codeblock>, and <perl_variable> directives (see documentation) - Added <autotree> directive (see documentation) - Added %item hash (see documentation - thanks Stef!) - Tweaked internal parser in line with changes to Text::Balanced - Added <nocheck> directive to switch off recursion checking and other checks in stable grammars (see documentation). 
- Refined code generation WRT positional variables ($thisoffset, etc) - Added positional entries for %item (see documentation) - Fixed bug with (missing) start actions under precompiler (thanks Theo) 1.78 Mon Mar 20 12:03:17 2000 - Fixed error messages and documentation for Parse::RecDescent::Precompile (thanks Jim) - Moved demos to /demo subdirectory - Added tutorial in /tutorial subdirectory - Added <autotree> directive - Added (s /sep/) notation (thanks Greg) - Circumvented \G and /gc calamities - Added more comprehensible error message when parser invoked through non-existent startrule (thanks Jeff) - Fixed serious bug with creating new parsers after existing ones had failed. (Thanks Paul) - Fixed problem with nested implicit subrules (thanks Marc). 1.79 Mon Aug 21 11:27:39 2000 - Pod tweak (thanks Abigail) - Documented need to use do{..} within some <reject:...> directives (thanks Paul) - Added Save method - Fixed bug that was preventing precompiled parsers being subsequently extended (thanks Jeff). - Changed keys used by %item. Now uses "named positionals" rather that simple positionals for non-subrule items (see documentation) - Added trimmer for surrounding whitespace in matchrules. - Squelched bug in (not) handling invalid directives (thanks John) 1.80 Sat Jan 20 05:02:35 2001 - Fixed Save so that saved parsers can still be used after saving (thanks Supun) - Fixed bug in line number tracking (thanks Theo) - Fixed bug in (s /pat/) shorthand (thanks Julien) - Improved docs on <rulevar> (thanks Steve){'word'}, as it would have been in previous versions of the module). (thanks Anthony) - Changed argument passing behaviour. If no arguments specified for subrule, it is now passed current rule's @arg instead. To get old (no arguments) behaviour use: subrule[] - Fixed bug in <reject> handling: failed to reject if $return had been set. 
(thanks Nick) - Added two useful demos of restructuring nested data (thanks Marc) - Fixed doc bug re use of // (thanks Randal) - Localized filehandles, like a good citizen should - Misc doc bug fixes (thanks all) - Fixed Text::Balance dependency in Makefile.PL (thanks Dominique) - Fixed bug that @itempos wasn't set up if referred to only in an autoaction. (thanks Eric) - Fixed truncation bug in tracing contexts - Dramatically improved speed of line counting (thanks John) - Made item(s) and item(s /,/) behave consistently wrt %item (thanks Marcel) - Added prototype <autorule:...> handling - Added outer block markers for <perl_codeblock> - Fixed multi-grammar precompilation (thanks Dominique) - Fixed numerous snafus in tutorial.html (thanks Ralph) - Added nesting level information to traces - Fixed resetting of $text after an <uncommit> rule. 1.91 Fri Mar 28 23:20:28 2003 - Updated Text::Balanced to fix various bugs 1.92 Wed Apr 2 04:45:37 2003 - Removed Text::Balanced from distribution (now a prereq only) 1.93 Wed Apr 2 22:25:14 2003 - Fixed fatal error with $tracelevel (thanks everyone) 1.94 Wed Apr 9 08:29:33 2003 - Replaced 'our' with 'use vars' to reinstate 5.005 compatibility. 1.95.1 Sun Sep 30 05:06:56 2007 - Updated README to reflect new status of Text::Balanced (i.e. required but not included in the distribution) - Fixed demo_logic (Thanks, Steve) - Fixed autopropagation of arguments into repetitions (Thanks, Luke) - Limited context info to 500 chars in traces (Thanks, Stephen) - Added option to select base namespace for autotreeing (thanks Gaal) - Improved formatting compatibility with 5.9.0 (thanks, David) - Added support for $::RD_HINT = 0 to turn off hinting entirely - Fixed bug in line handling - Returned $return variable to documented behaviour (i.e. 
setting return doesn't guarantee the match, only what is returned if the match succeeds) - Fixed nit in debugging of conditional regexes (thanks, Brian) - Moved expectation creation to compile-time (thanks François) - Removed redundant inheritances (i.e. @ISA elements) in internal namespace (thanks Hugo) - Added warning against C<return> in actions to "GOTCHAS" documentation - Added demo_another_Cgrammar.pl (thanks Hendrik) - Documented parens (thanks Robin) - Removed incorrect meta-grammar from docs 1.96.0 Fri Oct 3 06:08:24 2008 - Propagated correct Changes file (thanks Matthew!) - Added: <warn> <hint> <trace_build> <trace_parse> <nocheck> - 1.962.0 Tue Aug 25 19:45:15 2009 - Doc bug fix (thanks Christophe) - Fixed assymmetrical push/pop on @lines tracker (thanks Peter!) - Bumped sub-version number hugely to fix CPAN indexing (thanks Jerome) - Remove all occurrences of $& so we don't affect other regular expressions. - Perl 5.6.0 required for use of $+[0] and $-[0] for replacement of $&. 1.962.1 Thu Aug 27 21:39:30 2009 - Fixed subtle bug in leftop and rightop caused by removal of $& 1.963 Thu Jan 21 09:13:19 2010 - Fixed even subtler bug in leftop and rightop caused by removal of $& (Thanks Francesco) 1.964 Wed Feb 17 09:33:39 2010 - Fixed bug with undefined $1 when parsing literals (thanks Dan!) - Fixed premature namespace destruction bug with compiled grammars 1.964001 Tue Feb 23 15:15:18 2010 - Updated version number because versioning is a neverending nightmare in Perl 5 (thanks Matt) 1.965001 Sun Apr 4 15:00:10 2010 - Removed all references to /opts version of perl interpreter - Added Parse::RecDescent::redirect_reporting_to() to enable ERROR, TRACE, and TRACECONTEXT filehandles to be easily redirected._002 Sun Jan 22 19:08:37 2012 - *** NON-BACKWARDS COMPATIBLE CHANGE! *** Change the caches for $prevline and $thisline to be local to the parser, rather than lexical vars in Parse::RecDescent. 
This prevents previously generated parsers from interfering with the line counts of later parsers. - removed trailing whitespace from all member files (cosmetic) - new tests, updated MANIFEST - Added Jeremy Braun as an author and current maintainer - update file permissions - fixed a few broken links in the pod 1.967001 Sat Jan 28 20:54:48 2012 - Addressed RT.cpan.org #28314: regex modifiers for tokens not honored during regex syntax check. (Thanks SADAHIRO!) - Fixed some POD typos - Added message on how to turn off "default" hint value in the default hint value ($::RD_HINT = 0). RT.cpan.org # #4898. - Modified _write_ERROR to call formline twice to avoid repeated $errorprefix. - Collected match tracing messages into a common function which takes into account positive/negative lookahead. - Addressed RT.cpan.org #74258: RD_AUTOSTUB does not work with precompiled parsers. (Thanks Yuri!) -. That prevents the $compiling argument to new() from being incorrectly interpreted as $isimplicit.003 Mon Jan 30 07:24:53 2012 - Remove the 'use 5.10' from t/skip_dynamic.t, it runs fine against Perl 5.8.9. (Thanks Slaven!) 1.967_004 Tue Feb 7 22:11:11 2012 - Localize the OUT filehandle during Precompile. - Document the <autotree:Base::Class> form of the <autotree> directive. - Provide a simple test for the <autotree> directive, t/autotree.t. Renamed basics.t to ensure it runs before autotree.t. - Allow a global <skip:> directive that functions the same as modifying $Parse::RecDescent::skip prior to compiling a grammar. (Thanks Flavio!) - Require that the $file returned by caller() be eq '-', rather than merely starting with '-'. This allows execution of the following. (Thanks Christopher) perl -MParse::RecDescent -e 'print "$Parse::RecDescent::VERSION\n";' - Warn on empty productions followed by other productions. The empty production always matches, so following productions will never be reached. - *** NON-BACKWARDS COMPATIBLE CHANGE! 
*** A repetition directive such as 'id(s /,/)' correctly creates a temporary @item variable to hold the 'id's that are matched. That @item variable is them used to set the real $item[] entry for that repetition. The same treatment is now given to %item. Formerly, in a production like: id ',' id(s /,/) matched against: xxx, yyy, zzz The $item{id} entry which should be 'xxx' is overwritten by 'yyy' and then 'zzz' prior to the action being executed. Now 'yyy' and 'zzz' set $item{id}, but in the private %item, which goes out of scope once the repetition match completes. - ** EXPERIMENTAL ** When precompiling, optionally create a standalone parser by including most of the contents of Parse::RecDescent in the resulting Precompiled output. - Accept an optional $options hashref to Precompile, which can be used to specify $options->{-standalone}, which currently defaults to false. - The subroutines import, Precompile and Save are not included in the Precompile'd parser. - The included Parse::RecDescent module is renamed to Parse::RecDescent::_Runtime to avoid namespace conflicts with an installed and use'd Parse::RecDescent. - Add a new t/precompile.t to test precompilation. - Add a new $_FILENAME global to Parse::RecDescent to make it easy for the Precompile method to find the module. - Remove the prototype from _generate. It is not required, and it caused t/precompile.t (which ends up re-definiing a lot of Parse::RecDescent subroutines) to fail needlessly, as the calls to _generate in Replace and Extend normally do not see the prototype, but do when re-defined. - POD documentation for standalone parsers added. 1.967_005 Wed Feb 8 18:46:35 2012 - Added JTBRAUN@CPAN.org as author in Build.PL. - Added ExtUtils::MakeMaker build/configure version requirements. (RT.cpan.org #74787, Thanks POPEL!) 1.967006 Fri Feb 10 20:48:48 2012 - Bumped version to 1.967006 for non-development release._008 Tue Mar 13 22:28:00 2012 - Restore old _parserepeat calling convention. 
Change a parser's DESTROY method to check for $self->{_not_precompiled} instead of $self->{_precompiled}. (Fix for RT #74593). 1.967009 Fri Mar 16 07:25:09 2012 - Bumped version to 1.967009 for non-development release. 1.967_010 Sun Jul 7 11:23:53 2013 - global the <skip:> directive to eval similar to other <skip:> directives, rather than being single-quoted in the resulting parser. 1.967011 Sat Sep 12 16:42:01 2015 - Correct some typos in the documentation. (RT.cpan.org #87185, thanks dsteinbrunner!) - Sort hash keys and rulenames when generating code. This keeps the output text for a given input text the same, reducing differences in automated builds. (RT.cpan.org #102160, thanks Reiner!) - Precompiled parsers now document which $Parse::RecDescent::VERSION was used to generate them. (RT.cpan.org #77001) 1.967012 Sun Sep 13 07:59:00 2015 - Reference Data::Dumper::Sortkeys, not SortKeys. Actually produces reproducible precompiled parsers now. (RT.cpan.org #107061, thanks Slaven!) 1.967013 Sun Sep 27 10:00:36 2015 - Wrap Data::Dumper->Dump() to localize some $Data::Dumper::VARS to control the dumped output. In particular, Data::Dumper::Terse=1 was reported to break parser generation. (RT.cpan.org #107355, thanks Sherrard!) 1.967014 Sat Apr 1 10:33:29 2017 - Add a newline to package declaration lines in precompiled parsers, to keep CPAN from indexing them. (RT.cpan.org #110404, thanks Martin!) - Provide repository and bugtracker entries in MYMETA.*. (RT.cpan.org #110403, thanks Martin!) - Update tests to handle '.' no longer being part of @INC in perl-5.26.0. (RT.cpan.org #120415, thanks Jim!) 1.967015 Tue Apr 4 07:38:07 2017 - Fix misuse of require to include MYMETA.pl, data is just included in both Makefile.PL and Build.PL nowB. (RT.cpan.org #120922, thanks Kent!) | https://metacpan.org/changes/distribution/Parse-RecDescent | CC-MAIN-2020-16 | refinedweb | 3,288 | 56.76 |
Zend Framework: Zend_View_Helper_Dojo Component Proposal
Table of Contents
- 1. Overview
- 2. References
- 3. Component Requirements, Constraints, and Acceptance Criteria
- 4. Dependencies on Other Framework Components
- 5. Theory of Operation
- 6. Milestones / Tasks
- 7. Class Index
- 8. Use Cases
  - require a dojo component
  - register a module path
  - tell dojo to use the CDN, version 1.0
  - tell dojo to use the Google CDN
  - tell dojo to use local install
  - tell dojo to parse on load
  - specify a stylesheet by module notation
  - require a custom stylesheet
  - specify a function to run onLoad
  - specify an object method to run onLoad
  - capture JS to use as a named callback to run onLoad
  - enable dojo (minimally, loads dojo.js and dojo.css)
  - disable dojo (will not enable any functionality when rendered)
  - test if dojo is enabled
  - echo the dojo requirements
- 9. Class Skeletons
1. Overview

Using Dojo requires a number of view integration points: requiring the dojo.js file itself, issuing dojo.require calls, loading the appropriate stylesheets, and so on. ZF should make including these as easy as possible.
3. Component Requirements, Constraints, and Acceptance Criteria

- MUST be a single view helper with multiple methods
- MUST be able to render as string
- MUST have methods for setting dojo include metadata
- MUST support using via CDN, using specified version
- SHOULD allow specifying either AOL or Google as CDN
- MUST support using local install, via path
- MUST allow specifying djConfig key/value pairs
- SHOULD default to CDN if no path specified
- MUST have method for requiring components
- MUST have method for specifying module paths
- MUST emit default dojo.js stylesheet
- SHOULD default to CDN location if no path specified
- MUST allow specifying CDN version
- MUST allow specifying dojo install location by local path
- MUST allow specifying dijit themes by module
4. Dependencies on Other Framework Components
- Zend_View_Exception
- Zend_View_Helper_Placeholder
5. Theory of Operation
This component provides a fluent, PHP5 interface for programmatically setting up your Dojo environment within your view scripts. You may specify whether Dojo loads from a local path or the AOL CDN, paths to any custom modules, stylesheet modules to use, and more. Within your layout script, you will then echo the placeholder to setup the appropriate style and script tags based on the settings provided.
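As an illustration of that workflow, a view script might build up the Dojo requirements through the fluent interface, and the layout script would then echo the helper to emit the corresponding tags. This is a sketch only; the method names are inferred from the use cases listed above and are not final:

```php
<?php // View script: accumulate Dojo requirements (illustrative sketch)
$this->dojo()->enable()                              // minimally loads dojo.js and dojo.css
             ->setCdnVersion('1.0')                  // load from the CDN, pinned to a version
             ->registerModulePath('custom', '../custom')
             ->setDjConfig(array('parseOnLoad' => true))
             ->addStyleSheet('/css/custom.css');
?>

<?php // Layout script: render the accumulated <style> and <script> tags ?>
<head>
    <?php echo $this->dojo(); ?>
</head>
```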
6. Milestones / Tasks
- Milestone 1: [DONE] Prepare API
- Milestone 2: Publish this proposal
- Milestone 3: Complete first working code and tests
- Milestone 4: Write documentation
7. Class Index
- Zend_View_Helper_Dojo
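Taken together, the use cases above suggest a public interface roughly like the following. This is an illustrative sketch only; the real class skeletons belong to section 9, which is not reproduced in this excerpt, and several names (e.g. the exact require/onLoad methods) are assumptions:

```php
<?php
// Illustrative only: method names inferred from the use cases and comments above.
interface Zend_View_Helper_Dojo_Sketch
{
    public function enable();                      // minimally load dojo.js and dojo.css
    public function disable();
    public function isEnabled();
    public function setCdn($cdn);                  // e.g. 'aol' or 'google'
    public function setCdnVersion($version);
    public function setLocalPath($path);           // use a local install instead of a CDN
    public function setDjConfig(array $config);    // djConfig key/value pairs
    public function registerModulePath($module, $path);
    public function addStyleSheetModule($module);  // e.g. 'dijit.themes.tundra'
    public function addStyleSheet($path);          // e.g. '/css/custom.css'
    public function onLoadCaptureStart($name);     // capture JS as a named onLoad callback
    public function onLoadCaptureStop($name);
    public function __toString();                  // render the style/script tags
}
```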
23 Comments
May 27, 2008
Geoffrey Tran
<p>I'd prefer enable()/disable() vs enableDojo() because of the redundancy</p>
May 27, 2008
Rob Allen
<p>I would prefer enable()/disable() in preference to enableDojo()/disableDojo(). </p>
<p>Similarly setDjConfig()/getDjConfig() would be easier as setConfig()/getConfig()</p>
<p>Other than that, it looks good.</p>
May 28, 2008
Matthew Weier O'Phinney
<p>I'm agreeing with using enable() and disable(). However, setDjConfig()/getDjConfig() map to the djConfig object in Dojo, and thus need to retain their name.</p>
May 27, 2008
Jeffrey Sambells
<p>Agreed, it would be much better if method names were not related to the class name. Using the generic names would provide better consistency in the future (or at least less annoyance) if other JS libraries are implemented.</p>
May 27, 2008
Robin Skoglund
<p>I agree with on enable()/disable() and setConfig()/getConfig().</p>
<p>When capturing:</p>
<ac:macro ac:name="code"><ac:plain-text-body><![CDATA[
<? $this->dojo()->onLoadCaptureStart('foo') ?>
bar();
baz();
<? $this->dojo()->onLoadCaptureStop('foo') ?>
]]></ac:plain-text-body></ac:macro>
<p>What will this do, exactly? What is 'foo'? Will this put the captured content into a function called foo(), which is added to run in the onload() event, or will it set the captured content be the only thing run in onload()? What's going on here?</p>
<p>Also, how is $this->dojo()->addStyleSheet('/css/custom.css') different from adding a style sheet using the HeadLink() view helper?</p>
May 28, 2008
Matthew Weier O'Phinney
<p>Yes, 'foo' will be a closure that is called by dojo's onload handler (which offers some functionality beyond the standard onLoad DOM event).</p>
<p>Regarding addStyleSheet(), Dojo typically adds stylesheets using @import statements as follows:</p>
<ac:macro ac:name="code"><ac:plain-text-body><![CDATA[
<style type="text/css">
@import "";
@import ""
</style>
]]></ac:plain-text-body></ac:macro>
<p>addStyleSheet() would place an @import at the top of this stack to ensure that the various stylesheets are loaded in the appropriate order for cascading and to group the Dojo-specific stylesheet with other Dojo stylesheets.</p>
May 27, 2008
Geoffrey Tran
<p>Consistency aside, if there is not a good reason to expose the view variable as public, it should be protected to prevent contamination. Public variables in a loosely typed language such as PHP are a double-edged sword.</p>
May 27, 2008
Robin Skoglund
<p>Yeah, why is the view public all over the framework?</p>
May 28, 2008
Matthew Weier O'Phinney
<p>Honestly, there's very little reason for it <em>not</em> to be public. Yes, it could be "contaminated", but the chances of that happening are slim, and it would, in fact, likely raise some pretty serious errors that trickle up the error stack quickly.</p>
<p>I have no problem making the view property protected, but this is a minor implementation detail at best.</p>
May 28, 2008
Bradley Holt
<p>I know it was announced after you created your proposal, but you may want to consider giving developers the option of getting Dojo through the <a href="">AJAX Libraries API</a> instead of the AOL CDN if they'd prefer. I'm not sure how many Dojo developers would choose this over the AOL CDN, but if the proposed concept ends up being extended to other JavaScript libraries then I could see the AJAX Libraries API being very useful since it lets you access several different JavaScript libraries. </p>
May 28, 2008
Matthew Weier O'Phinney
<p>Funny – I just saw your comment after I updated the use cases and class skeletons to allow for this. <ac:emoticon ac: See UC-03a for an example.</p>
May 29, 2008
Vincent de Lau
<p>How do useCdn() and setCdn($cdn) relate?</p>
<p>Maybe lose setCdn(), and change useCdn() to useCdn($cdn = 'aol'), where $cdn = false will result in not using a CDN.</p>
<p>Isn't this kind of CDN logic a candidate for a separate component? We kind of know that other JavaScript frameworks will get a Zend component, most of which can be loaded from a CDN. Also, hardcoding CDN's to use might not be the proper way to go.<br />
Besides this, Google also has the <a href="">google.load()</a> API to load JavaScript libraries, which might be an interesting thing to add to Zend. </p>
May 29, 2008
Matthew Weier O'Phinney
<p>Good feedback. I've removed useCdn() from the class skeleton and use cases in favor of setCdn() with a sensible default. By default, the CDN is used unless setLocalPath() is called, so there is no reason to call setCdn(false). BTW, I'm only hardcoding CDNs that we know about and will support; we will allow passing an arbitrary CDN path as well, though this isn't shown in the use cases.</p>
<p>Regarding the idea of separating the CDN logic to a separate component, I'll take it under consideration. Google's ajax libraries api is new, and I need to study it more to see if it's something we want to target in a generic fashion or not.</p>
Jun 17, 2008
Jordan Ryan Moore
<p>Shouldn't any CSS/javascript manipulation delegate to the already existing view helpers that deal with CSS/javascript?</p>
Jun 17, 2008
Matthew Weier O'Phinney
<p>I'm debating that issue currently as I work on the implementation. There are pros and cons to this:</p>
<ul>
<li>Pros
<ul>
<li>Keeps code DRY; not reimplementing functionality</li>
<li>No need for additional helper calls in layout script</li>
</ul>
</li>
<li>Cons
<ul>
<li>More difficult interactions for storing/retrieving data from the dojo view helper (needs to proxy)</li>
<li>More difficult to trace when items were added to the other placeholders ("where did that come from?" types of situations)</li>
</ul>
</li>
</ul>
<p>Keep an eye on the incubator once the proposal is approved to see what the final decision is.</p>
Jul 15, 2008
Matthew Weier O'Phinney
<p>During implementation, it became very clear that the interactions between the dojo() view helper and the other placeholders would become very complex very quickly. In the end, I opted to keep all such functionality inside the dojo() view helper.</p>
<p>In doing so, we've now identified that some functionality in the placeholder implementations may need to be extracted, such as the creation of arbitrary HTML elements. We will look into this for a future release.</p>
Jun 23, 2008
Ralph Schindler
<ac:macro ac:<ac:parameter ac:Zend Official Response</ac:parameter><ac:rich-text-body>
<p>This proposal is accepted to the Standard Incubator.</p>
<p>Notes:</p>
<ul>
<li>Move development to Zend_Dojo top level namespace.</li>
</ul>
</ac:rich-text-body></ac:macro>
Jul 03, 2008
Remi ++
<p>Hi Matthew,</p>
<p>I think, we need an additional support for the requirements of <a href="">DojoToolkit Builds</a>. Please look at the following Use Case:</p>
<p>A developer has finished his Webapplication with DojoToolkit and decides to create a Custom Build of his DojoToolkit files, in order to speed up the loading time of his/her application. The DojoToolkit Build tool allows to aggregate DojoToolkit components into one or more layer (together with ShrinkSafe and such things...). The DojoToolkit folks has decided to use such layers with their new demos. One example is the <a href="">Demo from dante</a>, where you can find two lines in the <head>-section, in order to load</p>
<ol>
<li>dojo.js (with path src/release/dojo/dojo/dojo.js)</li>
<li>demo.js (with path src/release/dojo/skew/demo.js)</li>
</ol>
<p>The result is, that only these two files are loaded by the Browser and not tons of individual .js file as you see, if you use a normal DojoToolkit installation without such a Build.</p>
<p>That means, if a developer decides to use the DojoToolkit build tool, he/she needs the possibility to load more than one dojo.js file with the ViewHelper. The question is, if this is already considered with the current Proposal?</p>
<p>Regards,</p>
<p> Remi </p>
Jul 03, 2008
Matthew Weier O'Phinney
<p>I'll talk with dante about this; I think what we have scripted already should work, but I'll verify with him.</p>
Jul 04, 2008
Remi ++
<p>I'm in contact with dante about another problem: Is it possible to include the Dojo Build process into the Zend Framework? But this is far away from this proposal.</p>
<p>So, back to the Zend_Dojo_View_Helper_Dojo: In the moment there is only the possiblity to set one local path with setLocalPath($path) and the _renderDojoScriptTag() method returns only one line</p>
<ac:macro ac:<ac:plain-text-body><![CDATA[
<script type="text/javascript" src="' . $source . '"></script>
]]></ac:plain-text-body></ac:macro>
<p>But if you want to use a Dojo Build with one or more layers, you need more than this one line above. The example from Dante, I mentioned above, consists of two layers: dojo.js and demo.js and needs at least one more line in the <head>-section:<br class="atl-forced-newline" /></p>
<ac:macro ac:<ac:plain-text-body><![CDATA[
<script type="text/javascript" src="src/release/dojo/dojo/dojo.js"></script>
<script type="text/javascript" src="src/release/dojo/skew/demo.js"></script>
]]></ac:plain-text-body></ac:macro>
<p>This is only a simple example. If a developer decides to use more layers, he/she needs <em>n</em> script lines with individual paths in the <head>-section.</p>
<p>One possible solution could be, to create a addLayer($path) method. Another idea could be to change the setLocalPath() method to accept an array instead of a string.</p>
<p>Remi </p>
Jul 15, 2008
Matthew Weier O'Phinney
<p>Remi – addLayer() is now available in the incubator, and I have tested it locally with a custom build. It allows adding multiple layers, and generates a script tag for each.</p>
Jul 15, 2008
Benjamin Eberlei
<p>There maybe a naming conflict with the addJavascriptFile()/getJavascriptFiles() of the jQuery proposal I currently write up. They actually provide the same functionality under different names. The question is, why the method name addLayer()/getLayers()? I can't really grasp the meaning of a layer in this relationsship.</p>
<p>I also saw that there is a local relative path variable now, which allows for an implicit directory setting for all additional files and the stylesheet files. How should we generalize this? Should same directory as library be enforced on local path usage? Then i could drop my setJavascriptDirectory() methods in favour of an implicit relative directory detection.</p>
Jul 15, 2008
Matthew Weier O'Phinney
<p>Dojo builds can create what are called 'layers' – they contain the original dependencies, intern any static templates, and pass them through shrinksafe; they can be in a directory relative to Dojo, or in their own tree. Typically they ship with a stripped dojo distribution as well. Basically, the term is very specific to Dojo, and the mechanics can be different than simply loading a javascript file (though that is the current implementation).</p>
<p>We could potentially have addJavascriptFile and family in an abstract class or interface; there's no reason that functionality could not be included with the dojo helper as well.</p>
<p>The local relative directory path is autodiscovered based on the location of the dojo file.This is used to help load stylesheet resources, which follow a particular naming convention; this allows us to utilize notation like 'dijit.themes.tundra' and have it resolve to the appropriate file (../dijit/themes/tundra/tundra.css, relative to the dojo/ directory). If you have your toolkit library in a different tree than other JS files, this relative path may be inaccurate.</p> | http://framework.zend.com/wiki/display/ZFPROP/Zend_View_Helper_Dojo?focusedCommentId=3080197 | CC-MAIN-2013-48 | refinedweb | 2,415 | 53.61 |
If you are a newbie to c++ and trying to understand a very basic c++ program to print something, you may have this question. You better read this article before going into this. If you have some knowledge of printing in c++ just read this.
First of all, you need to know what c++ namespaces are. In programming, we cannot have variables, functions, etc with the same name. So to avoid those conflicts we use namespaces.
So for one namespace we can have one unique name and that same name can also be used in another namespace. Following example shows two namespaces.
We have used the same name for variables and functions here. We can call these A::printX() which will give the result 5 and B::printX() which will give the result 10. There is no naming conflicts since we use namespaces. Following code, block shows how to use namespaces.
?using namespace std? means we use the namespace named std. ?std? is an abbreviation for standard. So that means we use all the things with in ?std? namespace. If we don?t want to use this line of code, we can use the things in this namespace like this. std::cout, std::endl.
If this namespace is not used, then computer finds for the cout, cin and endl etc.. Computer cannot identify those and therefore it throws errors.
So now you have an idea on namespaces. Let?s go to the original question why namespace is used, when we have all in the iostream header file. iostream is a file that has all the things like cout, endl and etc is defined. If we need to use them we need to add that file. So basically #include <iostream> means copying and pasting the code in that file to your code. But if we try to use cout, endl in our code without specifying the namespace it will throw an error, because these are defined in the std namespace in the iostream.h file like following. Following is dummy of iostream.h file. This will give you an idea about how this is defined.
So when we run a program to print something, ?using namespace std? says if you find something that is not declared in the current scope go and check std.
So now you have the answer why both statements
#include <iostream>using namespace std;
are used. It is because computer needs to know the code for the cout, cin functionalities and it needs to know which namespace they are defined.
So as a summary, why you need both the header file and the namespace to run a simple c++ program, because computer needs to know the definition of the code of the functionalities. It is defined in the header file. So header file needs to be included. namespace is needed because if a functionalities like cout is used, but not defined in the current scope computer needs to know where to check. so namespace needs to be included. Because we are writing the code outside the std namespace. | https://911weknow.com/why-using-namespace-std-is-used-after-including-iostream | CC-MAIN-2021-17 | refinedweb | 511 | 84.07 |
Finds the next attribute after prevattr in an entry. To iterate through the attributes in an entry, use this function in conjunction with the slapi_entry_first_attr() function.
#include "slapi-plugin.h" int slapi_entry_next_attr( Slapi_Entry *e, Slapi_Attr *prevattr, Slapi_Attr **attr );
This function takes the following parameters:
Entry from which you want to get the attribute.
Previous attribute in the entry.
Pointer to the next attribute after prevattr in the entry.
This function returns 0 if successful or -1 if prevattr was the last attribute in the entry.
Never free the returned attr. Use slapi_attr_dup() to make a copy if a copy is needed. | http://docs.oracle.com/cd/E19528-01/820-2492/aaiha/index.html | CC-MAIN-2015-48 | refinedweb | 101 | 58.89 |
On Tue, Aug 11, 2009 at 2:45 PM, Gunnar Wolf<gwolf@gwolf.org> wrote: > Russ Allbery dijo [Sat, Aug 08, 2009 at 05:51:33PM -0700]: >> >. Automation is definitely the recipe for success when it comes to open source. >> >>) My question then is, would it be possible to get debugging symbols for the C/XS stuff we compile? Especially for figuring out segfaults that would be tremendously useful, even in the context of Perl modules. > > • Less namespace explosion. We would get rid of all the -debug > packages. I understand what you mean, but I hope you don't intend get rid of *all* those packages, because not all of them are what you expect them to be. perl-debug for example is just Perl compiled with debugging symbols enabled (you run it via debugperl rather than perl). Steve Langasek mentioned this in a previous mail. > > -- > Gunnar Wolf • gwolf@gwolf.org • (+52-55)5623-0154 / 1451-2244 > > > -- > To UNSUBSCRIBE, email to debian-policy-REQUEST@lists.debian.org > with a subject of "unsubscribe". Trouble? Contact listmaster@lists.debian.org > > | https://lists.debian.org/debian-devel/2009/08/msg00387.html | CC-MAIN-2019-26 | refinedweb | 178 | 67.86 |
Segmentation Fault
Hi, I was working on a GUI in PyQt5, and I seem to often get Segmentation Faults.
It did give a backtrace once, which is here:
My entire code: (requires)
I wasn't sure what happened so I asked some people and they said it's likely an issue with PyQt5, so.. here I am.
Basically, the program randomly exits with
Segmentation fault (core dumped)at random times.
The program is meant to receive text, and add it to a QLineEdit.
PyQt5 version: 5.11.3
Okay first off you are probably biting off way more than you should with this basic program so instead of trying so much at once let us make it even more basic -- I would have tested this myself but do not have the minecraft stuff loaded so you will have to test it instead -- by doing the following:
import sys from time import sleep from PyQt5.QtWidgets import QApplication from minecraft import authentication from minecraft.exceptions import YggdrasilError from minecraft.networking.connection import Connection from minecraft.networking.packets import Packet, clientbound, serverbound from minecraft.compat import input def handle_join_game(join_game_packet): print('Connected:',join_game_packet) def print_chat(chat_packet): print("Chat :",chat_packet.json_data) sleep(2) #I had to add this or it instantly crashed def main(): print('Start Main') try: connection = Connection(options_addr, options_port, username=options_user) except Exception as err: print("ERROR 1:",err) sys.exit() try: connection.register_packet_listener(handle_join_game, clientbound.play.JoinGamePacket) except Exception as err: print("ERROR 2:",err) sys.exit() try: connection.register_packet_listener(print_chat, clientbound.play.ChatMessagePacket) except Exception as err: print("ERROR 3:",err) sys.exit() try: connection.connect() except Exception as err: print("ERROR 4:",err) sys.exit() sys.exit() if __name__ == '__main__': app = QApplication([]) ex = main() sys.exit(app.exec_())
What you are trying to do with the above is figure out where the error is occurring and perhaps get a better idea of what that error actually is. Having looked at the dump and the logic you have in place my guess is that it is not actually a pyqt5 error exactly but perhaps more an error contained within the Minecraft process or a misunderstanding of how to handle what you are getting back from Minecraft -- this is meant to help you determine both.
Now if the above runs without crashing and you are getting what you expect back within those various calls then post again with sample data of what you are getting back and I will check the rest of the code but again my guess is the issue resides in Minecraft or not fully understanding (aka correctly handling) what you are getting back from Minecraft Still if you catch the error at least we should have a slightly better idea of exactly what the error is this way and that should help as well.
Hi,
I did test out your code above - it simply ran and outputted "Start Main". However I tried to go back to the original version as much as I could:
def handle_join_game(join_game_packet): print('Connected:',join_game_packet) def print_chat(chat_packet): print("Chat :",chat_packet.json_data) sleep(2) #I had to add this or it instantly crashed def main(): try: connection = Connection(options_addr, options_port, username=options_user) except Exception as e: print("conn error") connection.register_packet_listener(handle_join_game, clientbound.play.JoinGamePacket) connection.register_packet_listener(print_chat, clientbound.play.ChatMessagePacket) connection.connect()
and the output that I was expecting was
Username set to testing249399 Connected: 0x25 JoinGamePacket(entity_id=3343, game_mode=0, dimension=0, difficulty=3, max_players=20, level_type='flat', reduced_debug_info=False) Chat : {"text":""} Chat : {"text":""} Chat : {"text":""}
(This should be a replica of what you have above without try/excepts (I'll see if any error at all happens), not sure why yours isn't working)
There should be a tcp connection established until the client quits. The messages at this point should be something along the lines of endless
{"text":""}.
Nothing has thrown an error so far.
Okay well start adding in elements of your full program in the smallest chunks feasible and keep encapsulating things with the try except -- eventually you should catch your error where it is occurring and once you have that it might be easier to figure out how to fix the issue
Hi,
I believe that the problem is during
def print_chat(chat_packet): self.textbox.append(chat_packet.json_data)
It only exits with seg fault if the values are appended - however it happens if any value is appended. The program runs fine without issues if no value is appended... however I have no clue on fixes for this. Do you have any recommendations?
(The lib can perfectly return the values, I tried printing them and it works - it's just when anything is appended to that box that there's a seg fault at different times)
Okay @KCocco what I would suggest is the following:
def print_chat(chat_packet): try: self.textbox.append(chat_packet.json_data) except Exception as err: print ("Append Error:",err) print ("Append Data :","["+chat_packet.json_data+"]")
The key here is that it appears this is where the error is and since its intermittent it makes me feel its more a data issue of some sort (again something coming through that is not expected to) as such we print the error along with the data that we are shipping to the append routine. The "[ ]" are to make sure there are no hidden characters preceding or following the regular textual string we assume is contained within this packet. Once we see what the string contains then we ought to be able to duplicate the error by simply duplicating that string and sending it through the append string. If this does not work then perhaps its the packet just before the packet that triggers the error that is the issue -- in this case do thing following:
newPacket = "[" + chat_packet.json_data + "]" self.textbox.append(newPacket)
This way you can look at the contents of the file and determine if any hidden characters are corrupting the file and causing the issue.
@Denni said in Segmentation Fault:
self.textbox.append(newPacket)
Hi,
I've tried the solution you've suggested. However, I still get segfaults without any exceptions thrown. Also - this time I got a copy of the crash log, which I've uploaded here:
Maybe it can be useful to you.
Also, here's the stacktrace:
Okay first off not sure if this is any part of the issue but from my understanding pyqt5 (which I am assuming you are using) does not play well with anything earlier than python 3.7 (and it appears you are using python 3.5) -- now I know you can perhaps get it to work with python 3.5 but that does not mean it will be 100% stable and this might be a symptom of that -- so my suggestion at this point is to make sure you are using pyqt5 on python 3.7+ --- and if you can do a clean install of latest version python 3.7 along with pyqt5 that would be best ..... while it might not happen I have run into situations where having earlier versions of platform software cause issues - note I am currently upgrading a python 2.7 / pyqt4 project to python 3.7 / pyqt5 and I made sure to load nothing but python 3.7 and pyqt5 on the development machine .... I use a different machine to run the py2.7/qt4 stuff
Note keep in mind this kind of bug is not only catastrophic as you have seen but hard to track down -- especially since I cannot run it myself using your environment so this means you have to try and think outside the box a bit and figure out how to catch the error if possible -- normally a try except within code (if placed properly) would catch the error and keep it from being catastrophic so not sure why its not catching it unless its not placed in the right spot --- try encapsulating all of your code elements within try except blocks -- aka start each function there-abouts with a try and end it there-abouts with an except printing an error statement that clearly lets you know where you are at -- the issue might be cropping up outside our context without being realized -- then again if its deep enough issue perhaps a try except will not catch it but its always worth a try (pun intended) ;)
P.S. It appears that there have been several of these crash reports for py3.5 using pyqt5 and they all "seem" to have to do with graphic rendering -- that is just my quick glance out on the internet using the crash designated [ python3.5 crashed with SIGSEGV in QTextEngine::shapeTextWithHarfbuzzNG() ]
Hi,
I didn't expect to have that many issues with an older version of python... welp.
Either way, thanks for your help :).
Yeah sometimes you do and sometimes you do not -- it is a crap shoot... box cars or snake eyes do come up from time to time.
When investigating this I saw that I could use earlier versions of python but frankly there are reasons that they make changes to the software and using an earlier version (if you do not have to) is just asking for trouble (imho). Further I had noticed in my quick research that while pyqt5 has been tweaked to work with earlier versions of python it was not designed too.
As a final note -- when developing something new I almost always make sure I am using the latest most up to date tools especially since in today's rapidly advancing technological process hardware actually becomes obsolete in about 5 years or so and software while it will last longer sometimes -- It to has been changing fairly quickly. For instance Python 2.7 will no longer be supported come next year and we are only on Python 3.7 So it behooves you to get the latest and greatest stable platforms to do any new coding on or with.
If you are needing more assistance once you have upgraded do drop a line if I am about I would enjoy lending you a hand.
Hey @KCocco your project interested me so I started looking into it and I came across this and thought you might be interested
Granted I think you are using ubuntu but I have found that their are similar aspects from one OS to another so maybe there is something here you can use to help you and I also see my you might be using an earlier version of python -- still all-things-considered -- I would simply get the source code in bits and pieces and begin updating it to 3.7 and pyqt5 ... which is kind of what I plan to do ... so perhaps we can help one another I was going to send you an email but could not so figured I would try it this way. | https://forum.qt.io/topic/103631/segmentation-fault | CC-MAIN-2022-40 | refinedweb | 1,813 | 56.79 |
DateTime.AddMonths Method (Int32) time-of-day part of the resulting DateTime object remains the same as this instance.
The following example adds between zero and fifteen months to the last day of December, 2015. In this case, the AddMonths method returns the date of the last day of each month, and successfully handles leap years.
using System; public class Example { public static void Main() { var dat = new DateTime(2015, 12, 31); for (int ctr = 0; ctr <= 15; ctr++) Console.WriteLine(dat.AddMonths(ctr).ToString("d")); } } // The example displays the following output: // 12/31/2015 // 1/31/2016 // 2/29/2016 // 3/31/2016 // 4/30/2016 // 5/31/2016 // 6/30/2016 // 7/31/2016 // 8/31/2016 // 9/30/2016 // 10/31/2016 // 11/30/2016 // 12/31/2016 // 1/31/2017 // 2/28/2017 // 3/31/2017
Available since 4.5
.NET Framework
Available since 1.1
Portable Class Library
Supported in: portable .NET platforms
Silverlight
Available since 2.0
Windows Phone Silverlight
Available since 7.0
Windows Phone
Available since 8.1 | https://msdn.microsoft.com/en-us/library/system.datetime.addmonths.aspx | CC-MAIN-2015-40 | refinedweb | 176 | 66.84 |
QtQuick.qml-tutorial1
This first program is a very simple "Hello world" example that introduces some basic QML concepts. The picture below is a screenshot of this program.
Here is the QML code for the application:
import QtQuick 2.0 Rectangle { id: page width: 320; height: 480 2.0
Rectangle Type
Rectangle { id: page width: 320; height: 480 color: "lightgray"
We declare a root object type contains many other properties (such as
x and
y), but these are left at their default values.
Text Type
Text { id: helloText text: "Hello world!" y: 30 anchors.horizontalCenter: page.horizontalCenter font.pointSize: 24; font.bold: true }
We add a Text type as a child of the root Rectangle type that displays the text 'Hello world!'.
The
y property is used to position the text vertically at 30 pixels from the top of its parent.
The
anchors.horizontalCenter property refers to the horizontal center of an type. In this case, we specify that our text type should be horizontally centered in the page element (see Anchor-Based Layout).
The
font.pointSize and
font.bold properties are related to fonts and use the dot notation.
Viewing the example
To view what you have created, run the qmlscene tool (located in the
bin directory) with your filename as the first argument. For example, to run the provided completed Tutorial 1 example from the install location, you would type:
qmlscene tutorials/helloworld/tutorial1.qml | https://phone.docs.ubuntu.com/en/apps/api-qml-development/QtQuick.qml-tutorial1 | CC-MAIN-2021-04 | refinedweb | 237 | 57.37 |
Reading Javascript from a website
By
Kidney, in AutoIt General Help and Support
Recommended Posts
Similar Content
- By SkysLastChance
I have a goofy problem. I am hoping someone could shed some light. The example is not going around the text box. It is way off.
I have seen some post blaming IE 11, however I have IE11 on my desktop and it works fine.
Is there anything I can do that might fix this?
; Open a browser with the form example and get a reference to the form ; textarea element. Get the coordinates and dimensions of the text area, ; outline its shape with the mouse and come to rest in the center #include <IE.au3> Local $oIE = _IE_Example("form") Local $oForm = _IEFormGetObjByName($oIE, "ExampleForm") Local $oTextArea = _IEFormElementGetObjByName($oForm, "textareaExample") ; Get coordinates and dimensions of the textarea Local $iScreenX = _IEPropertyGet($oTextArea, "screenx") Local $iScreenY = _IEPropertyGet($oTextArea, "screeny") Local $iWidth = _IEPropertyGet($oTextArea, "width") Local $iHeight = _IEPropertyGet($oTextArea, "height") ; Outline the textarea with the mouse, come to rest in the center Local $iMousespeed = 50 MouseMove($iScreenX, $iScreenY, $iMousespeed) MouseMove($iScreenX + $iWidth, $iScreenY, $iMousespeed) MouseMove($iScreenX + $iWidth, $iScreenY + $iHeight, $iMousespeed) MouseMove($iScreenX, $iScreenY + $iHeight, $iMousespeed) MouseMove($iScreenX, $iScreenY, $iMousespeed) MouseMove($iScreenX + $iWidth / 2, $iScreenY + $iHeight / 2, $iMousespeed)
-.
- By SkysLastChance
So I have two things I am trying to click.
Policy which works.
$oInputs3 = _IETagNameGetCollection($oIE, "div") For $oInput3 in $oInputs3 If StringStripWS($oInput3.innertext,1) = "Policy" Then $target = $oInput3 _IELoadWait($target,"",70000) ExitLoop EndIf Next _IEAction($target, "click")
And Add Insurance which I havent been able to get to work.
$oInputs2 = _IETagNameGetCollection($oIE, "div") For $oInput2 in $oInputs2 If StringStripWS($oInput2.innertext,1) = "Add Insurance" Then $target = $oInput2 _IELoadWait($target,"",70000) ExitLoop EndIf Next _IEAction($target, "click")
Any Ideas on what I am doing wrong? I feel like it might be the spaces between > Add Insurance < but I am not sure.
- By FMS
Hello,
I'm trying to read a div element and wait until it hits 100%.
The structure is like :
<div class="progress-bar" style="width: 48.0219%; overflow: hidden; "></div>
And want to wait until :
<div class="progress-bar" style="width: 100%; overflow: hidden; "></div>
because afther this there will be an redirection whish i don't know the URL from and want to catsh this URL.
And want to push a button on this redidertion page.
Is there a best pratice way how to do this or is there a better way to wait for the redirection?
Maybe wait until button exist or something?
Does anybody could give me some tips about this challange?
thnx in advanced.
#include <IE.au3> Global $IE_flvto = _IECreate("",0,1,1,1) Global $oForm = _IEFormGetObjByName ($IE_flvto, "convertForm") Global $oText = _IEFormElementGetObjByName ($oForm, "convertUrl") _IEFormElementSetValue ($oText, "some text") _IEFormSubmit($oForm) ;wait for redirection ;if redirection loaded push button | https://www.autoitscript.com/forum/topic/135585-reading-javascript-from-a-website/ | CC-MAIN-2018-34 | refinedweb | 461 | 54.52 |
Been looking every where for a tutorial or something.
I've been trying to implement my old generic repository pattern for MVC5 into a new MVC6 project.
I set up 3 class library's
.Core,
.Data and
.Service, However there's an issue with
IDBset, seems my intellisense doesn't like it, I tried to add
System.Data and Entity framework 6 but without any luck (cant find it...confusing).
After roaming google I decided to ask here, is there a tutorial with the correct way or can someone throw up a very simple MVC6 Generic Repository pattern? I have a feeling the Old method of doing it may have changed, just cant seem to find any information other than the inbuilt DI.
Code:
my
IDbContext interface
IDbSet<TEntity> Set<TEntity>() where TEntity : BaseEntity;
does not see
IDbSet, this simply because of Entity Framework? I do have the References to it.
Issue may be i cant find the using statment for entity framework.
UPDATE:
Using Entity framework 8.0.0 beta. Change all IDbset references to DbSet.
However in my generic repository where i use methods such as:
public virtual T GetById(object id) { return this.Entities.Find(id); }
"Find" isnt a method. and i can no longer use "DbEntityValidationException" in my catchs.
Entity Framework 7 Beta 8 doesn't come with the Find Method. It will probably be added before the final release.
You will have to use the the
FirstOrDefault method instead until that happens
public virtual T GetById(int id) { return this.Entities.FirstOrDefault(x => x.Id == id); }
Because
Id property will not be recognized you'll have to add an interface and make your repository implement it.
public interface IEntity { int Id { get; set; } }
e.g.
public class GenericRepository<T> : IGenericRepository<T> where T: class, IEntity
From the github issues list. EF7 does not perform automatic data validation so DbEntityValidationException does not exist in EF7.
Take note: EF7 is not a update of EF but a rewrite.
On the contrary of firste's answer, I'm not confident by using FirstOrDefault, cause it will generates a SELECT TOP 1 [...].
In many cases, the Id should be unique, but sometimes you can have bad designed Db's. If not, your application should throws an exception (that's what we're looking after).
So, until EF7 implements a Find() method, I strongly suggest using SingleOrDefault() :
public virtual T GetById(int id) { return this.Entities.SingleOrDefault(x => x.Id == id); }
Like that, you add an application control to verify that Id should be unique, taking no cares if the Db is correctly done or not. It add another level of security to your business. | https://entityframeworkcore.com/knowledge-base/33647984/asp-net-5-mvc-6-generic-repository-pattern | CC-MAIN-2021-17 | refinedweb | 445 | 57.06 |
Python 3.7 introduced dataclasses (PEP 557). Dataclasses can be a convenient way to generate classes whose primary goal is to contain values.
The design of dataclasses is based on the pre-existing `attrs` library. In fact Hynek Schlawack, the very same author of attrs, helped with the writing of PEP 557.
Basically dataclasses are a slimmed-down version of attrs. Whether this is an improvement or not really depends on your specific use-case.
I think the addition of dataclasses to the standard library makes attrs even more relevant. The way I see it is that one is a subset of the other, and having both options is a good thing. You should probably use both in your project, according to the level of formality you want in that particular piece of code.
In this article I will show the way I use dataclasses and attrs, why I think you should use both, and why I think attrs is still very relevant.
What do they do
Both the standard library's dataclasses and the `attrs` library provide a way to define what I'll call "structured data types" (I would put `namedtuple`, `dict` and `TypedDict` in the same family).
PS: There's probably some more correct CS term for them, but I didn't go to CS School, so ¯\(ツ)/¯
They are all variations on the same concept: a class representing a data type containing multiple values, each value addressed by some kind of key.
They also do a few more useful things: they provide ordering, serialization, and a nice string representation. But for the most part, the most useful purpose is adding a certain degree of formalization to a group of values that need to be passed around.
An example
I think an example would better illustrate what I use dataclasses and attrs for. Suppose you want to render a template containing a table. You want to make sure the table has a title, a description, and rows:
def render_document(title: str, caption: str, data: List[Dict[str, Any]]): return template.render({ "title" : title, "caption": caption, "data": data, })
Now, suppose you want to render a document, which consists of a title, description, status ("draft", "in review", "approved"), and a list of tables. How would you pass the tables to
render_document?
You may choose to represent each table as a
dict:
{ "title": "My Table", "caption": "2019 Earnings", "data": [ {"Period": "QT1", "Europe": 500, "USA": 467}, {"Period": "QT2", "Europe": 345, "USA": 765}, ] }
But how would you express the type annotation for the
tables argument so that it's correct, explicit and simple to understand?
def render_document(title: str, description: str, status: str, tables: List[Dict[str, Any]]): return template.render({ "title": title, "description": description, "status": status, "tables": tables, })
That only gets us to describe the first level if
tables. It doesn't tell us that a
Table has a title, or caption. Instead, you could use a dataclass:
@dataclass class Table: title: str data: List[Dict[str, Any]] caption: str = "" def render_document(title: str, description: str, tables: List[Table]): return template.render({ "title": title, "description": description, "tables": tables, })
This way we have type hinting, helping our IDE helping us.
But we can go one step further, and also provide type validation at runtime. This is where dataclasses stops, and attrs comes in:
@attr.s class Table(object): title: str = attr.ib(validator=attr.validators.instance_of(str)) # don't you pass no bytes! data: List[Dict[str, Any]] = attr.ib(validator=...) description: str = attr.ib(validator=attr.validators.instance_of(str), default="") def render_document(title: str, description: str, tables: List[Table]): return template.render({ "title": title, "description": description, "tables": tables, })
Now, suppose we also need to render a "Report", which is a collection of "Document"s. You can probably see where this is going:
@dataclass class Table: title: str data: List[Dict[str, Any]] caption: str = "" @attr.s class Document(object): status: str = attr.ib(validators=attr.validators.in_( ["draft", "in review", "approved"] )) tables: List[Table] = attr.ib(default=[]) def render_report(self, title: str, documents: List[Document]): return template.render({ "title": title, "documents": documents, })
Note how I am validating that
Document.status is one of the allowed values. This comes particularly handy when you're building abstractions on top of Django models with a field that uses
choices. Dataclasses can't do that.
A couple of patterns I keep finding myself in are the following:
- Write a function that accepts some arguments
- Group some of the arguments into a
tuple
- Hm, I want field names ->
namedtuple.
- Hm, I want types ->
dataclass.
- Hm, I want validation ->
attrs.
Another situation that happens quite often is this:
- write a function that accepts some arguments
- add typing so my IDE can help me out
- oh, by the way, it needs to support a list of those things, not just one at a time!
- refactor to use dataclasses
- This argument can only be one of those values, or
- I ask myself: How do I make sure other developers are passing the right type and/or values?
- switch to attrs
Sometimes I stop at the dataclasses. Lots of times I get to the attrs step.
And sometimes, this happens:
1. one half of this legacy codebase uses
-1 as special value for
False, that other half uses
False. Switch to
attr.s so I can use
converter= to normalize.
Comparison
The two libraries do appear very similar. To get a clearer picture of how they compare, I've made a table of the features I use most:
As you can see, there's a lot of overlap. But the additional features on
attrs provide functionality that I need more often than not.
When to use dataclasses
Dataclasses are just about the "shape" of the data. Choose dataclasses if:
- You don't care about values in the fields, only their type
- adding a dependency is not trivial
When to use attrs
attrs is about the shape and the values. Choose attrs if:
- you want to validate values. A common case would be the equivalent of a ChoiceField.
- you want to normalize, or sanitize the input
- whenever you want more formalization than dataclasses alone can offer
- you are concerned about memory and performances.
attrscan create slotted classes, which are optimized by CPython.
I often find myself using dataclasses and later switching to attr.s because the requirements changed or I find out I need to guard against some particular value. I think that's a normal aspect of developing software and what I call "continuous refactoring".
Why I like dataclasses
I'm glad dataclasses have been added to the standard library, and I think it's a beneficial addition. It's a very convenient thing to have at your disposal whenever you need.
For one, it will encourage a more structured style of programming from the beginning.
But I think the most compelling case is a practical one. Some high-risk corporate environments (eg: financial institutions) require every package to be vetted (with good reason: we've already had incidents of malicious code in libraries). That means that adding attrs is not as simple as adding a line to your
requirements.txt, and will involve waiting on approval from your corpops team. Those developers can use dataclasses right away and their code will immediately benefit from using more formalized data types.
Why I like attrs
Most people don't work in such strictly-controlled environments.
And sure, sometimes you don't need all the features from attrs, but it doesn't hurt having them.
More often than not, I end up needing them anyway, as I formalize more and more of my code's API. Dataclasses only gets half-way of where I want to go.
Conclusion
I think dataclasses encompass only a subset of what attrs has to offer. Admittedly, it is a big subset. But the features that are not covered are important enough and needed often enough that they make attrs not only still relevant and useful, but also necessary.
In my mind, using both allows developers to progressively refactor their code, moving the contracts amongst functions from loosely-defined arguments all the way up to formally described data structures as the requirements of the app stabilize over time.
One nice effect of having dataclasses is that now developers are more incentivized to refactor their code toward more formalization. At some point dataclasses is not going to be enough, and that's when developers will refactor to use attrs. In this way, dataclasses actually acts as an introduction to attrs. I wouldn't be surprised if attrs becomes more popular thanks to dataclasses.
References
Acknowledgments + Thanks
I want to thank the following people for revising drafts and providing input and insights:
- Hynek Schlawack
- Jacob Kaplan-Moss
- Jacob Burch
- Jeff Triplett | https://www.revsys.com/tidbits/dataclasses-and-attrs-when-and-why/ | CC-MAIN-2019-26 | refinedweb | 1,461 | 54.93 |
tcgetattr - get the parameters associated with the terminal
#include <termios.h> int tcgetattr(int fildes, struct termios *termios_p);
The tcgetattr() function gets the parameters associated with the terminal referred to by fildes and stores them in the termios structure referenced by termios_p. The fildes argument is an open file descriptor associated with a terminal.
The termios_p argument is a pointer to a termios structure.
The tcgetattr() operation is allowed from any process.
If the terminal device supports different input and output baud rates, the baud rates stored in the termios structure returned by tcgetattr() reflect the actual baud rates, even if they are equal. If differing baud rates are not supported, the rate returned as the output baud rate is the actual baud rate. If the terminal device does not support split baud rates, the input baud rate stored in the termios structure will be 0.
Upon successful completion, 0 is returned. Otherwise, . | http://www.opengroup.org/onlinepubs/007908799/xsh/tcgetattr.html | crawl-001 | refinedweb | 153 | 54.42 |
Social power, influence, and performance in the NBA, Part 2
Exploring the individual NBA players
Python, pandas, and a touch of R
Content series:
This content is part # of # in the series: Social power, influence, and performance in the NBA, Part 2
This content is part of the series:Social power, influence, and performance in the NBA, Part 2
Stay tuned for additional content in this series.
Getting started
In Part 1 of this series, you learned about the basics of data science and machine learning. You used Jupyter Notebook, pandas, and scikit-learn to explore the relationship between NBA teams and their valuation. Here, you will explore the relationship between social media, salary, and on-the-court performance for NBA players.
Create a unified data frame (Warning: hard work ahead!)
To get started, create a new Jupyter Notebook and name it
nba_player_power_influence_performance.
Next, load all of the data about players and merge the data into a single unified data frame.
Manipulating several data frames falls into the category of the 80 percent of the hard work of data science. In listings 1 and 2, the basketball-reference data frame is copied, then several columns are renamed.
Listing 1. Setting up Jupyter Notebook and loading data frames
import pandas as pd import numpy as np import statsmodels.api as sm import statsmodels.formula.api as smf import matplotlib.pyplot as plt import seaborn as sns from sklearn.cluster import KMeans color = sns.color_palette() from IPython.core.display import display, HTML display(HTML("<style>.container { width:100% !important; }</style>")) %matplotlib inline attendance_valuation_elo_df = pd.read_csv("../data/nba_2017_att_val_elo.csv") salary_df = pd.read_csv("../data/nba_2017_salary.csv") pie_df = pd.read_csv("../data/nba_2017_pie.csv") plus_minus_df = pd.read_csv("../data/nba_2017_real_plus_minus.csv") br_stats_df = pd.read_csv("../data/nba_2017_br.csv")
Listing 2. Fixing bad data in column in plus minus data frame
plus_minus_df.rename(columns={"NAME":"PLAYER"}, inplace=True) players = [] for player in plus_minus_df["PLAYER"]: plyr, _ = player.split(",") players.append(plyr) plus_minus_df.drop(["PLAYER"], inplace=True, axis=1) plus_minus_df["PLAYER"] = players plus_minus_df.head()
The output of the commands to rename the NAME column to the PLAYER column
is shown below. The extra column is also dropped. Take note of the
inplace=TRUE and the drops to apply to the existing data frame.
Figure 1. NBA dataset load and describe
The next step is to rename and merge the core data frame that holds the majority of the stats from Basketball Reference. To do this, use the code provided in listings 3 and 4.
Listing 3. Rename and merge basketball reference data frame
nba_players_df = br_stats_df.copy() nba_players_df.rename(columns={'Player': 'PLAYER','Pos':'POSITION', 'Tm': "TEAM", 'Age': 'AGE'}, inplace=True) nba_players_df.drop(["G", "GS", "TEAM"], inplace=True, axis=1) nba_players_df = nba_players_df.merge(plus_minus_df, how="inner", on="PLAYER") nba_players_df.head()
Listing 4. Clean up and merge PIE fields
pie_df_subset = pie_df[["PLAYER", "PIE", "PACE"]].copy() nba_players_df = nba_players_df.merge(pie_df_subset, how="inner", on="PLAYER") nba_players_df.head()
Figure 2 shows the output of splitting the columns into two parts and re-creating the column. Splitting and re-creating columns is a typical operation and takes up much of the time of data manipulation in solving data science problems.
Figure 2. Merge PIE data frames
Up until now, most of the data manipulation tasks have been relatively straightforward. Things are about to get more difficult because there are missing records. In Listing 5, there are 111 missing salary records. One way to deal with this is to do a merge that drops the missing rows. There are many techniques to deal with missing data; just dropping missing rows, as shown in the example, is not always the best choice. There are many examples of dealing with missing data in Titanic: Machine Learning from Disaster. It is well worth the time to explore a few example notebooks there.
Listing 5. Clean up salary
salary_df.rename(columns={'NAME': 'PLAYER'}, inplace=True) salary_df.drop(["POSITION","TEAM"], inplace=True, axis=1) salary_df.head()
In Listing 6, you can see how a set is created to calculate the number of
rows that are missing data. This is a handy trick that is invaluable in
determining what is different between two data frames. It is accomplished
by using the Python built-in function
len(), which is also
commonly used in regular Python programming to get the length of a
list.
Listing 6. Find missing records and merge
diff = list(set(nba_players_df["PLAYER"].values.tolist()) - set(salary_df["PLAYER"].values.tolist())) len(diff) Out[45]: 111 nba_players_with_salary_df = nba_players_df.merge(salary_df)
The output is shown below.
Figure 3. Difference between data frames
With the data frame merges complete, it's time to create a correlation heatmap to discover which features are correlated. The heatmap below shows the combined output of the correlation of 35 columns and 342 rows. A couple of immediate things that pop out are that salary is highly correlated with both points and WINS_RPM, which is an advanced statistic that calculates the estimated wins a player adds to their team by being on the court.
Another interesting correlation is that Wikipedia page views are strongly correlated with Twitter Favorite counts. This correlation makes sense intuitively because they are both measures of engagement and popularity of NBA players by fans. This is an example of how a visualization can help nail down which features will go into a machine learning model.
Figure 4. NBA player correlation heatmap: 2016-2017 season (stats and salary)
With some initial discovery of what features are correlated, the next step is to further discover relationships in the data by plotting in Seaborn. Commands executed to run the plot are shown below.
Listing 7. Seaborn lmpot of salary versus WINS_RPM
sns.lmplot(x="SALARY_MILLIONS", y="WINS_RPM", data=nba_players_with_salary_df)
In the plot output shown below, there appears to be a strong linear relationship between salary and WINS_RPM. To further investigate this, run a linear regression.
Figure 5. Seaborn lmplot of salary and wins real plus minus
The output of two linear regressions on wins is below. One of the more interesting findings is that wins are explained more by WINS_RPM than by points. The R-squared (goodness of fit) is 0.324 in the case of WINS_RPM versus 0.200 in points. WINS_RPM is the statistic that shows the individual wins attributed to a player. It makes sense that a more advanced statistic that takes into account defensive and offensive statistics and time on the court is more predictive versus just an offensive statistic.
An example of how this could play out in practice is to imagine a player who had a very low shooting percentage, but high points. If he shot the ball often, instead of a teammate with a higher shooting percentage, it could cost him wins. This case played itself out in real life during the 2015-16, season where Kobe Bryant in his last year with the Los Angeles Lakers, had 17.6 points per season, but a 41-percent shooting percentage for two-pointers. The team ended up only winning 17 games, and the WINS_RPM stat was 0.66 (only half a win attributed to his play during the season).
Figure 6. Linear regression wins
Listing 8. Regression wins and points
results = smf.ols('W ~POINTS', data=nba_players_with_salary_df).fit() print(results.summary())
Another way to represent this relationship graphically is with ggplot in
Python. Listing 9 is an example of how to set up the plot. The library in
Python is a direct port of ggplot in R
and is in active development. As of the time of this writing, it isn't as
smooth to use as the regular R ggplot, but it has a lot of nice features. The
graph is shown below.
Note: A handy feature is the ability to represent another column of continuous variables by a color.
Listing 9. Python ggplot
from ggplot import * p = ggplot(nba_players_with_salary_df,aes(x="POINTS", y="WINS_RPM", color="SALARY_MILLIONS")) + geom_point(size=200) p + xlab("POINTS/GAME") + ylab("WINS/RPM") + ggtitle("NBA Players 2016-2017: POINTS/GAME, WINS REAL PLUS MINUS and SALARY")
Figure 7. Python ggplot plus minus salary points
Grabbing Wikipedia page views for NBA players
The next task is to figure out how to collect Wikipedia page views, which is typically a messy data collection. Problems include:
- Figuring out how to retrieve the data from Wikipedia (or some website)
- Figuring out how to programmatically generate Wikipedia handles
- Writing the data into a data frame and joining it to the rest of the data
The code below is in the GitHub repository for this tutorial. Comments about this code are throughout the sections below.
Listing 10 provides the code to construct a Wikipedia URL that returns a JSON response. In Part 1, in the docstrings, the route to construct is shown. This is the URL the code calls to get the page view data.
Listing 10. Wikipedia, part 1
""" Example Route To Construct: + metrics/pageviews/per-article/ + en.wikipedia/all-access/user/ + LeBron_James/daily/2015070100/2017070500 + """ import requests import pandas as pd import time import wikipedia BASE_URL =\ ""
In Listing 10, part 2, Wikipedia handles are created by guessing that the first and last name is the player's name, then trying to append "(basketball)" to the URL if there is an error. This solves the majority of the cases, and only a few names/handles are missed. An example guess would be "LeBron" as the first name and "James" as the last name. The reason to initially guess this way is that it matches close to 80 percent of the Wikipedia pages and saves the time of finding the URLs one by one. For the 20 percent of names that don't fit this pattern, there is another method (shown below) that matches 80 percent of those initial misses.
By adding "(basketball)," Wikipedia can differentiate between one famous name and another. This convention catches the majority of names that did not match. Listing 10, part 2 shows the last method to find the other names.
Listing 10. Wikipedia, part 2 def create_wikipedia_handle(raw_handle): """Takes a raw handle and converts it to a wikipedia handle""" wikipedia_handle = raw_handle.replace(" ", "_") return wikipedia_handle def create_wikipedia_nba_handle(name): """Appends basketball to link""" url = " ".join([name, "(basketball)"]) return url
In Listing 10, part 3 the guess of a handle is facilitated by having access to a roster of players. This portion of the code runs the matching code shown above against the entire NBA roster collected earlier in the article.
Listing 10. Wikipedia, part 3
def wikipedia_current_nba_roster(): """Gets all links on wikipedia current roster page""" links = {} nba = wikipedia.page("List_of_current_NBA_team_rosters") for link in nba.links: links[link] = create_wikipedia_handle(link) return links
In Listing 10, part 4, the entire script runs using the CSV file as input and making another CSV file work as output. Note that the Wikipedia Python library is used to inspect the page to find the word "NBA" in the final matches. This is the last check for pages that have failed multiple guessing techniques. The result of all of these heuristics is a relatively reliable way to get the Wikipedia handles for NBA athletes. You could imagine using a similar technique for other sports.
Listing 10. Wikipedia, part 4()
Grabbing Twitter engagement for NBA players
Now you need the Twitter library so you can download the tweets for NBA players. Listing 11, part 1 shows the API to use this code. The Twitter API is more advanced than the simple script shown below. This is one of the advantages of using a third-party library that has been developed for years.
Listing 11. Twitter extract metadata, part 1
""" Get status on Twitter df = stats_df(user="KingJames") In [34]: df.describe() Out[34]: favorite_count retweet_count count 200.000000 200.000000 mean 11680.670000 4970.585000 std 20694.982228 9230.301069 min 0.000000 39.000000 25% 1589.500000 419.750000 50% 4659.500000 1157.500000 75% 13217.750000 4881.000000 max 128614.000000 70601.000000 In [35]: df.corr() Out[35]: favorite_count retweet_count favorite_count 1.000000 0.904623 retweet_count 0.904623 1.000000 """ import time import twitter from . import config import pandas as pd import numpy as np from twitter.error import TwitterError
In this next section, the tweets are extracted and converted into a pandas data frame that stores the values as a median. This is an excellent technique to compress the data by only storing the values we are interested in (i.e., the median of a set of data). The median is a useful metric because it is robust against outliers.
Listing 11. Twitter extract metadata, part 2 def create_twitter_csv(data="data/nba_2016_2017_wikipedia.csv"): nba = median_engagement(data) nba.to_csv("data/nba_2016_2017_wikipedia_twitter.csv")
Creating advanced visualizations
With the addition of social media data, you can create more advanced plots with additional insights. Figure 8 is an advanced plot, called a heatmap. It shows the correlation of a compressed set of key features. These features are a great building block for doing more machine learning, such as clustering (see Part 1 of this series). It would be worth using this data on your own to experiment with different clustering configurations.
Figure 8. NBA player endorsement, social power, on-court performance, team valuation correlation heatmap: 2016-17 season
Listing 12 provides the code to create the correlation heatmap.
Listing 12. Correlation heatmap
endorsements = pd.read_csv("../data/nba_2017_endorsement_full_stats.csv") plt.subplots(figsize=(20,15)) ax = plt.axes() ax.set_title("NBA Player Endorsement, Social Power, On-Court Performance, Team Valuation Correlation Heatmap: 2016-2017 Season") corr = endorsements.corr() sns.heatmap(corr, xticklabels=corr.columns.values, yticklabels=corr.columns.values, cmap="copper")
Listing 13 shows a heatmap with colors created using a log scale, along with a special color map. This is a great trick to provide a distinct contrast between each cell. A log scale is a transformation that shows the relative change versus the actual change. It is a common technique to use in graphing when the values have large magnitudes of differentiation — for example, 10 and 10 million. Showing the relative change, versus the actual change, adds more clarity to a plot. Normally, a plot is shown in linear scale (a straight line). A log scale (log line) diminishes in power as it is plotted (meaning that it flattens out).
Listing 13. Correlation heatmap advanced
from matplotlib.colors import LogNorm plt.subplots(figsize=(20,15)) pd.set_option('display.float_format', lambda x: '%.3f' % x) norm = LogNorm() ax = plt.axes() grid = endorsements.select_dtypes([np.number]) ax.set_title("NBA Player Endorsement, Social Power, On-Court Performance, Team Valuation Heatmap: 2016-2017 Season") sns.heatmap(grid,annot=True, yticklabels=endorsements["PLAYER"],fmt='g', cmap="Accent", cbar=False, norm=norm)
Figure 9. NBA player endorsement, social power, on-court performance, team valuation heatmap: 2016-17 season
One last plot uses the R language to create a multi-dimensional plot in ggplot. This is shown in Listing 14 and Figure 10. The native ggplot library in R is a powerful and unique charting library that can create multiple dimensions with color, size, facets, and shapes. The ggplot library in R is well worth the time to explore on your own.
Listing 14. Advanced R-based ggplot
ggplot(nba_players_stats, aes(x=WINS_RPM, y=PAGEVIEWS, color=SALARY_MILLIONS, size=TWITTER_FAVORITE_COUNT)) + geom_point() + geom_smooth() + scale_color_gradient2(low = "blue", mid = "grey", high = "red", midpoint = 15) + labs(y="Wikipedia Median Daily Pageviews", x="WINS Attributed to Player( WINS_RPM)", title = "Social Power NBA 2016-2017 Season: Wikipedia Daily Median Pageviews and Wins Attributed to Player (Adusted Plus Minus)") + geom_text(vjust="inward",hjust="inward",color="black",size=4,check_overlap = TRUE, data=subset(nba_players_stats, SALARY_MILLIONS > 25 | PAGEVIEWS > 4500 | WINS_RPM > 15), aes(WINS_RPM,label=PLAYER )) + annotate("text", x=8, y=13000, label= "NBA Fans Value Player Skill More Than Salary, Points, Team Wins or Another Other Factor?", size=5) + annotate("text", x=8, y=11000, label=paste("PAGEVIEWS/WINS Correlation: 28%"),size=4) + annotate("text", x=8, y=10000, label=paste("PAGEVIEWS/Points Correlation 44%"),size=4) + annotate("text", x=8, y=9000, label=paste("PAGEVIEWS/WINS_RPM Correlation: 49%"),size=4, color="red") + annotate("text", x=8, y=8000, label=paste("SALARY_MILLIONS/TWITTER_FAVORITE_COUNT: 24%"),size=4)
Figure 10. NBA player social power: 2016-17 season
Conclusion
In Part 1 of this series, you learned the basics of machine learning and used unsupervised clustering techniques to explore the valuation of the team. The tools utilized for this data science were Python and advanced graphs with Jupyter Notebook.
Here in Part 2, you explored the players and their relationship with social media, influence, salary, and on-the-court performance. Many advanced graphs were created in a Jupyter Notebook, but there was also a brief touch of R.
Some questions exposed or needing further investigation (they might be wrong assumptions):
- Salary paid to players isn't the best predictor of wins.
- Fans engage more with highly skilled athletes (versus highly paid, for example).
- Endorsement income correlates to how many wins a team has for a player, so they may want to be careful about which team they switch to.
- There appears to be a different audience that attends games in person and the audience that engages with social media. The audience in person seems bothered if their team is unskilled.
There's more you can do. Try applying both supervised and unsupervised machine learning to the data set provided in GitHub. I have uploaded the data set for you to experiment with this project on Kaggle. | https://www.ibm.com/developerworks/library/ba-social-influence-python-pandas-machine-learning-r-2/index.html | CC-MAIN-2021-10 | refinedweb | 2,898 | 57.06 |
Hello,
If there are two projects A and B. A is created as .dll, while B is created as static library.
project A has a class "classA"
classA: public classB
{
public:
testA();
};
project B has a class "classB"
classB
{
public:
testB();
} ;
I want to make a symbol of classB::testB() in A.exp (like ?testB@classB@@QAEXXZ), can I do it by using __declspec(dllexport)?
Also if I make classB as a template class, can the symbol of classB::testB() be generated automatically in A.exp when building project A?
Thanks,
Originally Posted by jeffwang66
A is created as .dll, while B is created as static library.
This is where your problem is. The specific of linking with static library is that objects get extracted from the library on the stage of linking loadable module. Only those ones that required for resolving this particular module symbols. Once the dll hosts classA but never calls classB::testB inside the code, the said classB::testB body never appears in the module, and therefore, there's no symbol ?testB@classB@@QAEXXZ to export from the dll. For the process of linking the relation of inheritance really means nothing, and linking is strictly about resolving binary symbol dependencies.
Last edited by Igor Vartanov; November 28th, 2013 at 12:52 AM.
Best regards,
Igor
The answer is already given in your previous post about this.
Either explicitely provide the testB member override in classA or make your executable dependant on the static lib as well.
Forum Rules | http://forums.codeguru.com/showthread.php?541277-Seive-of-Atkin&goto=nextoldest | CC-MAIN-2017-13 | refinedweb | 252 | 66.44 |
We are proud to announce that the book has been published: *Lua Programming Gems*, edited by L. H. de Figueiredo, W. Celes, and R. Ierusalimschy.
Lua is really very flexible and can simulate the advanced features of many other languages. Today I studied Lua's metamethods, and I record what I learned here. Everything in Lua is a table, and I think the techniques for computing with tables…
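A minimal sketch of a metamethod at work (my own example, not code from the article): the `__add` metamethod lets plain tables behave like values with operator overloading.

```lua
-- Make 2-D vector tables addable via the __add metamethod.
local Vec = {}
Vec.__add = function(a, b)
  return setmetatable({x = a.x + b.x, y = a.y + b.y}, Vec)
end

local function vec(x, y) return setmetatable({x = x, y = y}, Vec) end

local v = vec(1, 2) + vec(3, 4)  -- triggers Vec.__add
print(v.x, v.y)                  -- 4  6
```

The same mechanism (`__index`, `__call`, `__tostring`, and friends) is how Lua simulates classes, default values, and callable objects.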
Requirements: for operational needs, build a UI package with Lua. Developing interfaces the traditional way is too much trouble: maintaining state machines, loading images and releasing resources while constantly watching for memory overflows, …
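To illustrate why a script layer helps here, a toy table-driven UI state machine in Lua (all names below are illustrative assumptions of mine, not taken from the article):

```lua
-- States and their event -> next-state transitions, expressed as plain data.
local transitions = {
  idle    = { open  = "loading" },
  loading = { done  = "shown", fail = "idle" },
  shown   = { close = "idle" },
}

local state = "idle"
local function fire(event)
  state = transitions[state][event] or state  -- ignore unknown events
end

fire("open"); fire("done")
print(state)  -- shown
```

Because the transition table is data, designers can tweak UI flow without touching or recompiling the C++ host.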
A preliminary study of the Lua scripting language. Lua programming, by Mu Feng (Second Life member). This article comes from a CSDN blog. In this article, I want to show you how to program in Lua…
In this article, I demonstrate how to call C functions from a Lua script. Let's say we have some C functions declared and defined in liba.h and liba.c respectively, and our goal is to be able to call them from a Lua script. //liba.h typedef struct tagT{ int…
ConcurrentLua: Lua for concurrent programming. Translated by Linker from the original address; the translation is provided only as additional information. Introduction: ConcurrentLua is a shared-nothing implementation of the asynchronous message-passing model. The model comes from the Erlang langu…
LuaBind: the most powerful Lua/C++ binding. Translation: Linker Lin ([email protected]). 1. Introduction. LuaBind is a library that helps you bind C++ and Lua: it can expose C++ functions and classes to Lua, and it supports th…
C++: big language, small library. Lua: small language, small library. Erlang: small language, big library.
Squirrel is a relatively new programming language. It inherits many features from the famous Lua language, and its scope of application is similar to Lua's. Squirrel's author is the Italian Alberto Demichelis; Squirrel's development was intended to be…
The wordFilter needs to be removed. This class is used to convert a Java object into the string form of a Lua table, for use when dynamically generating Lua scripts; you can reach it inside the el function. The original author is well known, but please note that the object I didn…
1 - Introduction. Lua is an extension programming language, designed to support general procedural programming with data-description facilities. Lua also supports object-oriented programming, functional programming, and data-driven p…
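A hedged sketch of those styles in a few lines of Lua (my own examples, not text from the manual):

```lua
-- Procedural: an ordinary loop over an array.
local function sum(t)
  local s = 0
  for _, v in ipairs(t) do s = s + v end
  return s
end

-- Functional: functions are first-class values.
local function map(f, t)
  local r = {}
  for i, v in ipairs(t) do r[i] = f(v) end
  return r
end

-- Data description: a table literal used directly as configuration data.
local config = { width = 80, tags = { "lua", "intro" } }

print(sum(map(function(x) return x * x end, {1, 2, 3})))  -- 14
print(config.width)                                       -- 80
```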
Our company is developing a microblog application, but in even the most common microblog features many operators embed other things, such as banner ads. Summarizing our previous development experience, things like this constantly c…
Reposted from 3D1. Steps for using Lua scripts under C++ (1). Nowadays, more and more C++ servers and clients support embedded scripting, especially in the gaming area, where scripting languages have penet…
table Part of the table function is only part of its impact on the array, while the other part is an impact on both the table. The following instructions will be separated. table.concat (table, sep, start, end) concat is concatenate (chain linking) t
In order to get to know the LUA in our GDEX in how to use them in the end, I decided to look at how to better package in the WPF in a lua based on the APP framework. Today the first of the Lua for C # for a simple package. Used Lua in C # under the p
lua parser management of a stack When the value of C need to pass the time lua, He can Lua_pushXXX way, the value (with type) onto the stack, passed to lua When lua need to pass the value of C when, lua will value (dynamic type) onto the stack, C can
function rename(dirpath,func)
  os.execute('dir ' .. dirpath .. ' /b/a -d > temp.txt')
  io.input("temp.txt")
  local dirname = ""
  local files = {}
  for line in io.lines() do
    if string.find(line,'.bmp') then
      table.insert(files,line)
    end
  en
#include <stdio.h>
#include <string.h>
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

void luaM_setstring(lua_State *L, const char *index, char *value) {
    lua_pushstring(L, index);
    lua_pushstring(L, value);
    lua_sett
Recently I read an old book, "Game of the trip - my programming sentiment". The book is quite deep, but its analysis of game development is very thorough, which is why I have so far only read half of it. Undeniably, the book is very well written; many of th
LUA source code reading order - [LUA]. Copyright: when reprinting, identify the original source and author information in the article with a hyperlink and include this statement. Reposted from:
LuaEclipse, Lua for Windows, LuaJava, LuaC# (WPF, SL), Port of the Lua programming language f
"Small is Beautiful: the design of Lua" - notes on translating the talk. The talk is very well written and summarizes a lot of noteworthy things about Lua; it is good introductory material for learning Lua, so I have translated and recorded it. 10 Mar 2010, Roberto talks at th
The author is said to be a master from OGDEV. Learning Lua by example (1) - Hello World. 1. Preface: games ultimately cannot do without a scripting language. Lua is a scripting language that combines very closely with C/C++ and is very efficient. It is general
The official reference manual for Lua 5.1.4. Translated version by Cloud Wind (云风); another, more complete version I transl
2.5.6 - Precedence. Operator precedence in Lua follows the table below, from lower to higher priority:

or
and
<     >     <=    >=    ~=    ==
..
+     -
*     /     %
not   #     - (unary)
^

As usual, you can use parentheses to change the precedences of an expression. The concatenation ('..') and exponentiation ('^') operators are right associative; all other binary operators are left associative.
2.11 - Coroutines. Lua supports coroutines, also called collaborative multithreading. A coroutine in Lua represents an independent thread of execution. Unlike threads in multithread systems, however, a coroutine only suspends its execution by explicitly calling a yield function.
3.8 - The Debug Interface. Lua has no built-in debugging facilities. Instead, it offers a special interface by means of functions and hooks. This interface allows the construction of different kinds of debuggers, profilers, and other tools that need "inside information" from the interpreter.
5.4 - String Manipulation. This library provides generic functions for string manipulation, such as finding and extracting substrings, and pattern matching. When indexing a string in Lua, the first character is at position 1 (not at 0, as in C). Indices are allowed to be negative and are interpreted as indexing backwards, from the end of the string.
io.close ([file]): Equivalent to file:close(). Without a file, closes the default output file.
Recently a project I worked on used Lua, so I learned it along the way. Friends who have played World of Warcraft will have heard of Lua, but it is still relatively little used. OO is just a way of thinking; we can achieve it in Lua to reduce redundant code. First build a

import luaAlchemy.LuaAlchemy;
import luaAlchemy.LuaAssets;

private var lua:LuaAlchemy

public function init():void {
    lua = new LuaAlchemy(Lu
Some grammar questions are not covered in this wrapper discussion (see the previous chapter); this is mainly about the arrangement of the stack. Following the call into the static void f_parser(lua_State *L, void *ud) function, the stack is recorded as follows:
Function: {"getlocal", db_getlocal}. Start by tracing db_getlocal. The function prints out all the local variables. The general idea is to count through the executable code and, within its size limit, fetch the variable informat
Recently I have been looking at a Lua debugger, remdebug, with remote breakpoints. It is implemented mainly using coroutines, which is very comfortable. Based on it I changed a few things and made a toy: it can remotely view certain values at run time. When the server is running and you suddenly want to
Lua undoubtedly has an Eastern air: "simple but not simple". I like it. My development machine is RedHat Linux AS 5. First, from Lua's official website () download the latest release package (PS: I downloaded lua-5.1.4.tar.
tc:
  tar zxvf tokyocabinet-1.4.47.tar.gz
  cd tokyocabinet-1.4.47
  ./configure
  make
  make install
lua:
  make linux
  make install
  file /usr/local/bin/
Note: the latest version is in the following blog post: ngx_lua_module, an nginx http module that embeds the Lua parser in nginx, used to parse and execute Lua scripts in the background of web pages. Features: *) HTML
Note: the latest version is in the following blog post: ngx_lua_module, an nginx http module that embeds the Lua parser in nginx, used to parse and execute Lua scripts in the background of web pages. Update: *
Lua website: Lua is a compact scripting language created by authors from Brazil. It was designed to be embedded in applications so as to provide them with flexible extension and customization. Lua scripts can easily be called from C/C++ code, and can in turn call C/C++ functions, which lets Lua be used widely in applications: not only as an extension script, but also as an ordinary configuration file in place of formats such as XML or INI, while being easier to understand and maintain. Lua is written in standard C, with concise and elegant code, and can be compiled and run on almost all operating systems and platforms. A complete Lua
google-pinyin-api website: starting with version 2.1.9.57, Google Pinyin added API extension support based on the Lua language; this project publishes the Google Pinyin API extensions. License: GPLv3. Language: Lua. OS: Windows.
LuaBind website: LuaBind is a library that helps you bind C++ and Lua. It can expose C++ functions and classes to Lua. It also lets you define a Lua class functionally and have it inherit from C++ or Lua; Lua classes can override virtual functions inherited from C++ base classes. Its target platform is Lua 5.0; Lua 4.0 is not supported. It is implemented with template metaprogramming, which means you do not need an extra preprocessing step to compile your project (the compiler does all the work for you). Th
LuaJIT website: LuaJIT is a Just-In-Time compiler for the Lua programming language, with the interpreter written in C. LuaJIT tries to retain the essence of Lua: lightweight, efficient, and extensible. Features: by default every function is JIT-compiled to native machine code: * functions that are never used are not compiled; * JIT compilation can be selectively turned on and off for functions, sub-functions, and even whole modules; * functions that need to be interpreted (translator's note: i.e. not
Lua for Windows website: Lua for Windows provides a development and runtime environment for the Lua scripting language on Windows. License: MIT. Language: Lua. OS: Windows.
Emscripten website: [email protected] used this LLVM-to-JavaScript compiler to produce an implementation of Lua on top of JS; the runtime efficiency is unknown, but it really was translated from C. If you are interested, click here to try it; the lua.js in it is not obfuscated. License: MIT. Languages: JavaScript, Lua. OS: cross-platform.
LuaPlus website: LuaPlus is a C++ enhancement of Lua; that is, LuaPlus itself is derived by enhancing the Lua source code. It is a rather good choice for working together with C++. License: MIT. Languages: C/C++, Lua. OS: cross-platform.
CodeWeblog.com - All rights reserved. 闽ICP备15018612号
Why does if None.__eq__("a") evaluate to True?
If you execute the following statement in Python 3.7, it will (from my testing) print b:
if None.__eq__("a"):
    print("b")
However, None.__eq__("a") evaluates to NotImplemented.

Naturally, "a".__eq__("a") evaluates to True, and "b".__eq__("a") evaluates to False.

I initially discovered this when testing the return value of a function, but didn't return anything in the second case -- so, the function returned None.
What's going on here?
This is a great example of why the __dunder__ methods should not be used directly, as they are quite often not appropriate replacements for their equivalent operators; you should use the == operator instead for equality comparisons, or in this special case, when checking for None, use is (skip to the bottom of the answer for more information).
You've done

None.__eq__('a')  # NotImplemented

which returns NotImplemented since the types being compared are different. Consider another example where two objects with different types are being compared in this fashion, such as 1 and 'a'. Doing (1).__eq__('a') is also not correct, and will return NotImplemented. The right way to compare these two values for equality would be

1 == 'a'  # False
What happens here is:

1. First, (1).__eq__('a') is tried, which returns NotImplemented. This indicates that the operation is not supported, so 'a'.__eq__(1) is called, which also returns the same NotImplemented.
2. The objects are then treated as if they are not the same, and False is returned.
Here's a nice little MCVE using some custom classes to illustrate how this happens:
class A:
    def __eq__(self, other):
        print('A.__eq__')
        return NotImplemented

class B:
    def __eq__(self, other):
        print('B.__eq__')
        return NotImplemented

class C:
    def __eq__(self, other):
        print('C.__eq__')
        return True

a = A()
b = B()
c = C()

print(a == b)
# A.__eq__
# B.__eq__
# False

print(a == c)
# A.__eq__
# C.__eq__
# True

print(c == a)
# C.__eq__
# True
Of course, that doesn't explain why the operation returns true. This is because NotImplemented is actually a truthy value:

bool(None.__eq__("a"))  # True

Same as:

bool(NotImplemented)  # True
For more information on what values are considered truthy and falsey, see the docs section on Truth Value Testing, as well as this answer. It is worth noting here that NotImplemented is truthy, but it would have been a different story had the class defined a __bool__ or __len__ method that returned False or 0 respectively.
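To make that last point concrete, here is a small sketch (the class names are my own, not from the question) showing how __bool__ and __len__ drive truth testing, while objects with neither hook default to truthy:

```python
# Hypothetical classes illustrating Truth Value Testing; names are my own.
class AlwaysFalsey:
    def __bool__(self):   # truth testing consults __bool__ first
        return False

class Empty:
    def __len__(self):    # with no __bool__, a zero length means falsey
        return 0

class Plain:
    pass                  # neither hook: instances default to truthy

print(bool(AlwaysFalsey()))  # False
print(bool(Empty()))         # False
print(bool(Plain()))         # True
```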
If you want the functional equivalent of the == operator, use operator.eq:

import operator
operator.eq(1, 'a')  # False
However, as mentioned earlier, for this specific scenario, where you are checking for None, use is:

var = 'a'
var is None   # False

var2 = None
var2 is None  # True
The functional equivalent of this is using operator.is_:

operator.is_(var2, None)  # True
None is a special object, and only 1 version exists in memory at any point of time. IOW, it is the sole singleton of the NoneType class (but the same object may have any number of references). The PEP8 guidelines make this explicit:

    Comparisons to singletons like None should always be done with is or is not, never the equality operators.
In summary, for singletons like None, a reference check with is is more appropriate, although both == and is will work just fine.
From: stackoverflow.com/q/53984116 | https://python-decompiler.com/article/2018-12/why-does-if-none-eq-a-evaluate-to-true | CC-MAIN-2019-47 | refinedweb | 562 | 57.67 |
A ~5 minute guide to Numba¶

Numba is a just-in-time compiler for Python that works best on code that uses NumPy arrays and functions, and loops. The most common way to use Numba is through its collection of decorators that can be applied to your functions to instruct Numba to compile them. When a call is made to a Numba-decorated function it is compiled to machine code "just-in-time" for execution, and all or part of your code can subsequently run at native machine code speed!
Out of the box Numba works with the following:
- OS: Windows (32 and 64 bit), OSX and Linux (32 and 64 bit)
- Architecture: x86, x86_64, ppc64le. Experimental on armv7l, armv8l (aarch64).
- GPUs: Nvidia CUDA. Experimental on AMD ROC.
- CPython
- NumPy 1.15 - latest
How do I get it?¶
Numba is available as a conda package for the Anaconda Python distribution:
$ conda install numba
Numba also has wheels available:
$ pip install numba
Numba can also be compiled from source, although we do not recommend it for first-time Numba users.
Numba is often used as a core package so its dependencies are kept to an absolute minimum, however, extra packages can be installed as follows to provide additional functionality:
- scipy - enables support for compiling numpy.linalg functions.
- colorama - enables support for color highlighting in backtraces/error messages.
- pyyaml - enables configuration of Numba via a YAML config file.
- icc_rt - allows the use of the Intel SVML (high performance short vector math library, x86_64 only). Installation instructions are in the performance tips.
Will Numba work for my code?¶
This depends on what your code looks like. If your code is numerically orientated (does a lot of math), uses NumPy a lot and/or has a lot of loops, then Numba is often a good choice. In these examples we'll apply the most fundamental of Numba's JIT decorators, @jit, to try and speed up some functions to demonstrate what works well and what does not.
Numba works well on code that looks like this:
from numba import jit
import numpy as np

x = np.arange(100).reshape(10, 10)

@jit(nopython=True) # Set "nopython" mode for best performance, equivalent to @njit
def go_fast(a): # Function is compiled to machine code when called the first time
    trace = 0.0
    for i in range(a.shape[0]):
        trace += np.tanh(a[i, i])
    return a + trace

print(go_fast(x))
It won’t work very well, if at all, on code that looks like this:
from numba import jit
import pandas as pd

x = {'a': [1, 2, 3], 'b': [20, 30, 40]}

@jit
def use_pandas(a): # Function will not benefit from Numba jit
    df = pd.DataFrame.from_dict(a) # Numba doesn't know about pd.DataFrame
    df += 1                        # Numba doesn't understand what this is
    return df.cov()                # or this!

print(use_pandas(x))
Note that Pandas is not understood by Numba and as a result Numba would simply run this code via the interpreter but with the added cost of the Numba internal overheads!
What is nopython mode?¶
The Numba @jit decorator fundamentally operates in two compilation modes, nopython mode and object mode. In the go_fast example above, nopython=True is set in the @jit decorator; this is instructing Numba to operate in nopython mode. The behaviour of the nopython compilation mode is to essentially compile the decorated function so that it will run entirely without the involvement of the Python interpreter. This is the recommended and best-practice way to use the Numba jit decorator as it leads to the best performance.
Should the compilation in nopython mode fail, Numba can compile using object mode. This is a fall-back mode for the @jit decorator if nopython=True is not set (as seen in the use_pandas example above). In this mode Numba will identify loops that it can compile and compile those into functions that run in machine code, and it will run the rest of the code in the interpreter. For best performance avoid using this mode!
How to measure the performance of Numba?¶
First, recall that Numba has to compile your function for the argument types given before it executes the machine code version of your function; this takes time. However, once the compilation has taken place Numba caches the machine code version of your function for the particular types of arguments presented. If it is called again with the same types, it can reuse the cached version instead of having to compile again.
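As a rough mental model only (this is not Numba's actual machinery, and the decorator below is a hypothetical stand-in), the compile-once-per-type-signature caching can be sketched like this:

```python
# Toy sketch of "compile once per argument-type signature, then reuse".
# This is an illustration, not how Numba is implemented.
compiled_cache = {}

def toy_jit(func):
    def wrapper(*args):
        signature = tuple(type(a) for a in args)
        if (func, signature) not in compiled_cache:
            # Stand-in for the expensive compilation step.
            compiled_cache[(func, signature)] = func
        return compiled_cache[(func, signature)](*args)
    return wrapper

@toy_jit
def add(a, b):
    return a + b

add(1, 2)        # "compiles" for (int, int) and caches the result
add(3, 4)        # cache hit: same type signature
add(1.0, 2.0)    # new signature (float, float): "compiles" again
print(len(compiled_cache))  # 2
```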
A really common mistake when measuring performance is to not account for the above behaviour and to time code once with a simple timer that includes the time taken to compile your function in the execution time.
For example:
from numba import jit
import numpy as np
import time

x = np.arange(100).reshape(10, 10)

@jit(nopython=True)
def go_fast(a): # Function is compiled and runs in machine code
    trace = 0.0
    for i in range(a.shape[0]):
        trace += np.tanh(a[i, i])
    return a + trace

# DO NOT REPORT THIS... COMPILATION TIME IS INCLUDED IN THE EXECUTION TIME!
start = time.time()
go_fast(x)
end = time.time()
print("Elapsed (with compilation) = %s" % (end - start))

# NOW THE FUNCTION IS COMPILED, RE-TIME IT EXECUTING FROM CACHE
start = time.time()
go_fast(x)
end = time.time()
print("Elapsed (after compilation) = %s" % (end - start))
This, for example prints:
Elapsed (with compilation) = 0.33030009269714355 Elapsed (after compilation) = 6.67572021484375e-06
A good way to measure the impact Numba JIT has on your code is to time execution using the timeit module functions, these measure multiple iterations of execution and, as a result, can be made to accommodate for the compilation time in the first execution.
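For example, the pattern looks like the following; a plain Python function stands in for a jitted one here so the snippet runs even without Numba installed:

```python
import timeit

def work(n):
    # Stand-in for a @jit-decorated function.
    total = 0.0
    for i in range(n):
        total += i * 0.5
    return total

# Call once first so any one-off setup cost (compilation, for a real
# jitted function) is paid outside the measured region.
work(1000)

# timeit runs the call many times and reports the total elapsed time.
elapsed = timeit.timeit(lambda: work(1000), number=1000)
print("per-call seconds:", elapsed / 1000)
```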
As a side note, if compilation time is an issue, Numba JIT supports on-disk caching of compiled functions and also has an Ahead-Of-Time compilation mode.
How fast is it?¶
Assuming Numba can operate in nopython mode, or at least compile some loops, it will target compilation to your specific CPU. Speed-up varies depending on application but can be one to two orders of magnitude. Numba has a performance guide that covers common options for gaining extra performance.
How does Numba work?¶
Numba reads the Python bytecode for a decorated function and combines this with information about the types of the input arguments to the function. It analyzes and optimizes your code, and finally uses the LLVM compiler library to generate a machine code version of your function, tailored to your CPU capabilities. This compiled version is then used every time your function is called.
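You can peek at the starting point of this pipeline yourself with the standard library's dis module, which prints the CPython bytecode for a function (the function below is just an arbitrary example of mine):

```python
import dis

def trace_sum(n):
    total = 0
    for i in range(n):
        total += i
    return total

dis.dis(trace_sum)  # prints the bytecode a JIT compiler would analyze
```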
Other things of interest:¶
Numba has quite a few decorators, we've seen @jit, but there's also:

- @njit - this is an alias for @jit(nopython=True) as it is so commonly used!
- @vectorize - produces NumPy ufuncs (with all the ufunc methods supported). Docs are here.
- @guvectorize - produces NumPy generalized ufuncs. Docs are here.
- @stencil - declare a function as a kernel for a stencil-like operation. Docs are here.
- @jitclass - for jit-aware classes. Docs are here.
- @cfunc - declare a function for use as a native call back (to be called from C/C++ etc). Docs are here.
- @overload - register your own implementation of a function for use in nopython mode, e.g. @overload(scipy.special.j0). Docs are here.
Extra options available in some decorators:
- parallel = True - enable the automatic parallelization of the function.
- fastmath = True - enable fast-math behaviour for the function.
GPU targets:¶
Numba can target Nvidia CUDA and (experimentally) AMD ROC GPUs. You can write a kernel in pure Python and have Numba handle the computation and data movement (or do this explicitly). Click for Numba documentation on CUDA or ROC. | https://numba.readthedocs.io/en/stable/user/5minguide.html | CC-MAIN-2020-40 | refinedweb | 1,202 | 64.61 |
Ronald Oussoren wrote: > > On 24 Feb, 2009, at 16:20, P.J. Eby wrote: >> >>> Indeed. Having an index file would make things a whole lot simpler. >> >> For *whom*? Certainly not for system packaging tools (rpm, deb, et al). >> >> A design goal should be to allow system packaging tools to install a >> static file footprint: i.e., independent files with predefined >> content, and no post-processing steps. You can't do that with a >> shared file, which is why setuptools uses a .pth hack to install >> namespace packages when building packages for rpm et al. > >. By your reasoning, we should also have something which warns users not to install to the system directory. These ideas are a duplication of functionality -- this functionality is implemented by the disabling write permissions of non-sysadmins into system directories. Or do you propose users put some stuff into their system directories not managed by their package managers and other stuff managed by the package managers? | https://mail.python.org/pipermail/distutils-sig/2009-February/011037.html | CC-MAIN-2014-15 | refinedweb | 160 | 65.73 |
Revision history for MooseX-MethodAttributes 0.32 2020-08-30 01:32:08Z - no longer using MooseX::Types internally 0.31 2015-09-27 05:28:49Z - increase Moose prereq on perl 5.8.x to fix boolean overload handling 0.30 2015-08-16 04:02:53Z - update some distribution tooling 0.29 2013-11-15 02:31:46Z - change docs to recommending using a role to grant Inheritable behaviour rather than a superclass, and changed tests to match - converted all uses of namespace::clean to namespace::autoclean - converted all uses of Test::Exception to Test::Fatal - repository migrated to the github moose organization 0.28 2012-09-04 03:28:26Z - RT#79385: Import Carp::croak into the right package (spotted by Bill Moseley) 0.27 2012-02-13 18:28:00Z - Fix issue with new Moose and new Module::Runtime where Moose functions were not getting correctly exported to the users of this module. 0.26 2012-01-13 12:06:00Z - Fix packages relying on ->meta->make_immutable to return true. This should work, but doesn't in some occasional circumstances. 0.25 2011-06-20 10:53:00Z - Updated to avoid test issues from Moose 2.0007 (spotted by ilmari) 0.24 2010-07-19 03:23:57Z - Updated to avoid warnings from Moose 1.09 (Dave Rolsky). 0.23 2010-06-15 19:22:00Z - Fix dependency on MooseX::Types::Moose (RT#58406) 0.22 2010-05-31 19:49:00Z - Fix issues causing composing multiple (normal) roles onto a subclass of a MooseX::MethodAttributes class to fail by removing a forced metaclass reinitialization which wasn't needed. 0.21 2010-05-07 02:48:54Z - Add more metadata, including a repository url. 0.20 2010-02-10 00:46:11Z - Remove horrible code and epic comment working around Moose bugs with reinitializing anon classes now that the bug is fixed upstream in Moose (commit cf600c83). 
0.19 2010-01-09 17:29:00Z - Adapt to changes in in composition_class roles in new Moose releases (>= 0.93_01) 0.18 2009-09-25 09:51:24Z - Bump Test::More dependency to 0.88 for done_testing - Require namespace::autoclean for t/late_reinitialize.t 0.17 2009-09-23 14:35:50Z - Bump MooseX::Types version to 0.20 to avoid warnings with newer Moose releases 0.16_02 2009-09-20 16:58:38Z - Also export the Moose::Role sugar from MooseX::MethodAttributes::Role 0.16_01 2009-09-18 01:29:38Z - Combining roles now works as expected when writing roles, or when applying multiple roles to a class - Bump other dependencies in line with required Moose version 0.16 2009-09-15 05:58:14Z - Fix so that MooseX::Role::Parameterized can be used in combination with roles containing method attributes + testcase from phaylon (RT#48758) - Fixes to avoid a deprecation warning from the latest Class::MOP (Dave Rolsky) 0.15 2009-07-26 - Fix test which was failing in some cases and additional test cases. - No other changes on the dev release. 0.14_01 2009-07-16 -. :/ 0.14 2009-06-07 - Fix bugs with composing roles with method attributes into other roles with method attributes + tests 0.13 2009-05-28 00:19:00Z - Add Test::More and Test::Exception to requirements for RT#46395 and RT#46396 0.12 2009-05-25 18:33:30Z - Add additional tests for role composition behavior. - Add an error message if someone tries to exclude or alias methods from a role with attributes, which currently doesn't work. - Add tests for this error, and tests for behavior if aliasing did work. 0.11_03 2009-05-24 23:06:50Z - Fix overenthusiastic meta trait application which caused classes which already had methods with attributes to have their attributes wiped out. 0.11_02 2009-05-21 00:46:47Z - Add support for use Moose::Role -traits => 'MethodAttributes' if we've already been loaded. - Add support for composing a role containg methods with attributes into another role. 
0.11_01   2009-05-17 22:50:44Z
        - Do not apply metaclass roles unless needed.
        - Add MooseX::MethodAttributes::Role::Meta::Role, for roles which
          contain methods with attributes.
        - Split attribute container functionality out into
          MooseX::MethodAttributes::Role::Meta::Map.

0.11      2009-05-15 16:02:27Z
        - Depend on Moose 0.79 to prevent metaclass incompatibility failure.

0.10      2009-05-13 23:08:30Z
        - Stop non-Moose classes which inherit
          MooseX::MethodAttributes::Inheritable and which define a sub meta
          from throwing an exception.

0.09      2009-04-28 08:47:28Z
        - Use modifiers in the metaclass role to catch modifiers being applied
          to subs, and apply our wrapped method role to the generated method
          instance. This is horrible, but appears to be a sane way to work
          around the fact that method metaclasses applied to one class aren't
          inherited.

0.08      2009-04-25 15:30:00Z
        - Fix get_nearest_methods_with_attributes to deal with wrapped
          methods.
        - Add tests for this, and how Catalyst uses the module
        - Add TODO tests showing that method metaclass inheritance (or lack
          thereof) into subclasses causes us to fail to do the right thing.

0.07      2009-04-25 12:47:05Z
        - Add the get_nearest_methods_with_attributes method.

0.06      2009-04-19 22:03:06Z
        - Fix bug when using base, as Moose doesn't automatically inherit the
          method metaclass from your parent class unless you use the 'extends'
          syntax.
        - Package on a different machine, due to reported unarchiving issues
          on win32.

0.05      2009-04-01 20:40:05Z
        - Ensure that we have an initialised metaclass to apply roles to in
          AttrContainer::Inheritable, fixing bugs with non-Moose base classes
          which have not had a metaclass initialised for them.

0.04      2009-02-26 21:47:18Z
        - Depend on an MX::Types version with support for parameterisation.
        - Add tests for behaviour of get_all_methods_with_attributes and
          method modifiers.

0.03      2009-02-19 07:13:18Z
        - Implement metaclass methods for getting all meta methods with
          attributes.
0.02      2009-02-14 21:17:56Z
        - Depend on Moose 0.70 for wrapped_method_metaclass_roles support.
        - Apply a role to wrapped method metaclasses to support getting
          attributes of wrapped methods.
        - Add MooseX::MethodAttributes::Inherited as a way of capturing method
          attributes without explicitly using MooseX::MethodAttributes in
          every class.

0.01      2009-02-13 21:21:11Z
        - Initial release.
Yes, I know that this is homework. But I need help. This is code the teacher gave us a few days ago. It is a header file, and I needed to do the following:

My problem is that I didn't know how to use my vector in my .cpp file. How do I add the names into the vector? I know it should be something like vectorname.push_back("name"); ???
How do I call my vector to print the names?
I don't know, I'm lost and need help. I know there are people who don't want to help with homework, but this homework is already past due; I just want to learn. The teacher will never give us the answers: he gives you a grade, and if you didn't learn, that's not his problem... that's how nuts it is...

So I didn't do it, because I had no idea how to do it. I'm 100% new to C++, and the only thing my teacher does is read from some slides he has; he doesn't explain anything. And I don't have anyone to ask. I don't really like my instructor, but I cannot drop the class. Please help me understand how to use vectors, how to print them, and how to do the killing part. I need help!!! I want to learn. I already did pretty badly on my test :S
• develops a list of twenty unique names, one of them being Josephus
• simulates the stated problem with an arbitrary positive count m (how far to count around the
circle) and a list of n (10 <= n <= 20) named individuals (you may pick the names), listing
these n individuals in the order in which the lot falls upon each.
o Include Josephus in every “game”, regardless of the value of n; Josephus is allowed
to die, and his original position in the circle does not matter
o For n people, the returned list should contain n individuals
• for arbitrary numbers m (m > 0) and n (10 <= n <= 20), computes where Josephus should
stand to be the last one chosen; this location is referred to as the “safe index”, the location
that would be chosen last should the game be played with the specified m and n
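For the safe-index requirement, the classic Josephus recurrence is the usual route. Here is a short sketch of the idea in Python (the course itself uses C++; the function name and bounds handling below are my own, not the assignment's): the survivor of a one-person circle is index 0, and each person added back shifts the survivor by the count m.

```python
def safe_index(n, m):
    """0-based position chosen last when counting every m around n people."""
    if n < 1 or m < 1:
        return -1
    pos = 0                       # survivor of a 1-person circle
    for size in range(2, n + 1):  # grow the circle back up to n people
        pos = (pos + m) % size    # each removal shifts the survivor by m
    return pos

print(safe_index(10, 3))  # 3 -> where Josephus should stand for n=10, m=3
```

The same loop translates directly into the body of reportSafeIndex, with the range checks the spec asks for (10 <= n <= 20, m > 0) added around it.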
--------------------------------------…
#pragma once
#ifndef CIRCLE //Do not modify this line
#define CIRCLE //Do not modify this line

//You may add "#include" statements here
#include <string>
#include <vector>
#include <iostream>

using namespace std; //Do not modify this line

class Circle //Do not modify this line
{ //Do not modify this line
    //You may insert visibility modifiers anywhere within this document
    //You may declare member variables anywhere within this document
    //-----------------------------------…
    Circle(); //Do not modify this line
    ~Circle(){} //Do not modify this line
    //-----------------------------------…
    /* getNames
       Returns a list of names in the order in which the people will be
       standing for the "game". Although fewer people may be playing, you must
       return 20 names here. Do not provide duplicate names. For the sake of
       the test driver, this method must return the list of 20 names in the
       same order every time it is called, and this list of 20 names in this
       order must be used to play the "game". This method will be called
       repeatedly. */
    vector<string> getNames(); //Do not modify this line
    //You may not implement this method here; you must do so in a .cpp document

    /* playGame
       Plays a "game" with the first n people from the list (above) counting
       forward every m. An explanation for how the "game" works can be found
       in the exam specs. This method should return a list of names in the
       order in which the lot fell upon them (including the survivor, who
       should be last). If n is not between 10 and 20 or if m is non-positive,
       return an empty vector. This method will be called repeatedly. */
    vector<string> playGame(int n, int m); //Do not modify this line
    //You may not implement this method here; you must do so in a .cpp document

    /* reportSafeIndex
       Returns the "safe index", the last index/location in the circle that
       will be chosen when the "game" is played. The point of this method is
       to permit someone to cheat the system by finding the safe location
       ahead of time. If n is not between 10 and 20 or if m is non-positive,
       return -1. This method may be called repeatedly. */
    int reportSafeIndex(int n, int m); //Do not modify this line
    //You may not implement this method here; you must do so in a .cpp document
    //-----------------------------------…
}; //Do not modify this line
#endif //Do not modify this line
How to load all server side data on initial vue.js / vue-router load?
I'm currently making use of the WordPress REST API, and vue-router to transition between pages on a small single page site. However, when I make an AJAX call to the server using the REST API, the data loads, but only after the page has already rendered.
The vue-router documentation provides insight into how to load data before and after navigating to each route, but I'd like to know how to load all route and page data on the initial page load, circumventing the need to load data each time a route is activated.
Note, I'm loading my data into the acf property, and then accessing it within a .vue file component using this.$parent.acfs.
main.js Router Code:
const router = new VueRouter({
  routes: [
    { path: '/', component: Home },
    { path: '/about', component: About },
    { path: '/tickets', component: Tickets },
    { path: '/sponsors', component: Sponsors },
  ],
  hashbang: false
});

exports.router = router;

const app = new Vue({
  router,
  data: {
    acfs: ''
  },
  created() {
    $.ajax({
      url: '',
      type: 'GET',
      success: function(response) {
        console.log(response);
        this.acfs = response.acf;
        // this.backgroundImage = response.acf.background_image.url
      }.bind(this)
    })
  }
}).$mount('#app')
Home.vue Component Code:
export default {
  name: 'about',
  data () {
    return {
      acf: this.$parent.acfs,
    }
  },
}
Any ideas?
Answers
My approach is to delay construction of the store and main Vue until my AJAX call has returned.
store.js
import Vue from 'vue';
import Vuex from 'vuex';
import actions from './actions';
import getters from './getters';
import mutations from './mutations';

Vue.use(Vuex);

function builder(data) {
  return new Vuex.Store({
    state: {
      exams: data,
    },
    actions,
    getters,
    mutations,
  });
}

export default builder;
main.js
import Vue from 'vue';
import VueResource from 'vue-resource';
import App from './App';
import router from './router';
import store from './store';

Vue.config.productionTip = false;
Vue.use(VueResource);
Vue.http.options.root = '';

Vue.http.get('data')
  .then(response => response.json())
  .then((data) => {
    /* eslint-disable no-new */
    new Vue({
      el: '#app',
      router,
      store: store(data),
      template: '<App/>',
      components: { App },
    });
  });
I have used this approach with other frameworks such as Angular and ExtJS.
Check this section in the Vue Router docs.

So first of all you have to write a method that fetches data from your endpoint, and then use a watcher to watch the route:

export default {
  watch: {
    '$route': 'fetchItems'
  },
  methods: {
    fetchItems() {
      // fetch logic
    }
  }
}
Since you are working with the WP REST API, feel free to check my repo on GitHub.
You can use navigation guards.
On a specific component, it would look like this:
export default {
  beforeRouteEnter (to, from, next) {
    // my ajax call
  }
};
You can also add a navigation guard to all components:
router.beforeEach((to, from, next) => {
  // my ajax call
});
One thing to remember is that navigation guards are async, so you need to call the next() callback when the data loading is finished. A real example from my app (where the guard function resides in a separate file):
export default function(to, from, next) {
  Promise.all([
    IngredientTypes.init(),
    Units.init(),
    MashTypes.init()
  ]).then(() => {
    next();
  });
};
In your case, you'd need to call next() in the success callback, of course.
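To make that concrete, here is a hedged sketch of calling next() from the success callback and handing the data to the component via next(vm => ...). fetchAcf is a synchronous stand-in for the real $.ajax call, and runGuard mimics what the router does, so the sketch runs on its own:

```javascript
// Stand-in for the real $.ajax / Vue.http request (synchronous here
// only so the sketch is self-contained).
function fetchAcf(onSuccess) {
  onSuccess({ home_title: 'Hello from ACF' });
}

const AboutPage = {
  name: 'about',
  data() {
    return { acf: null };
  },
  beforeRouteEnter(to, from, next) {
    fetchAcf(data => {
      // next() is what lets the navigation finish; its callback
      // receives the component instance once it exists.
      next(vm => { vm.acf = data; });
    });
  }
};

// Minimal stand-in for the router running the guard:
function runGuard(route) {
  const vm = route.data();                       // "create" the component
  route.beforeRouteEnter(null, null, cb => cb(vm));
  return vm;
}

const page = runGuard(AboutPage);
console.log(page.acf.home_title); // -> Hello from ACF
```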
Alright, I finally figured this out. All I'm doing is making a synchronous AJAX request in my main.js file, where my root Vue instance is instantiated, and assigning the requested data to a data property, like so:
main.js
let acfData;

$.ajax({
  async: false,
  url: '',
  type: 'GET',
  success: function(response) {
    console.log(response.acf);
    acfData = response.acf;
  }.bind(this)
})

const router = new VueRouter({
  routes: [
    { path: '/', component: Home },
    { path: '/about', component: About },
    { path: '/tickets', component: Tickets },
    { path: '/sponsors', component: Sponsors },
  ],
  hashbang: false
});

exports.router = router;

const app = new Vue({
  router,
  data: {
    acfs: acfData
  },
  created() {
  }
}).$mount('#app')
From here, I can use the pulled data within each individual .vue file / component like so:
export default {
  name: 'app',
  data () {
    return {
      acf: this.$parent.acfs,
    }
  },
}
Finally, I render the data within the same .vue template with the following:
<template>
  <transition name="home" v-on:
    <div class="full-height-container background-image home" v-bind:
      <div class="content-container">
        <h1 class="white bold home-title">{{ acf.home_title }}</h1>
        <h2 class="white home-subtitle">{{ acf.home_subtitle }}</h2>
        <div class="button-block">
          <a href="#/about"><button class="white home-button-1">{{ acf.link_title_1 }}</button></a>
          <a href="#/tickets"><button class="white home-button-2">{{ acf.link_title_2 }}</button></a>
        </div>
      </div>
    </div>
  </transition>
</template>
The most important piece of information to take away is that all of the ACF data is only being called ONCE at the very beginning, compared to every time a route is visited using something like beforeRouteEnter (to, from, next). As a result, I'm able to get silky smooth page transitions as desired.
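If you'd rather avoid the blocking async: false request but still hit the endpoint only once, a module-level cache gives the same single-request behaviour even with async navigation guards. A sketch, where loadAcf and mockFetch are hypothetical names and mockFetch stands in for the real network call:

```javascript
// Sketch: memoize the request so repeated guard calls reuse one Promise.
let acfPromise = null;

function loadAcf(fetchFn) {
  if (!acfPromise) {
    acfPromise = fetchFn();   // first caller triggers the request
  }
  return acfPromise;          // later callers reuse the same Promise
}

// Demo with a counting mock in place of the network call:
let requests = 0;
const mockFetch = () => {
  requests += 1;
  return Promise.resolve({ home_title: 'Hello' });
};

loadAcf(mockFetch);
loadAcf(mockFetch);           // second "navigation" reuses the cached Promise
console.log(requests); // -> 1
```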
Hope this helps whoever comes across the same problem.
If you have a vector containing pointers, how do you get another pointer to point where one of the pointers in a vector is pointing.
If what I said is hard to understand then here's the basic idea of what i want to accomplish
(not actual code from a program)
Code:
#include <vector>
#include <anyThingElseRequired>
using std::whateverComesFromTheStdNamespace;
...
vector<someObject*> v; //vector full of pointers to someObject?
someObject *phead; //pointer to someObject
v.put_back(phead); //no trouble here
someObject *pnext; //pointer to someObject
pnext = v[0]; //cannot convert `std::vector<someObject*,
//std::allocator<someObject*> >' to `someObject*' in
//assignment
How do i get pnext to point to the same object that the pointer stored at v[0] is pointing to?
-Thank you for your patience | http://cboard.cprogramming.com/cplusplus-programming/92811-vector-full-pointers-printable-thread.html | CC-MAIN-2014-49 | refinedweb | 125 | 52.19 |
, 28 Mar 2001 17:01:00 +1000, you wrote:
.
Yep, that should work.
>However, when I exec the line
>
>import minestar_admin
>
>I get:
>
>Traceback (innermost last):
> File "<string>", line 1, in ?
>NameError: minestar_admin
>
>..
>
>which suggests to me that PythonInterpreter can't find the file.
Failing to find the .py file should result in an
ImportError: no module named...
I think the NameError is a different problem, but you can check your
actual python.path by inserting
interp.exec("import sys; print sys.prefix, sys.path");
If you still can get a handle on the problem, try to make a small
standalone example that show the bug and post it here. This worked for
me:
----------- i:\\temp\\ss9\\registry -----------
python.path=i:\\temp\\ss9\\subdir
----------- i:\\temp\\ss9\\subdir\\minestar_admin.py -----------
print "hello"
----------- si7.java -----------
import org.python.core.*;
import org.python.util.*;
public class si7 {
public static void main(String[] args) {
PythonInterpreter interp = new PythonInterpreter();
interp.exec("import sys; print sys.prefix, sys.path");
interp.exec("import minestar_admin");
}
}
----------- command line -----------
java -Dpython.home=i:\temp\ss9 si7
>Could
>someone please tell me what is wrong with my hypothesis above?
Just for your info, you can also set the python.path property this way:
regards,
finn. However, when I exec the line
import minestar_admin
I get:
Traceback (innermost last):
File "<string>", line 1, in ?
NameError: minestar_admin
at org.python.core.Py.NameError(Py.java:134)
at org.python.core.PyFrame.getglobal(PyFrame.java:166)
at org.python.core.PyFrame.getname(PyFrame.java:146)
at org.python.pycode._pyx0.f$0(<string>)
at org.python.pycode._pyx0.call_function(<string>)
at org.python.core.PyTableCode.call(PyTableCode.java:155)
at org.python.core.Py.runCode(Py.java:965)
at org.python.core.__builtin__.eval(__builtin__.java:241)
at org.python.core.__builtin__.eval(__builtin__.java:245)
at org.python.util.PythonInterpreter.eval(PythonInterpreter.java:102)
which suggests to me that PythonInterpreter can't find the file. Could
someone please tell me what is wrong with my hypothesis above? I read
the Javadoc for PythonInterpreter, and it said:
>public class PythonInterpreter extends java.lang.Object
>
>The PythonInterpreter class is a standard wrapper for a JPython interpreter for use embedding in a Java >application.
>
>Version: 1.0, 02/23/97 <---------------------- I think there is a problem.
>Author: Jim Hugunin
Thanks for any help.
John
--
Dr John Farrell - Research Architect - Mincom Limited
I don't suffer from stress, I suffer from idiots.
----------------------------- Mincom wants no credit for anything I say:. | https://sourceforge.net/p/jython/mailman/message/5260766/ | CC-MAIN-2018-26 | refinedweb | 414 | 55 |
>>.
Who else thought (Score:1, Funny)
I was sad
"Python is 'already quite secure,'" (Score:4, Funny)
Stay! (Score:3, Funny)
Please, stay where you are, sir. We have enough problems out here already.
Who names this stuff? (Score:4, Funny)
I should read more carefully... (Score:0, Funny)
and in other news (Score:1, Funny)
Elemental Security (Score:2, Funny)
Re:Guido's goodbye message (Score:3, Funny)
Re:Prominently on python.org .
Least ugly? (Score:5, Funny)
I hereby cast my vote for Guido VanRossum for Least Ugly Open-Source Project Leader.
Emmental security? (Score:2, Funny)
Re:Least ugly? (Score:3, Funny)
Re:Good times. (Score:4, Funny)
def the_count(): #{#}
Tadaa! Curly braces. The code is now readable.
PS. Sorry for the odd indentation, I haven't posted code under slashdot for a while... | https://developers.slashdot.org/story/03/07/09/1549236/guido-van-rossum-leaves-zopecom/funny-comments | CC-MAIN-2016-40 | refinedweb | 137 | 69.99 |
26 April 2012 08:27 [Source: ICIS news]
By Nurluqman Suratman
?xml:namespace>
Despite the general weakness in the global economy, demand for petrochemicals is expected to remain resilient and should help buoy up product prices going forward, said Nat Panassutrakorn, an analyst with KGI Securities in
In the first quarter, significantly weaker petrochemical spreads weighed on the SCG’s earnings, causing its chemicals business’ earnings before interest, tax, depreciation and amortisation (EBITDA) to slump by 82% year on year to Thai baht (Bt) Bt894m ($28.9m).
The industrial conglomerate reported a 35% year-on-year fall in March-quarter net profit to Bt5.97bn, with overall earnings before interest, tax, depreciation and amortisation (EBITDA) declining by 24% to Bt10.3bn despite an 11% growth in sales to Bt102.9bn.
Petrochemicals accounted for 21% of SCG’s profit in the first quarter.
“[First-quarter] earnings fell … as chemical margins fell to their lowest as a result of excess global supply and slower demand. Equity income also fell substantially to Bt344m (vs Bt3.0bn in 1Q11) due to weaker margins at its PTA [purified terephthalic acid] business,” said Naphat Chantaraserekul, an analyst at brokerage DBS Vickers Securities.
PTA spreads averaged $120/tonne (€91/tonne) in the March quarter of 2012, sharply lower than the $350/tonne average in the same period last year, he said.
SCG said on Wednesday that the average naphtha prices increased by $132/tonne quarter-on-quarter and by $105/tonne year on year to $1,021/tonne in the January-March period, pulled up by higher crude oil price.
Ethylene prices also increased due to rising feedstock prices and concern about the availability of the olefin from the Middle East amid heightened tensions between the West and
In the first quarter, the average ethylene price stood at $1,251/tonne, up $190/tonne quarter on quarter, and up $12/tonne year on year. Propylene’s average price at $1,281/ton, decreased $1/tonne quarter on quarter and down $98/tonne year on year, according to SCG.
“[SCG’s] chemical business should recover in the second quarter of 2012 with improving spreads for most products. But recovery will be mild due to still weak macroeconomic conditions in Europe and
The eurozone debt crisis, as well as
SCG, which is
The company's cement business is also expected perform strongly in the second half of the year, with a host of commercial, residential and infrastructure projects under construction in Thailand, according to the analysts.
SCG’s cement sales in the March quarter grew 5% year on year to 7.6m tonnes, driven by
The firm’s cement business contributed 39% to the firm’s overall profit in the first quarter.
For the full year of 2012, SCG’s net profit is expected to grow to around Bt31bn, from the Bt27.3bn last year, analysts said.
“Siam Cement will benefit from Thailand’s improving economic conditions, as 59% of its first-quarter earnings were derived from the construction and related sectors,” DBS’ Chantaraserekul said.
“Weak chemical spreads will cap upside in the near term, but SCC offers a long-term value proposition” he added.
Beyond 2012, Panassutrakorn of KGI said that demand for petrochemicals, particularly from the automotive sector, should pick up and lead margins out of a trough.
SCG’s focus on growing its high value-added (HVA) product offerings, should help boost earnings going forward, analysts said.
The company hopes to increase the share of HVA products to half of group sales by 2015, from 34% currently, they said.
Its move into the southeast-Asian market, where there is strong demand for its core businesses such as plastics, could also help drive its long-term earnings growth, they said.
“SCC is still pursuing mergers and acquistion in ASEAN markets. It plans to spend up to Bt40bn in 2012 and it has Bt45bn cash on hand,” Chantaraserekul of DBS said.
HVA products' margins are 5-10% higher than normal products, and they are spread across all of SCG’s product segments, according to Chantaraserekul.
($1 = Bt30.9; | http://www.icis.com/Articles/2012/04/26/9553805/better-chemical-spreads-to-aid-thai-scg-h2-earnings-analysts.html | CC-MAIN-2015-22 | refinedweb | 680 | 57.1 |
Testing Automation Scripts with the new Maximo 7.6 “Testscript” method (Part 1: MBO based scripts)
Maximo introduced with Version 7.6 a new feature which allows you to test your automation scripts in context of a new or an existing Mbo as well as in context of the Maximo Integration Framework (MIF). The downside of that new function is, that I currently have not found any good documentation and that some features like the object path syntax are not self explaining. In this two part series I would like to introduce the new “Testscript” feature and explain how easy it is to use for your daily script testing. In part 1 we will cover the test of scripts in context of a Mbo and in part two I will show you how to test in context of the MIF.
The old styled “Run Script” testing is no longer visible but can be enabled again using the trick in my other post.
The first thing to mention if you want to test a script with the new Mbo-Test functionality is, that you need to have a script with an Object Launchpoint. Scripts with attribute launchpoints are not tested, or even worse if you have both: an object launch point and a attribute launch point on the same Mbo always the object script runs, even you select the attribute launch point script! (Might be confusing!). On the the other hand side you could utilize an Object Launchpoint testscript to set a certain attribute in a Mbo which then triggers the attribute launchpoint as well 😉
Now lets create a very simple script with an object launch point for the ASSET object like the following:
print "Hello World" print mbo.getString("ASSETNUM")
Press the “Test Script” button and you will see the following dialog:
At the top you will see information about the script and the selected Launchpoint we are running on.
With 1. you will select if we want the script to be tested against a newly created object or and existing object.
In 2. an object path can be specified if we want to reference an existing object. The format I currently found out is:
<OBJECT>[<SQL-WHERE>]
Examples could be:
ASSET[ASSETNUM='11200'] ASSET[ASSETNUM like '11%'] ASSET[ISRUNNING = 1] ITEM[ITEMNUM='0815']
Important to remember, that you always get only a single resulting record to your script. This is the default behavior for an object script, where the resulting set is stored in the implicit Launchpoint variable mbo.
If you select Existing Object and specify an Object Path (remember to copy the Object Path to the clipboard – you have to reenter it for every test!) you can press the Test button.
You might see a result as follows:
- Data contains the resulting MBO in XML format.
- Log contains the output of the Print statements of the script.
With the Set attribute values section you can specify attributes which are overwritten from the original result. This is a nice feature when you need some testing data with certain specification (e.G. We need an asset in status of not running (ISRUNNING = 0), so we just need to specify:
So far we just have discussed the Existing Object path. If you like to create a New Object this also can be done with the testing function. The testing function basically calls an mboSet.addAtEnd() function to append a new record to the given MboSet. With the usage of Set attribute values you can predefine fields of the newly created Mbo before it is handed over to the Jython script.
A bit strange is, that if you try to create an Asset Object and do not specify an ASSETNUM you will get an error, that the asset field needs to be specified. If you will set the ASSETNUM field you will get an error, that it is readonly and cannot be set.
The only solution I found so far is to hardly overwrite the readonly check by using the Field Flag “NOACCESSCHECK”:
from psdi.mbo import MboConstants mbo.setValue("ASSETNUM", "ASS0815", MboConstants.NOACCESSCHECK ) mbo.setValue("DESCRIPTION", "New Test Asset!")
So far for this first tutorial on the new Test script capability. In the next part I will cover the capability to test automation scripts customizing the MIF Interface.
We are using Maximo CD 7.6 and there is no “Test Script” button or signature in the AUTOSCRIPT application.
Is this an OOTB funtionality ?
Basically this is a OOTB functionality introduced in Maximo 7.6. Not sure if it has been introduced by one of the fix packs. In 7.6.0.7 it is definitely
included.
Recently, I installed Maximo 7.6.0.0 and upgraded it to 7.6.0.8 version with Utility and Spatial Add on.
1.When I created a basic Object level script to set a description for Asset and Workorder,in both cases I couldn’t see the changes on UI and even logs are not getting printed.
2.For Jobplan ,Object level script is working.Tried Woactivity and Jobplan attribute launch points,they worked.
Any reason why it is happening.Moreover,in test functionality process log doesn’t show print statements result.
Thanks in Advance!!
UI changes are always hard to archive with automation scripting since you have no control of ui. For the logging issue have you tried to use the logging command as shown in this articel?
Maybe you can help
When I run this:
import sys
print sys.path
if sys.path.count(‘__pyclasspath__/Lib’) == 1 :
print ‘path to /Lib already exists’
else :
print ‘extend path to /Lib’
sys.path.append(‘__pyclasspath__/Lib’)
import socket
I get this:
Traceback (most recent call last):
File “”, line 10, in
File “__pyclasspath__/Lib/socket.py”, line 11, in
File “__pyclasspath__/Lib/string.py”, line 122, in
File “__pyclasspath__/Lib/string.py”, line 115, in __init__
AttributeError: type object ‘re’ has no attribute ‘escape’
do you have any input on what might be the cause? | https://www.maximoscripting.com/testing-automation-scripts-with-the-new-maximo-7-6-testscript-method-part-1-mbo-based-scripts/?replytocom=69 | CC-MAIN-2022-27 | refinedweb | 1,000 | 63.8 |
sem_destroy - destroy an unnamed semaphore (REALTIME)
#include <semaphore.h> int sem_destroy(sem_t *sem);
The sem_destroy() function is used to re-initialised by another call to sem_init().
It is safe to destroy an initialised semaphore upon which no threads are currently blocked. The effect of destroying a semaphore upon which other threads are currently blocked is undefined.
Upon successful completion, a value of zero is returned. Otherwise, a value of -1 is returned and errno is set to indicate the error.
The sem_destroy() function will fail if:
- [EINVAL]
- The sem argument is not a valid semaphore.
- [ENOSYS]
- The function sem_destroy() is not supported by this implementation.
The sem_destroy() function may fail if:
- [EBUSY]
- There are currently processes blocked on the semaphore.
None.
None.
None.
semctl(), semget(), semop(), sem_init(), sem_open(), <semaphore.h>.
Derived from the POSIX Realtime Extension (1003.1b-1993/1003.1i-1995) | http://pubs.opengroup.org/onlinepubs/7990989799/xsh/sem_destroy.html | CC-MAIN-2019-35 | refinedweb | 142 | 59.19 |
In this tutorial, I continue to explain how Object Oriented Programming is used with Python 2.7. I cover some of the more complicated subjects including how to:
If you don’t completely understand Object Oriented Programming after this and the first part of this tutorial Python Object Oriented Programming. Please leave a comment below and I’ll do whatever I can to explain this important subject.
Like always, a lot of code follows the video. If you have any questions or comments leave them below. And, if you missed my other Python Tutorials they are available here:
Here is All the Code from the Video
Note: You have to insert the white space and everything will work. I could have styled it with HTML, but that would have required you to erase all of the tags. Hope this helps?
#! /usr/bin/python
__metaclass__ = type
class Animal:
__name = “No Name”
__owner = “No Owner”
def __init__(self, **kvargs): # The constructor function called when object is created
self._attributes = kvargs
# There is a function called a destructor __del__, but its best to avoid it
def set_attributes(self, key, value): # Accessor Method
self._attributes[key] = value
return
def get_attributes(self, key):
return self._attributes.get(key, None)
def noise(self): # self is a reference to the object
print(‘errr’) # You use self so you can access attributes of the object
return
def move(self):
print(‘The animal moves forward’)
return
def eat(self):
print(‘Crunch, crunch’)
return
def __hiddenMethod(self): # A hidden method
print “Hard to Find”
return
class Dog(Animal):
def __init__(self, **kvargs): # Not needed unless you plan to override the super
super(Dog, self).__init__() # This wouldn’t work without the second line
self._attributes = kvargs
def noise(self): # Overriding the Animal noise function
print(‘Woof, woof’)
Animal.noise(self)
return
class Cat(Animal):
def __init__(self, **kvargs): # Not needed unless you plan to override the super
super(Cat, self).__init__()
self._attributes = kvargs
def noise(self):
print(‘Meow’)
return
def noise2(self):
print(‘Purrrrr’)
return
class Dat(Cat,Dog):
def __init__(self, **kvargs): # Not needed unless you plan to override the super
super(Dat, self).__init__()
self._attributes = kvargs
def move(self):
print(‘Chases Tail’)
return
def playWithAnimal(Animal): # This is polymorphism
Animal.noise()
Animal.eat() # Works even if the method isn’t in Cat because Cat is an Animal
Animal.move()
print(Animal.get_attributes(‘__name’))
print(Animal.get_attributes(‘__owner’))
print ‘\n’
Animal.set_attributes(‘clean’,”Yes”)
print(Animal.get_attributes(‘clean’))
jake = Dog(__name = ‘Jake’, __owner = ‘Paul’)
sophie = Cat(__name = ‘Sophie’, __owner = ‘Sue’)
playWithAnimal(sophie)
playWithAnimal(jake)
# print sophie.__hiddenMethod() Demonstrating private methods
print issubclass(Cat, Animal) # Checks if Cat is a subclass of Animal
print Cat.__bases__ # Prints out the base class of a class
print sophie.__class__ # Prints the objects class
print sophie.__dict__ # Prints all of an objects attributes
japhie = Dat(__name = ‘Japhie’, __owner = ‘Sue’)
japhie.move()
print japhie.get_attributes(‘__name’)
Bothering again, lol,… just to say: don’t you ever stop doing this
Hi,
Thank you for your work, helps me a lot ! I am going to go thru you other tutorials, I am looking to use MVC with Python and will try to use the bean/dao/services paradigms in order to achieve that.
Glad to help. Eventually I’ll get more into design patterns and advanced algorithms in future tutorials. Thanks
Thanks for the wonderful tutorial. It’s very clear and informative. I have a small problem with some of the code in pt 8. get_attributes returns None when I try it.
Thank you 🙂 I’m glad you liked it. There is probably just a typo somewhere. All of the code is available in a zipped archive on this page Python 2.7 Tutorial That should help
interesting, but i wish you had developed
setting attrs with **kvargs in Animals before introducing inheritance–as it becomes difficult to parse concepts. Specifically–i am trying to set up a simlple Animals vario with **kvargs only
class Animal:
def __init__(self, **kvargs):
self._attributes = kvargs
def set_attributes(self, key, value):
self._attributes[key] = value
return
def get_attributes(self, key):
return self._attributes.get(key, None)
def main():
jake = Animal(name = ‘Jake’, owner = ‘Paul’)
print jake._attributes #ok
print jake.name #not ok
if __name__ == ‘__main__’: main()
thanks
Sorry to see you deleted my question–i think i can restate the issue issue this way: the attributes of Animal,
Animal.__name (=”no name”)
Animal.__owner (=”no owner”)
are not being addressed and continue to exist with their null values in original Animal and its derived classes. They are superceded by the dictionary attribute “._attributes” created from **kwargs. I think it would be much clearer if you deleted the attrs
Animal.__name (=”no name”)
Animal.__owner (=”no owner”)
thanks.
I didn’t delete your question. I just can’t allow auto commenting because I’m attacked all of the time. Sorry about that.
You make a good point and I’ll look into your ideas. I often crank these videos out with a focus on just teaching the basics and the code isn’t always optimized. I recently slowed down on quantity and now instead focus on quality.
thanks for clarifying. I like the somewhat
free form and fast style–mostly one can get it “on the rewind”. thanks
I started making the videos faster because I noticed that almost everyone else went very slow. Their videos normally were very short, so I lengthened mine. Then recently I started covering topics that nobody else has. I’m doing my best to make original videos that haven’t been done.
The input you guys provide also dramatically influences what I do. Thanks
Your tutorials are great!!
Thanks for it and keep posting 🙂
Thank you 🙂 thanks for taking the time to say hi
I have been trying to learn python for a while now and I have found your tutorials very captivating and I commend you effort.
Thank you for taking the time to show your appreciation 🙂
Hi Derek
Great video, you explains about automatically entering key value pairs into an attribute called _attributes. Can I add then dynamically to self. = kvargs[key]. I ask because I want them to become inherited and called by the built in function hasattr etc.
Cheers!
I couldn’t figure out why my print statements weren’t working without parens. Then I remembered that I went and installed Python3 to work with another tutorial (not as good as this one). I assume that’s the difference?
(I want to learn 2.7, so that I can learn Django.)
Thanks again for this excellent series.
You’re very welcome. I’m glad you enjoyed it. I’ll see what I can do about Django
Yes! Django is my ultimate goal as well.
I’ll cover Django as soon as I find a cheap hosting company. If anyone knows of one I’ll definitely check it out
Liking the tutorials and appreciating the labor of your efforts.
I have to say though, even though many are asking for more at once, part 8 was like trying to drink from a fire hose.
Reviewing the video again… (and maybe again…)
Thanks!
Thank you very much. If a tutorial doesn’t click, print out the code and take notes. If you don’t understand a specific part feel free to ask questions. I’m here to help
Question…Why do you use a “return” statement at the end of methods that don’t return anything (ex: set_attributes(), noise())???
You don’t have to use return, but I do it out of habit. Just understand that if you don’t call return every python function returns the value None
Hi Derek, tell me please, can Dog class ,in this example, inherit private variables __name and __owner from Animal class, and second is there chain reaction at model creation?Dog object cant exist before Animal object. Thanks, you’re great.
The double underscore isn’t really private, but it just makes it inconvenient to access those variables. Many people prefer to just use a single underscore to tell others not to mess with it, but at the same time it will be available by subclasses. Yes Dog can’t exist unless Animal exists. I hope that helps. Sorry that took so long 🙂
Derek, can you explain me this? When Dog object call Animal constructor -super(Dog, self)-, he actually pass two arguments or not? What Dog as argument means? Can you explain me this, please. Python is little strange language.
Python is a little starnge because you have to pass the Dog in this situation to have the super class Animal set up the attributes. You don’t do it that way in other languages. Does that make sense?
Thank you Derek on answer. I am confused with (Dog, self) statement. Self is same as this in Java, and what Dog represent? I don’t see any purpose of Dog. A type of object? I’m sorry to bother you.
The subclass Dog is initialized by the Super class. That is why it is passed in. This happens with Java, but everything is hidden and you don’t have to pass the subclass like you do with python.
i dont whats going on in this tutorial….actually i am confused about these things like what is sel._attribute etc….i know abt dictionary but dont know what are you doing here….can u suggest me a book or ebook to clear them
???please help me
Here is the best Python book I have ever found | https://www.newthinktank.com/2010/11/python-2-7-tutorial-pt-8/ | CC-MAIN-2018-09 | refinedweb | 1,595 | 67.25 |
Hi all,
I've been having some trouble getting interactive mode to work
correctly with the WxAgg backend. I have these versions:
[craigb@...4216... hists]$ python
Python 2.7.3 (default, Aug 13 2012, 16:13:38)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-52)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
import matplotlib
matplotlib.__version__
'1.1.1rc'
import wx
wx.__version__
'2.8.12.1'
When I do something like this,
from matplotlib import pyplot
import numpy
pyplot.plot(numpy.arange(10), numpy.arange(10), 'r.')
[<matplotlib.lines.Line2D object at 0x5561750>]
pyplot.show()
I get a good-looking window with my plot. I can mouse over the window
and get coordinates, there are buttons on the bottom of the window
that I can push, etc. However, if I turn on interactive mode (I read
somewhere that it's bad to turn on interactive mode after you've
already made a plot, so below I've exited and started a new
interactive session),
from matplotlib import pyplot
import numpy
pyplot.ion()
pyplot.plot(numpy.arange(10), numpy.arange(10), 'r.')
[<matplotlib.lines.Line2D object at 0x153e2ad0>]
Nothing shows at this point. If I continue,
pyplot.show()
Still nothing shows. If I exit right now, I get a brief glimpse of my
plot before python exits completely. I can get the plot to show by
doing
pyplot.draw()
But it's only the top half of the plot, and if I drag another window
on top of it, it doesn't automatically redraw. No buttons, no
mouse-over niceness. Issuing pyplot.draw() again gets me the full
plot, but no mouse-overs, no buttons, and no redraw.
Both MPL and wxPython were built from source on a RHEL5.8 machine, so
maybe it's the libraries I am linking against...? Anyway, I've
attached two screenshots of what my plots look like when they finally
are drawn. Many many thanks in advance!
--cb | https://discourse.matplotlib.org/t/trouble-with-show-not-drawing-in-interactive-mode-w-wxagg/17385 | CC-MAIN-2022-21 | refinedweb | 331 | 68.57 |
Redis Connector
Redis is an open source, advanced key-value store. It is often referred to as a data structure server since keys can contain strings, hashes, lists, sets and sorted sets.
The Redis Connector offers complete support for its CRUD API.
Prerequisites
This document assumes that you are familiar with Mule, Anypoint Connectors, Anypoint Studio essentials, elements in a Mule flow, and global elements.
Namespace and Schema
When designing your application in Studio, the act of dragging the connector from the palette onto the Anypoint Studio canvas should automatically populate the XML code with the connector namespace and schema location.
If you manually code.
Installing
In Anypoint Studio, click the Exchange icon in the Studio taskbar.
Click Login in Anypoint Exchange.
Follow the prompts to install the connector.
When Studio has an update, a message displays in the lower right corner, which you can click to install the update.
Configuring
To use Redis connector in your Mule application, configure a global Redis element that can be used by all the connectors in the application.
The sections that follow provide the properties to configure the global element.
Non-Clustered Configuration
In the image above, the placeholder values refer to a configuration file placed in the src folder of your project. You can either hardcode.
Upgrading Finish.
Navigate through the project’s structure and double click
src/main/mule/project-name.xmlto open it. The steps below are all performed on this file.
Go to the palette and search for HTTP, then drag and drop a new HTTP Connector Listener on canvas. This element is the entry point for the flow and provides the key and value to be set for that key.
Go to the palette and search for Redis, then drag and drop a new Redis Set operation after HTTP connector. This element is going to send data to the Redis server.
Double click the Redis Set operation and set its properties as follows:
Set Display Name to Set Value For Key Into Redis.
From the Extension Configuration drop-down, choose Redis__Configuration (the default name of a configuration) or any other configuration that you created.
From the Operation drop-down, choose "Set".
Set Key to #[payload.key].
Set Value to #[payload.value].
Go to the palette and search for Set Payload, then drag and drop a new Set Payload element after the Redis connector. This element creates the response for the incoming HTTP request.
Double click the flow’s top margin to open its properties, and change the name of the flow to "set-flow".
Double click the HTTP Connector [Listener] to open its properties.
Click the green plus sign beside the Connector Configuration drop down menu.
A pop-up appears; leave the default configuration and click OK.
Set Path to /.
Set Display Name to Listener.
Double click Set Payload and set its properties as below.
Set Display Name to Set Value Response.
Set Value to Value Successfully Set.
If you configured the Redis global element with placeholder values, you must now provide values for those placeholders in src/main/resources/mule-app.properties. Then run the project and send a POST request to the listener; the request’s body should contain a key and a value. For this you can use the following curl command:
curl -X POST -d "key=test-key" -d "value=test-value" localhost:8081/
Congratulations! You have just set a value for a key in the Redis server.
Save a value for a key into Redis server code
Add the Redis namespace to the Mule element:
xmlns:redis=""
Add the location of the Redis schema referred to by the Redis namespace:
Add the HTTP namespace to the Mule element:
xmlns:http=""
Add the location of the HTTP schema referred to by the HTTP namespace:
Add a redis:config element to your project, then configure its attributes:
Add an http:listener-config element to your project, and configure its attributes:
Add an empty flow element to your project:
Within the flow element, add an http:listener element:
Within the flow element, add a redis:set element after the http:listener:
Within the flow element, add a set-payload element after redis:set:
When you’re done, the XML file should look like this: | https://docs.mulesoft.com/connectors/redis-connector | CC-MAIN-2018-30 | refinedweb | 691 | 55.64 |
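The final XML listing did not survive in this copy of the page. Reassembled from the steps above, it would look roughly like the sketch below — the namespace and schema URIs are left as placeholders (they were also blank above), and the exact attribute names may differ between connector versions, so treat this as illustrative rather than the connector's actual schema:

```xml
<mule xmlns=""
      xmlns:http=""
      xmlns:redis="">

  <!-- global elements configured earlier (names match the walkthrough) -->
  <redis:config name="Redis__Configuration" />
  <http:listener-config name="HTTP_Listener_Configuration" host="0.0.0.0" port="8081" />

  <flow name="set-flow">
    <!-- entry point: provides the key and value -->
    <http:listener config-
    <!-- send data to the Redis server -->
    <redis:set config-
    <!-- response for the incoming HTTP request -->
    <set-payload value="Value Successfully Set" />
  </flow>
</mule>
```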
querying pattr system
For some externals I’m planning, I would like to be able to query the values of named [pattr] objects and named UI objects that have been exposed to the pattr system by [autopattr]. I’m kind of lost on where to look for info on this (and with searching specific forums still broken on the new site, that hasn’t been much help).
In the "Preset Support" section of "Enhancements to Objects" in the Max5 API, there is mention of "more powerful and general state-saveing, use the pattr system described below" but no more pattr mentions after that. After browsing around the modules I came across places that suggested reading the PattrSDK from Max-4.5.5-SDK for more details, but as far as I can tell, that only has information on registering your objects with the pattr system (?). Is what I’m looking for documented anywhere? Or if not, does anyone have an example?
To clarify, I am specifically asking how my external can receive a notification when the value of an object bound to the pattr system changes. I’m guessing this could be done with object_attach(), but I’m not sure of the name_space involved.
Although it would also be useful to know how to query the value at a time of my choosing (which it sounded like I was asking in my first post).
To query an object’s attribute value use object_attr_getvalueof() and related functions.
To listen for notifications use object_attach() and then define a "notify" message for your object as described in the SDK.
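For reference, the wiring described above usually looks something like the sketch below, based on the Max SDK's notification conventions. Everything here is illustrative: `t_myobj` stands for your own object struct, and the attach/free bookkeeping is simplified.

```c
#include "ext.h"
#include "ext_obex.h"

// In your class setup, register a "notify" method so attached
// objects can reach you:
//   class_addmethod(c, (method)myobj_notify, "notify", A_CANT, 0);

// Attach to a named object in the "box" namespace so we receive
// its notifications (illustrative wrapper):
void myobj_attach(t_myobj *x, t_symbol *name)
{
    object_attach(gensym("box"), name, (t_object *)x);
}

// Called whenever an object we are attached to sends a notification
t_max_err myobj_notify(t_myobj *x, t_symbol *s, t_symbol *msg,
                       void *sender, void *data)
{
    if (msg == gensym("attr_modified")) {
        // find out which attribute changed, then query its value
        t_symbol *attrname = (t_symbol *)object_method((t_object *)data,
                                                       gensym("getname"));
        long argc = 0;
        t_atom *argv = NULL;
        object_attr_getvalueof(sender, attrname, &argc, &argv);
        // ... use argc/argv here ...
        if (argv)
            sysmem_freeptr(argv);
    }
    return MAX_ERR_NONE;
}
```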
I’ll correct the errant text you mention in the SDK documentation.
Thanks
Is there a documented pattern to the name_space of named [pattr] or named UI objects? Or is the best method to find them for attaching to search the linked list of jboxes in the appropriate jpatcher for the desired [pattr] (or named UI object)?
P.S. If you are making edits to that page, you may also want to take care of the last thing on the page "buffer support~". Looks like some sort of heading for nothing.
Every object that you can create and use in a patcher is in the "box" namespace. There is also a "nobox" namespace which includes objects you can’t use in a patcher, like atomarray and linklist.
HTH
Thanks for the help with these newbie questions. I meant to edit my last post but ran out of time before I needed to leave for class.
I guess I am asking if there is an easy way to find an object already found by an [autopattr] to attach to, or do I need to search the appropriate list of jboxes to find it by varname?
The easy way is to do as you suggest, or use one of the ‘iterator’ objects in the SDK as a model.
Cheers
Forums > Dev | https://cycling74.com/forums/topic/querying-pattr-system/ | CC-MAIN-2015-48 | refinedweb | 483 | 68.6 |
dva 1.0 — a lightweight framework based on react, redux and redux-saga
Hey,
- If you like redux;
- If you like concepts from elm;
- If you want your code clean enough;
- If you don’t want to remember too many APIs; (only 5 methods)
- If you want to handle async logic gracefully;
- If you want to handle error uniformly;
- If you want to use it in pc, h5 mobile and react-native;
- If you don’t want to write showLoading and hideLoading hundreds of times;
- …
What’s dva
Dva is a lightweight, react- and redux-based, elm-style framework which aims to make building React/Redux applications easier and better.
If you like react/redux/redux-saga/react-router, you’ll love dva. :ghost:
This is how a dva app is organized, with only 5 APIs.
import dva, { connect } from 'dva';
// 1. Create app
const app = dva();
// 2. Add plugins (optionally)
app.use(plugin);
// 3. Register models
app.model(model);
// 4. Connect components and models
const App = connect(mapStateToProps)(Component);
// 5. Config router with Components
app.router(routes);
// 6. Start app
app.start('#root');
How dva works
View Concepts for more on Model, Reducer, Effect, Subscription and so on.
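In dva, a model is just a plain object with a namespace, initial state, and reducers (plus optional effects/subscriptions). A minimal sketch — the names here are illustrative, not from the dva docs — and because reducers are plain functions, you can check their behavior without running the framework at all:

```javascript
// A minimal dva-style model: a plain object passed to app.model(...).
const countModel = {
  namespace: 'count',   // key under which this state lives in the store
  state: 0,             // initial state
  reducers: {
    // pure functions: (state, action) -> new state
    add(state, action) { return state + (action.payload || 1); },
    minus(state) { return state - 1; },
  },
};

// Reducers can be exercised directly:
const next = countModel.reducers.add(0, { type: 'count/add', payload: 2 });
console.log(next); // 2
```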
Why is it called dva
dva is a hero from Overwatch. She is beautiful and cute, and "dva" was the shortest name still available on npm when the package was created.
Who are using dva
Packages dva built on
- views: react
- models: redux, react-redux
- router: react-router
- http: whatwg-fetch
You can:
- View dva offical website
- Get started and get familiar with the concepts by creating a count app
- Examples like dva-hackernews | https://medium.com/@chenchengpro/dva-1-0-a-lightweight-framework-based-on-react-redux-and-redux-saga-eeeecb7a481d | CC-MAIN-2017-43 | refinedweb | 269 | 62.38 |
Check if all sub-numbers have distinct Digit product
Given a number N, the task is to check if the all sub-numbers of this number have distinct digit product.
Note:
- An N digit number has N*(N+1)/2 sub-numbers. For example, all possible sub-numbers of 975 are 9, 7, 5, 97, 75, 975.
- Digit product of a number is product of its digits.
Examples:
Input : N = 324
Output : YES
Sub-numbers of 324 are 3, 2, 4, 32, 24 and 324, and their digit products are 3, 2, 4, 6, 8 and 24 respectively. All the digit products are different.

Input : N = 323
Output : NO
Sub-numbers of 323 are 3, 2, 3, 32, 23 and 323, and their digit products are 3, 2, 3, 6, 6 and 18 respectively. Digit products 3 and 6 occur twice.
Approach :
- Make a digit array i.e., an array with its elements as digits of given number N.
- Now finding sub-numbers of N is similar to finding all possible subarrays of the digit array.
- Maintain a list of digit products of these subarrays.
- If any digit product has appeared more than once, print NO.
- Else print YES.
Below is the implementation of the above approach :
Python3
# Python3 program to check if all
# sub-numbers have distinct digit product

# Function to calculate product of
# digits between given indexes
def digitProduct(digits, start, end):
    pro = 1
    for i in range(start, end + 1):
        pro *= digits[i]
    return pro

# Function to check if all sub-numbers
# have distinct digit product
def isDistinct(N):
    s = str(N)

    # Length of number N
    length = len(s)

    # Digit array
    digits = [None] * length

    # set to maintain digit products
    products = set()

    for i in range(0, length):
        digits[i] = int(s[i])

    # Finding all possible subarrays
    for i in range(0, length):
        for j in range(i, length):
            val = digitProduct(digits, i, j)
            if val in products:
                return False
            else:
                products.add(val)

    return True

# Driver Code
if __name__ == "__main__":
    N = 324
    if isDistinct(N) == True:
        print("YES")
    else:
        print("NO")

# This code is contributed
# by Rituraj Jain
Output:
YES
texture map of the game character Ben
download and play! (OSX build)
You’re a visitor in an underground network where the workers follow the same routine day in and day out
download and play! (OSX build)
Welcome to the neighborhood! Take a look around, just don’t look anywhere you’re not supposed to. Things have been a little unstable lately.
download and play! (OSX build)
The Hive is a world based on sound. You fly through the world starting from The Hive, and observe the different plants growing throughout the environment. The purpose was to create a meditative space in which one can go in and become lost in the environment.
download and play! (OSX build)
Two parallel worlds will meet each other.
Text by Milan Kundera’s novel “The Hitchhiking game”
Click and hold left mouse button to move.
Click and hold right mouse button to switch between the two worlds.
Rotate the mouse for exploring.
download and play! (OSX build)
Hillary’s house has just burned down. Amidst the turmoil of emotionally turbulent teenage thoughts, personal objects from the conflagrant material world have entered into a funky paradigm of human consciousness. It is the player’s job to expunge these objects from her thoughts, and restore the natural disorder to this particular cognitive universe. Not a JRPJ.
download and play! (OSX build)
You are a human in a terrarium being farmed by a giant extraterrestrial eye. Trapped in a maze of ugly fake trees and fungus, you listen the sentient creatures that speak to you about your fate, but ultimately your only choice to give in to the alien being and rise.
download and play! (OSX build)
A world created to play on the dry humor (ba-dum-tss) of a family trip through a desert. The majority of the art is clipped textures. I wanted to gather as many desert-like elements together to form the “fullest” desert experience.
download and play! (OSX build)
Nicaragua is a world navigated using the right and left arrow keys. The visual world includes a watercolor styled countryside with rolling hills, dirt roads, and trees. The world is explored through a bus, which rolls past small animations of animals and people. There are a few elements of magical realism, seen in the paper cranes that are flying and the live pinata. The audio in the world is all local. The forests have bird chirping and rainforest noises, the school has a party song, and the pigs oink.
download and play! (OSX build)
Girlfriend-being and Brutal Babe of Hellzone 5. Compartmentalized 80’s claustrophobic environment design creates for a shitty gallery setting wherein plastic bags announce your arrival and Femmezoid’s exhibitionism drools in delight. WIP
download and play! (OSX build)
I literally chose a random metal because I couldn’t come up with a real title. Woop woop.
Press “Escape” to access the debug menu, which contains a list of controls and gives the ability to restart or warp between areas of the game.
NOTE: The game uses shaders that require graphics hardware that is compatible with modern standards – DX11 and Shader Model 3.0 – to run properly. At the moment, I don’t have the time or energy to provide alternative versions for older hardware.
Screenshots:
download and play! (OSX build)
Collect enough objects before the volcano explodes!
Built with unity in my worldbuilding class.
(a work in progress, so still some bugs to fix)
last photo is extra credit printed addendum
download and play! (OSX build)
How much of what you used to have defines who you are?
download and play! (OSX build)
The View from Everywhere is an interactive world building project that explores a highly feminine vision of an unidentified space arena.
Stills from final build
download and play! (OSX build)
download and play! (OSX build)
public String titleName = Demificia;
private int end_of_the_world = a zillion;
for (int i = 0; i < end_of_the_world; i ++) {
debug.log(“worldbuilding makes kids happy”);
}
if (!worldbuilding) {
then = what;
}
debug.log(“lalala”)
try {
continue;
} catch {
meIfYouCan();
}
‘Watch’ is a visual experience of how the products that we consume, actually consume us. It starts as an indefinite black space with a ticking sound. As the user engages with the world, they might recognize the visuals but are overwhelmed by the random jumping around and nonsensical sounds. To achieve this world experience, I focus on layered sound design, familiar and strictly followed colors and icons, and movement that is both user-generated and pre-programmed.
Space Bar as navigation.
download and play! (OSX build)
All Houses Dream In Blueprints is an interactive mixtape set in a loosely recreated version of the studio I lived-in between 2008 and 2010. Each song transforms the environment. These alterations attempt to convey objective (timbre, lyrics) and subjective (what I associate these songs with) aspects of each tune. The objects which occupy the space fit under two categories: those that I still possess and those that I do not. The former were not modeled per-se. Instead, they were scanned using a first-generation Kinect motion controller alongside a software application called Skanect. The other set of objects/furniture were roughly shaped from memory using Maya. All Houses Dream In Blueprints can be played on the Oculus Rift VR Headset, as well as Desktops (be warned, it is quite resource hungry at present). Finally, here is the list of songs featured in the game (title- artist – album):
Black and Brown Blues – Silver Jews – The Natural Bridge (Drag City, 1996)
Bring Me the Head of Paul McCartney on Heather Mill’s Wooden Peg – The Brian Jonestown Massacre – My Bloody Underground (A Records, 2008)
Jim – Swans – My Father Will Guide Me up a Rope to the Sky (Young God, 2010)
Poor Places – Wilco – Yankee Hotel Foxtrot (Nonesuch, 2001)
Say Valley Maker – Smog – A River Ain’t Too Much To Love (Drag City, 2005)
download and play! (OSX build)
Project Upload Instructions:
Make your final build (for mac and/or pc).
Test run it on your computer.
Include instructions if necessary.
Zip the .app (mac) or .exe+ data folder (pc)
Name it using this template:
LastName_FirstName_WorldTitle_Platform.zip (platform = pc or mac)
Upload it to the following folder:
myClasses/F15-171/web/projects/FinalBuilds
(this can be accessed either by the dma cloud (cloud.dma.ucla.edu) or by ftp)
ALSO:
Make a NEW post to the class website – with 3 screenshots and a title of your piece and a paragraph description and any instructions that are not in the game itself.
Video Upload Instructions:
Run and record your build in fullscreen (or desired aspect/resolution) in a computer that has screen recording software. We have a PC in the gamelab that has Frapps and a Mac in our classroom that has SnapZProX.
Edit and compress your video to max 2mins / 300MB, H.264/MPEG-4 format.
Name it using this template:
LastName_FirstName_WorldTitle.mp4
Upload it to the following folder:
myClasses/F15-171/web/projects/Videos
(this can be accessed either by the dma cloud (cloud.dma.ucla.edu) or by ftp)
Warning: Any project not uploaded properly will not be graded.
What is Frequency Moon? Frequency Moon is a land of confusion, so the better question is, “What is Frequency Moon to you?”
Screenshots:
Hand-made USB stick that holds digital app:
download and play! (OSX build)
download and play! (OSX build)
Here is a prefab for doing 2d sprite sheet animations!
Use it like this:
- Make a sprite sheet in Photoshop or Illustrator (use a grid and center your frames to the grid)
- Import your image
- Create a new material (using the shader unlit/transparent and adding your image as a texture)
- Apply your material to a quad (gameobject -> 3d object -> quad)
- Add the Animate Sprite Sheet script and tweak the values to match your spritesheet.
- Press play!
script and prefab here: spritesheetanimation
more about spritesheets:
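If you're curious roughly what such a script does under the hood, a minimal version looks like the sketch below. This is not necessarily the attached prefab's script — just the usual approach of scaling the material's texture to a single frame and stepping the offset over time:

```csharp
using UnityEngine;

public class AnimateSpriteSheet : MonoBehaviour
{
    public int columns = 4;          // frames per row in the sheet
    public int rows = 4;             // rows in the sheet
    public float framesPerSecond = 10f;

    private Renderer rend;

    void Start()
    {
        rend = GetComponent<Renderer>();
        // show exactly one cell of the sheet at a time
        rend.material.mainTextureScale = new Vector2(1f / columns, 1f / rows);
    }

    void Update()
    {
        // pick the current frame from elapsed time
        int index = (int)(Time.time * framesPerSecond) % (columns * rows);
        int u = index % columns;
        int v = index / columns;
        // texture V runs bottom-to-top, but sheets are usually authored top-to-bottom
        rend.material.mainTextureOffset =
            new Vector2((float)u / columns, 1f - (float)(v + 1) / rows);
    }
}
```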
As the post title says, I still don’t have a name for this. This mostly has to do with the fact that I am absolutely terrible at naming things.
Below, a workflow shot showing the development process for the first area. The thumbnails link to rough parallax outlines of areas that have not yet been built. More importantly, there is a functional test room (included with my submitted builds) that has working (if still very rough) gameplay and camera mechanics, and an easily expandable structure. Most of my effort over the past week has gone into making the playable structure and thinking about ways to speed up my workflow; now that I have a clearer plan of attack and a functional gameplay backbone, I think I should be able to make progress much faster.
using UnityEngine;
using System.Collections;
public class forestGenerator : MonoBehaviour {
public GameObject[] trees;
// Use this for initialization
void Start () {
Vector3 origin = new Vector3(0.0f, 0.0f, 0.0f);
origin = new Vector3(1.0f, 0.0f, 0.0f);
// gen one tree at origin
// treeGen(origin, 1);
int treeID = Random.Range(0, trees.Length);
treeRowGen (origin, treeID, 10, 1.0f, 10.0f, 4.0f);
}
int treeGen (Vector3 pos, int treeID) {
// get random rotation
float randAngle = Random.Range(0, 360.0f);
//instatiate tree at postion
Instantiate (trees[treeID], pos, Quaternion.AngleAxis(randAngle, Vector3.up));
// Instantiate (trees[treeID], pos, trees[treeID].transform.rotation);
return treeID;
}
void treeRowGen (Vector3 pos, int treeID, int count, float padding, float rowSpacing, float noiseRange) {
int treeNUM;
for (int j = 0; j < count; j++) {
for (int i = 0; i < count; i++) {
// pick tree to spawn
// int treeID = Random.Range(0, trees.Length);
// get extents of the sprite (for spacing)
Renderer rend;
rend = trees[treeID].GetComponent<Renderer>();
float radius = rend.bounds.extents.magnitude;
pos.y = rend.bounds.extents.y;
float noise = Random.Range(0, noiseRange);
pos.x = (i * radius) + noise;
//pos.x += padding;
pos.z = (rowSpacing * j) + noise;
treeNUM = treeGen(pos, treeID);
}
}
}
// Update is called once per frame
void Update () {
}
}
I was browsing the other day and came across a couple neat pages. First, Game Art Tricks, a blog which talks about the technical tricks behind game graphics. A lot of the stuff is too in-depth to apply to this class, but I think it’s a really inspiring site because it shows really creative solutions to complex problems.
Game Art Tricks, by Simon Schreibt
And here’s a really interesting article talking about the development of Shadow of the Colossus. If you haven’t played it, it’s a (really good) game which deals a lot with worldbuilding and creating immersive experiences. Again, most of the actual content is too technical (and outdated, since the game was developed for Playstation 2 and game technology has come a very long way), but it has some neat stuff.
Shadow of the Colossus Developer Interview
Here is a cool game I played at E3…thinking of trying to emulate this style for my project.
:~O
Have a look at this piece by Lu Yang. She first made this video:
and later made into a game as well:
7: IMPORTANT!!! Set the category to the project assignment you want to submit to.
6: HIT PUBLISH!
Otherwise we won’t be able to see your project!
read this article about structuring space in games for Wednesday!
Sam found this nice game / world that uses 2D animated textures in a 3D world. Check it out:
heres the video:
here you can download the game for $5 if you like it:
here is the site with other projects:
E | http://classes.dma.ucla.edu/Fall15/171/?page_id=99 | CC-MAIN-2018-22 | refinedweb | 1,917 | 65.01 |
Finally working on validating my site and doing fine apart from one annoying error that I can't get rid of.
I've inserted the "google plus one button" which gives the following error.
Line 135, Column 12: element "G:PLUSONE" undefined <g: plusone></g: plusone><br><br></center>
This is the doctype:<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "">
Many thanks in advance.
PS: I inserted the spaces after the g: because it shows the "greenman" if I don't.
I've never seen anything to indicate Google is arbitrarily penalising sites that have invalid code. Their goal is to give searchers the most relevant results, and dropping sites because they have technical errors in their code isn't going to fulfil that goal. (Besides, it would be more than a little hypocritical if they did!)
That doesn't mean that code errors won't harm your search position though. The key word above is 'arbitrarily'. Google reads your code, and uses that to find out what your page is about and determine how and where it should rank. If your code is scrappy and all over the place, riddled with mistakes, there's a fair risk that Google won't be able to understand it properly, and that will harm your search position.
The main reason for caring about validation (apart from your own professional high standards, of course) is that sites with invalid code are much more likely to display incorrectly on some or all browsers. It might look fine in one browser and be wrong in another. That browser that it looks wrong in might not even be out yet – you can test it in every browser available today and it's fine, then tomorrow a new version of (whatever) is launched and it chokes on your errors. It's much easier to check the code is valid than to test it in every version of every browser!
On the other hand, proprietary code, whether it's -moz- prefixes in the CSS or Google/Facebook code in the HTML, is always designed to use 'new' tags that aren't part of any spec. That way, supporting agents will work with them correctly, and all others will just ignore them. So it's no big deal if you have errors resulting from proprietary code, as long as you know why they're there.
Code like that is designed to work properly even if it doesn't validate, so I wouldn't worry about it. The only way to remove that error is to remove the code. Remember that validation is there to warn you of potential problems. This won't be a problem.
It would be better to ditch the transitional doctype, though. That's really for 1990s sites.
It does work fine. The purpose of the exercise to get rid of as many errors as possible is to speed up the downloads (people are impatient, and it appears to be a ranking factor for Google) and the rumour that Google ranks a site better if it has no (or few) code errors. Is there some truth in this?
Is there a practical reason like better serp ranking? I am not a star programmer by any stretch of the imagination and learning whilst keeping up a full time job. I don't mind if the old style limits the options; my site will be kept rather plain by choice.Thanks for your quick response.
I'm not sure about that, but I doubt it.
Is there a practical reason like better serp ranking?
I don't think so, but an old doctype signals that the site may be built with outdated coding practices, such as tables for layout, which I do believe affects Google ranking a bit. It would be worth switching to a strict doctype and see what errors you get then. That would be very instructive.
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "">
<html lang="en">
<head>
<meta http-
I was going to post this very same thing, without the Google ranking bit. The transitional doctype is used nowadays for two reasons: to "transition" a site from an outdated, table-driven structure to a newer one without loss of functionality (hence it would be strictly temporary), or to "hide" old, obsolete code behind the transitional doctype and give the impression that the site is valid when it really isn't.
Thanks for the replies folks. My site is handwritten; I do have editors but don't like what they push out. Is there a tangible advantage to switching to a newer version of HTML? Bear in mind that this is something I do on the side and I am not a pro-programmer.
There's no big advantage, as such, but your original intention was to sort out any coding errors, so putting a more modern doctype in there would give you a better idea of what needs fixing, if anything. The older doctype just means that the validator will be more lenient on you and flag fewer errors.
The current doctype is:<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "">
Should I change the "Transitional", the "loose" or both?
The "loose" is just for Transitional because it's loose (as in lax) and not strict. Just use the following:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"">
[quote="xhtmlcoder,post:10,topic:89735"]The "loose" is just for Transitional because it's loose (as in lax) and not strict. Just use the following:
[/quote]
I'd cleaned up a page a couple of days ago, at which point it completely validated. I copied the above doctype into this page, and I got 45 errors. 44 to do with unreconised font definitions/definitions.
I guess it will have to go back to looose, as I have no idea how to do the font specifications other than using css, of which I know virtually nothing.
Could you post an example of the code it rejected? It might be very simple to change.
Below are most if not all of the diffrent types of errors.
Line 25, Column 76: there is no attribute "ALIGN" … <table border="0" cellspacing="0" cellpadding="16" width="700" align="center">
Line 29, Column 29: there is no attribute "COLOR" <h1><font color="navy" size="3">A South London Boiler Engineer who …
Line 29, Column 41: there is no attribute "SIZE" … <h1><font color="navy" size="3">A South London Boiler Engineer who r…
Line 29, Column 44: element "FONT" undefined … <h1><font color="navy" size="3">A South London Boiler Engineer who repa…
Line 86, Column 41: there is no attribute "CLEAR" size=1><br></font><br clear="all"><font
OK, yeah, that's very outdated code, so there wouldn't be much point in messing with it to make the validator happy. The validator is really telling you that the page needs to be rewritten entirely. But if you're not up for that, the page isn't going to fall apart any time soon.
Rewritten in what? Html5? Xtml?
It is not a matter of being up for it. More like only so many hours in the day and needing to do first what pays of most including:
50+ pages to add to the site, 100% inhouse and hand coded.Existing 30 pages need improving in lay out, content and seo.Upgrading seo skills.Adding and maintaining a blog.
If there is a quantifiable advantage in switching to the next generation coding, I will learn it. If it is a mainly a matter of "programming-aesthetics" and not much else, I'd rather spend the time on upgrading my seo skills as that will put bread on the table.
It's not a case of what language it's written in, but writing it in a way that's semantically correct and most efficient.
As a basic rule, HTML is a language for identifying the meaning. For example, if you use a table, that is signalling that your are dealing with tabular data (related by rows and columns). Anything related to style or layout (Presentation) should be handled by CSS. In the bad old days of HTML, before CSS was properly supported in browsers, hacks such as table layouts and font tags were invented. But they are incredibly inefficient and not very accessible, for a start, so it's better to rewrite a page without them.
It's a bit like a building that isn't constructed properly. It's better to tear it down and rebuild from the ground up that patch it up and hope for the best.
I see. How do I go about learning how to do presentation properly?
HTML4 will do fine. At the moment you're using HTML3.2 with some bits of 4 thrown in.
In HTML4, you're supposed to use CSS for all the presentation, formatting, layout and styling information. So instead of putting <h1><font color="navy" size="3"> in every time you have a heading, you just use the heading <h1> in the HTML, and then you have a CSS file (for the whole site) which includes h1 {color: navy; font-size: 1.3em;}, and that sets the style for all <h1> headings on the site.
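Putting that side by side (a generic sketch, not the OP's actual markup):

```html
<!-- Before: presentation baked into every heading -->
<h1><font color="navy" size="3">Boiler repairs</font></h1>

<!-- After: plain, semantic HTML... -->
<h1>Boiler repairs</h1>

<!-- ...with one site-wide rule in the stylesheet -->
<style>
  h1 { color: navy; font-size: 1.3em; }
</style>
```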
It is not a matter of being up for it. More like only so many hours in the day and needing to do first what pays of most
I know that feeling well!
Setting your site up to use CSS will bring you massive time and efficiency savings in the medium and long term, maybe even in the short term as well, particularly if you are hand-coding it. Even if you don't go the whole hog with the layout immediately, it really is worth moving to CSS for formatting as soon as you can.
A small addition to what Ralph, Stevie, et al. wrote: If you indeed have 50 pages on the site, which are all hand-coded, switching to a non-table CSS-based layout will literally allow you to reduce the time it takes to re-design the entire website with 98 percent. Once you learn more about CSS, you can improve this even more.
Another point, which is a bit moot, and only really interesting as a technical curiosity: the syntax g:plusone would actually be valid if your page was written in true XHTML (i.e. sent with an application/xhtml+xml MIME type). All it would require would be to define an XML namespace for the g: prefix. The only problem is, any version of Internet Explorer below 9 would be unable to display the page. Also note that, as I recall, this would not actually validate in the W3C validator, even though it's strictly technically correct, as the W3C validator can't handle user-defined namespaces.
Basically by finding a good book on CSS, or using similar online resources. SitePoint has some good offerings. | https://www.sitepoint.com/community/t/shorttag-validation-error/89735 | CC-MAIN-2016-07 | refinedweb | 1,881 | 71.14 |
Just like Django, Misago defines a shortcuts module that reduces some common procedures to single functions. This module lives in misago.views.shortcuts.
This function is a factory that validates received data and returns Django's Page object. In addition, it translates EmptyPage errors into Http404 errors and validates whether the first page number was explicitly defined in the url parameter or not.
The paginate function has certain requirements on handling views that use it. Firstly, views with pagination should have two links instead of one:
# inside blog.urls.py
urlpatterns += patterns('blog.views',
    url(r'^/$', 'index', name='index'),
    url(r'^/(?P<page>[1-9][0-9]*)/$', 'index', name='index'),
)

# inside blog.views.py
def index(request, page=None):
    # your view that calls paginate()
Utility function that returns a JSON-serializable dict for the Page object, defining the following keys:
- `page` - The 1-based number of the current page.
- `pages` - The total number of pages.
- `count` - The total number of items on the list.
- `first` - `None` if this is the first page, otherwise `1`.
- `previous` - `None` if this is the first page, otherwise the number of the previous page.
- `next` - `None` if this is the last page, otherwise the number of the next page.
- `last` - `None` if this is the last page, otherwise the number of the last page.
- `before` - The total number of items on previous pages.
- `more` - The total number of items left to display on next pages.
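As an illustration, all of the keys above can be computed from just the current page number, the page size, and the total item count. The sketch below is an assumption-based re-implementation for clarity, not Misago's code (which derives these values from the `Page` object):

```python
def pagination_dict(number, per_page, count):
    """Build the documented JSON-serializable dict for page `number`."""
    pages = max(1, -(-count // per_page))  # ceiling division
    return {
        "page": number,
        "pages": pages,
        "count": count,
        "first": None if number == 1 else 1,
        "previous": None if number == 1 else number - 1,
        "next": None if number == pages else number + 1,
        "last": None if number == pages else pages,
        "before": (number - 1) * per_page,          # items on earlier pages
        "more": max(0, count - number * per_page),  # items still ahead
    }

print(pagination_dict(2, 10, 35))
```

For page 2 of 35 items at 10 per page, this yields 4 pages, 10 items `before`, and 15 `more`.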
Shortcut function for returning paginated responses from the API. It takes one required argument, the `Page` object, and the following optional arguments:
- `serializer` - Serializer to use. If it's omitted, no additional serialization step will be taken.
- `data` - Data object to use. If it's omitted, `page.object_list` will be used by default.
- `extra` - Dict with additional data to be included in the response's JSON. Because the dict in `extra` is added to the response via `response_json.update(extra)`, this dict can be used for last-minute overrides as well.
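A minimal sketch of how these arguments could combine into a response body. The `results` key name and the callable-serializer convention are assumptions for illustration only; the real API may differ:

```python
def paginated_response(page_dict, object_list, serializer=None, extra=None):
    if serializer is not None:
        object_list = [serializer(item) for item in object_list]
    response_json = dict(page_dict)        # pagination metadata first
    response_json["results"] = object_list
    if extra:
        response_json.update(extra)        # last-minute overrides win
    return response_json

body = paginated_response(
    {"page": 1, "pages": 1, "count": 2},
    ["a", "b"],
    serializer=str.upper,
    extra={"count": 2, "cached": True},
)
print(body)  # {'page': 1, 'pages': 1, 'count': 2, 'results': ['A', 'B'], 'cached': True}
```

Note how `extra` is merged last, so it can override any key already present in the response.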
Giving the `page` argument a default value of `1` will make the `paginate` function assume that the first page was reached via a link with an explicit first page number, causing a redirect loop.
The error handler expects the link parameter that contains the current page number to be named "page". Otherwise it will fail to create a new link and raise a `KeyError`.
This function compares the model instance's "slug" attribute against the user-friendly slug that was passed as a link parameter. If the model's slug attribute is different, `misago.views.OutdatedSlug` is raised. This exception is then captured by Misago's exception handler, which makes Misago return a permanent (HTTP 301) redirect to the client with the valid link.
Example of a view that first fetches an object from the database and then makes sure the user or spider that reaches the page is informed of the up-to-date link:
```python
from misago.views.shortcuts import validate_slug, get_object_or_404
from myapp.models import Cake

def cake_fans(request, cake_id, cake_slug):
    # first get cake model from DB
    cake = get_object_or_404(Cake, pk=cake_id)
    # issue redirect if cake slug is invalid
    validate_slug(cake, cake_slug)
```
You may have noticed that there's no exception handling for either the `Http404` exception raised by `get_object_or_404`, or the `OutdatedSlug` exception raised by `validate_slug`. This is by design. Both exceptions are handled by Misago for you, so you don't have to spend time writing exception-handling boilerplate in every view that fetches objects from the database and validates their links.
Naturally, if you need to, you can still handle them yourself.
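If you do want to handle the redirect yourself, the core check is small. The sketch below mimics the documented behavior with stand-in classes (not Misago's actual code):

```python
class OutdatedSlug(Exception):
    """Stand-in for misago.views.OutdatedSlug."""
    def __init__(self, instance):
        self.instance = instance

def validate_slug(instance, slug):
    # Raise when the URL's slug no longer matches the model's canonical one.
    if instance.slug != slug:
        raise OutdatedSlug(instance)

class Cake:
    def __init__(self, slug):
        self.slug = slug

cake = Cake(slug="chocolate-cake")
validate_slug(cake, "chocolate-cake")  # matches: no exception

try:
    validate_slug(cake, "old-cake-name")
except OutdatedSlug as exc:
    # Misago's handler would issue an HTTP 301 here; handling it yourself
    # lets you log the hit, use a temporary redirect, and so on.
    print("redirect to canonical slug:", exc.instance.slug)
```

Catching `OutdatedSlug` before Misago's exception handler sees it simply replaces the default permanent redirect with whatever behavior you implement.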
Also, your links should use "slug" parameters only when they support GET requests. For the same reason, you should call `validate_slug` only when the request method is GET or HEAD.
Here are all of the changes that Python 2.5 makes to the core Python language.
class zerodict (dict): def __missing__ (self, key): return 0 d = zerodict({1:1, 2:2}) print d[1], d[2] # Prints 1, 2 print d[3], d[4] # Prints 0, 0 'reverse'..)
def is_image_file (filename): return filename.endswith(('.gif', '.jpg', '.tiff'))
(Implemented by Georg Brandl following a suggestion by Tom Lynn.)
keykeyword parameter analogous to the
keyargument.)
id(self)in __hash__() methods (though this is discouraged).
# -*- coding: latin1 -*-
>>>.)
class C(): pass.
for line in file.)
a = 2+3, the code generator will do the arithmetic and produce code corresponding to
a = 5. (Proposed and implemented by Raymond Hettinger.)
Frame objects are also slightly smaller, which may improve cache locality and reduce memory usage a bit. (Contributed by Neal Norwitz.)
See About this document... for information on suggesting changes.See About this document... for information on suggesting changes. | http://python.org/doc/current/whatsnew/other-lang.html | crawl-001 | refinedweb | 153 | 63.15 |
updated copyright years
1: \ report words used from the various wordsets 2: 3: \ Copyright (C) 1996,1998: \ Use this program like this: 23: \ include it, then the program you want to check; then say print-ans-report 24: \ e.g., start it with 25: \ gforth ans-report.fs myprog.fs -e "print-ans-report bye" 26: 27: \ Caveats: 28: 29: \ Note that this program just checks which words are used, not whether 30: \ they are used in an ANS Forth conforming way! 31: 32: \ Some words are defined in several wordsets in the standard. This 33: \ program reports them for only one of the wordsets, and not 34: \ necessarily the one you expect. 35: 36: 37: \ This program uses Gforth internals and won't be easy to port 38: \ to other systems. 39: 40: \ !! ignore struct-voc stuff (dummy, [then] etc.). 41: 42: vocabulary ans-report-words ans-report-words definitions 43: 44: : wordset ( "name" -- ) 45: lastxt >body 46: create 47: 0 , \ link to next wordset 48: 0 0 2, \ array of nfas 49: ( lastlinkp ) last @ swap ! \ set link ptr of last wordset 50: ; 51: 52: wordlist constant wordsets wordsets set-current 53: create CORE 0 , 0 0 2, 54: wordset CORE-EXT 55: wordset BLOCK 56: wordset BLOCK-EXT 57: wordset DOUBLE 58: wordset DOUBLE-EXT 59: wordset EXCEPTION 60: wordset EXCEPTION-EXT 61: wordset FACILITY 62: wordset FACILITY-EXT 63: wordset FILE 64: wordset FILE-EXT 65: wordset FLOAT 66: wordset FLOAT-EXT 67: wordset LOCAL 68: wordset LOCAL-EXT 69: wordset MEMORY 70: wordset SEARCH 71: wordset SEARCH-EXT 72: wordset STRING 73: wordset TOOLS 74: wordset TOOLS-EXT 75: wordset non-ANS 76: ans-report-words definitions 77: 78: : answord ( "name wordset pronounciation" -- ) 79: \ check the documentaion of an ans word 80: name { D: wordname } 81: name { D: wordset } 82: name { D: pronounciation } 83: wordname find-name 84: ?dup-if 85: sp@ cell nextname create drop 86: wordset wordsets search-wordlist 0= abort" wordlist unknown" , 87: endif ; 88: 89: table constant answords answords set-current 90: warnings @ warnings off 
91: include ./answords.fs 92: warnings ! 93: ans-report-words definitions 94: 95: : add-unless-present ( nt addr -- ) 96: \ add nt to array described by addr 2@, unless it contains nt 97: >r ( nt ) 98: r@ 2@ bounds 99: u+do ( nt ) 100: dup i @ = 101: if 102: drop rdrop UNLOOP EXIT 103: endif 104: cell 105: +loop 106: r@ 2@ cell extend-mem r> 2! 107: ( nt addr ) ! ; 108: 109: 110: : note-name ( nt -- ) 111: \ remember name in the appropriate wordset, unless already there 112: \ or the word is defined in the checked program 113: dup [ here ] literal > \ word defined by the application 114: over locals-buffer dup 1000 + within or \ or a local 115: if 116: drop EXIT 117: endif 118: sp@ cell answords search-wordlist ( nt xt true | nt false ) 119: if \ ans word 120: >body @ >body 121: else \ non-ans word 122: [ get-order wordsets swap 1+ set-order ] non-ANS [ previous ] 123: endif 124: ( nt wordset ) cell+ add-unless-present ; 125: 126: : find¬e-name ( c-addr u -- nt/0 ) 127: \ find-name replacement. Takes note of all the words used. 128: lookup @ (search-wordlist) dup 129: if 130: dup note-name 131: endif ; 132: 133: : replace-word ( xt cfa -- ) 134: \ replace word at cfa with xt. !! This is quite general-purpose 135: \ and should migrate elsewhere. 136: \ the following no longer works with primitive-centric hybrid threading: 137: \ dodefer: over code-address! 138: \ >body ! ; 139: dup @ docol: <> -12 and throw \ for colon defs only 140: >body ['] branch xt>threaded over ! 141: cell+ >r >body r> ! ; 142: 143: forth definitions 144: ans-report-words 145: 146: : print-ans-report ( -- ) 147: cr 148: ." The program uses the following words" cr 149: [ get-order wordsets swap 1+ set-order ] [(')] core [ previous ] 150: begin 151: dup 0<> 152: while 153: dup >r name>int >body dup @ swap cell+ 2@ dup 154: if 155: ." from " r@ .name ." 
:" cr 156: bounds 157: u+do 158: i @ .name 159: cell 160: +loop 161: cr 162: else 163: 2drop 164: endif 165: rdrop 166: repeat 167: drop ; 168: 169: \ the following sequence "' replace-word forth execute" is necessary 170: \ to restore the default search order without effect on the "used 171: \ word" lists 172: ' find¬e-name ' find-name ' replace-word forth execute | https://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/ans-report.fs?rev=1.10;sortby=rev;f=h;only_with_tag=v0-6-0;ln=1 | CC-MAIN-2022-05 | refinedweb | 724 | 60.89 |
DNN LayerFactory
Hi, I am using the dnn module of opencv 3.4.1 with Python 3.5 to deploy nets trained with keras/tensorflow. Some simple models can be deployed in python and c++ with opencv successfully. Now, I would like to implement the missing ClipByValue node/layer and register this layer in the dnn module with python. I am following this tutorial. Unfortuneatly I do not find the function dnn_registerLayer.
import cv2 cv2.dnn_registerLayer(...)
results in
AttributeError: module 'cv2.cv2' has no attribute 'dnn_registerLayer'
where do I find the module for register custom layers in Python?
@SEbert, perhaps you need to update OpenCV. Try to build pythob binding from source using the latest state of master branch or 3.4 branch. | http://answers.opencv.org/question/193788/dnn-layerfactory/ | CC-MAIN-2018-43 | refinedweb | 123 | 61.73 |
Library tutorials & articles
Binary Files
James Crowley, published on 14 Jul 2001
Page 1 of 6
- Introduction
- File Access
- The Basics
- Strings
- More Strings
- Practical Uses
Introduction
Although binary files may at first seem rather primitive, reading and writing bytes one by one, they can be immensely useful, and make it relatively easy to create your own file formats. This tutorial shows you the ins and outs of Binary files, serialization (converting variables to binary) and deserialization (converting binary files to readable variables)...
it helps me a lot, thanks
Thanks it was very helpfull
I want to read a binary file whitch saved by a Grid Option.how Can i Read this file and retrieve data from file.
I am struggling for long time to display a GIF file on an ASP page. I have tried to return a GIF file in few formats
from a webservice.
Webservice returns the images as MemoryStream.GetBuffer(). Its an array of unsigned bytes.I also tried to return it
in base64 encoded format.
I can call this webservice in either of 2 ways
1) Directly from ASP -- It returns
2) From VB(again I am returning byte array) which inturn is called by ASP -- I get the
same junk as above.
I need to convert it as a GIF file in my ASP page.
Can you guys suggest how it could by achived. I am open to change code at any layer.
Note - I am not getting any file back from VB or Web service. I am only getting it in some encoded format(byte array
or base64)
My ASP code is like this -
<%@ LANGUAGE="VBSCRIPT"%>
<% Response.Expires = 0
Response.Buffer = True
Response.clear
Response.contenttype = "image/gif"
Response.AddHeader "content-disposition", "inline; filename=MyMap.gif"
Set myEXE = CreateObject("ExFireSafeMapInfoTool.MapInfo")
varGetImage = myEXE.GenerateMapForHousehold()
RESPONSE.BinaryWrite (varGetImage)
'Value of varGetImage -- Dont know what format
%>
My VB Component function is something like this -
Public Function GenerateMapForHousehold() As Variant
Dim szFileName As String
Dim oSOAP As New SoapClient30 'Soap Type Library 3
Dim bytImage() As Byte
Dim szPicture As String
Dim iCount As Integer
Dim File As Integer
File = FreeFile
'Make the Call to Web Service to retrive the Image.
oSOAP.ClientProperty("ServerHTTPRequest") = True
oSOAP.MSSoapInit ""
bytImage = oSOAP.GetImage("bayshore.gif")
GenerateMapForHousehold3A = bytImage
'I can retreive this image in VB using this code -
'szPicture = App.Path & "\Test.gif"
'Open szPicture For Binary As File
'For iCount = LBound(bytImage) To UBound(bytImage) - 1
' Put #1, , bytImage(iCount)
'Next
'Close #1
'GenerateMapForHousehold = szPicture
'Set Form1.imgTest.Picture = LoadPicture(GenerateMapForHousehold)
End function
And web service(c#) method is something like this-
using System.IO;
using System.Drawing.Imaging;
public byte[] GetImage(string strFileName)
{
StreamWriter sr;
Byte sIn = new Byte();
Image MyImage;
MemoryStream MemStr = new MemoryStream();
MyImage = new Bitmap(Server.MapPath( strFileName));
MyImage.Save(MemStr, ImageFormat.Jpeg);
return MemStr.GetBuffer();
//I have 1 more function to return base64 encoded string but dont know how to use it further.
}
Please help.
Thanks
Mohit
Dim FileNo As Integer
FileNo = FreeFile
Open ... ... As #FileNo
Get #FileNo, , FileStr
[courier new]// Mathias[/courier new]
Hi. I'm having a similar problem, I just can't get the header. I want to check if a .BMP file is in the right format. The header is BM, but i'm just getting M8 as header. That must be wrong
. How do you get the header? Isn't it just
I have the following from the tutorial:
Dim nFileNum As Integer, sString As String
nFileNum = FreeFile
Open "C:\Example\Example.txt" For Binary Access _
Read Lock Read Write As #nFileNum
sString = Space$(6)
Get #nFileNum, 1, sString
Close #nFileNum
MsgBox sString
[/Quote]
It reads the value in the file fine, but since i wrote the file in binary it reads it back in binary and all i get is a ""
Am i missing something... does the file need to be read in a certain way?
I don't know what I am doing wrong, but I would like to be able to read and write to *.exe and *.dll files. The problem is that it only gets the header ( MZ ). Does anyone know what I may have done wrong, or another method that can open EXE and DLL files? Thanks!
-vivi0
How do I find the size of the file?
[edit] Nevermind. LOF(1) seems to be it.
If everything you ever wanted to work with were an INI file, that'd be perfect. But if you want to make any sort of editor chances are you'll be using Binary file access.
Hi Shotgunner,
I assumed that a binary file that could be read by C/C++ could also be read in VB. There are constraints on how to do that but here goes. In the code I posted before I created a binary file. Here's the code I used to read it and write out the equivalent integer (or more correctly long) data in VB:
Private Sub cmdRead_Click()
Dim ibyte(2) As String * 1
Dim tot, temp As Long
Dim ind As Integer
For ind = 0 To 19
Get #1, , ibyte(1)
Get #1, , ibyte(2)
temp = Asc(ibyte(2)) * 2 ^ 8
tot = Asc(ibyte(1)) + temp
txtData(ind * 3).Text = tot
txtData((ind * 3) + 1).Text = Asc(ibyte(1))
txtData((ind * 3) + 2).Text = Asc(ibyte(2))
Next ind
End Sub
Private Sub mnuExitProgram_Click()
Unload frmBin
End Sub
Private Sub mnuFileOpen_Click()
Open "test.dat" For Binary As #1
cmdRead.Enabled = True
End Sub
Obviously I won't be sending you the forms.
Here's the C (or more correctly the VC++ ) code that accomplishes a similar purpose:
include <fstream.h>
include <iomanip.h>
int main()
{
unsigned ind, tot;
unsigned char byte[2];
ifstream infile("test.dat", ios::in );
for ( ind = 1; ind <= 20; ind++ )
{
infile.read( byte, 2 );
tot = byte[0] + ( byte[1] << 8 );
cout << setw(8) << ind << setw(8) << tot << setw(8) << (int)byte[0] << setw(8) << (int)byte[1] << endl;
}
return 0;
}
Both programs read in the binary data from test.dat and displayed (either to console or to form) the equivalent numerical data.
Is it possible to read a C++ written binary file into visual basic and write it again using vb then have it read by c++. If it is then can someone please tell me. also how do you write a typefef structure into a file using c++. Does anyone know. I would really appreciate it.
A binary file is a binary file regardless of which language creates it. The following code reads in a binary file 2 bytes at a time (the read statement) and then writes them to another binary file 2 bytes at a time (the write statement)
#include <fstream.h>
#include <iomanip.h>
int main()
{
int ind, tot;
unsigned char byte[2];
ifstream infile("2000_09_28_18_40_11_Sen2_Grp0.dat", ios::in );
ofstream outfile("test.dat", ios::binary );
for ( ind = 1; ind <= 20; ind++ )
{
infile.read( byte, 2 );
outfile.write( byte, 2 );
tot = byte[0] + ( byte[1] << 8 );
cout << setw(8) << ind << setw(8) << tot << setw(8) << (int)byte[0] << setw(8) << (int)byte[1] << endl;
}
return 0;
}
I plan to test test.dat later and make sure that VB can read it. But I have little doubt that it will. You may wonder what I was writing to the screen with cout. (This is important to consider.) Often binary files contain 2 byte integers split into two separate bytes. Here I was taking the bytes and recombining them into their original value. For example the first two number that are read are 208(byte[0] ) and 7 (byte[1]). Shifting the the 1 byte 7 8 places (the equivalent of multiplying by 256) and then adding it to byte[0] to make an integer 2 bytes long equals 2000. 7 * 256 + 208 = 1792 + 208 = 2000 or looked at another way 00000111 010110000 is a 16 bit representation of 2000.
I hope this helps.
David
And don't forget, data files shouldn't be written to the registry.
The Len() function returns null if the file is open in binary mode. Use the LOF() function instead.
Another thing, String() function is better to use than Space() function because if you have used value bigger than new value, Space() only fills string with spaces, leaving the size as is, but String() resizes the strng.
This is a 2 ways method, if your program very much independent on settings inside the INI files you just have to read and write the INI files at all times.
Or if ur program is mostly put the value inside the registry for a one-time settings of something in conjunction to the system setting, registry would be better.
But i prefer using INI files since it can be separated from the syste, registry and avoid corruption.
One more thing is in registry, you can organize ur setting in parent and child relationship but not INI file.
Since you can do the same kind of stuff like this and save it to the registry, which is easier, why use binary files? I could see it if you had a LOT of values, but then wouldn't you just use and INI file or something for conveienience? I guess keeping things secret, like passwords, you might use a bin file for, but what could it be used for other than that?
This thread is for discussions of Binary Files.
Related tagsvisual basic | http://www.developerfusion.com/article/85/binary-files/ | crawl-002 | refinedweb | 1,568 | 73.88 |
This article explains the new features in Python 3.1, compared to 3.0.
Regular Python dictionaries iterate over key/value pairs in arbitrary order. Over the years, a number of authors have written alternative implementations that remember the order that the keys were originally inserted. Based on the experiences from those implementations, a new configparser module uses them by default. This lets configuration files be read, modified, and then written back in their original order. The _asdict() method for collections.namedtuple() now returns an ordered dictionary with the values appearing in the same order as the underlying tuple indicies. The json module is being built-out with an object_pairs_hook to allow OrderedDicts to be built by the decoder. Support was also added for third-party tools like PyYAML.
See also.
The builtin format() function and the str.format() method use a mini-language that now includes a simple, non-locale aware way to format a number with a thousands separator. That provides a way to humanize a program’s output, improving its professional appearance and readability:
>>> format(1234567, ',d') '1,234,567' >>> format(1234567.89, ',.2f') '1,234,567.89' >>> format(12345.6 + 8901234.12j, ',f') '12,345.600000+8,901,234.120000j' >>> format(Decimal('1234567.89'), ',f') '1,234,567.89'
The supported types are int, float, complex and decimal.Decimal.
Discussions are underway about how to specify alternative separators like dots, spaces, apostrophes, or underscores. Locale-aware applications should use the existing n format specifier which already has some support for thousands separators.
Some smaller changes made to the core Python language are:
Directories and zip archives containing a __main__.py file can now be executed directly by passing their name to the interpreter. The directory/zipfile is automatically inserted as the first entry in sys.path. (Suggestion and initial patch by Andy Chu; revised patch by Phillip J. Eby and Nick Coghlan; issue 1739468.)
The int() type.)
The string.maketrans() function is deprecated and is replaced by new syntax of the with statement now allows multiple context managers in a single statement:
>>> with open('mylog.txt') as infile, open('a.out', 'w') as outfile: ... for line in infile: ... if '<critical>' in line: ... outfile.write(line)
With the new syntax, the contextlib.nested() function is no longer needed and is now deprecated.
(Contributed by Georg Brandl and Mattias Brändström; appspot issue 53094.)
round(x, n) now returns an integer if x is an integer. Previously it returned a float:
>>> round(1123, -2) 1100
(Contributed by Mark Dickinson; issue 4707.)
Python now uses David Gay’s algorithm for finding the shortest floating point representation that doesn’t change its value. This should help mitigate some of the confusion surrounding binary floating point numbers.
The significance is easily seen with a number like 1.1 which does not have an exact equivalent in binary floating point. Since there is no exact equivalent, an expression like float('1.1') evaluates to the nearest representable value which is 0x1.199999999999ap+0 in hex or 1.100000000000000088817841970012523233890533447265625 in decimal. That nearest value was and still is used in subsequent floating point calculations.
What is new is how the number gets displayed. Formerly, Python used a simple approach. The value of repr(1.1) was computed as format(1.1, '.17g') which evaluated to '1.1000000000000001'. The advantage of using 17 digits was that it relied on IEEE-754 guarantees to assure that eval(repr(1.1)) would round-trip exactly to its original value. The disadvantage is that many people found the output to be confusing (mistaking intrinsic limitations of binary floating point representation as being a problem with Python itself).
The new algorithm for repr(1.1) is smarter and returns '1.1'. Effectively, it searches all equivalent string representations (ones that get stored with the same underlying float value) and returns the shortest representation.
The new algorithm tends to emit cleaner representations when possible, but it does not change the underlying values. So, it is still the case that 1.1 + 2.2 != 3.3 even though the representations may suggest otherwise.
The new algorithm depends on certain features in the underlying floating point implementation. If the required features are not found, the old algorithm will continue to be used. Also, the text pickle protocols assure cross-platform portability by using the old algorithm.
(Contributed by Eric Smith and Mark Dickinson; issue 1580)
Added a collections.Counter class to support convenient counting of unique items in a sequence or iterable:
>>> Counter(['red', 'blue', 'red', 'green', 'blue', 'blue']) Counter({'blue': 3, 'red': 2, 'green': 1})
(Contributed by Raymond Hettinger; issue 1696199.)
Added a new module, tkinter.ttk for and bz2.BZ2File classes now support the context manager protocol:
>>> # Automatically close file after writing >>> with gzip.GzipFile(filename, "wb") as f: ... f.write(b"xxx")
(Contributed by Antoine Pitrou.)
The decimal module now supports methods for creating a decimal object from a binary float. The conversion is exact but can sometimes be surprising:
>>> Decimal.from_float(1.1) Decimal('1.100000000000000088817841970012523233890533447265625')
The long decimal result shows the actual binary fraction being stored for 1.1. The fraction has many digits because 1.1 cannot be exactly represented in binary.
(Contributed by Raymond Hettinger and Mark Dickinson.)
The itertools module grew two new functions. The itertools.combinations_with_replacement() function is one of four for generating combinatorics including permutations and Cartesian products. The itertools.compress() function mimics its namesake from APL. Also, the existing itertools.count() function now has an optional step argument and can accept any type of counting sequence including fractions.Fraction and decimal.Decimal:
>>> [p+q for p,q in combinations_with_replacement('LOVE', 2)] ['LL', 'LO', 'LV', 'LE', 'OO', 'OV', 'OE', 'VV', 'VE', 'EE'] >>> list(compress(data=range(10), selectors=[0,0,1,1,0,1,0,1,0,0])) [2, 3, 5, 7] >>> c = count(start=Fraction(1,2), step=Fraction(1,6)) >>> [next(c), next(c), next(c), next(c)] [Fraction(1, 2), Fraction(2, 3), Fraction(5, 6), Fraction(1, 1)]
(Contributed by Raymond Hettinger.)
collections.namedtuple() now supports a keyword argument rename which lets invalid fieldnames be automatically converted to positional names in the form _0, _1, etc. This is useful when the field names are being created by an external source such as a CSV header, SQL field list, or user input:
>>> query = input() SELECT region, dept, count(*) FROM main GROUPBY region, dept >>> cursor.execute(query) >>> query_fields = [desc[0] for desc in cursor.description] >>> UserQuery = namedtuple('UserQuery', query_fields, rename=True) >>> pprint.pprint([UserQuery(*row) for row in cursor]) [UserQuery(region='South', dept='Shipping', _2=185), UserQuery(region='North', dept='Accounting', _2=37), UserQuery(region='West', dept='Sales', _2=419)]
(Contributed by Raymond Hettinger; issue 1818.)
The re.sub(), re.subn() and re.split() functions now accept a flags parameter.
(Contributed by Gregory Smith.)
The logging module now implements a simple logging.NullHandler class for applications that are not using logging but are calling library code that does. Setting-up a null handler will suppress spurious warnings such as “No handlers could be found for logger foo”:
>>> h = logging.NullHandler() >>> logging.getLogger("foo").addHandler(h)
(Contributed by Vinay Sajip; issue 4384).
The runpy module which supports the -m command line switch now supports the execution of packages by looking for and executing a __main__ submodule when a package name is supplied.
(Contributed by Andi Vajda; issue 4195.)
The pdb module can now access and display source code loaded via zipimport (or any other conformant PEP 302 loader).
(Contributed by Alexander Belopolsky; issue 4201.)
functools.partial objects can now be pickled.
(Suggested by Antoine Pitrou and Jesse Noller. Implemented by Jack Diederich; issue 5228.)
Add pydoc help topics for symbols so that help('@') works as expected in the interactive environment.
(Contributed by David Laban; issue 4739.)
The unittest module now supports skipping individual tests or classes of tests. And it supports marking a test as a expected failure, a test that is known to be broken, but shouldn’t be counted as a failure on a TestResult:
class TestGizmo(unittest.TestCase): @unittest.skipUnless(sys.platform.startswith("win"), "requires Windows") def test_gizmo_on_windows(self): ... @unittest.expectedFailure def test_gimzo_without_required_library(self): ...
Also, tests for exceptions have been builtout to work with context managers using the with statement:
def test_division_by_zero(self): with self.assertRaises(ZeroDivisionError): x / 0
In addition, several new assertion methods were added including assertSetEqual(), assertDictEqual(), assertDictContainsSubset(), assertListEqual(), assertTupleEqual(), assertSequenceEqual(), assertRaisesRegexp(), assertIsNone(), and assertIsNotNone().
(Contributed by Benjamin Peterson and Antoine Pitrou.)
The io module has three new constants for the seek() method SEEK_SET, SEEK_CUR, and SEEK_END.
The sys.version_info tuple is now a named tuple:
>>> sys.version_info sys.version_info(major=3, minor=1, micro=0, releaselevel='alpha', serial=2)
(Contributed by Ross Light; issue 4285.)
The nntplib and imaplib modules now support IPv6.
(Contributed by Derek Morr; issue 1655 and issue 1664.)
The pickle module has been adapted for better interoperability with Python 2.x when used with protocol 2 or lower. The reorganization of the standard library changed the formal reference for many objects. For example, __builtin__.set in Python 2 is called builtins.set in Python 3. This change confounded efforts to share data between different versions of Python. But now when protocol 2 or lower is selected, the pickler will automatically use the old Python 2 names for both loading and dumping. This remapping is turned-on by default but can be disabled with the fix_imports option:
>>> s = {1, 2, 3} >>> pickle.dumps(s, protocol=0) b'c__builtin__\nset\np0\n((lp1\nL1L\naL2L\naL3L\natp2\nRp3\n.' >>> pickle.dumps(s, protocol=0, fix_imports=False) b'cbuiltins\nset\np0\n((lp1\nL1L\naL2L\naL3L\natp2\nRp3\n.'
An unfortunate but unavoidable side-effect of this change is that protocol 2 pickles produced by Python 3.1 won’t be readable with Python 3.0. The latest pickle protocol, protocol 3, should be used when migrating data between Python 3.x implementations, as it doesn’t attempt to remain compatible with Python 2.x.
(Contributed by Alexandre Vassalotti and Antoine Pitrou, issue 6137.)
A new module, importlib was added. It provides a complete, portable, pure Python reference implementation of the import statement and its counterpart, the __import__() function. It represents a substantial step forward in documenting and defining the actions that take place during imports.
(Contributed by Brett Cannon.).
(Contributed by Amaury Forgeot d’Arc and Antoine Pitrou.)
Added a heuristic so that tuples and dicts containing only untrackable objects are not tracked by the garbage collector. This can reduce the size of collections and therefore the garbage collection overhead on long-running programs, depending on their particular use of datatypes.
(Contributed by Antoine Pitrou, issue 4688.)
Enabling a configure option named --with-computed-gotos on compilers that support it (notably: gcc, SunPro, icc), the bytecode evaluation loop is compiled with a new dispatch mechanism which gives speedups of up to 20%, depending on the system, the compiler, and the benchmark.
module’s format menu now provides an option to strip trailing whitespace from a source file.
(Contributed by Roger D. Serwy; issue 5150.)
Changes to Python’s build process and to the C API include: sys.int_info that provides information about the internal format, giving the number of bits per digit and the size in bytes of the C type used to store each digit:
>>> import sys >>> sys.int_info sys.int_info(bits_per_digit=30, sizeof_digit=4)
(Contributed by Mark Dickinson; issue 4258.)
The PyLong_AsUnsignedLongLong() function now handles a negative pylong by raising OverflowError instead.)
Added PyCapsule as a replacement for the PyCObject API. The principal difference is that the new type has a well defined interface for passing typing safety information and a less complicated signature for calling a destructor. The old type had a problematic API and is now deprecated.
(Contributed by Larry Hastings; issue 5630.)
This section lists previously described changes and other bugfixes that may require changes to your code:
The new floating point string representations can break existing doctests. For example:
def e(): '''Compute the base of natural logarithms. >>> e() 2.7182818284590451 ''' return sum(1/math.factorial(x) for x in reversed(range(30))) doctest.testmod() ********************************************************************** Failed example: e() Expected: 2.7182818284590451 Got: 2.718281828459045 **********************************************************************
The automatic name remapping in the pickle module for protocol 2 or lower can make Python 3.1 pickles unreadable in Python 3.0. One solution is to use protocol 3. Another solution is to set the fix_imports option to False. See the discussion above for more details. | http://docs.python.org/release/3.1.5/whatsnew/3.1.html | CC-MAIN-2013-48 | refinedweb | 2,075 | 51.44 |
pthread_once - dynamic package initialisation
#include <pthread.h> int pthread_once(pthread_once_t *once_control, void (*init_routine)(void)); pthread_once_t once_control = PTHREAD_ONCE_INIT;
The first call to pthread_once() by any thread in a process, with a given once_control, will call the init_routine() with no arguments. Subsequent calls of pthread_once() with the same once_control will not call the init_routine(). On return from pthread_once(), it is guaranteed that init_routine() has completed. The once_control parameter is used to determine whether the associated initialisation routine has been called.
The function pthread_once() is not a cancellation point. However, if init_routine() is a cancellation point and is canceled, the effect on once_control is as if pthread_once() was never called.
The constant PTHREAD_ONCE_INIT is defined by the header <pthread.h>.
The behaviour of pthread_once() is undefined if once_control has automatic storage duration or is not initialised by PTHREAD_ONCE_INIT.
Upon successful completion, pthread_once() returns zero. Otherwise, an error number is returned to indicate the error.
No errors are defined.
The pthread_once() function will not return an error code of [EINTR].
None.
None.
None.
<pthread,h>.
Derived from the POSIX Threads Extension (1003.1c-1995) | http://pubs.opengroup.org/onlinepubs/7908799/xsh/pthread_once.html | CC-MAIN-2014-15 | refinedweb | 180 | 50.43 |
import a form from Adobe form Central.Asked by ritaadamsfwgmailcom on June 02, 2015 at 05:00 PM
I asked that the form be imported and I put in the URL of the webpage. Tried this twice, I got the email that the form is being imported and may take a while, then nothing, It is a registration form.
- JotForm Support
When did you import the form? Is that URL one of your forms created in your Adobe Forms central account? It seems it's not, because it takes you to a login page:
Please make sure you put the right form's URL, it should be something like this: | https://www.jotform.com/answers/580723-I-am-trying-to-import-a-form-from-Adobe-form-Central- | CC-MAIN-2017-04 | refinedweb | 110 | 87.35 |
computer with Linux Ubuntu 16.04.
If you’re using a different operating system, make sure you follow the right guide:
After installing uPyCraft IDE in your computer, we recommend reading: Getting Started with MicroPython on ESP32 and ESP8266.
Installing Python 3.X – Linux Ubuntu
Before installing the uPyCraft IDE, make sure you have Python 3.X installed in your computer. If you don’t, follow the next instructions to install Python 3.X. Run this command to install Python 3 and pip:
$ sudo apt install python3 python3-pip
Installing uPyCraft IDE – Linux Ubuntu 16.04.
IMPORTANT: at the time of writing this guide, uPyCraft IDE is only tested on Linux Ubuntu 16.04. If you want to run it on a different Ubuntu version or Linux distribution, we recommend using uPyCraft IDE source code and compile the software yourself.
Downloading uPyCraft IDE for Linux Ubuntu 16.04
Click here to download uPyCraft IDE for Linux Ubuntu 16.04 or go to this link.
Open your Terminal window, navigate to your Downloads folder and list all the files:
$ cd Downloads $ ls -l uPyCraft_linux_V1.X
You should have a similar file (uPyCraft_linux_V1.X) in your Downloads folder. You need to make that file executable with the following command:
$ chmod +x uPyCraft_linux_V1.X
Then, to open/run the uPyCraft IDE software, type the next command:
$ ./uPyCraft_linux_V1.X Linux Ubuntu. If you have a different operating system, read one of the following guides:
Learn more about MicroPython with our eBook: MicroPython Programming with ESP32 and ESP8266
15 thoughts on “Install uPyCraft IDE – Linux Ubuntu Instructions”
> ./uPyCraft_linux_V1.0
Traceback (most recent call last):
File “uPyCraft.py”, line 2, in
File “/usr/local/lib/python3.5/dist-packages/PyInstaller/loader/pyimod03_importers.py”, line 714, in load_module
ImportError: /tmp/_MEIz6zimz/libz.so.1: version `ZLIB_1.2.9′ not found (required by /usr/lib/x86_64-linux-gnu/libpng16.so.16)
[10228] Failed to execute script uPyCraft
I don’t know, how to fix this error 🙁
Hi Robert.
At the moment, uPyCraft IDE is only tested on Linux Ubuntu 16.04. If you want to run it on a different Ubuntu version or Linux distribution, we recommend using uPyCraft IDE source code and compile the software yourself.
If it doesn’t work, I recommend following the suggestions on the following article and try a modified version:
longervision.github.io/2018/09/15/Embedded/uPyCraft_PyQt5/
Let me know if it helped.
Regards,
Sara
I am currently working on uPyCraft on Linux because I will need it for a workshop. I started off with the PyQt5 version I found on github. Unfortunately nothing much was working. In the meantime I can edit, save files, upload them to the micro-controller and start them…
However, there are still a few bugs to be ironed out. I think that in a month or so I should have a working version. Are you interested to test it?
Hi Uli.
Are you developing a new version of uPyCraft IDE for Linux?
If it works properly, we’ll be glad to share it with our readers.
Regards,
Sara
Maybe I have understood that I have an Ubuntu 32bit and the software isn’t compatible.
I try to find the 32bit version, if exist. I use for Arduino and ESP an old but useful notebook 32bit.
Thanks, Sara and Rui.
Roberto
uPyCraft V1.1 working on Ubuntu 18
create a virtual env:
1. CD to the location where you want to create your virtual environment
[RUN]: python3 -m venv myVirtEnv
2. To activate the environment
[RUN]: source myVirtEnv/bin/activate
2.a If you don’t create a virtual environment, the uPyCraft v1.0 will break a libz package that is old. Seems there is more risk in breaking your OS by downgrading that package.
3. Follow the steps here:
github.com/jiapei100/uPyCraft_PyQt5
Hi.
Thank you for sharing that information.
It will certainly be useful for our readers.
Regards,
Sara
Hello nebulous,
I’m quite new on ubuntu , I’m using 18.04 and follow your instruction which I was struck on running ~/uPycraft$ pyinstaller -F uPyCraft.py , please see below error message. Could you please help suggest more?
(myVirtEnv) [email protected]:~/uPycraft$ pyinstaller -F uPyCraft.py
Traceback (most recent call last):
File “/home/wr300000/.local/bin/pyinstaller”, line 6, in
from pkg_resources import load_entry_point
File “/home/wr300000/uPycraft/myVirtEnv/lib/python3.6/site-packages/pkg_resources/__init__.py”, line 3088, in
@_call_aside
File “/home/wr300000/uPycraft/myVirtEnv/lib/python3.6/site-packages/pkg_resources/__init__.py”, line 3072, in _call_aside
f(*args, **kwargs)
File “/home/wr300000/uPycraft/myVirtEnv/lib/python3.6/site-packages/pkg_resources/__init__.py”, line 3101, in _initialize_master_working_set
working_set = WorkingSet._build_master()
File “/home/wr300000/uPycraft/myVirtEnv/lib/python3.6/site-packages/pkg_resources/__init__.py”, line 574, in _build_master
ws.require(__requires__)
File “/home/wr300000/uPycraft/myVirtEnv/lib/python3.6/site-packages/pkg_resources/__init__.py”, line 892, in require
needed = self.resolve(parse_requirements(requirements))
File “/home/wr300000/uPycraft/myVirtEnv/lib/python3.6/site-packages/pkg_resources/__init__.py”, line 778, in resolve
raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The ‘PyInstaller==3.4’ distribution was not found and is required by the application
(myVirtEnv) [email protected]:~/uPycraft$
hello dear all – i did all you aviced me to do – on a MX-Linux-System.
but at the end i cannot see any port!?
what goes on here!? What can i doi?!
any and all advices will be greatly appreciated
hi there – while i tried to create a virtual environment i get the error.: [‘/home/martin/myVirtEnv/bin/python3’, ‘-Im’, ‘ensurepip’, ‘–upgrade’, ‘–default-pip’]
hello dear all – i am workin on MX-Linux which is a Debian based version of Linux. At the moment i am facing issues on this version.
does any body can confirm that
– /uPyCraft is running on ubuntu
– but not on other linux-systems /(even if they are also debian based like ubuntu is!?
love to hear from you
It didn’t run on Raspian Buster (Raspberry Pi) neither.
Download en compile the sourcecode, all steps are described !
easy peasy, it did take me less then 15 minutes to get it up and running on RPI4 🙂
(I’m used to it, by using it on my mac with LiliGo TTGO T8 boards, so this was an obvious choice for me)
In the meanwhile, I did write a step-by-step guide to get uPyCraft up-and-running on Debian Buster (on Raspberry Pi)
Enjoy 🙂 | https://randomnerdtutorials.com/install-upycraft-ide-linux-ubuntu-instructions/?replytocom=367371 | CC-MAIN-2020-45 | refinedweb | 1,069 | 59.9 |
The question is to write a function that dynamically allocates an array of integers. the function shoud accept an integer argument indicating the number of elements to allocate The function shoud return a pointer to the array
Here is what i have so far, and it kinda works but at the very end after it runs i get a program error.
#include <iostream> using namespace std; int main() { int numElements; // To hold the number of elements to allocate int *values; // A pointer to the array int count; // A loop counter // Get the array size. cout << "\nEnter an array size: " << endl; cin >> numElements; //Return null if num is zero or negative. if (numElements <= 0) return NULL; // Allocate the array. values = new int[numElements]; //Get the value for each element. cout << "Enter the value of the elements below.\n"; for (count = 0; count < numElements; count++) { cout << "Value #" << (count + 1) << ": "; cin >> values[count]; } //Return a pointer to the array. cout << "The array holds the numbers you entered: \n"; for (count = 0; count < numElements; count++) { cout << *values << " "; values++; } //Free dynamically allocated memory. delete [] values; values = 0; //makes elements NULL. return 0; }
I can't figure out what im doing wrong, i believe i answered the question correctly but i can't find where i mest up on the code that keeps on getting me the error debug assertion failed. Could some one Please help. My instructor also suggested using // Allocate the array.
pointer = arrayAllocator(numElements);
instead of //Dynamically allocate the array
element = new int[num];
But i couldn't get his recommendation to work. | https://www.daniweb.com/programming/software-development/threads/204461/array-allocator-please-assist | CC-MAIN-2017-26 | refinedweb | 260 | 53.21 |
Want to help? Try a newcomer feature! | Roadmap
Hi All. I'm after some advice about how best to replace a blueocean plugin, blueocean-pipeline-api-impl in this case, with a private version. For background, my working environment blocks direct access to GitHub and provides a local Artifactory instance from which I fetch mirrored plugins and to which I deploy my own version. The path I've been down is to mimic the the blueocean-plugin (maven parent) and declare my own plugin (eg blueocean-pipeline-api-impl :1.24.5-myplugin) as a dependency. Things work with hpi:run and I can run blueocean successfully and get the functional changes I want from my own blueocean-pipeline-api-impl . My replacement parent uses my own namespace eg com.mycompany and I ended up having to use io.jenkins in the private Artifactory to succeed with the child plugin. All good until I try to deploy to a staging environment where, when I bake my Dockerised Jenkins image (derived from jenkinsci/docker) using the jenkins-plugin-cli to add my child and parent plugins, the cli instead resolves the original plugin. I've gone through each plugin that declares a dependency on blueocean-pipeline-api-impl and added an exception in the parent plugin to try and workaround this but to no avail.
What I'm after is guidance on the best approach to take for this case of a bespoke version of a child plugin. It may be that my only problem is now with jenkins-plugin-cli but it's also likely that someone can point me to a better pattern for how to go about the whole thing.
2021-06-14 09:43:20.175+0000 [id=11] WARNING o.e.jetty.server.HttpChannel#handleException: /reload-configuration-as-code
@DJViking I think most of the original Cloudbees crew who worked on BlueOcean have left Cloudbees. The way that BlueOcean was implemented was a "big bang" that adopted technologies not used before in the core Jenkins stack.
Unfortunately, BlueOcean is a bit complicated (combinations of Java + NodeJS), so it is tough for the community to step in and maintain the code, and keep it going.
It's too bad, because the visualization of pipelines is quite nice in BlueOcean, and nothing else I have seen in the Jenkins ecosystem comes close.
It looks like the latest commits to the jenkinsci/blueocean-plugin repository are trivial commits to bump up dependencies.
shsteps to have a
label:field, but Blue Ocean is only using some of them properly in the UI | https://gitter.im/jenkinsci/blueocean-plugin?source=orgpage | CC-MAIN-2022-21 | refinedweb | 432 | 53.71 |
Last month, I bogged about Unit Testing ViewModels AND Views using the Silverlight Unit Testing Framework. I wanted to take that post a step further and talk about some more advanced testing scenarios that are possible.
The site itself provides a lot of information about how to get started and what is available with the framework. One thing to keep in mind that is a radical shift from other testing frameworks is that the Silverlight testing framework runs on the UI thread. This means it does not spawn multiple threads for multiple tests and in fact requires the tests to run “one at a time” so they can take advantage of the test surface that is supplied.
This is a bit different than other frameworks but in my opinion, makes a lot of sense when dealing with Silverlight. The framework provides incredible flexibility for configuring and categorizing your tests.
If you are searching for a very comprehensive example of the framework in use, look no further than the Silverlight Toolkit. This comes with all source code and in fact uses the testing framework for its tests. You will find not only advanced scenarios for testing, but thousands upon thousands of tests! (I also am guessing that a new standard for PC performance has been invented by mistake … let’s all compile the entire toolkit and compare how long it takes!)
Tagging Tests
One thing you’ll find if you run the toolkit tests is that you can enter a tag to filter tests. For example, type in “Accordion” to only run the 800 or so unit tests for accordion-type controls.
To use tag functionality, simply “tag” your test like this:
[TestClass] [Tag("MEF")] public class PartModuleTest { }
I’ve tagged the test to be a MEF-related test. When I wire up the framework, I can filter the tag like this:
UnitTestSettings settings = UnitTestSystem.CreateDefaultSettings(); settings.TagExpression = "MEF"; this.RootVisual = UnitTestSystem.CreateTestPage(settings);
When I run the tests, only my tests tagged with MEF will run! The toolkit provides an example of a UI that allows you to select the tag, then run the test.
Asynchronous Tests
It is often necessary to test methods that are asynchronous or require event coordination. An example may be a service that must wait on return values, or a user control that must be loaded into the framework before you can test it. The Silverlight Unit Testing Framework provides the
Asynchronous tag to facilitate this type of test. This tells the framework not to move onto the next test nor consider the current test method complete until an explicit call to
TestComplete is made.
There are several “helper” methods supplied for asynchronous processing that we’ll explore in a minute. To use these methods requires inheriting from one of the base test classes such as
SilverlightTest which provides the methods as well as the test surface to add controls to.
In PRISM, MEF, and MVVM Part 1 of 3: Unity Glue I explored various options for binding the view model to the view. The 3rd and final method I reviewed was using an attached behavior. I would like to write some unit tests for that behavior (indeed, if I were a test-driven development or TDD purist, I would have written those tests first).
In order to test the behavior, I need to attach it to a
FrameworkElement and then validate it has done what I expected it to do. But how do I go about doing that in our unit test environment?
Attached Behaviors
Similar to other controls in other frameworks, Silverlight controls have a distinct life cycle. It varies slightly depending on whether the control has been generated in XAML or through code. There is a great summary table of these events on Dave’s Blog. What’s important to note is that values and properties are set as soon as you, well, set them, but bindings don’t take effect until they are inserted into the visual tree. In XAML, the XAML node becomes part of the tree and fires the
Loaded event once it is fully integrated. In code, this happens after the element is added as the child of some other element that is in the tree. This allows Silverlight to parse the hierarchy and propagate dependency properties.
So what we essentially want to do is take our behavior, attach it to an element, and then wait for the
Loaded event to fire so we can inspect the element and see that it has been modified accordingly (in this case, we expect that the
DataContext property has been set to our view model).
Setting up the Project
The testing framework provides some handy templates for getting started. I add a new project and select the Silverlight Test Project template. I then add references to the projects I’ll be testing and the supporting frameworks like PRISM and MEF.
Next, I’ll want to build some helper classes to help me test my functionality.
Helper Classes
I like to create a folder called
Helper and place my stubs, mocks, and other helper classes there. These may be utilities, like the Exception Expected utility I use, or classes that are used for the testing framework.
First, I’ll create a test view model with a simple string and string collection property for testing:
public class TestViewModel { public TestViewModel() { ListOfItems = new List<string>(); } public TestViewModel(List<string> items) { ListOfItems = items; } public string Property { get; set; } public List<string> ListOfItems { get; set; } }
If my view models have common methods described in a base class or interface, I might use a mocking framework to mock the class instead.
The Test Class
The behavior I created has an affinity to the Unity inversion of control (IoC) container. It could be refactored otherwise, but it made sense for the sake of the demonstration. Therefore, I’ll need to have a container for testing, as well as the view model. My test class starts out looking like this (notice I base it on
SilverlightTest):
[TestClass] public class ViewModelBehaviorTest : SilverlightTest { const string TESTPROP = "Test Property"; IUnityContainer _container; TestViewModel _viewModel; [ClassInitialize] public void ClassInitialize() { _container = new UnityContainer(); _viewModel = new TestViewModel() { Property = TESTPROP }; _container.RegisterInstance<TestViewModel>(_viewModel); ViewModelBehavior.Container = _container; } }
I create a reference to the entire test class for the container and the test view model. When the class is initialized (this is one-time setup, before all tests are run) I create a container, a view model, and tell the container that anytime someone asks for the view model, give them the specific instance I created. I also set the container on the type for the view model behavior class, so it knows what to use when resolving the view model.
The Code Behind Test
For my first test, I’ll programmatically attach the behavior and test that it works. The view model behavior takes in a string that is the fully qualified type name for the view model, and then uses the unity container to resolve it. Therefore, my test looks like this:
[TestMethod] [Asynchronous] [Description("Test creating an element and attaching in code behind.")] public void TestAttach() { TextBlock textBlock = new TextBlock(); textBlock.SetValue(ViewModelBehavior.ViewModelProperty, typeof(TestViewModel).AssemblyQualifiedName); textBlock.Loaded += (o, e) => { Assert.IsNotNull(textBlock.DataContext, "The data context was never bound."); Assert.AreSame(textBlock.DataContext, _viewModel, "The data context was not bound to the correct view model."); EnqueueTestComplete(); }; TestPanel.Children.Add(textBlock); }
There’s a few things going on here, so let’s break them down!
The
TestMethod attribute tags this method to be run by the framework. It is decorated with a description, which I can view on the output when the test is run and helps make the test more, ah, descriptive. The first thing I do is create a test block and attach the view model property. Here, I’m taking the test view model and getting the fully qualified name and using that to set the attached property. We want to make sure everything works fine and there are no errors during binding, so this is where the asynchronous pieces come into play.
The
Asynchronous tag tells the framework that we’re waiting on events, so don’t consider this test complete until we explicitly tell the framework it’s complete. When the text block fires the
Loaded event, we confirm that the data context is not null and that it in fact contains the exact instance of the view model we created in the class initialization. Then we tell the framework the test is complete by calling
EnqueueTestComplete, which is provided by the base class.
Finally, if you were to run this without the last line, the test would stall because the text block would never get loaded. We add it as a child of the test surface, and this injects it into the visual tree and fires the loaded event.
The XAML Test
I’m not completely confident with this test because the whole reason for creating a behavior was so I could attach the view model in XAML and not use code behind. Therefore, I should really test attaching this behavior through XAML. So, at the top of the test class we’ll create the necessary XAML and wrap it in a
UserControl:
const string" + "<Grid x:" + "<ListBox x:" + "</Grid></UserControl>";
If you think the constant is ugly, you can always add an actual XAML file, set it as an embedded resource, then read it in instead. That would give you the full functionality of the editor to tweak the test code. Here, we simply create a control with a grid and a list box. The list box uses the attached behavior and also binds the list.
I want to test the list binding as well, so I add a collection to my test class:
. private static readonly List<string> _testCollection = new List<string> { "test1", "test2" }; .
In the class initialize method, I’ll pass this into the view model’s constructor so it is set on the
ListOfItems property.
Now, we can create the control from XAML, load it, and test it:
[TestMethod] [Asynchronous] [Description("Test creating from XAML")] public void TestFromXaml() { UserControl control = XamlReader.Load(TESTXAML) as UserControl; control.Loaded += (o, e) => { ListBox listBox = control.FindName("ListBox") as ListBox; Assert.IsNotNull(listBox, "ListBox was not found."); Assert.IsNotNull(listBox.DataContext, "The data context was never bound."); Assert.AreSame(listBox.DataContext, _viewModel, "The data context was not bound to the correct view model."); IEnumerable<string> list = listBox.ItemsSource as IEnumerable<string>; List<string> targetList = new List<string>(list); CollectionAssert.AreEquivalent(targetList, _testCollection, "Collection not properly bound."); EnqueueTestComplete(); }; TestPanel.Children.Add(control); }
Now we load the control from XAML and wire in the
Loaded event to test for the data context and the instance. Then, I take the items from the list box itself and compare them with the original list using
CollectionAssert. The
AreEquivalent does a set comparison. Then we signal the test is complete.
There’s no code for this example because it was very straightforward and I’ll likely be posting a more comprehensive example in the future as the result of a talk I’ll be giving. Be sure to tune into MSDN’s geekSpeak on Wednesday, February 17th, 2010 when I will be the guest to cover exclusively the topic of the Silverlight Unit Testing Framework (the talks are all stored on the site in case you read this after the event).
Thanks! | https://www.wintellect.com/silverlight-unit-testing-framework-asynchronous-testing-of-behaviors/ | CC-MAIN-2021-43 | refinedweb | 1,909 | 60.24 |
Update: to preserve the 'gtm.' namespace for native GTM features only, I have changed the prefix of the custom event name from 'gtm.' to 'event.'
When the good folks at Mountain View introduced auto-event tracking for Google Tag Manager, a collective sigh of relief was heard around the world (I'm just slightly exaggerating).
Finally, the true power of GTM was unleashed.
With auto-event tracking, one of the more difficult aspects of web analytics, tracking user interactions beyond the page load, was greatly simplified.
However, as a species we are in a perpetual state of dissatisfaction.
When we got the Click, the Link Click and the Form Submit, we wanted more. So then we got the History listener. Again, we wanted more. And we got the Error listener. And then we wanted more again.
To satisfy this undying thirst for more listeners, the community has been very helpful. In fact, before you read on, I want you to familiarize yourself with Doug Hall’s excellent post on extending GTM’s auto-event listeners. The following guide will expand upon the ideas put forth by Doug, while striving to approach the elegance of his prose and wisdom.
I’ve also written many posts on GTM’s listeners. I’ll link to them at the end of this article.
What follows is a step-by-step guide to creating a generic listener for all the various events you want to capture that are not yet captured by GTM’s own listeners. Once GTM introduces a proprietary listener for whatever event you want to handle, it would be best to start using that.
The listener prototype
The prototype of the custom event listener will require two components:
- A Custom HTML Tag for each listener type you want to activate (e.g. change, blur, copy)
- A generic Custom JavaScript Macro, which returns the handler function that pushes the event into the dataLayer
As you can see, it looks pretty simple. And it is. As long as you observe proper design patterns and best practices, these two components will get you far.
The generic event handler macro
Let’s start with the macro, since it’s the truly generic component here.
The macro returns a function, which serves as the handler for the listener you’ll create in the next step. For this function to be as generic as possible, it will be agnostic as to what the event was. It will do this by accepting the Event object as its parameter, and parsing that for all the data it needs.
Showing is easier than telling, so here’s the code for you to copy-paste:
Macro Name: {{generic event handler}}
Macro Type: Custom JavaScript
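Assembled from the line-by-line walkthrough that follows, the macro body looks like this. It is shown here wrapped in a named function so the sketch can run outside GTM; in the Custom JavaScript Macro itself you would keep only the body, starting with return function(e):

```javascript
// {{generic event handler}} — Custom JavaScript Macro body.
function genericEventHandler() {
  return function (e) {
    dataLayer.push({
      'event': 'event.' + e.type,                      // e.g. "event.change"
      'gtm.element': e.target,                         // element the event fired on
      'gtm.elementClasses': e.target.className || '',  // class attribute, or ''
      'gtm.elementId': e.target.id || '',              // id attribute, or ''
      'gtm.elementTarget': e.target.target || '',      // target attribute, or ''
      'gtm.elementUrl': e.target.href || e.target.action || '',
      'gtm.originalEvent': e                           // expose the raw event object
    });
  };
}
```

Whichever listener fires, the same handler pushes a uniform object into dataLayer.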
This, in my opinion, is a pretty close emulation of how GTM’s listeners work. What’s important is that the object pushed into dataLayer does its best to mimic the design pattern of GTM’s own listeners.
Let’s go over the code line-by-line.
return function(e) { ... } is the wrapper. The point here is that this macro must return a function, since the listener we’ll create in the following chapter requires a function or object as a parameter. If the macro wouldn’t return a function but rather run the code itself, you’d be sending some weird empty events every time the listener tag is written to the page template.
dataLayer.push({ ... }); is where the magic happens. Here the triggered event is pushed into dataLayer following the design patterns of GTM's own listeners. Now, you can observe these patterns if you want, or you can use your own syntax. I'm a big fan of symmetry, which is why I prefer to follow GTM's syntax. This might result in some conflicts if GTM introduces a similar event listener out-of-the-box, but in that case it would be a good idea to migrate to this proprietary listener anyway.
'event': 'event.'+e.type, pushes a value into the ‘event’ data layer variable. It takes the
type property from the event object, which is a string representing what type of listener was fired, e.g. “change”, “blur”, “copy”. So if the event was of type change, this push would actually look like:
'event': 'event.change'. (By the way, remember E-Type?)
'gtm.element': e.target, pushes the element the event occurred on into the dataLayer variable ‘gtm.element’. This is a design pattern, and if you look at your click listeners and link click listeners, a similar object is always found in those as well. The idea here is that you can then use your Data Layer Variable Macro to explore properties of this ‘gtm.element’ object if you want to dig deeper into the DOM.
'gtm.elementClasses': e.target.className || '', pushes the class of the event target into the variable ‘gtm.elementClasses’. If there is no class on the HTML object, an empty string is pushed instead.
'gtm.elementId': e.target.id || '', pushes the ID of the event target into the variable ‘gtm.elementId’. If there is no ID on the HTML object, an empty string is pushed instead.
'gtm.elementTarget': e.target.target || '', pushes the target of the event target (sounds weird) into the variable ‘gtm.elementTarget’. If there is no target attribute on the HTML object, an empty string is pushed instead.
'gtm.elementUrl': e.target.href || e.target.action || '', pushes either the href or the action attribute of the event target into the variable ‘gtm.elementUrl’. If there are no such attributes on the HTML object, an empty string is pushed instead.
'gtm.originalEvent': e is my own addition to this design pattern. Every now and then you might want to access the original event, exposed by the listeners. With GTM’s out-of-the-box listeners, this is currently not possible, since they only expose the
e.target property of this event object. However, especially if you want to do stuff like identify clicks created by code and not by the user, access to the event object is a must. I hope this will be a standard feature in GTM’s own listeners as well.
So that’s what the macro looks like. It’s possible some of the patterns need more work, and nothing is stopping you from extending the number of variables that are pushed into the dataLayer. I consider these variables to be the minimum set you’ll need in order to provide enough information for your tags while still remaining economical and conscious of best practices.
The listener tag
Now that we have our generic event handler, the next step is to create a tag which sets up the listener. The key here is recognizing just which listener you want to set up. Also, for symmetry’s sake I will only show how to prime the event listener on the
document node, because that’s the generic way AND the way GTM’s own proprietary listeners work. If you want, you can attach the listeners to specific DOM nodes, which reduces the risk of propagation problems, but as a solution it won’t be as generic any more.
First, here’s the listener code itself. Put this in a Custom HTML Tag, and add a firing rule which uses either {{event}} equals gtm.js if you’re attaching the listener on the
document node, or {{event}} equals gtm.dom if you’re listening on specific HTML elements.
On the first line, you specify just what type of event you want to listen for. For the full list of supported types, follow this link. Remember, you can also dispatch and listen to your own custom events, which makes this solution even more flexible. Here’s MDN’s excellent guide on creating and triggering events.
Anyway, here’s a list of some of the most popular event types:
- beforeunload – The window, the document, and all resources are about to be unloaded (e.g. when someone is closing the browser window).
- blur – An element has.
- change – The value of an element changes between receiving and losing focus (e.g. the user enters a form field, types something in, and leaves the field).
- click – A click is registered on an element (use GTM’s Click Listener instead).
- contextmenu – The right mouse button is clicked.
- copy – Text is copied to the clipboard.
- cut – Text is cut to the clipboard.
- dblclick – A double-click is registered on an element.
- focus – An element has.
- keydown – A key is pressed down.
- keyup – A pressed down key is released.
- mousedown – The mouse button is pressed down.
- mouseenter – The mouse pointer is moved over the element where the listener is attached. Won’t really work if the listener is on the
documentnode.
- mouseleave – The mouse pointer is moved off the element where the listener is attached. Won’t really work if the listener is on the
documentnode.
- mouseout – The mouse pointer is moved off the element where the listener is attached or one of its children.
- mouseover – The mouse pointer is moved over the element where the listener is attached or one of its children.
- mouseup – The pressed down mouse button is released.
- orientationchange – The orientation (portrait / landscape) of the screen changes.
- reset – A form is reset.
- scroll – A document view or element is scrolled.
- submit – A form submit is registered (use GTM’s Form Submit Listener instead).
When the Custom HTML Tag is written on the page, it attaches the listener of your choice on the document node. The event handler is the generic event handler function you created in the previous chapter. Then, when the event occurs, the function is executed with the event object as its parameter. This event object is then parsed and pushed into dataLayer with a bunch of properties that you can access with the Data Layer Variable Macro.
Example with the Change Listener
Here’s a simple example. I have some form fields on a web page. Whenever a value of a form field changes, i.e. a user writes / edits / deletes text in it, I want to push a virtual pageview with the URL path /form/<field-name>-<field-value>. So if the form field’s name is “search” and value is “GTM”, I want to send the virtual pageview with the path /form/search-GTM.
So let’s get started. I have my {{generic event handler}} macro which I created earlier, and I have my Change Listener Tag firing on {{event}} equals gtm.js as you can see:
Next, I’ll need to create two new Data Layer Variable Macros to capture the field name and field value, respectively. First, here’s the Data Layer Variable Macro {{field name}}:
As you can see, I use “(not set)” as the placeholder if the field has no name attribute.
And here’s the Data Layer Variable Macro {{field value}} for the field value:
Again, I use “(not set)” if the field has no value.
Finally, I’ll need my virtual pageview tag. It’s just a normal Universal Analytics pageview tag, but it uses the Document Path field. Also, the firing rule for this tag needs to be {{event}} equals event.change, so that the tag fires only when a ‘change’ event is registered by the custom listener.
I concatenate the string “/form/{{field name}}-{{field value}}”, since the macros are resolved at runtime, and I’ll end up with a nice, clean URL path.
I manage to test this live by typing text into the search field and leaving the field. The event.change listener fires, and my debug panel shows that the Document Path has been processed correctly:
Naturally, don’t forget to check GA’s Real Time report to verify the data is flowing in correctly.
Conclusions
This post was about creating a generic event handler for all your custom listener needs. I urge you to explore beyond the out-of-the-box setups that GTM provides. However, once there’s overlap with GTM’s features, I strongly suggest you leverage the tag manager’s own listeners, since that will ensure that they’ll stay up-to-date with possible changes under the hood. I also recommend that you try to observe best practices, and that you emulate GTM’s design patterns to your best ability.
Like I said, I’ve written a lot about GTM’s listeners in various posts. Here are some guides you might enjoy reading as well:
Why Don’t My GTM Listeners Work?
This is still one of the most asked questions I get. Please, read this post. You’ll learn about event delegation, and why so often interfering code prevents events from bubbling up to GTM’s listeners. Here’s a more recent rant on the topic: My Google+ rant.
Google Tag Manager: The DOM Listener
You can use MutationObserver to listen for changes on the page that occur without an actual event firing or page refreshing.
Google Tag Manager: The History Listener
A review of GTM’s History Listener.
Also, I allude to listeners in most of my Google Tag Manager posts, so be sure to read the rest of them if you have time.
This was a long and rather advanced guide. Go and eat an ice cream, you’ve earned it! | https://www.simoahava.com/analytics/custom-event-listeners-gtm/ | CC-MAIN-2017-30 | refinedweb | 2,220 | 72.76 |
Table of Content
Search clouds
Licenses
Author's resources
SCaVis is an environment for scientific computation, data analysis and data visualization designed for scientists, engineers and students. The program incorporates many open-source software packages into a coherent interface using the concept of dynamic scripting It includes many components. First, most obvious, is SCaVis IDE (Integrated development environment), which can be used as a programming powerful editor for desktops running Windows,Linux, OS/2 or any other system which can run Java.
This section describes SCaVis IDE for desktops and any other computers with large screens. If you need a version of SCaVis IDE for small devices with a typical screen size 600×400, you can use an alternative IDE optimised for small screens. In this case, please go to the section working_with_porto.
You can get some ideas about the capabilities of SCaVis using SCaVis examples ([Tools]→[Online examples]). Look at this clip that shows YouTube ScaVis online tutorial
The windows for online examples after selecting [Tools]→[Online examples] is shown below:
The examples are created for all supported languages, Java, Jython, Octave/Matlab, BeanShell, JRuby, Groovy. Icones with read “F” shows free examples that does not require ScaVis activation.
You can also look at this link to see what SCaVis can do: SCaVis Jython Examples.
Many components of ScaVis are free. But many services (including the full access to this manual) and jar components are only accessible for full members. This diagram shows ScaVis structure, where yellow components show GNU-licensed parts of the program.
If you are non-member, you will see the yellow boxes such as this:
Use SCaVis IDE in a similar way as any editor. It supports many programming languages: C/C++, JAVA, PHP, FORTRAN and many more. It is also specially designed for editing LaTeX files. It has several unique features, such as:
The script scavis.sh (Linux/UNIX/Mac) or scavis.bat (any Windows) starts the SCaVis IDE from the file jehep.jar, called jeHEP. Run one of these scrips depending on your platform.
Now you have a choice: either to SCaVis actually can do is to run JHPlot examples. Go to [Tools] and then [examples]. Select any Jython script from the categories and run it. You may also open a file and then click on the icon
) to execute it. The examples are located in “macros/examples” folder of your installations.
The number of examples is about 20.
For advanced users, you can access more than 200 examples from a online database. Select [Tools]-[Online examples]. Some examples are marked with red letter “F”, i.e. “free” (GNU licensed) examples. In order to access all online examples, you should activate ScaVis as [Help]-[Activate]. Here you should enter your SCavis user name.
Activation should be done via this link.
Of course one can use the editor to work with Java, NetBeans or even LaTeX files
Users of SCaVis-Pro (professional edition) receive separate updated jar files regularity. The community edition requires re-installation of the entire program. Use [Help]-[Activate] to activate SCaVis-Pro.
You may wish to activate on-fly spelling for a particular language. Copy OpenOffice dictionaries to the directory dic of the main SCaVis directory where the jehep.jar file is located. Then go to menu Tools - On-fly spelling and select active dictionary. To activate spelling, press the button Start spelling from the main menu. Note: English dictionary is already included to the downloaded package. Use double-click to replace a wrong word or to view alternative proposals To reload either the File Browser or Jython/Bean shell consoles, one should use the reload buttons located directly on small blue tabs. For bookmarks, the user should click on the right border of the jeHEP editor window. One should see a blue mark there, if the bookmark is set. One can click on it to come back to a specific text location. All preference files are locate in the
$HOME/.jehep
directory (Linux) or
$HOME/jehep.ini
for windows. They are: the user dictionary file, JabRef preference files and other initialisation files
SCaVis SCaVis Code Assist for the Editor class. Analogously, one can print all such variables using the BeanShell commands (but using the BeanShell syntax).
One important feature of this IDE is a “CodeView” feature (look at “View→Code]→[ViewCode]
The HTML version of the Jython/Python code is normally generated in a background. You can access HTML code from the directory “cachedir”.
LaTeX equations can be inserted using DragMath (See the Menu [Tools]→[DragMath]. This brings us a window where one can write a LaTeX equations. The one can insert equations using the menu [File]-[push to jeHEP editor], which will insert a LaTeX equations under the cursor.
To run a Jython script, open a Jython file inside the IDE editor and click on the *run* button (indicated with the icon
) from the ToolBar of SCaVis. This executes the script from top to the bottom. The “print” outputs are redirected to the JythonShell (at the bottom of the IDE editor).
One can also use the [F8] key for fast execution of Jython scripts.
In case of run-time error, the SCaVis
The code assist of SCaVis is based on Java serialisation mechanism and Python methods. SCaVis IDE editor:
from jhplot import * f1 = F1D("x*sin(x)",-3.0,3.0) f1. # and press [F4]
Alternatively, one can click on the icon
instead of [F4].
One can get a detailed description of this class in a sortable method and also can insert a necessary method to the place right after the dot. The table will be show as
If the object belongs to the jhplot package, you can get a detailed API information by selecting the necessary method and clicking on the menu “describe”.
Finally, almost each class of SCaVis has a method called “doc()”. Execute it as:
from jhplot import * f1 = F1D("x*sin(x)",-3.0,3.0) f1.doc()
You will on-line help in a Java WWW browser SCaVis source-code editor, use the method dir(f1) to print its methods.
If you are using Java instead of Jython and working with Eclipse or NetBeans IDE, use the code assist of these IDEs. The description of these IDEs is beyond the scope of this book.
The main idea of SCaVis is that you can use Jython scripts to access any Java library. This also means that you can access the TextArea of the scripts itself and all internal variables of the jeHEP IDE. SCaVis. Your HTML file with the equation is ready! The file is located in the directory “cachedir” together with the image of this equation.
Read more about the text processing using Java and Python in Section Text processing/12/02 19:52 | http://www.jwork.org/scavis/wikidoc/doku.php?id=man:general:working_with_ide | CC-MAIN-2015-06 | refinedweb | 1,131 | 57.16 |
… is available for download from Borland Code CentralTwo years ago I’ve been sitting at the C#Builder Train-The-Trainer course with other consultants and presales here in the Amsterdam office. Our master trainer Juan was covering some real basic stuff, so I thought I’d rather try to write a little application for learning purposes.
We were in the middle of discussion how potentially useful is C# XML Documentation feature to automate documentation creation. Quick search on the Internet revealed the existence of the MSDN Mag article XML Comments Let You Build Documentation Directly From Your Visual Studio .NET Source Files. Do we have something like this in C#Builder? Nope. So maybe we could display inside a web browser the XML doc file for the current project transformed to HTML with one of the XSL stylesheets selectable from the combo box? This was the beginning of my BDN article and the C#Builder XML Documentation Viewer Wizard.
Since Delphi 8 it is straightforward to write plug-ins for BDS with Delphi for .NET. In fact you can write them either with managed .NET code or unmanaged Delphi for Win32. AFAIK the later option is the only way to achieve IDE dockable forms. Nevertheless I have decided to stick with the managed code and to rewrite my C# wizard in Delphi for .NET. There are two options for GUI development with Delphi for .NET: VCL for .NET Forms and .NET FCL WinForms. I couldn’t manage to find a simple way of embedding ActiveX controls like Microsoft Web Browser on VCL for .NET forms, so I had to go for WinForms. To be honest I prefer the VCL experience over the FCL, but if you have a good reason like this to use WinForms…
One of the most difficult problems was how to display HTML in the Web Browser control from memory, without saving a resulting document and navigating to it. The hack is to typecast WebBrowser’s Document property to appropriate interface. First of all you have to make sure that Document property is not nil before you can write to it. To ensure this call Navigate(’about:blank’). After this call you can typecast to IHTMLDocument2 interface defined in Borland.mshtml namespace (do not forget to add "Borland.mshtml.dll" to project References) and then call its write method.
procedure TWinFormXmlDocViewer.DisplayHtml(s: string); var aDoc: IHTMLDocument2; objArray: array of System.&Object; begin aDoc := webBrows.Document as IHTMLDocument2; aDoc.close; // clears current doc SetLength(objArray,1); objArray[0] := s; aDoc.write(objArray); end;
The Delphi 2005 version of the XML Documentation Viewer Wizard has received a little face-lift, and now supports exporting transformed documentation to clipboard and saving to files. There is also a new "Refresh" button and possibility to open arbitrary xml doc files. C#Builder plugin was refreshing itself, when active project was changed. The new version refreshes also after every successful compilation.
Most of the XSL stylesheets that come with the wizard are designed for C#, but there is also a simple one for Delphi, that displays just a table of namespaces in current project. Delphi for Win32 and Delphi for .NET compilers support xml documentation, however the approach here is quite different. The C# compiler generates one XML doc file for the whole project, and Delphi generates one xml doc file per each unit. Also names of tags are different, and the structure of files is different. The wizard works for every personality, as it is basically looking for a file with the same name as a current project, but with the extension changed to *.xml.
Share This | Email this page to a friend
Wszystkiego Najlepszego z okazji imienin zyczy ta co czesto krzyczy
Barbasia
The ‘get it at Code Central’ link at the top is not working for me - I only get a page displaying
"ID: Requested Item is not available at this time."
The link to Code Central should be:
Thanks for the very nice tool! | http://blogs.embarcadero.com/pawelglowacki/2005/06/29/20020 | CC-MAIN-2013-48 | refinedweb | 667 | 64.51 |
SM 1.1.18 PSM Initialization fails because NSS 3.12 doesn't support Win9x or Win
NT4
RESOLVED WONTFIX
Status
P1
major
People
(Reporter: benoit, Unassigned)
Tracking
({regression})
Firefox Tracking Flags
(Not tracked)
Details
Even with bug#512187 fixed, the PSM still fails to initialise on -Windows NT 3.51 -Windows 95 -Windows NT4 -Windows 98 with 98lite The presence of IE doesn't seem to matter, except for Windows 98 and Windows Me, as vanilla installs of those Windows versions do not have this problem.
Flags: blocking-seamonkey1.1.19?
Sorry, we cannot go back to an older NSS version due to security issues (MITM of SSL connection, etc) and we also will not work on making NSS 3.12 work with ancient Windows versions, so this is WONTFIX.
Status: NEW → RESOLVED
Last Resolved: 10 years ago
Flags: blocking-seamonkey1.1.19? → blocking-seamonkey1.1.19-
Resolution: --- → WONTFIX
KaiRo, you seem to fail to notice that SeaMonkey 1.1.18 DOES work on Windows 98 and Windows Me, unless they don't have IE. There is something else going on here than the drop of support of Win9x. Furthermore, SeaMonkey 1.1.x officially supports these "ancient Windows versions", so you are committed to them, whether you like it or not.
Benoît, feel free to fix NSS yourself to support Win95 and NT 3.51, we can reconsider if NSS takes a fix, but what I'm saying is the SeaMonkey team will not fix this problem, we are no NSS hackers and we can't go back to an older NSS version unless we want all of our users to be susceptible to MITM attacks on SSL connections.
Re-opened temporarily so that I can ask the NSS team if there is a feasible workaround. Apparently IE4.0 update is included in the requirements since December 2006. See Bug 361340 (Mingw build error in SpecialSystemDirectory.cpp, error: `::SHGetSpecialFolderPathW' has not been declared). SHGetSpecialFolderPathW was introduced in Bug 481968 (Update mozilla-1.9.1 to pick up NSS 3.12.3 or newer). All other places that use SHGetSpecialFolderPathW actually alias it to SHGetSpecialFolderPath: CC Kai, Is it possible to do the same in the NSS code?? I'm guessing that this problem began at some point when a rather new version of NSS (such as version 3.12.3, which was developed for the platforms supported by the 1.9.1 FF branch) was back-ported to one or more older branches, which still have requirements to support older platforms. The array of possible solutions depends in large part on which branch(es) we're talking about here, exactly, and what (if any) requirements the products built from those branches have with respect to being certified for use by anyone in the US government. Old branches that are not shared with Firefox and have no US government certification requirements have the widest range of possible solutions. Please advise.
SeaMonkey 1.1.x comes from the Mozilla 1.8 tree (1.8.1.something). This is on CVS. We pick up what ever NSS version is in that tree at the point of tagging. Firefox 2.0.0.x comes from the same tree but Mozilla Corp has dropped support for the 2.0x series. Some Linux distributions with long term stable versions still support Firefox 2.0x (contact asac for details). I would guess that we could use the ANSI version (SHGetSpecialFolderPathA) on old branches or do some #IFDEF magic like #ifndef SHGetSpecialFolderPath SHSTDAPI_(BOOL) SHGetSpecialFolderPathA(HWND hwnd, LPSTR pszPath, int csidl, BOOL fCreate); SHSTDAPI_(BOOL) SHGetSpecialFolderPathW(HWND hwnd, LPWSTR pszPath, int csidl, BOOL fCreate); #ifdef UNICODE #define SHGetSpecialFolderPath SHGetSpecialFolderPathW #else #define SHGetSpecialFolderPath SHGetSpecialFolderPathA #endif #endif
(In reply to comment #5) >? SM 1.1.x comes directly from the 1.8 branch on CVS and uses whatever NSS is being pulled in by config.mk on that branch - currently that is some 3.12.x version, you surely remember all the discussion of bug 504523 that switched us to that.
Neither the A or W version of SHGetSpecialFolderPath exists in Windows NT 3.51, 95, or NT 4. After looking over the code and what it does, I think the best solution would be to link to that API at runtime in win_rand.c, like it does with the ADVAPI32.DLL crypto functions, and if it is not present then skip the loop retrieving the 4 folder paths in EnumSystemFiles. What it is trying to do is get the actual path names for the recycle bin, the recent folders, the MSIE cache folder, and the MSIE history folder to sample files for "entropy". None of these folders even exist on NT 3.51, and the MSIE folders might not exist under 95 or NT 4, so they would need to be skipped anyway. On a system where the API exists, that mostly ensures these folders do exist and everything is in order.
Status: REOPENED → NEW
Summary: 1.1.18 fails to connect with SSL/TLS secured sites, PSM fails to initialize → SM 1.1.18 PSM Initialization fails because NSS 3.12 doesn't support Win9x or WinNT4
Now that the problem has been discovered and some solutions have been proposed, what shall be done about this?
All that can be done is in the realm of NSS, so it's entirely up to Nelson and his fellow NSS people. The SeaMonkey team specifically can't do anything about NSS code, and we also can't step backl to a version that supports older versions because going to code that allows MITM attacks on SSL connections is out of the question.
The history here is this: In the development of NSS 3.12.x, the NSS team accepted a patch from the fennec team that was intended to solve a problem for WinCE, and was said to work on all windows platforms supported by Mozilla. That patch added the calls to SHGetSpecialFolderPathW. We all knew that this function did not exist in Win9x and WinNT4. What we did not know at that time (or had forgotten) was that products are still being shipped for Win9x and WinNT4 from Mozilla's code base. We believed that the requirement to support Win9x and WinNT4 had been explicitly removed, and indeed there had been bugs files requesting that code whose sole purpose was to support those old platforms be specifically removed. Today, we're looking at the least painful way to restore compatibility with Win9x and WinNT4. It appears that we may need to do nothing more than provide a SHGetSpecialFolderPathW function that emulates the Win2k version of that function, even if it is an imperfect emulation. Perhaps we can do that in PSM. Perhaps we develop an "extension" for use only on Win9x/NT4.
API calls for IE's history and cache folders should return FALSE in this emulation. Does NSS account for that possibility? Emulating the API for the Recycle Bin and the Recent folder seems easy enough. The Recycle Bin is always C:\RECYCLED (as far as I know, and if it's present), and the Recent folder path is stored in the registry along with other special folder paths, at HKEY_CURRENT_USER/Software/Microsoft/CurrentVersion/Explorer/Shell Folders.
I don't think re-implementing SHGetSpecialFolderPathW is quite as simple as it might sound, unless perhaps you mean to just forward it on to the real deal if it is present and return a null path otherwise. Is there any reason why something like the following would not work or not be sufficient? This works for me under Windows NT 3.51, 95, NT 4 and later. Would loading the function at runtime like this cause any problems with Win CE? Or did they do it that way for a reason :) Of course if, going forward, anybody plans on using SHGetSpecialFolderPathW in other places in the 1.8 branch then having the call wrapped in a custom function could be better. typedef BOOL (WINAPI *SHGetSpecialFolderPathWFn)( HWND hwndOwner, LPWSTR lpszPath, int nFolder, BOOL fCreate); static BOOL EnumSystemFiles(Handler func) { HMODULE hModule; SHGetSpecialFolderPathWFn pSHGetSpecialFolderPathW; PRUnichar szSysDir[_MAX_PATH]; static const int folders[] = { CSIDL_BITBUCKET, CSIDL_RECENT, #ifndef WINCE CSIDL_INTERNET_CACHE, CSIDL_HISTORY, #endif 0 }; int i = 0; if (_MAX_PATH > (i = GetTempPathW(_MAX_PATH, szSysDir))) { if (i > 0 && szSysDir[i-1] == L'\\') szSysDir[i-1] = L'\0'; // we need to lop off the trailing slash EnumSystemFilesInFolder(func, szSysDir, MAX_DEPTH); } hModule = LoadLibrary("shell32.dll"); if (hModule != NULL) { pSHGetSpecialFolderPathW = (SHGetSpecialFolderPathWFn) GetProcAddress(hModule, "SHGetSpecialFolderPathW"); if (pSHGetSpecialFolderPathW) { for(i = 0; folders[i]; i++) { DWORD rv = pSHGetSpecialFolderPathW(NULL, szSysDir, folders[i], 0); if (szSysDir[0]) EnumSystemFilesInFolder(func, szSysDir, MAX_DEPTH); szSysDir[0] = L'\0'; } } FreeLibrary(hModule); } return PR_TRUE; } Although I must say I don't really like the idea of SeaMonkey/Firefox poking around at files that don't belong to it. If I were a security program I would slap SeaMonkey/Firefox upside the head for doing that... but whatever.
The immediate problem is the ABSENCE of any function named SHGetSpecialFolderPathW in the process address space. In reply to comment 12: There is (obviously) a range of possible emulations. As a lower bound, NSS will accept an emulation that is nothing more than this: BOOL SHGetSpecialFolderPathW( HWND hwndOwner, WCHAR *lpszPath, int csidl, BOOL fCreate) { *lpszPath = 0; return FALSE; } The more values of csidl for which the function can output valid non-empty string values, the better. Nathan, I very much appreciate that you're trying to help solve the problem. You're part of the Mozilla community at its finest! Now, assume that the given requirement is that the file win_rand.c (or any other source file that goes into freebl.dll) cannot be changed for whatever reason (it doesn't matter what the reason is). How else can you solve the problem?
As we have end-of-lined the 1.x series, stopped all support for it and it's even a security risk to use it because of its vulnerabilities, we better admit that we can't and won't fix this any more. Sorry.
Status: NEW → RESOLVED
Last Resolved: 10 years ago → 9 years ago
Resolution: --- → WONTFIX | https://bugzilla.mozilla.org/show_bug.cgi?id=514955 | CC-MAIN-2019-13 | refinedweb | 1,688 | 62.38 |
Tests::
Allocator: make leak detection work with static variables When definining static variables that own memory, you should use the "construct on first use" idiom. Otherwise, you'll get a warning when Blender exits. More details are provided in D8354. Differential Revision:
Tests: move tests from USD test directory into `io/common` and `io/usd` This commit is a followup of {D7649}, and ports the USD tests to the new testing approach. It moves test code from `tests/gtests/usd` into `source/blender/io/common` and `source/blender/io/usd`, and adjusts the use of namespaces to be consistent with the other tests. I decided to put one test into `io/usd/tests`, instead of `io/usd/intern`. The reason is that this test does not correspond with a single file in that directory; instead, it tests Blender's integration with the USD library itself. There are two new CLI arguments for the Big Test Runner: - `--test-assets-dir`, which points to the `lib/tests` directory in the SVN repository. This allows unit tests to find test assets. - `--test-release-dir`, which points to `bin/{BLENDER_VERSION}` in the build directory. At the moment this is only used by the USD test. The CLI arguments are automatically passed to the Big Test Runner when using `ctest`. When manually running the tests, the arguments are only required when there is a test run that needs them. For more info about splitting some code into 'common', see rB084c5d6c7e2cf8. No functional changes to the tests themselves, only to the way they are built & run. Differential Revision: Reviewed by: brecht, mont29
T73268: Link C/C++ unit tests into single executable This commit introduces a new way to build unit tests. It is now possible for each module to generate its own test library. The tests in these libraries are then bundled into a single executable. The test executable can be run with `ctest`. Even though the tests reside in a single executable, they are still exposed as individual tests to `ctest`, and thus can be selected via its `-R` argument. Not yet ported tests still build & run as before. The following rules apply: - Test code should reside in the same directory as the code under test. - Tests that target functionality in `somefile.{c,cc}` should reside in `somefile_test.cc`. - The namespace for tests is the `tests` sub-namespace of the code under test. For example, tests for `blender::bke` should be in `blender::bke:tests`. - The test files should be listed in the module's `CMakeLists.txt` in a `blender_add_test_lib()` call. See the `blenkernel` module for an example. Reviewed By: brecht Differential Revision:: More C++ like nature for word split test While it looks more longer, but also contains more comments about what's going on. Surely, this function almost never breaks and investing time into maintaining its tests is not that important, but we should have a good, clean, understandable tests so they act as a nice example of how they are to be written. Especially important to show correct language usage, without old school macros magic. Doing this at a lunch breaks, so will be series of some updates in the area..
[Cycles/MSVC/Testing] Fix broken test code. Currently the tests don't run on windows for the following reasons 1) render_graph_finalize has an linking issue due missing a bunch of libraries (not sure why this is not an issue for linux) 2) This one is more interesting, in test/python/cmakelists.txt ${TEST_BLENDER_EXE_BARE} and ${TEST_BLENDER_EXE} are flat out wrong, but for some reason this doesn't matter for most tests, cause ctest will actually go out and look for the executable and fix the path for you *BUT* only for the command, if you use them in any of the parameters it'll happily pass on the wrong path. 3) on linux you can just run a .py file, windows is not as awesome and needs to be told to run it with pyton. 4) had to use the NAME/COMMAND long form of add_test otherwise $<TARGET_FILE:blender> doesn't get expanded, why? beats me. 5) missing idiff.exe for msvc2015/x64 in the libs folder. This patch addresses 1-4 , but given I have no working Linux build environment, I'm unsure if it'll break anything there 5 has been fixed in rBL61751 Reviewers: juicyfruit, brecht, sergey Reviewed By: sergey Subscribers: Blendify Tags: #cycles, #automated_testing Differential Revision: | https://git.blender.org/gitweb/gitweb.cgi/blender.git/atom?f=tests/gtests/testing | CC-MAIN-2020-40 | refinedweb | 738 | 63.29 |
An introduction and set-up for the Heltec Automation WiFi Kit 32 development board with OLED display. Follow the steps below to have the example WiFiScan script show your local access points on the built-in display.
This board is based on the ESP32 chip and has onboard WiFi, Bluetooth, a 0.96 inch OLED display, a lithium battery connector with charging circuit and a CP2102 USB-to-serial interface. It also works with the Arduino IDE. The boards are available from the Heltec Store on Aliexpress.
Setting Up the Arduino IDE for the ESP32 Range
New Easy Method
If you previously installed the hardware libraries for the ESP32 using the old method, you need to delete them first. Find the folder where your Arduino libraries are kept by opening File > Preferences in the Arduino IDE:

Inside this folder, open the hardware folder, then find and delete either the esp32 folder or the espressif folder.
Now you can set up the ESP32 libraries the easy way. In the File > Preferences window of the IDE, paste the following line into the Additional Boards Manager URLs field:
If the field already has entries, add the new line before them, separating the entries with commas:
Then go to Tools > Board > Boards Manager, where you can search for and install the ESP32 package.
Old Method Using Git

Alternatively, you can install the hardware libraries locally using Git. Git is basically a way to keep local files synchronized with files on the internet; in this case it is used to download the files the IDE needs to work with the available ESP32 boards.
If you don’t have Git installed then you need to download and install it from here:
After installation, run Git GUI (it should be under Programs in the Start menu). Click ‘Clone existing repository’ and…
- In the Source Location box enter:
- In the Target Location box enter: C:/Users/[YOUR_USER_NAME]/Documents/Arduino/hardware/espressif/esp32 replacing [YOUR_USER_NAME] with your login name. You can see this name on the Start menu by mousing over the grey circle icon.
Click Clone to start cloning the files to your PC. This might take a while.
When this has completed navigate to this directory: C:/Users/[YOUR_USER_NAME]/Documents/Arduino/hardware/espressif/esp32/tools and double-click get.exe. Again this might take a while.
First Run
You can now plug in the Heltec board. Windows will attempt to install any necessary drivers. In my case I had to manually install the USB to UART driver from here:
Everything should now be ready to test the board! So…
Start Arduino IDE, select your board in Tools > Board menu and select the COM port.
To test the board is basically working you can use the example WiFiScan sketch: File > Examples > WiFi > WiFiScan. If you open the Serial Monitor (Tools > Serial Monitor) you will be able to see any WiFi access points in range. Check the baud rate is the same as in the sketch – probably 115200.
Testing the Heltec ESP32 with the Onboard OLED
Once you have the ESP32 libraries installed and you’ve tested that the board can run the basic WifiScan sketch we can install the display libraries for the OLED.
My favourite display library for these OLEDs is the U8g2 library. This is a lot easier to install as it can be found in the Arduino IDE library manager. Open Sketch > Include Library > Manage Libraries, then search for and install U8g2.
U8g2 has three different display methods. If you want to quickly test all three, the following examples show the correct constructor.
Full Buffer:
In the Arduino IDE: File> Examples > U8g2 > full_buffer > GraphicsTest
Paste: U8G2_SSD1306_128X64_NONAME_F_SW_I2C u8g2(U8G2_R0, /* clock=*/ 15, /* data=*/ 4, /* reset=*/ 16);
Above the line: // Please UNCOMMENT one of the constructor lines below
Upload the sketch
Page Buffer:
In the Arduino IDE: File> Examples > U8g2 > page_buffer > GraphicsTest
U8G2_SSD1306_128X64_NONAME_1_SW_I2C u8g2(U8G2_R0, /* clock=*/ 15, /* data=*/ 4, /* reset=*/ 16);
Above the line: // Please UNCOMMENT one of the constructor lines below
Upload the sketch
U8x8:
In the Arduino IDE: File> Examples > U8g2 > u8x8 > GraphicsTest
Paste: U8X8_SSD1306_128X64_NONAME_SW_I2C u8x8(/* clock=*/ 15, /* data=*/ 4, /* reset=*/ 16);
Above the line: // Please UNCOMMENT one of the constructor lines below
Upload the sketch
So, what are the numbers clock=*/ 15, /* data=*/ 4, /* reset=*/ 16 for in each of the examples above? These are pin numbers for the I2C controlled OLED on the board. This tells the display library which pins to use to communicate with the display.
Show WifiScan Sketch Display on OLED
Below is a sketch that displays the results of the WiFi scan on the OLED using the U8x8 version of the display library
#include "WiFi.h"
#include <U8x8lib.h>

// the OLED used
U8X8_SSD1306_128X64_NONAME_SW_I2C u8x8(/* clock=*/ 15, /* data=*/ 4, /* reset=*/ 16);

void setup()
{
  // Set WiFi to station mode and disconnect from an AP if it was previously connected
  WiFi.mode(WIFI_STA);
  WiFi.disconnect();
  delay(100);

  u8x8.begin();
  u8x8.setFont(u8x8_font_chroma48medium8_r);
}

static void doSomeWork()
{
  int n = WiFi.scanNetworks();
  if (n == 0) {
    u8x8.drawString(0, 0, "Searching networks.");
  } else {
    u8x8.drawString(0, 0, "Networks found: ");
    for (int i = 0; i < n; ++i) {
      // Print SSID for each network found
      char currentSSID[64];
      WiFi.SSID(i).toCharArray(currentSSID, 64);
      u8x8.drawString(0, i + 1, currentSSID);
    }
  }
  // Wait a bit before scanning again
  delay(5000);
}

void loop()
{
  doSomeWork();
}
Buy Me A Coffee
If you found something useful above please say thanks by buying me a coffee here...
38 Replies to “ESP32 Built-in OLED – Heltec WiFi Kit 32”
I tried all the examples above – they work perfectly. But after that I tried the example …\Documents\Arduino\libraries\U8g2\examples\full_buffer\SelectionList, and my Heltec WiFi KIT 32 board died… I tried to go back to the sketch that was working before – the board does not respond. Below is the error log:
————————————————————————————————————-
Arduino: 1.8.4 (Windows 7), Board: “Heltec_WIFI_Kit_32, 80MHz, 921600”
Archiving built core (caching) in: C:\Users\SLEO~1.REL\AppData\Local\Temp\arduino_cache_452400\core\core_espressif_esp32_heltec_wifi_kit_32_FlashFreq_80,UploadSpeed_921600_9735e408823b91d4dd6cf7c8b5963565.a
Sketch uses 439175 bytes (42%) of program storage space. Maximum is 1044464 bytes.
Global variables use 36312 bytes (12%) of dynamic memory, leaving 258600 bytes for local variables. Maximum is 294912 bytes.
…..
esptool.py v2.1
Connecting….
Chip is ESP32D0WDQ6 (revision 0)
Uploading stub…
Running stub…
Stub running…
Changing baud rate to 921600
Changed.
Configuring flash size…
Warning: Could not auto-detect Flash size (FlashID=0xffffff, SizeID=0xff), defaulting to 4MB
Compressed 8192 bytes to 47…
———————————————————————————————-
What did happend and how to restore my board?
Is it still broken? I’m not sure how to fix it but I saw this..
try to erase the flash of the “bricked” LoPy with the esptool.py form the esp-idf repository
Use
python3 esptool.py --port com_port --baud 230400 --chip esp32 erase_flash
and then update the device again.
But this is using python to work with the board so it’s a little outside my area of knowledge.
I have the Heltec WiFi_Kit_32 version of this amazing module. I cannot get a text display when using the Adafruit_SSD1306 library. It works with the U8x8 librray, but I have a lot of previous code from other platforms using Adafruit_SSD1306.h that I would still like to re-use on the ESP32 platform. I have tried Wire.begin(4,15) and OLED_RESET 16 and setting 16 LOW then HIGH. I am not not getting any display on the OLED.
It compiles fine, but no display on OLED.
/* Test code using Heltec WiFi_Kit_32 module with built in SSD1306 OLED */
#include <Adafruit_GFX.h> // Core graphics library
#include <Adafruit_SSD1306.h>
#include <Wire.h>
#define OLED_RESET 16
Adafruit_SSD1306 display(OLED_RESET);
void setup() {
pinMode(16,OUTPUT);
digitalWrite(16,LOW);
delay(100);
digitalWrite(16,HIGH);
Serial.begin(115200);
Serial.print(" Starting OLED display .... ");
Wire.begin();
display.begin(SSD1306_SWITCHCAPVCC, 0x3C);
display.clearDisplay();
display.setTextSize(1);
display.setTextColor(WHITE,BLACK); // set to overwrite mode
display.setCursor(0,0);
display.println("Hello World"); // print on 1st line of OLED
display.display();
Serial.println("Hello World printed to OLED on line 0"); }
void loop() {
display.setCursor(0,20);
display.print("Hello World line 1"); // print on next line of OLED
display.display();
Serial.println("Hello World printed to OLED on line 1");
display.setCursor(0,40);
display.print("Hello World line 2"); // print on next line of OLED
Serial.println("Hello World printed to OLED on line 2");
Serial.println();
display.display();
delay(1000);
display.clearDisplay();
display.display();
}
I did initially use the acrobotic SSD1306 library but have since moved to the one by Squix because of the font generator.
In library manager it is called ESP8266 and ESP32 Oled Driver for SSSD1306 display v3.2.7
Haven’t tried the u8g2 library yet.
Here is a quick example using this that works on the Heltec:
#include <Wire.h>
//
#include "SSD1306.h"
SSD1306 display(0x3c, 4, 15);
void setup() {
// put your setup code here, to run once:
pinMode(16, OUTPUT);
digitalWrite(16, LOW); // set GPIO16 low to reset OLED
delay(50);
digitalWrite(16, HIGH); // while OLED is running, must set GPIO16 to high
Wire.begin(4, 15);
display.init();
//display.flipScreenVertically();
drawDisplay();
}
void drawDisplay() {
display.clear();
display.setTextAlignment(TEXT_ALIGN_LEFT);
// create more fonts at
display.setFont(ArialMT_Plain_24);
display.drawString(0, 6, "Hello world!");
display.display();
}
void loop() {
// put your main code here, to run repeatedly:
}
I initially could not get the display to work with the library that I was using. I switched to the u8g2lib library. It had a bit of a learning curve. You cannot feed drawStr a String. You have to feed it an array of char, which you can build with sprintf. Thus:
char humidity_st[32];
sprintf(humidity_st, "Humidity: %d RH", int(humidity));
display.drawStr(0, 38, humidity_st);
This approach is growing on me. As far as the right pins go, that was also a struggle. Here’s what worked for me.
U8G2_SSD1306_128X64_NONAME_F_HW_I2C display(U8G2_R0, 16, 15, 4);
Thank you for the time making this page. It is showing up as the top google result.
I have 2 questions perhaps you can help me answer.
1) Does the OLED have its own I2C bus? I want to use this amazing board to control some I2C devices but am not sure if the I2C bus is already occupied by the OLED. I read somewhere that some OLED libraries are using a software-written I2C driver, not the hardware I2C.
2) I had some code for the ESP8266 that someone wrote that would put the chip into AP mode and had a web server. You would connect to its Wifi network to configure the chip to connect to your wifi network through the web pages.
Once configured, the chip rebooted in client mode and would join the network you configured as a client.
Is there similar code out there for this board? Does this program / function have a name?
@Eddiie
Pins 21 and 22 look like they also are a second I2C channel
That’s the LoRa version.
Looks like there are hardware iC2 pins.
The SW in the command for the OLED is SoftWare mode so I would guess hardware mode would work as well for another device:
U8X8_SSD1306_128X64_NONAME_SW_I2C u8x8(/* clock=*/ 15, /* data=*/ 4, /* reset=*/ 16);
For your second question… I don’t really understand what you want to do.
Ah, I have the Wifi kit 8
It looks like there is only the one I2C pins. which are used by the onboard display, GPIO 4 and 5.
Was hoping the pins labeled on the board “SCL” (GPIO14), and “SDA” (GPIO2) would be that hardware I2C but it does not appear to be.
I wonder what pins "Wire.begin();" uses.
This page (use the translate feature in Chrome) shows the pins for the two different board versions. Doesn’t look like there’s another I2C bus. Can you not connect more than one device to this bus as well as the display?
FYI – That is the wrong pinout. They made a mistake when they printed that. That picture has the pins in reverse. Here is the corrected one:
WordBot thank you for the article!
I have a question:
Do you know how to connect RFID MFRC522 to the board. I tried to connect a RFID to existing MISO,MOSI, SDA, SCK and I the board does not see the RFID.
Thank you for your time.
I don’t know. The pins should be as in the diagram here – and your code should reflect those pins.
I want to do some RFID things but I haven’t bought any hardware to test with yet.
Thank you for your response! Actually I made one step further.
I got response from the RFID reader, but only a firmware version.
⸮Firmw⸮⸮⸮Version: 0x12 = (unknown)
Scan PICC to see UID, SAK, type, and data blocks…
I described it here
If you have any idea what would be wrong it’d be awesome.
Nice clear web site and got me , a newbie, going.
I had problems initially uploading to the ESP32 but found that I needed to press PRG and hold it, then press the RST momentarily, then release the PRG button. The screen would go blank and it would wait until the upload began. This worked at the full speed.
I managed to upload several sketches ok.
However, I’ve since disconnected the ESP32 and reconnected, and now I’ve got a driver problem. I did the manual update to the version 10 driver from the site you mention above but that didn’t solve it. It seemed to then mess up my Arduino driver but I have been able to recover that.
Any ideas anyone?
Can you see anything with question marks in Device Manager?
Is the correct port selected in the Arduino IDE?
Yes the ESP32 USB bridge had it. The error was No 10 – Failure to start or something.
After lots of messing with different versions of the drivers I took all the other things out of the USB ports (a GPS, hard drive, bluetooth dongle, the Arduino and the ESP32). Then with just the ESP32 board back in it started ok. I put the rest back in one by one and they all seem ok now. I’m using the v10 driver.
The only problem now is that the full speed upload fails so I’ve reverted to 115200. Not a problem as it’s the compiling that takes most of the time, not the upload. Why does it take so long? A real pain if you are trying to debug a program.
Thanks.
Did you check it was on the correct port? It does sound like a driver problem if you saw a question mark.
Maybe Windows had assigned a different port when you plugged it in the second time?
I think compiling takes time as the ESP32 is much more complex than a basic Arduino so there’s a lot more work to do when compiling. In general compiling code is a slow process.
anyone connected something via i2c ? I have SHT31 but it works only when I disable OLED…
Is this the Lora board?
I cannot get mine to compile because it can’t find WiFi.mode(). Which WiFi library should I have installed? Error is: ‘class WiFiClass’ has no member named ‘mode’
This is the library – but I think this is installed when the board is installed. Maybe there’s a conflict on your device with another library?
Read the compiler messages carefully: there is more than one "WiFi.h".
The wrong one was used. I deleted
C:\Program Files (x86)\Arduino\libraries\WiFi
and used the one on this path:
C:\Program Files (x86)\Arduino\hardware\espressif\arduino-esp32-master\libraries\WiFi
how to add support built-in OLED Heltec WiFi Kit 32 to ?
There seems to be a solution here:
Is there a way to take control of the one on-board LED that just blinks continuously?
Hi Bruce, no………that LED is part of the battery charging circuit and will only go off when a fully charged battery is connected. Whilst a battery is charging, the LED will go to a steady on state until the battery is fully charged then it will go off.
i want a code to find bluetooth devices
I’ll add it to my list for future tutorials.
did you ever find any sample Bluetooth code/library examples (not BLE)
There’s Bluetooth Serial in the ESP32 examples in the Arduino IDE. Does that help?
Hi,
I’m new to this. I just bought an Heltec Wifi kit (not LoRa). It works fine with examples.
Now I bought some BMP280 sensors to display the atmospheric pressure and temperature. When I connect the sensor to I2C (without the OLED defined) it shows the values on serial. But when I try to add the U8g2 library it shows “Could not find a valid BMP280 sensor, check wiring!”.
Anybody can help me?
Thanks in advance
Hi, Without looking into it much the command for the screen uses pins 15, 4 and 16
U8X8_SSD1306_128X64_NONAME_SW_I2C u8x8(/* clock=*/ 15, /* data=*/ 4, /* reset=*/ 16);
I think you can use different pins for the BMP280 and define them in the sketch. Alternatively the BMP280 might have a place on the back where you can change the address and use the same I2C pins. There’s a I2C scanner here that might help:
Thanks a lot!
I solved it. Switched the sensor to SPI and now both work. I thought that ESP32 has 2 independent I2C buses. I was connecting the sensor to 21, 22 and the screen is on 4,15,16.
I used the scanner and the address is 76. I changed it in library Adafruit_BMP280.h but still nothing.
If this device is compatible with the Arduino IDE, shouldn’t it also be able to be programmed through say Microchip Studio/Atmel Studio for AVR’s like the 328 (which the Arduino is based on)?
Any suggestions on how to do that? I’m more familiar with the Atmel studio than the Arduino IDE (have the files, but not installed yet). I have programmed some stuff on the Arduino through the Atmel Studio.
I don’t think it will be possible because Espressif (the ESP32 manufacturer) created the libraries etc. to work with the Arduino IDE. You could take a look at the IDF if you want to develop with the native tools. A lot of people have moved from the Arduino IDE to as well if you wanted to keep using the Arduino ESP32 environment.
Thanks for that information, I’ll take a look at it. I’m not sure how much I plan to do with this device, so maybe the IDE is the best way to go (at least for now). I know the core processors are different. Hopefully, I can keep it simple enough for a while. (I’ll be doing this while living on the road in our RV, so resources can be a bit limited).
Thanks again. | https://robotzero.one/heltec-wifi-kit-32/ | CC-MAIN-2022-40 | refinedweb | 3,134 | 73.78 |
A magic square is a 2-D list where the sum of each row is equal to the sum of each column and is equal to the sum of each of the two diagonals. Here is an example of a 3 x 3 magic square:
4 9 2
3 5 7
8 1 6

Note that the sum of each of the rows, columns, and diagonals is 15. This square is not unique. If you flip the rows for the columns (transpose the square) you will get a magic square. If you add a constant value to each element you will get a magic square, and so on.
Input: The input for this program will be in a file. The first line of the file will be a number n denoting the number of squares that you will have to process. It will be followed by the data for each of the n squares. There will be a blank line separating the data for each square. For any given square the first line will give the number of rows (or columns) followed by the rows of the square - one row per line. Here is an example.
2

3
4 1 2
3 5 7
8 9 6

4
16 3 2 13
5 10 11 8
9 6 7 12
4 15 14 1

The file name will be of your own choosing. But you can use this sample file squares.txt to test your code.
Output: The output will be in a file. The name that you choose for your output file should be different from the name of the input file. The format of the output file should be similar to the input file. You will write valid or invalid next to the size of each square. For the above input file, the output file should look like this:
2

3 invalid
4 1 2
3 5 7
8 9 6

4 valid
16 3 2 13
5 10 11 8
9 6 7 12
4 15 14 1
In your function main() you will prompt the user to enter the name of the input file and the name of the output file. You will compare the two names to make sure that they are not the same. If the names are the same you will write a message to that effect and quit the program. If the names are different you will open the input file for reading and the output file for writing and process the data. When the program has completed processing all the squares you will write a message to the console that the output has been written to the output file. A sample session would look like this:
Enter name of input file: squares.txt
Enter name of output file: result.txt
The output has been written to result.txt
For this program, you will write a function isMagic() that will determine if a 2-D list forms a magic square. The function should be general enough to accept magic squares of any size greater than or equal to 3. The function signature should look like this:
def isMagic (b):
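A minimal sketch of such a check (my own illustration, not the official solution — the helper variable names are mine):

```python
def isMagic(b):
    """Return True if the 2-D list b is a magic square."""
    n = len(b)
    target = sum(b[0])  # every row, column, and diagonal must add up to this
    rows = all(sum(row) == target for row in b)
    cols = all(sum(b[r][c] for r in range(n)) == target for c in range(n))
    diag1 = sum(b[i][i] for i in range(n)) == target          # main diagonal
    diag2 = sum(b[i][n - 1 - i] for i in range(n)) == target  # anti-diagonal
    return rows and cols and diag1 and diag2
```

With the two squares from the sample input above, it returns False for the 3 x 3 square and True for the 4 x 4 one.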
The file that you will be turning in will be called MagicSquare.py. We will be looking at documentation, descriptive variable and function names, clean logical structure, and adherence to the coding conventions discussed in class. The file will have a header of the following form:
# File: MagicSquare.py
# Description:
# Student Name:
# Student UT EID:
# Course Name: CS 303E
# Unique Number:
# Date Created:
# Date Last Modified:
Use the turnin program to submit your MagicSquare.py file. The TAs should receive your work by 11 PM on Monday, 08 August 2011. There will be substantial penalties if you do not adhere to the guidelines.
Magic Squares have fascinated mathematicians and lay people alike. There is a rich history behind magic squares as well as a lot of research. Here are some references that you may want to look at: | http://www.cs.utexas.edu/~mitra/csSummer2011/cs303/assgn/assgn9.html | CC-MAIN-2016-30 | refinedweb | 663 | 78.89 |
05 October 2012 07:09 [Source: ICIS news]
By Jasmine Khoo
SINGAPORE (ICIS)--India’s domestic toluene prices are expected to be stable-to-firm for the rest of the month in spite of recent weakness, as lower-than-usual volume of imports has been keeping supply tight, market participants said on Friday.
Prices came off by Indian rupees (Rs) 4-5/kg ($77-97/tonne) from an all-time high of Rs84-85/kg ex-tank hit on 20 September as new shipments arrived at the ports of Kandla and Mumbai.
“The cargoes, which just came in, met the immediate needs of the end-users, which brought prices down,” said a buyer.
On Thursday, domestic toluene prices in
This month, around 8,000-10,000 tonnes of imported toluene are expected to arrive in
“Port inventories are still very, very low. Most of the cargoes, which just came in, were snapped up immediately,” one of the buyers said.
It had been taking in lower-than-normal volumes as the depreciation of the Indian rupee in July made imports more expensive.
“Importers are buying around 50% of what they usually buy,” a market source said.
In August and September,
“Congestion at the ports and delays in shipments exacerbated the tight supply situation,” another market source said.
($1 = €0.77 / $1 = Rs51.75) | http://www.icis.com/Articles/2012/10/05/9600896/low-imports-to-keep-india-domestic-toluene-prices-stable.html | CC-MAIN-2014-42 | refinedweb | 224 | 59.64 |
Qt: Load resources from static or shared library
Full project example can be found here
In the linked example there are two libraries, one shared and one static (called MySharedLibrary and MyStaticLibrary). Both libraries have an icon resource loaded and shown in the main window. Since Qt resources are converted into C++ code there is no difference from standard C++ code, which means they can be compiled inside the library without any limit (on the contrary, for example, the Windows native resource system can embed resources only inside executables and dynamic libraries (DLLs), but not inside static libraries). The project file automatically connects the libraries to the main application; then, to use these "external" resources, it is only necessary to programmatically initialize them inside the Qt resource system engine. The macro for making this initialization is Q_INIT_RESOURCE() (official documentation here). An important point to keep in mind is that the names of the .qrc resource files cannot be the same but have to be different for the application and each library; we'll see the reasons later. The macro cannot be called inside a namespace, so the best point to call it is inside the main() function just before starting execution of the application. It takes as argument the name of the .qrc resource file you want to load. In short, this macro "composes" and calls a global function named with a fixed prefix and the name of the resource file param, something like functionprefix_resourcefilename(). This function initializes the resources and adds them to the Qt resource system to be called from code. Now you can understand why two resource files with the same name cannot exist: they would create two global functions with the same name, and this will be reported as an error by the linker. Since the static library will become part of the main executable file the macro can be called, as already said, from inside the main() body without any particular problem.
On the contrary, in the case of a shared library, the same call will not work because the global function created inside the shared library is not automatically marked as exported, so the linker will not find it during compilation. There are many possible solutions to this problem, but the simplest way is to export a shared library initialization function to call from main() code and, inside this function, use the resource initialization macro. The example main code follows:
// main.cpp
int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    MainWindow w;
    w.show();

    Q_INIT_RESOURCE(mystaticlibrary);
    MySharedLibrary::InitResources();

    return a.exec();
}

// mysharedlibrary.cpp
void MySharedLibrary::InitResources()
{
    Q_INIT_RESOURCE(mysharedlibrary);
}
The example project shows the two icons loaded by the static and shared libraries as follows. Once the initialization macro has been called the resources can be loaded from every part of the code as standard main resources.
Thanks for the bug report! Unfortunately, the problem comes from the package itself, not from Autoconf. The configure.ac script needs to be updated. Please, send all this message (with your output attached) to the bug list (or the authors) of the package you were trying to configure.

Below two parts of the Autoconf documentation are included:

1. the documentation of AC_CHECK_HEADER(S), and
2. what's to be done to upgrade configure.ac.

Thanks!

----------------------------------------------------------------------
Generic Header Checks
---------------------

These macros are used to find system header files not covered by the "particular" test macros. If you need to check the contents of a header as well as find out whether it is present, you have to write your own test for it (*note Writing Tests::).

 - Macro: AC_CHECK_HEADER (HEADER-FILE, [ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND], [INCLUDES = `default-includes'])
     If the system header file HEADER-FILE is compilable, execute shell commands ACTION-IF-FOUND, otherwise execute ACTION-IF-NOT-FOUND. If you just want to define a symbol if the header file is available, consider using `AC_CHECK_HEADERS' instead. For compatibility issues with older versions of Autoconf, please read below.

 - Macro: AC_CHECK_HEADERS (HEADER-FILE..., [ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND], [INCLUDES = `default-includes'])
     For each given system header file HEADER-FILE in the blank-separated argument list that exists, define HAVE_HEADER-FILE (in all capitals).

----------------------------------------------------------------------
Header Present But Cannot Be Compiled
=====================================

The most important guideline to bear in mind when checking for features is to mimic as much as possible the intended use. Unfortunately, old versions of `AC_CHECK_HEADER' and `AC_CHECK_HEADERS' failed to follow this idea, and called the preprocessor, instead of the compiler, to check for headers. As a result, incompatibilities between headers went unnoticed during configuration, and maintainers finally had to deal with this issue elsewhere.
As of Autoconf 2.56 both checks are performed, and `configure' complains loudly if the compiler and the preprocessor do not agree. For the time being the result used is that of the preprocessor, to give maintainers time to adjust their `configure.ac', but in the near future, only the compiler will be considered. Consider the following example:

     $ cat number.h
     typedef int number;
     $ cat pi.h
     const number pi = 3;
     $ cat configure.ac
     AC_INIT
     AC_CHECK_HEADERS(pi.h)
     $ autoconf
     $ ./configure
     checking pi.h usability... no
     checking pi.h presence... yes
     configure: WARNING: pi.h: present but cannot be compiled
     configure: WARNING: pi.h: check for missing prerequisite headers?
     configure: WARNING: pi.h: proceeding with the preprocessor's result
     configure: WARNING: ## ------------------------------------ ##
     configure: WARNING: ## Report this to address@hidden ##
     configure: WARNING: ## ------------------------------------ ##
     checking for pi.h... yes

The proper way to handle this case is using the fourth argument (*note Generic Headers::):

     $ cat configure.ac
     AC_INIT
     AC_CHECK_HEADERS(number.h pi.h,,,
     [[#if HAVE_NUMBER_H
     # include <number.h>
     #endif
     ]])
     $ autoconf
     $ ./configure
     checking for number.h... yes
     checking for pi.h... yes

See *Note Particular Headers::, for a list of headers with their prerequisite.

----------------------------------------------------------------------
Portability of Headers
----------------------

This section tries to collect knowledge about common headers, and the problems they cause. By definition, this list will always require additions. Please help us keeping it as complete as possible.

`inttypes.h' vs. `stdint.h'
     Paul Eggert notes that: ISO C 1999 says that `inttypes.h' includes `stdint.h', so there's no need to include `stdint.h' separately in a standard environment. Many implementations have `inttypes.h' but not `stdint.h' (e.g., Solaris 7), but I don't know of any implementation that has `stdint.h' but not `inttypes.h'.
     Nor do I know of any free software that includes `stdint.h'; `stdint.h' seems to be a creation of the committee.

`net/if.h'
     On Darwin, this file requires that `sys/socket.h' be included beforehand. One should run:

          AC_CHECK_HEADERS([sys/socket.h])
          AC_CHECK_HEADERS([net/if.h], [], [],
          [#include <stdio.h>
          #if STDC_HEADERS
          # include <stdlib.h>
          # include <stddef.h>
          #else
          # if HAVE_STDLIB_H
          #  include <stdlib.h>
          # endif
          #endif
          #if HAVE_SYS_SOCKET_H
          # include <sys/socket.h>
          #endif
          ])

`stdlib.h'
     On many systems (e.g., Darwin), `stdio.h' is a prerequisite.

`sys/socket.h'
     On Darwin, `stdlib.h' is a prerequisite.

Building linc (as released with gnome-2.2.1) the following error occurred while running configure:

configure: WARNING: linux/irda.h: present but cannot be compiled
configure: WARNING: linux/irda.h: check for missing prerequisite headers?
configure: WARNING: linux/irda.h: proceeding with the preprocessor's result
configure: WARNING: ## ------------------------------------ ##
configure: WARNING: ## Report this to address@hidden ##
configure: WARNING: ## ------------------------------------ ##
checking for linux/irda.h... yes
How I did it:
from tkinter import Tk  # on Python 2: from Tkinter import Tk

t = Tk()  # new window
t.update()
t.attributes("-alpha", 0)  # make the window invisible
t.state('zoomed')  # maximize the window
height = t.winfo_height()
width = t.winfo_width()
But sadly I do not know of the location of the other screen.
But I think you can do this:

1. create a new window
2. use winfo_screenheight() and winfo_screenwidth() to find out about the original screen
3. use geometry() to move the window around
4. maximize the window (it should always maximize at the screen where it is)
5. get geometry()
6. if geometry is at (0, 0) it is the main screen, proceed with 3.
7. you found another screen
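A sketch of those steps in one place (the function names are mine; note that the 'zoomed' window state is Windows-specific, so other platforms would need a different maximize call):

```python
import re
import tkinter as tk  # on Python 2: import Tkinter as tk


def origin_from_geometry(geometry):
    """Parse a Tk geometry string like '1920x1080+0+0' into an (x, y) origin."""
    m = re.match(r"(\d+)x(\d+)([+-]\d+)([+-]\d+)", geometry)
    if not m:
        raise ValueError("unexpected geometry string: %r" % geometry)
    return int(m.group(3)), int(m.group(4))


def probe_screen(offset_x):
    """Steps 1-5: move an invisible window right by offset_x, maximize it,
    and return the origin it reports. (0, 0) means the primary screen."""
    win = tk.Tk()                        # step 1: create a new window
    win.attributes("-alpha", 0)          # keep the probe invisible
    win.geometry("+%d+0" % offset_x)     # step 3: move the window around
    win.update_idletasks()
    win.state("zoomed")                  # step 4: maximize where it sits
    win.update_idletasks()
    origin = origin_from_geometry(win.geometry())  # step 5
    win.destroy()
    return origin                        # step 6: (0, 0) -> main screen
```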
Maybe you are trying to do something like this:

from tkinter import Frame, Button  # on Python 2: from Tkinter import Frame, Button

class App:
    def __init__(self, master, n=5):  # n rows/columns of buttons
        frame = Frame(master, height=20, width=25)
        frame.grid()
        # Multiple buttons, over n rows and columns; these are just here to
        # demonstrate my syntax
        for i in range(n):
            frame.columnconfigure(i, pad=3)
        for i in range(n):
            frame.rowconfigure(i, pad=3)
        for i in range(0, n):
            for j in range(0, n):
                self.action = Button(frame, text="action", command=self.doAction)
                self.action.grid(row=i, column=j)

    def doAction(self):
        print('Action')
Just add the echo to your include file. Something like:
if(!empty($_SESSION['warning'])) echo $_SESSION['warning'];
Presumably you'd also want to clear the value once it's displayed so it
doesn't continue getting displayed on every page after that.
You cannot do this in the regular console. iPython keeps a copy of the
source in case you want to see it again later on, but the standard Python
console does not.
Had you imported the function from a file, you could have used
inspect.getsource():
>>> import os.path
>>> import inspect
>>> print inspect.getsource(os.path.join)
def join(a, *p):
    """Join two or more pathname components, inserting '/' as needed.
    If any component is an absolute path, all previous path components
    will be discarded. An empty last part will result in a path that
    ends with a separator."""
    path = a
    for b in p:
        if b.startswith('/'):
            path = b
        elif path == '' or path.endswith('/'):
            path += b
        else:
            path += '/' + b
    return path
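For instance, a quick way to confirm that the source really is being read back from a file on disk (a sketch; the exact path printed depends on your install):

```python
import inspect
import os.path

# getsource() works for anything Python loaded from a .py file on disk:
print(inspect.getsourcefile(os.path.join))   # path to posixpath.py / ntpath.py
print(inspect.getsource(os.path.join)[:40])  # the start of the definition

# Functions typed straight into the plain interactive console have no
# backing file, which is exactly why getsource() cannot recover them there.
```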
The stack trace suggests that you're trying to get the default screen
GraphicsDevice in headless mode.
The documentation says that getDefaultScreenDevice throws HeadlessException
- if isHeadless() returns true.
You can create a StringIO object instead:
from StringIO import StringIO
...
im = StringIO(img_file.read())
It behaves like a file, but it's not a file.
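On Python 3 the same idea uses io.BytesIO for binary data such as image bytes (StringIO there holds text only). A minimal sketch, with stand-in bytes in place of img_file.read():

```python
import io

# Pretend this is img_file.read() -- raw bytes of an image file.
raw = b"\x89PNG\r\n\x1a\n...fake image bytes..."

im = io.BytesIO(raw)               # behaves like a file opened in binary mode
assert im.read(4) == b"\x89PNG"    # read() works just like on a real file
im.seek(0)                         # and it is seekable, unlike a socket
```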
You have a couple of problems. First, you can't use grid to place labels in
the canvas and expect them to scroll. When you scroll a canvas, only the
widgets added with create_window will scroll. However, you can use grid to
put the labels in a frame, and then use create_window to add the frame to
the canvas. There are several examples of that technique on this site.
Second, you need to tell the canvas how much of the data in the canvas
should be scrollable. You use this by setting the scrollregion attribute of
the canvas. There is a method, bbox which can give you a bounding box of
all of the data in the canvas. Usually it's used like this:
canvas.configure(scrollregion=canvas.bbox("all"))
First you need to create a layout like..
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout
xmlns:android=""
android:layout_width="fill_parent"
android:
<TextView
android:id="@+id/label"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginLeft="15dp"
android:layout_marginTop="5dp"
android:text="@+id/label"
android:textSize="20px"
android:
<CheckBox
android:id="@+id/check"
android:focusable="false"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginLeft="15px"
android:layout_marginRigh
glOrtho sets your viewing area. Since your shape fits inside of 1 square
unit, but you want to be able to see the whole thing, you can either have
your near and far set to enclose the whole shape (which the 2, -2 do) or
you can set your glOrtho to something like glOrtho(-1, 1, -1, 1, 0, -5);
and translate your shape backwards so that it is then in the viewing field.
Another (bad) option is to change the absolute position of your verts so
that they fall within the near and far fields but I wouldn't recomend that
since that is the point of the translates and transforms in general.
Do you mean to have everything in an orthographic view or do you mean to
have perspective? The orthographic view is why translating didn't help
originally.
The reason your program fails to reverse a list is because when the user
chooses menu option 9, all they get is a print statement declaring the code
isn't working yet.
At a high level, reversing a linked list can be accomplished like this:
for each element in the input list, from beginning to end:
remove element from input list
place element in the beginning of the output list
return output list
Variables within functions are local to the function so the new
value/variable "interface" is garbage collected when the function exits. I
would suggest that you spend the time to learn basic classes as it will
make keeping track of variables simpler.
from Tkinter import *
from functools import partial
class WLAN():
def __init__(self):
master = Tk()
master.title('Wireless Warrior')
self.interface = "initial value" ## instance object=available
through out the class
self.label_interface = StringVar()
self.label_interface.set(self.interface)
w = Label(master, textvariable=self.label_interface, fg="blue",
font=("Helvetica", 16))
w.grid(row=0, column=0, columnspan=3)
for but in range(1, 8):
w = Button(master,
Short answer: Python cannot edit environment variables in a way that
sticks. BUT, if all you want to do is run something in a temporarily
modified environment, you can do that with the subprocess module:
import os
from subprocess import Popen
myEnv = dict(os.environ)
myEnv['newKey'] = 'newVal'
shellCmd = Popen(['sh', 'someScript.sh'], env=myEnv)
(shellOut, shellErr) = shellCmd.communicate().
I recommend just declaring the functions outside of the
$(document).ready(function(){ .. });
The console doesn't recognize that the functions exist because you are
basically trying to make local functions.
function clearTheDisplayInitial() {
$("#Resume, #CodingExamples, #AboutMe").hide();
}
function clearTheDisplay() {
$("#Resume, #CodingExamples, #AboutMe, #mainMenu").fadeOut(900);
}
$(document).ready(function() {
$("#displayResume").click(function() {
clearTheDisplayInitial(); // you could just add $("#Resume,
#CodingExamples, #AboutMe").hide(); here
$("#Resume").fadeIn(900);
});
$("#CodingExamples1").click(function() {
clearTheDisplay(); // you could just add $("#Resume,
#CodingExamples, #AboutMe, #mainMenu").fadeOut(900); here
$("#A
I am not sure. You can return the variable and set it that way. To do this
print it.
(python program)
...
print foo
(bash)
set -- $(python test.py)
foo=$1
Looking through the source code for the subprocess module, it's because
using a list of arguments with shell=True will do the equivalent of...
/bin/sh -c 'echo' '$ATESTVARIABLE'
...when what you want is...
/bin/sh -c 'echo $ATESTVARIABLE'
The following works for me...
import os, subprocess
os.environ['ATESTVARIABLE'] = 'value'
value = subprocess.check_output('echo $ATESTVARIABLE', shell=True)
assert 'value' in value
Update
FWIW, the difference between the two is that the first form...
/bin/sh -c 'echo' '$ATESTVARIABLE'
...will just call the shell's built-in echo with no parameters, and set $0
to the literal string '$ATESTVARIABLE', for example...
$ /bin/sh -c 'echo $0'
/bin/sh
$ /bin/sh -c 'echo $0' '$ATESTVARIABLE'
$ATESTVARIABLE
...whereas the second form...
/bin/sh -
No, execfile does not search the PATH. It just takes a normal filename
(which can be relative or absolute) and opens it exactly the same as any
other file-handling function.
On top of that, you very rarely want to use execfile. In this particular
case, what you should probably be doing is running the script from the cmd
("DOS box") prompt, not the Python prompt.
If you really want to use the Python prompt as your "shell" in place of
cmd, you can do that, but you still want to be able to find programs via
the PATH, run them in a separate interpreter instance, etc. The way to do
that is with subprocess. For example:
>>> from subprocess import check_call # you only have to do this
once
>>> check_call(['train.py'])
That's a lot more typing than you need to do from cmd, o
Start -> Computer -> (Right Click) Properties -> Advanced System Settings
-> Environment Variables
You might need to restart your active PowerShell session for the new
environment variables to kick in.
To change it from within PowerShell try:
$env:PYTHONUSERBASE = "c:mysite"
In your specific case, you don't need to use StringVar. This should be
what you want:
import smtplib
from Tkinter import *
import tkMessageBox
def Composemail(sender,password,receivers,message):
try:
server = smtplib.SMTP()
server.connect('smtp.gmail.com',587)
server.ehlo()
server.starttls()
server.login(sender, password)
server.sendmail(sender, receivers, message)
tkMessageBox.showinfo("Sending Mail information","Mail sent.")
# Just a tip, "error" isn't defined yet so it will blow up.
except smtplib.SMTPException, error:
tkMessageBox.showinfo("Sending Mail information","Sending Mail
failed.Try again later.")
a=Tk()
a.title("MailsNow-A new place for sending emails")
a.geometry("1000x700")
b=Label(a,fg="Purple"
You can use tag to show HTML entities You need to encode all
Your HTML entities like < => < like way.
Also you can show a text area in which all those HTML code need to echo, it
will not execute your code simply it will print it.
"ImportError: DLL load failed: %1 is not a valid Win32 application." is
from Windows itself, and means that your PIL or Tkinter install doesn't
work on your Windows version.
One potential cause for this is that you're using a version built with VS
2012 on Windows XP; see:
The error message mentions threads. In the stack trace it looks like you
are altering the state of a variable. If that is true, and you're trying to
alter the state of a widget from a thread other than the one that created
the widget, that's the problem. You cannot call widget methods from any
thread except the one that created the widget.
The Tkinter Label widget has a text option to indicate the text that is
being displayed. If you want to change all the content that the widget
displays, then replace
self.text = tk.Text(frameLabel, ...)
# ...
new = self.queue.get()
self.text.delete(1.0, 'end')
self.text.insert('end', new)
With this:
self.label = tk.Label(frameLabel, ...)
# ...
new = self.queue.get()
self.label.config(text=new)
The only time I don't seem to get this error code is when I try to open a
.txt file. But I'm wanting to open .docx files also.
A docx file isn't just a text file; it's an Office Open XML file: a zipfile
containing an XML document and any other supporting files. Trying to read
it as a text file isn't going to work.
For example, the first 4 bytes of the file are going to be this:
b'PKx03x04`
You can't interpret that as UTF-8, ASCII, or anything else without getting
a bunch of garbage. You're certainly not going to find your words in this.
You can do some processing on your own—use zipfile to access the
document.xml inside the archive, then use an XML parser to get the text
nodes, and then rejoin them so you can split them on whitespace. For
example:
import itertools
import zip
You need RotateNO to be a part of your class. Also zoom doesn't need to
take a zoom argument because you've already initialized self.Flag_Zoom.
Also change root.destroy() to Exe.destroy(). Try this:
# ---------- Imports ------------------------------------- #
from Tkinter import *
import matplotlib
import numpy as np
# ---------- Settings ------------------------------------ #
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg,
NavigationToolbar2TkAgg
# ---------- Classes ------------------------------------ #
class App(Frame):
def __init__(self,master=None):
Frame.__init__(self, master)
self._job = None
self.canvas = None
self.Flag_Zoom = False
self.pack()
self._GU
See:
I think you were going for something like this:
$(document).ready(function () {
$("#btn1").click(function () {
$v = $("#txt").val();
for (i = 0; i < $v; i++) {
var box = $('<input type="checkbox" class="chkbx"
name="chkbx" value="Option' + i + '">Option ' + i + '<br/>');
$("#display").append(box);
}
});
$(document).on("change", ".chkbx", function () {
$("#show").val("");
var selected = [];
$(".chkbx:checked").each(function () {
selected.push($(this).val());
});
$("#show").val(selected.join(", "));
});
});
Sounds like you want to "watch" the value of style.display on that div.
That's not impossible, but unpractical and unstable. The easiest solution
is to add a new change event handler on the dropdown itself. For that, you
can use .live (for that jQuery version), .delegate, or .change (as long as
you do it when the DOM is loaded). And make sure to register that event
after the original event handler for the dropdown is added.
To deal with the timeout, set a timer from your handler, and make sure it's
longer than the maximum time from the other timer.
For example:
$(function(){
$('#dropdown').change(function() {
setTimeout(function() {
console.log('checking display property');
if($('#myitem').css('display') == 'none') {
console.log('the
The text in the textarea isn't html. It is just text, containing regular
line break "
". To display them, you either need to enclose the text in a pre tag, or
replace the "
" with <br>.
I would do the latter, since pre doesn't break at all if there's no break
in the text, so you'll have a single long line and a scrollbar.
iPad doesn't work like that. I believe the reason is that the 2x button
would not scale properly for the 4 inch screen.
Either way, you need to make sure your app works well for the 3.5 inch
screen anyway, this may be a good time to rethink some of the design of
your app to make sure it works well in a 3.5 inch screen.
They have a parameter for that.
$max_items_per_feed = 1;
$feed->set_item_limit($max_items_per_feed);
This sets how many items to pull from each feed. You can set it to 5, 27,
or 1 in your case. You put this code above your $feed->init(); function
call.
Stick vertical-align:top; into #selectedView like this:
At the moment, Firefox is placing your text along the bottom of the table
cell.
I somehow managed to solve the problem. I don't know the exact cause of the
problem but after i uninstalled my pythonwin , it seems to work fine
without giving me anymore error messages. Thanks A.Rodas for your link for
pointing out about pythonwin.
Go to your manifest file see your package name by use this package name you
must create Google Maps API V2 key and use
Ex:
<manifest xmlns:android=""
package="com.venky.loadgooglemapsdemo"
android:versionCode="1"
android:
My package name is "com.venky.loadgooglemapsdemo"
If you want Tutorial and Demo please see this link
@media rules themselves don't have any specificity and because you're
adding CSS rules via inline styles, they will always override any external
styles that aren't using an !important rule. So in this case, your only
option is:
@media print {
.price_match_no_print {
display:none !important;
}
}
Maybe try the N-Up datawindow style otherwise you could flatten the data in
SQL using one of the examples in this stackoverflow question on flattening
data the method you would use might depend on if you know how many occurs
there are. If you need the data to be update-able then you'll need to write
some code to un-flatten on save.
function scene:createScene(event)
local group=self.view
local shieldDisplay = shieldDisplay.new()
group:insert(shieldDisplay)
end
Try changing it to
function scene:createScene(event)
local group=self.view
local shieldDisplay = shieldDisplay.new
group:insert(shieldDisplay)
end
Do you use, jquery ? Did you linked to your jquery file?
If not, you should write:
document.getElementById('art1').style.display = 'none';
or maybe try
document.getElementById("art1").style.display = 'none';
Subclassing UIImageView is not necessary. Just create a container UIView
and add 2 UIImageView subviews.
Edit - this should work for your implementation:
- (void)viewDidLoad {
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a
nib.
UIImage *backgroundImage = [UIImage imageNamed:@"Default-568h"];
CGRect containerRect = CGRectZero;
containerRect.size = [backgroundImage size];
UIView *containerView = [[UIView alloc] initWithFrame:containerRect];
[[self view] addSubview:containerView];
float scale=.1;
UIImageView *backgroundView = [[UIImageView alloc]
initWithImage:backgroundImage];
[containerView addSubview:backgroundView];
[backgroundView sizeToFit];
UIImage *noteImage = [UIImage imageNamed:@"note"]
DrawText only knows how to display character strings. To display anything
else, you need to convert to a character string first, then display that.
void show_int(int x, /* ... */) {
std::stringstream buffer;
buffer << x;
DrawText(hdcWindow, buffer.str().c_str(), -1, &rc, DT_SINGLELINE);
}
Globally solution during all the session
options(digits=16)
> x
[1] 1.00042589212565
or locally just for x:
sprintf("%.16f", x)
[1] "1.0004258921256499" | http://www.w3hello.com/questions/How-to-fix-tkinter-TclError-no-display-name-and-no-DISPLAY-environment-variable-error-on-Raspberry-Pi-python-code- | CC-MAIN-2018-17 | refinedweb | 2,790 | 58.58 |
Have you ever had a professor say something along the lines of, “If I had the time, I’d just sit down and talk to each of you 1-on-1 to measure what you learned in this class. But I have to give you all a final exam instead”?
I heard a few of my professors express that sentiment. On one hand, it’s noble — test anxiety is a very real thing. But as I’ve gained more experience over the years as a teacher, I’ve come to view that perspective as a bit misguided.
I dislike it for the same reason I dislike most job interviews. Humans, despite their best intentions, will always have unconscious biases, and it’s very difficult to standardize and structure an interview such that each candidate has the exact same experience.
This is all a long-winded way of saying: I really like tests. They’re not a perfect form of assessment either, but with a little prep, they can be a valid and reliable way to measure knowledge, skills, and abilities.
In this tutorial, I’m going to share the methods I use to measure my students’ knowledge. Even if you’re not an educator, I think you might be able to find something useful to apply to your own line of work.
The assumption
There’s one major assumption I need to acknowledge before we jump into my methodology:
I assume that my tests are all measuring one single underlying construct: knowledge of course material.
The opposite of this would be a history professor who believes that her midterm measures two distinct factors (for instance):
- Knowledge of the American Revolution
- Knowledge of the Civil War
The professor in this example is assuming that knowledge of one period of history does not correlate with knowledge of another period.
In my experience, this just isn’t true. Whenever I perform a factor analysis on student responses, there’s clearly only one factor being measured.
Furthermore, we know that IQ is a single factor, and I recall reading that SAT math and verbal scores are moderately-to-highly correlated. (I can’t find the exact r anywhere, except for a few unsourced claims. Let me know if you find anything.)
So, we’ll continue on with the assumption that all test items are closely related.
Logistics
Now, a bit about my philosophy and the logistics of testing.
I write multiple-choice tests in Google Forms and download the results as a CSV. Therefore, all my tests are untimed and open-note. I get such a variance of scores in spite of this that I’m no longer concerned that the tests might be too easy. I still have to curve the scores at the end; the curve is just sometimes less than it would be if it were an in-class, closed-note test.
A big benefit of this format is that students don’t feel rushed, and it seems to significantly reduce testing anxiety. When my schedule permits, I lower the stakes even more by making the tests more frequent and worth fewer points. Formative assessment is a great tool.
Methodology
Okay, let’s talk about the statistics and Python! First let’s load the csv and remove the columns we don’t need:
data = pd.read_csv('quiz_responses.csv') to_drop = ['Unnamed', 'Timestamp', 'Email Address']: for d in to_drop: data = data[[i for i in data.columns if d not in i]]
The list comprehension is technically slower, but I prefer it because this code works even when the
to_drop columns don’t exist.
Each column represents a test question, so let’s shorten the column names and then set the index equal to the student’s name.
data.columns = [i[:60].strip() + '...' if len(i) > 60 else i for i in data.columns] # Find the name column and set it as the index name_col = [i for i in data.columns if 'last name' in i.lower()][0] data.index = data[name_col] del data[name_col]
Things get a little weird here. The next thing I do is transpose the dataframe with
.T and identify the column that’s my answer key. After that, we can grade by comparing each student’s column to the answer key column.
# Transpose and get answer key data = data.T key = [i for i in data.columns if 'answer' in i.lower()][0] # Create new df called `results` and grade each test results = pd.DataFrame(index=data.index) for i in data.drop(key, axis=1).columns: results[i] = np.where(data[i] == data[key], 1, 0) results = results.T totals = results.sum(axis=1)
This code will turn each student’s test into 1’s and 0’s. They get a 1 when they get an answer right, and a 0 when they get it wrong.
Then we transpose the results dataframe again so that the questions are columns again, and then we calculate each student’s total number correct with
totals = results.sum(axis=1).
At this point we’ve essentially graded all the tests. We can now look at the raw scores before we start improving the test itself.
curve = 0 total_pts = 0 for i in results.columns: total_pts += results[i].max() # [1] # Convert to percentages scores_array = totals/total_pts # Calculate stats print('Mean: ', scores_array.mean()) print('Median:', scores_array.median()) print('SD: ', round(scores_array.std(ddof=0), 3)) print() # Print student scores (totals.sort_index()/(total_pts)) + curve
[1]: By calculating total points this way, I can add additional code to weight some questions to be worth more than others. I won’t get too deep in that process here, but all you have to do is multiply a question column by a scalar.
The code you’ve seen so far makes it really easy to just drop the .csv into my grading directory and run the Jupyter notebook. I could take it a step further and refactor the code into a class, but that really hasn’t been necessary thus far.
Once we run the code, we get a nice printout like this:
Mean: 0.671304347826087 Median: 0.7 SD: 0.179 # (Names are fake) Adam Gates 0.92 Alexa Hurst 0.44 Amber Moon 0.52 Amy Wilkinson 0.48 Andrew Evans 0.68 Angela Bowen 0.82 Becky Williams 0.80 Bryan Young 0.78 Christina Fitzpatrick 0.18 Christopher Hunter 0.80 Corey Gibson 0.72 Eric Scott 0.72 Madeline Todd 0.80 Marc Martin 0.70 Mary Hernandez 0.70 Michael Kirk 0.62 Roberto Schwartz 0.80 Nathan Lewis 0.32 Robert Harrell 0.66 Ryan Hill 0.88 Scott Mcintosh 0.64 Terry White 0.86 Thomas Martinez 0.60 dtype: float64
…and that’s my grading process. Now let’s cover how I optimize my test for fairness.
Test item selection
I determine if a question is “fair” by examining the correlation between getting the question correct and a student’s overall score on the exam. This works because of our assumption that the test is only measuring one construct/factor.
There are more advanced statistical methods I could use here. Factor analysis is very appropriate here (as I mentioned earlier), and Cronbach alpha would be a good measure of reliability. With that said, correlations alone still give me great results and is easiest to explain to students.
“Fair” isn’t necessarily the best word to use. The real goal is to see if the question distinguishes between students who know the material, and students who don’t. But, if you string enough of those questions together, and critically evaluate your test for cultural bias, then you can make a strong argument that your test is, in fact, “fair.”
So, how can these correlations inform our decisions?
- A question that everyone gets right isn’t a good discriminator of knowledge. The correlation between that question and a student’s overall score would be zero.
- The same is true for a question everyone gets wrong.
- A question that high performers get right and others get wrong is likely a good question.
- A question that low performers get right and everyone else gets wrong probably a bad question. (Actually, it’s more commonly a sign I made a mistake on my answer key!)
Here’s the code for calculating correlations:
from scipy.stats import pearsonr correlations = [] for i in results.columns: r = pearsonr(results[i], totals) corr = r[0] pval = r[1] corr = round(corr, 3) correlations.append((i, pval, corr)) correlations = pd.DataFrame(correlations, columns=['question', 'pvalue', 'r']) correlations['absol'] = abs(correlations['r'])
Next, we can single out the questions that have low correlations with overall grade:
# Filters (subjective, but this is a good starting point) low_correlation = correlations['r'] < .30 significant = correlations['pvalue'] < 0.1 # Print bad questions (Sorry, WordPress doesn't like my ampersand) bad_questions = correlations[low_correlation & significant]['question'].tolist() if bad_questions: print('Bad questions:') for i in bad_questions: print(i) correlations = correlations.dropna() correlations.set_index('question', inplace=True)
And then we can use seaborn to make a graphical representation of it:
plt.figure(figsize=(3, len(correlations)//4)) sns.heatmap(correlations[['r']].sort_values('r', ascending=False), annot=True, cbar=False);
Success! The bright numbers with high scores appear to be good questions, while the darker numbers are questions I should consider tossing out. (Sometimes, however, this is a clue that I left students with misconceptions — which means that I should reconsider how I teach something.)
Restructuring the test
Now I know what questions were good measures of knowledge of which questions were bad ones.
Lately, I’ve taken to warning students that several questions will definitely be tossed out come grading time. This tempers their expectations and lets them know that their grade won’t simply be what percentage of questions they got right.
It’s like the SAT, I explain to them: new questions need to be tested to see if they deserve a permanent place in my test bank. In a way, I trade transparency for fairness, but I haven’t gotten any complaints yet!
Now, we can put together all the code we’ve seen to automate the question selection process and grade the tests:
# Grade tests results = pd.DataFrame(index=data.index) for i in data.drop(key, axis=1).columns: results[i] = np.where(data[i] == data[key], 1, 0) results = results.T totals = results.sum(axis=1) results_copy = results.copy() # Remove questions where everyone answered the same # (Sorry, this filter is obnoxious but it works!) everyone_answered_same = results_copy.corrwith( totals).sort_values()[ results_copy.corrwith(totals).sort_values().isnull()] for i in everyone_answered_same.index: del results_copy[i] # Sequentially remove questions that don't correlate with overall score worst = 0 threshold = .30 while worst < threshold: worst = results_copy.corrwith(totals).sort_values()[0] question = results_copy.corrwith(totals).sort_values().index[0] del results_copy[question] totals = results_copy.sum(axis=1) print(question)
Now we grade using the same code we saw earlier:
curve = 0 total_pts = 0 for i in results_copy.columns: total_pts += results_copy[i].max() grades = round(totals2/total_pts, 2).sort_values() print('Mean: ', grades.mean()) print('Median:', grades.median()) print('SD: ', grades.std(ddof=0)) grades.sort_values() + curve
At this point, I decide what my grading curve will be. A dirty little secret of academia is that professors can use just about any curve they want.
My general rule is that I want the median score to be 78% for lower division classes, 74% for upper division classes, and 70% for my stats class (which is supposed to be hard, IMO).
I rarely see medians above those targets, but if I did, I wouldn’t curve downward.
Now that the process is complete, we have test grades that fairly and accurately measure student knowledge!
I could take it a step further and adjust the standard deviation as well, but then my grading system would lose even more transparency, and I’m not willing to make that concession. College students understand:
“I threw out the 5 questions that were the worst measures of your knowledge.”
…but they’ll struggle with:
“And then I curved the test so the average grade was 78%, and then compressed the standard deviation to be 8 points.”
Well… my stats students should understand that, but it’s still an explanation that’s a bit too long.
I might fall just short of a standardized test that’s beyond reproach, but that’s asking too much of a college classroom test. Too many other factors vary from class to class (namely, my lectures), so I don’t think it’s worth spending additional effort on a perfectly uniform grading system. But this still goes a long way in showing me what my students know. | https://vincefavilla.com/category/tutorials/ | CC-MAIN-2020-29 | refinedweb | 2,116 | 67.55 |
Hi everyone, in the previous article, we discussed lifecycle methods in React Native. With that knowledge, you already have the basic skills to develop a simple app. In this article, we are going to discuss how to fetch data from an API in React Native.
In previous articles, you learnt the important concepts of React Native through examples. This time, you are going to learn how to fetch data from an API while developing an app.
Creating the App
So, let's create a new app and name it 'Movies'. Run the "react-native init Movies" command to create the app. Then, open the android folder (inside the project folder) in Android Studio and wait until all the processes finish.
Now open the project in VSCode. I hope you remember all these things which you learnt before.
Now we start coding.
Customizing project file structure
As we have discussed before, it’s better to have a nice file structure. Therefore, follow the instructions given below.
- Create a folder named ‘src’ inside project folder.
- Cut and paste App.js file into src folder.
- Now, create another folder called ‘Components’ inside src folder.
- Update the path to App.js in index.js file.
- Now, run the app.
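For step 4, the relevant change in index.js is the import path of App. The exact contents of index.js vary by React Native version, so treat this as a rough sketch rather than the exact file:

```javascript
// index.js — only the App import path changes after the restructure.
// (Sketch; your generated index.js may look slightly different.)
import { AppRegistry } from 'react-native';
import App from './src/App'; // was './App' before moving the file
import { name as appName } from './app.json';

AppRegistry.registerComponent(appName, () => App);
```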
We have done these in previous articles. When you run the app, you should get the default react-native app UI.
Now, erase all the code in App.js and start to write your own.
Creating a Header for the App
We simply created a header before, but that code wasn't reusable. Normally, mobile apps have more than one user interface, and we have to navigate among them. If we don't make our header reusable, we will have to copy and paste the header code into all the JS files. That's not practical, and that's not what a good developer does. Therefore, we have to create a separate JS file for the header and import it into App.js.
Therefore, create a JS file inside the Components folder and name it Header.js. I don't expect to use state or lifecycle methods inside Header.js, so I don't need a class-based component. Instead, I am creating a functional component inside Header.js. Type the following code in Header.js.
import React from 'react';
import { Text, View } from 'react-native';

const Header = () => {
  return (
    <View style={styles.viewStyle}>
      <Text style={styles.textStyle}>Movies!</Text>
    </View>
  );
};

const styles = {
  viewStyle: {
    backgroundColor: '#04A5FA',
    justifyContent: 'center',
    alignItems: 'center',
    height: 60,
    paddingTop: 15,
    shadowColor: '#000',
    shadowOffset: { width: 0, height: 2 },
    shadowOpacity: 0.2,
    elevation: 2,
    position: 'relative'
  },
  textStyle: {
    fontSize: 20,
    fontWeight: 'bold'
  }
};

export default Header;
Here, we have used only the Text and View tags, so I have imported only those. I named the functional component 'Header' and exported it. The text in the header reads 'Movies!'. I have added styles as well.
Now, import it in App.js as given below.
import React, { Component } from 'react';
import { View } from 'react-native';
import Header from './Components/Header';

class App extends Component {
  render() {
    return (
      <View>
        <Header />
      </View>
    );
  }
}

export default App;
As you can see, I have only imported the View tag from the react-native library, and I have imported the Header component from the Header.js file. When we import a JS file in React Native, we don't need to add the '.js' extension at the end; React Native identifies it as a JavaScript file automatically. That's why we import it like this.
import Header from './Components/Header';
And we have added Header component as a JSX element inside View tag. Run the app now and you should get the following UI.
Workflow of developing the app
Now here is what we are going to do. We are going to fetch some data from an API I created and display it nicely below the header.
I am going to create a separate JS file for the data-fetching section. Therefore, create another JS file inside the Components folder and name it 'DataList.js'.
Now let’s code in DataList.js.
I am going to need React Native State and Lifecycle methods in this file. So, I am creating a class based component.
import React, { Component } from 'react';
import { View } from 'react-native';

class DataList extends Component {
  render() {
    return (
      <View>
      </View>
    );
  }
}

export default DataList;
Fetch data from API in React Native
To fetch data from an API in React Native, we have several options, such as the axios library and the built-in fetch function. This time we are going to use the fetch function. The format of the fetch function is like this.
fetch('URL of the API', {
  method: 'POST',
  headers: {
    'Accept': 'application/json',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    variable: value
  })
})
  .then((response) => response.json())
  .then((responseJson) => {
    // Some code runs here if fetching is successful
  })
  .catch((error) => {
    // Some code runs here if fetching fails
  });
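To make the body section concrete, here is what JSON.stringify produces for a hypothetical payload (the field names below are my own examples, not from any real API):

```javascript
// Hypothetical payload — just to show what JSON.stringify puts in the body.
const payload = { title: 'Inception', year: 2010 };
const body = JSON.stringify(payload);
// body is now the plain string '{"title":"Inception","year":2010}',
// which the server parses back into JSON.
```

The 'Content-Type': 'application/json' header is what tells the server to interpret that string as JSON.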
Fetch function helps us to send data to the API as well as get data from API. In the body section of above code, it converts data into json format and send it the URL of the API given inside parentheses. In this app we are creating now, we are not going to send data. We are just going to get data. API is already created and it has data in it. This is the URL for it. Since we are not going to send data, following code of fetch function is enough.
fetch(URL of the API) .then((response) => response.json()) .then((responseJson) => { // Some Code if fetching is successful }).catch((error) => { // Some Code if fetching is failed });
This is how we Fetch data from API in React Native. We have to give the URL of the API inside parentheses and get the output into ‘responseJson’ variable. If data fetching is failed, we have an option called ‘catch’ for error handling.
Creating a helping method
To add this fetch method to the class based component, I need the help of a method called helping method. Helping method is something you can create only inside a class based component to help us add an reusable code inside a class based component. Whenever we need that code, we just have to call the helping method. Here is how we write helping method.
helpingMethod(){ ... }
You can give your helping method any name. I am going to name it ‘fetchMovies’. So, let’s add our fetch method inside helping method and fetch data as given below.
fetchMovies(){ fetch('') .then((response) => response.json()) .then((responseJson) => { console.log(responseJson); }).catch((error) => { console.log("Data fetching failed"); }); }
Calling the helping method
Here, we fetch the data and show them in console using “console.log” code. But we can’t get an output from this yet because we haven’t called the helping method. So, create the lifecycle method ‘componentWillMount()’ and call the helping method inside it as given below.
componentWillMount(){ this.fetchMovies(); }
import React, {Component} from 'react'; import { View, Text } from 'react-native'; class DataList extends Component { fetchMovies(){ fetch('') .then((response) => response.json()) .then((responseJson) => { console.log(responseJson); }).catch((error) => { console.log("Data fetching failed"); }); } componentWillMount(){ this.fetchMovies(); } render() { return ( <View> <Text>Some text</Text> </View> ); } } export default DataList;
This code shows how to Fetch data from API in React Native. Here, I have added a text “Some text” to the code because I didn’t want to leave the View tag empty. Now, we have to import the DataList.js file into App.js file.
import React, {Component} from 'react'; import { View } from 'react-native'; import Header from './Components/Header'; import DataList from './Components/DataList'; class App extends Component { render() { return ( <View> <Header/> <DataList/> </View> ); } } export default App;
As you can see, I imported the DataList.js file into App.js file. Now reload the app. To realod the app in emulator you have to double tap ‘R’. If you are using a test mobile device, you can shake the device and you will get a window asking some options. Press ‘Reload’ and app will be refreshed.
I know what you are thinking. Although we fetched data, those data hasn’t been appeared in the UI. We can only see “Some text” we typed. That’s because we didn’t do anything to show those data in the UI. What we did was fetching data and showing them in the console. Let’s open the console and check whether data has been fetched correctly.
Opening console of a mobile app
To open the console of the mobile app in emulator, press Ctrl+M and you will get a window. Press “Debug js remotely”. In test mobile devices, shake your device and you will get the same window. Then, press “Debug js remotely”.
You will see one of your browser tabs open automatically as given below.
Right click on the body of the tab and go to ‘Inspect’. You will see something almost similar to following image.
There, go to Console.
You will see something like given below.
Click on the arrow head as given below.
Now you will see the data we fetched.
As you can see, we have successfully fetched the data from API and shown in console.
Now, you have obtained the knowledge to Fetch data from API in React Native. Now we just have to show these data in the app. We are going to discuss that part in the next article.
Thank you. | http://coderaweso.me/fetch-data-from-api-react-native/ | CC-MAIN-2020-16 | refinedweb | 1,559 | 68.16 |
This is an instructable on how to read a gyro sensor and plot the data using processing software at your desktop. I am using gyroscope model XV81-000 and an arduino. The device is a rough prototype of what will eventually become a self balance robot, this is the first part of the hole thing (read accelerometer and control a motor to self balance).
Step 1: What you need?
- Breadboard
- Microcontroller, I used the Arduino board
- Wire
- Jumper Wires
- Gyroscope XV-8100
Step 2: Building
1� - plug the gyro at the breadboard
2� - plug a small ceramic capacitor to bypass the DC noise between the ground pin and the vout signal. (optional)
3� - add another capacitor to reduce even more the noise between the ground pin and the vcc pin. (optional)
4� - wire ever thing:
- Vo pin from gyro connected to analog port0 at arduino (Blue wire)
- G pin from gyroconnected to ground (White wire)
- V+ pin from gyro connected to Vdd(3.3V) (Orange wire)
o Codigo do arduino_gyro.pde
é o codigo do processing e não do arduino
Vc poderia corigir.
Agradeço pelo tutorial.
Na busca por informações sobre Giroscópio, cheguei no seu tutorial.
Bem, possuo os seguintes componentes:
- Giroscopio0 XV-3500CB PROTOTYPE PCB
- Arduino Duemilanove
- Servo Motor
Estou tentando fazer o mesmo desse link que segue:
Depois de testes verifiquei que este meu sensor possui um sinal muito baixo na porta analógica
Denominada de ANA0 e verifiquei que você usou dois capacitores para amplificar o sinal.
Meu sensor possui já uma saída de sinal digital compatível com l2c pois assim o sinal fica melhor
Pergunto que capacitores você usou em seu projeto e conectado aonde?
O melhor seria usar o I2C para nao ter problemas de interferencia.
To verify A2D (arduino Analog to Digital) converter I connected analog input 0 to ground. With this configuration the A/D converter is giving me strange values ranging (randomly) from a minimum of 0 to a maximum of 64.
Is that possible? Do you know something about poor A/D conversion with arduino.
What kind of dynamic shows your realization?
Thanks for any comment.
Are you using the same code without modifications?
I fixed the A/D conversion problem on arduiono using a resistive partition on AREF pin on arduino board. AREF is now connected to GND with a 10microF, and with a 10K resistor to Vcc (5V). This give me a reference voltage for A/D of 3.77 V.
Yes, I'm using your sample code.
But I'm still having problem with angular value computation.
The angular value is slowly drifting apart when gyro is kept steady
(no force applied).
But I have just found out that in your code you have this constraint:
if(teta>-1 && teta<1) teta=0; //avoid drift error
teta = teta + ( valor * time ) / 1000;
to avoid drifting.
I would like to change it to:
deltaTeta = ( valor * time ) / 1000; // Angle infinitesimal increment
if(deltaTeta>-1 && deltaTeta<1) {
deltaTeta=0; //avoid drift error
}
teta = teta + deltaTeta;
Probably this would avoid spurious variation is steady state.
But I haven't yet tried this solution.
I would let you know as soon I get tested with new code.
Thanks.
Make a simple code to return the value read from your analog port with your gyro steady. The value show often is yout offset. Just replace this value on the code.
This can help improve the steady problem.
This is what I'm doing. I'm reading teta, the A/D output (from 0 to 1023) which is the gyro output voltage read with the arduino A/D converter.
The gyro output from the A/D has a range from 328 to 330 [count] when gyro is in a steady state ( I do not know if this variability is normal in a gyro or not). So I did use value 329 as offset inside the equation used to compute teta.
This produce, anyway, a drifting value for teta which keeps increasing or decreasing (depending on the value i choose for offset).
Also tuning the offset value using 0.1 resolution (let's suppose 329.4 instead of 329.0) can improve performance in the sense that I reduce the speed with which teta is increasing (decreasing) when gyro is in steady state but i?m not able to stop it.
I do not want to bother you too much with this problem.
Probably I would try with a different gyro with a I2C interface with integrated A/D conversion.
Thanks for you help.
Paolo.
The problem was related to the fact that I was querying the arduino's A/D converter to fast, a was not giving enough delay in between two consecutive readAnalog() calls.
Introducing a delay of at least 50 ms definitively solved my problem.
By the way I did find a solution to increase speed in A/D conversion on arduino.
Here are few lines of "code":
#ifndef cbi
#define cbi(sfr, bit) (_SFR_BYTE(sfr) &= ~_BV(bit))
#endif
#ifndef sbi
#define sbi(sfr, bit) (_SFR_BYTE(sfr) |= _BV(bit))
#endif
#define FASTDAC 1
#if FASTDAC
#define DELAY 10 // delay in ms
#else
#define DELAY 50
#endif
so you can use less than 10 ms as delay time in between A/D calls.
I tested this with 5 ms delay and A/D conversion worked grate.
Thanks.
float sens= 0.512;
float offset= 316 ;
float count= 0;
float valor =0;
float first_time, time;
float teta=0;
//####################################################################//
// //
// Gyro //
// Scale Factor 2.5mV/ �/s = 0.512 counts/ �/s //
// Offset 316 counts = 1562mV = 1.562V //
// ADC 4.88 mV/count // 0.2048 count/mV //
// //
//####################################################################//
count =0;
for(int i=0;i<20;i++){
count = count + arduino.analogRead(0);
}
count = count /20;
valor = (count - offset ) / sens;
time=millis()-first_time;
first_time=millis();
teta=teta+valor*time/1000;
if(teta>-1 && teta<1) teta=0; //avoid drift error
cya
Angle=HPF*(teta) + LPF*(arctg( Ay / sqrt( Ax^2 + Az^2 ))
(from your other instructable)
Sorry my bad english and worst explanation.
From the ADC 5V / 1024 = 4.8mV / count(raw value)
( 2.5mV / (º/s) ) / (4.8mV / count) = 0.512 count(raw value) / (º/s)
so, if you read from the adc channel a value like 2, this means 3.9 º/s
2 / 0.512 = 3.9
3.3mV / (º/s) ) / (4.8mV / count) = 0.512 count(raw value) / (º/s)
and if you would wire up 3.3v to Aref pin it would be
From the ADC 3.3V / 1024 = ...
3.33mV / (º/s) / (3.222mV / count) = 1.0333 count (raw value) / (º/s)
316
You will have something like 1.23V. Or 381 counts.
This is your offset.
So, if you extract the value you are reading from the offset you will have ZERO.
Thanks this realy helped a lot because there isnt much about gyros or 6DOF on arduino.cc
im gonna use this for a quadrocoper with a razor 6dof from Sparkfun
Can you provide the manufacturer of the breakout board or the retailer from who you got it? I have only been able to find the Gyro in SMD w/o breakout board. Thanks
i bought that board on ebay, but I didn't found the seller anymore.
cya
Right now I have it set so if I type "T" into my serial monitor, it sends the command out digital pin 9, through my relay, and "presses" a button on my TV controller, thus turning it on.
Can I do something so that pressing a button in my Processing sketch would be like typing "T"?
color fillVal = color(126);
import processing.serial.*;
void draw() {
fill(fillVal);
rect(25, 25, 50, 50);
}
void keyReleased() {
Serial myPort;
myPort = new Serial(this, Serial.list()[0], 9600);
if (keyCode == 'T') {
println("T key pressed");
myPort.write("T");
}
}
Thank you very much.
I'm pretty new to Processing, so I'm not too sure of all it's capabilities.
1)take a cork and insert in the center a long steel stick
2)put your stick in vibration vertical and hold the cork under your palm
3)turn the cork the axe of the stick stay vertical and you feel a force under your palm
in this type of chips they use a double (A,A')=x=(B,B') lyre the extremity (A and A')is put in vibration by a solenoid the other extremity (B and B') will vibrate then generate a current in a receptive solenoid the difference between the current generate by B and B' with a constant vibration A and A' will mesure the anti couple
Would you please explain it mathematically? For I see here:
if(teta>-1 && teta<1) teta=0; //avoid drift error
That the absolute value of teta is less than 1, then teta could not be in degree...right? next I don't see any conversion to degree, and then it is added to 270 and used as a degree. The program works with my IMU and it means that it is correct. I anyway can't understand the math behind this angle offset! do you see where my problem in understanding is? ;)
if(teta>-1 && teta<1) teta=0; //avoid drift error <- when you integrate the signal from the IMU if the IMU doesn't stay still, an error will be summed to the angle, so, this line try to prevent this.
the line() function on processing works in rad, so we have to make a conversion, that's why I use the function radians(270+teta); i really don't remember why i choose this value of 270, but i know it's work fine for me, xD
if you need some other explanation, feel free to ask. | http://www.instructables.com/id/ArduinoGyroscopeProcessing/ | CC-MAIN-2014-42 | refinedweb | 1,618 | 72.76 |
Import class using mathematica
I have a class a in a file a.sage using the interface to mathematica. I want to use this class in another class b. One way I found was to run first "mv a.sage a.py", which generates a python file. But then I get the error: "NameError: global name 'mathematica' is not defined" because you cannot use mathematica in python as you can use it with sage.
Is there a way importing a self-written class (a.sage) using mathematica in another self-written class (b.sage)?
This would be a problem with anything similar, not just that particular interface. You may want to try
from sage.all import *at the top of your
.pyfile ... | https://ask.sagemath.org/question/37229/import-class-using-mathematica/ | CC-MAIN-2017-34 | refinedweb | 122 | 69.38 |
If a model
Parent
AChild
BChild
@parent.a_childs.count = 1
@parent.b_childs.count = 2
@parent.count_all_children = 3
Assuming that the
Parent model
has_many associates
AChild and
BChild, this is a solution for any number of child models. Drop the following method into the Parent model file:
def count_all_children counts = [] Parent.reflect_on_all_associations(:has_many).each do |assoc| counts << self.public_send(assoc.name).count end counts.sum end
That uses reflection and may be relatively slow. If you only have those two child models and don't intend to add (m)any more, then just sum the two:
def count_all_children self.a_childs.count + self.b_childs.count end | https://codedump.io/share/cuq8QuYcpmW5/1/count-all-child-objects-across-class | CC-MAIN-2018-05 | refinedweb | 104 | 53.47 |
Chess piece on view
I am creating a chess board with normal views, I am trying to put a chess piece on a single view(all the views have the same custom view class).
No ok, I've found my copy error, Now I will check
This works, but you need to define an ImageView as subview of your custom views
It should be better if your boxes were buttons instead of ui.views and fill their background_image
def __init__(self): self.touch_enabled = True iv = ui.ImageView() iv.frame = self.bounds iv.image = ui.Image.named('emj:Angry') #iv.load_from_url('') self.add_subview(iv) def did_load(self): pass
It would be easier to create your 64 views by program, rather than by the ui designer.
Quick and dirty script, only to show, try it please
import ui def baction(sender): sender.superview.name = sender.name v = ui.View() v.background_color = 'white' y = 10 d = 40 for row in range(8): x = 10 for col in range(8): b = ui.Button() b.name = 'row '+str(1+row) + ' / ' + 'col '+str(1+col) b.frame = (x,y,d,d) b.background_image = ui.Image.named('emj:Airplane') b.action = baction v.add_subview(b) x = x + d + 10 y = y + d + 10 v.present()
Ok, I’m going to do some stuff about it and if I get a problem I will come back another day I guess.
Just to show how it could be easy
import ui def baction(sender): sender.superview.name = str(sender.row_col) v = ui.View() v.background_color = 'white' v.name = 'for @AZOM' pieces = ['♜♞♝♛♚♝♞♜','♖♘♗♕♔♗♘♖'] y = 10 d = 50 flip = 0 for row in range(1,9): x = 10 for col in range(1,9): b = ui.Button() b.font = ('<System>',d*0.8) b.tint_color = 'black' b.border_width = 1 b.row_col = (row,col) b.frame = (x,y,d,d) if row == 1: b.title = pieces[row-1][col-1] b.tint_color = 'black' elif row == 2: b.title = '♟️' elif row == 8: b.title = pieces[row-7][col-1] elif row == 7: b.title = '♙' b.background_color = ['beige','brown'][flip] b.action = baction v.add_subview(b) flip = 1 - flip x = x + d + 2 flip = 1 - flip y = y + d + 2 v.present()
You might consider scene instead of UI. Scene would let you animate the piece motions more naturally for example.
@JonB you right, of course, but I only wanted to show it is easier to create the buttons by program instead of the ui designer
And that you don't need images for the chess pieces, because there are ucode characters representing them
@JonB, thanks, it has been a while since I got to advertise this. :-) @AZOM, if you want to not translate everything to scene and still want nice animations, check out Scripter.
I was trying to run scripter-demo.py, but I got:
scripter/init.py", line 63
except Usage, err:
^
SyntaxError: invalid syntax
My fault - wrong installation procedure. | https://forum.omz-software.com/topic/5894/chess-piece-on-view/30 | CC-MAIN-2021-17 | refinedweb | 487 | 68.57 |
Haskell Quiz/IP to Country/Solution Dolio
From HaskellWiki. :)
This just processes the raw file downloaded from the website linked in the quiz, and processes it linearly. One could probably devise an optimized version of the database, or a more efficient searching scheme and gain performance, but the naive solution is still plenty fast.
{-# LANGUAGE PatternGuards #-} module Main(main) where import Data.Maybe import System.Environment import qualified Data.ByteString.Char8 as B import qualified Data.ByteString.Lazy.Char8 as L -- Process a file by line. For each line in the file denoted by -- the FilePath, the function is called. If the result of the -- computation is True, processing is cut off early. -- -- This uses lazy chunked reading, but operates on the chunks -- one by one for (hopefully) maximum speed. processFile :: FilePath -> (B.ByteString -> IO Bool) -> IO () processFile path op = proc . L.toChunks =<< L.readFile path where proc [] = return () proc [c] = proc' (B.lines c) >> return () proc (c:cc:cs) = do b <- proc' (B.lines c') if b then return () else proc cs' where (c', t) = B.breakEnd (=='\n') c cs' = B.append t cc : cs proc' [] = return False proc' (x:xs) = do b <- op x if b then return True else proc' xs -- Given an ip, represented as a 4-tuple, and a line expected to come -- from the ip database, determines whether the ip matches. If it does, -- the corresponding country is printed, and an exit is signaled. ipSearch :: (Int, Int, Int, Int) -> B.ByteString -> IO Bool ipSearch (a,b,c,d) s | Just (from, to, country) <- parse s, from <= ip, ip <= to = B.putStrLn country >> return True | otherwise = return False where ip = d + 256*c + 256*256*b + 256*256*256*a parse s = case B.split ',' s of [f,t,_,_,_,_,c] -> do (from,_) <- B.readInt (B.tail f) (to, _) <- B.readInt (B.tail t) return (from, to, B.tail (B.init c)) _ -> Nothing main = do (ips:_) <- getArgs processFile "IpToCountry.csv" (ipSearch $ ipParse ips) where ipParse = convert . B.split '.' . B.pack convert [a,b,c,d] = fromJust $ do (a',_) <- B.readInt a (b',_) <- B.readInt b (c',_) <- B.readInt c (d',_) <- B.readInt d return (a',b',c',d') | http://www.haskell.org/haskellwiki/index.php?title=Haskell_Quiz/IP_to_Country/Solution_Dolio&redirect=no | CC-MAIN-2014-15 | refinedweb | 374 | 78.55 |
Content-type: text/html
#include <rpcsvc/ypclnt.h>
int yp_update(char *domain, char *map, unsigned ypop, char *key, int keylen, char *data, int datalen);
yp_update() is used to make changes to the NIS database. The syntax is the same as that of yp_match() except for the extra parameter ypop which may take on one of four values. If it is POP_CHANGE then the data associated with the key will be changed to the new value. If the key is not found in the database, then yp_update() will return YPERR_KEY. If ypop has the value YPOP_INSERT then the key-value pair will be inserted into the database. The error YPERR_KEY is returned if the key already exists in the database. network is running secure RPC.
If the value of ypop is POP_CHANGE, yp_update() returns the error YPERR_KEY if the key is not found in the database.
If the value of ypop is POP_INSERT, yp_update() returns the error YPERR_KEY if the key already exists in the database.
See attributes(5) for descriptions of the following attributes:
secure_rpc(3NSL), ypclnt(3NSL), attributes(5)
This interface is unsafe in multithreaded applications. Unsafe interfaces should be called only from the main thread. | https://backdrift.org/man/SunOS-5.10/man3nsl/yp_update.3nsl.html | CC-MAIN-2017-39 | refinedweb | 197 | 64.1 |
These are the instructions for the program I wrote:
A student object should validate its own data. The client runs this method, called validateData(), with a student object, as follows:
String result = student.validateData();
if(result == null)
<use the student>
else
System.out.println(result);
If the student's data are valid, the method returns the value null; otherwise, the method returns a string representing an error message that describes the error in the data. The client can then examine this result and take the appropriate action.
A student's name is invalid if it is an empty string. A student's test score is invalid if it lies outside the range from 0 to 100. Thus, sample error message might be
"Sorry : name required"
and
"Sorry: must have 0 <= test score <=100"
This is my Student class:
public class Student { private static String name; private static int test1; private int test2; private int test3; public Student(){ name = ""; test1 = 0; test2 = 0; test3 = 0; } public void setName (String nm){ name = nm; } public static String getName(){ return name; } public void setScore (int i, int score){ if(i==1)test1 = score; else if (i == 2)test2 = score; else test3 = score; } public int getScore (int i){ if(i==1) return test1; else if(i==2)return test2; else return test3; } public int getAverage(){ int average; average = (int)Math.round((test1 + test2 + test3)/3.0); return average; } public int getHighScore(){ int highScore; highScore = test1; if (test2 > highScore) highScore = test2; if (test3 > highScore) highScore = test3; return highScore; } public String toString(){ String str; str = "Name : " + name + "\n" + "Test1: " + test1 + "\n" + "Test2: " + test2 + "\n" + "Test3: " + test3 + "\n" + "Average: " + getAverage(); return str; } public String validateData() { String message = ""; if(name == "") message = "sorry please enter a name"; if(test1<0 || test1>100) message += "Sorry - Test 1 is not in between 0 and 100 \n"; if(test2<0 || test2>100) message += "Sorry - Test 2 not in between 0 and 100\n"; if(test3<0 || test3>100) message += "Sorry - Test 3 not in between 0 and 100"; return message; } }
And this is my ValidateData class:
import java.util.Scanner; public class ValidateData { public static void main(String[] args) { Scanner reader = new Scanner(System.in); Student student = new Student(); String name; System.out.println("What is the students name? " ); name = reader.nextLine(); student.setName(name); System.out.println("Test #1 score: " ); student.setScore(1, reader.nextInt()); System.out.println("Test #2 score: " ); student.setScore(2, reader.nextInt()); System.out.println("Test #3 score: " ); student.setScore(3, reader.nextInt()); String result = student.validateData(); if(result==null) System.out.println("All is OK\n" ); else System.out.println(result+"\n" ); } }
The program works fine with the exception that it does not prompt the user if the "name" is missing in case the name wasn't entered. The program also does not display the average of the scores and the coding for that is inserted into the Student class. The program does, however prompt the user that the scores entered are not in range if they are >100.
Any help with the syntax of these codes would be greatly appreciated,
Thank you. | https://www.javaprogrammingforums.com/whats-wrong-my-code/11965-validation-based-program-doesnt-function-properly.html | CC-MAIN-2021-04 | refinedweb | 516 | 60.45 |
Created on 2006-07-05 15:31 by bdoctor, last changed 2007-03-13 20:30 by georg.brandl.
This is a patch to commands.py to enable callback
support, which is very useful for long-running
commands. Whenever there is stdout from the process the
callback is fed whatever it got. Example usage:
import commands
cmd = 'top -b -n2'
def fancy(out):
print 'GOT(%s)' % out.strip()
commands.cb = fancy
(s,o) = commands.getstatusoutput(cmd)
print 'OUTPUT (%s)' % o
Please consider adding this. The existing API is not
changed, however as you can see it is simple to use the
callback.
Your semantic for the cb name seems kind of arbitrary. Why is it called for each line of the output and only once with all the output. Plus, iterating through the output line for line can make it much slower than reading it all at once. Also, the next time you run getstatusoutput(), maybe even from another module, the callback will still be called. That could be unexpected. Plus commands is almost deprecated in favour of subprocess. I recommend rejecting this patch.
Rejecting this for a number of reasons:
Module-level globals to be set by the user of a module are bad
1) it is not obvious if it's not set directly before the getstatusoutput() call
2) it's completely confusing with threads
3) commands is quasi-deprecated in favor of subprocess | http://bugs.python.org/issue1517586 | crawl-002 | refinedweb | 236 | 66.23 |
Murdoch's AP Computer Science MOOC Goes Live 67
Posted by Unknown Lamer
from the training-tomorrow's-workers-for-today's-market dept.
theodp writes "Friday saw the launch of Rupert Murdoch's AP Computer Science MOOC. Taught by an AP CS high school teacher, the Java-centric course has students use the DrJava lightweight development environment for the exercises. 'If this MOOC works,' said Amplify CEO Joel Klein, 'we can think of ways to expand and support it.' Only the first week's videos are posted; course content is scheduled to be presented through March, with five weeks thereafter set aside for AP Exam prep. Might as well check it out, you may have helped pay for it — a MOOC-related Amplify job listing notes that 'This position may be funded, in whole or in part, through American Recovery & Reinvestment Act funds.'"
Lesson one: (Score:5, Funny)
public class HelloWorld {
public static void main(String[] args) {
System.out.println("Hello, World. Obama is a muslim.");
}
}
Re: (Score:1)
Well, given the close-minded leftist bent of most wackademics, competition on multiple fronts is a good thing.
It's so sad when people parrot opinions heard in the media as their own.
Was Java a good choice for the AP requirement? (Score:1)
Was there some aspect of Java that was seen as particularly useful pedagogically, or did somebody get seduced by the 'Java is the Enterprise Language Of The Future, don't you want students to be learning Relevant Job Skills?' line?
Re: (Score:1)
NO.
Python or javascript to start tinkering - the Khan academy stuff with processing.js is quite fun.
Then C when they're ready to start learning how a computer works under the hood.
Followed by scheme & SICP to understand some important principles.
And finally, perhaps java to understand all the OOP crap.
Re: (Score:1)
Javascript is a terrible choice. While many claim that it allows you to get something working quickly because there's nearly no compiler or interpreter type checking, what they really mean is that it allows them to get something not-working quickly, and then very slowly fix it as they discover the problems. When you're learning, it's much better to be told "no, that's wrong" immediately, than to just be told "yeh yeh, it's fine", and have things break in odd ways. Something that enforces fairly strong ru
Re: (Score:1)
Python is good in this regard then, as syntax and structures are fairly rigidly enforced.
Perl would also be a good choice, as well as Lua, but it's not nearly as clean-looking as Python (though just as functional), and I have always loved the Perl motto "There Is More Than One Way To Do It."
For its purpose, though, Java works just fine. It's high level enough that you aren't in the weeds dealing with memory locations or garbage collection, yet you can still uncover more advanced material like OOP and threading.
Re: (Score:1)
Python would be an even better choice if it weren't dynamically typed. Personally, I think a first learning language should be statically typed - because the concept isn't that hard to learn, and it's better to start with static typing before you move to dynamic typing. This is not to say that Python is a bad language, or even that it is a bad language for first learning; but I think the dynamic typing is the one problem with it as a teaching language.
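As a hypothetical illustration of that point (the class name and values here are invented, not from the thread), this is the kind of mistake a statically typed language rejects before the program ever runs, where a dynamically typed one would only fail later:

```java
// Illustrative example: static typing forces the string-to-int conversion to be explicit.
public class TypeDemo {
    public static void main(String[] args) {
        int score = 95;
        // int bonus = "15";                 // won't compile: incompatible types
        int bonus = Integer.parseInt("15");  // the compiler makes you convert explicitly
        System.out.println(score + bonus);   // prints 110
    }
}
```

The commented-out line is the learner's mistake; a Java compiler reports it immediately instead of letting it surface at run time.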
Re: (Score:3)
Was there some aspect of Java that was seen as particularly useful pedagogically, or did somebody get seduced by the 'Java is the Enterprise Language Of The Future, don't you want students to be learning Relevant Job Skills?' line?
Re:Was Java a good choice for the AP requirement? (Score:5, Interesting)
As best I can determine:
It was Pascal for many years, which had once been widely used as an introductory language. But by the late-'90s Pascal was starting to be seen as an obsolete choice, and the exam was switched to C++ in 1999, with the justification being that C++ was widely used and more practical than Pascal.
However this move was seen by many educators as producing significant teaching complexities, since the classes (partly exacerbated by what material the exam chose to test) ended up spending an inordinate amount of time on accidental complexity that obscured real issues for novice programmers, like how iostreams works. I took AP CS in 1999, and we spent weeks on iostreams, along with miscellaneous other C++-specific nonsense. Dissatisfaction was high enough that the exam fairly quickly abandoned C++, but wasn't willing to go back to Pascal, which was still seen as obsolete. So they moved to Java in 2003, with the justification that Java could exercise many of the same concepts as C++ (you could teach OO and whatnot), but with less up-front complexity for novices. And it's stuck there since.
Re: (Score:2)
This pretty much sums it up. The big downside to pascal was that it took more effort to run on multiple platforms. Java had that going.
Since then, FPC and Lazarus made things somewhat less complicated but still not easy, and Delphi is barely entering that market now, with Embarcadero recently releasing cross compilers and Firemonkey.
Re: (Score:2)
I should also mention that there are a LOT of compilers and other solutions available based on the (object) Pascal language today, such as [smartmobilestudio.com]
Good times for pascal.
Re: (Score:2)
I hear you, but Java made that even simpler (to grasp).
You only build (and debug!!!) something (with a GUI) once, and it works for everyone (ok, over 99% of users) that has the (proper) VM installed.
Re: (Score:1)
I can confirm, as the 2002 test was C++. The test required the test-takers to be educated on test-specific classes. I think we used some class called "apstring". Of course you needed to know the ins and outs of that class, but most of the test assumed it was part of the C++ core language. Then you try to do something on your own...
Re: (Score:2)
I took the AP CS course in 2003 in C++, and again in 2004 in Java. "Slack off in the computer lab and learn a new programming language", count me in! Later, this proved to be a wise choice.
Re: (Score:2)
Re:Was Java a good choice for the AP requirement? (Score:5, Informative)
Every part of the method declaration you quoted is important for a student to understand. And not just the student who is learning Java.
1. "public". This speaks to the difference between public, private and package visible methods. That is to say, information hiding, which is a key concept in object oriented design.
2. "static". This speaks to the difference between class and instance methods. Again, a key concept in object oriented design.
3. "void". Return types. Or, in this case, the lack of one. You'll be hard pressed to learn how to program w/o learning about functions that return a value.
4. "main". This speaks to the need for the operating system to know where to "start" your code when it's executed.
In fact, I'd even go so far as to say that Java being verbose and requiring that these modifiers be explicitly specified is a positive in a teaching context.
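For readers who haven't seen it, all four land in even the smallest runnable program (standard Java, nothing assumed beyond the JDK):

```java
// The smallest runnable Java program already exercises all four concepts.
public class Hello {
    // "public": visible outside its package (information hiding).
    // "static": callable without creating a Hello instance.
    // "void": this method returns no value.
    // "main": the entry point the JVM looks for when the class is run.
    public static void main(String[] args) {
        System.out.println("Hello, world");
    }
}
```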
Re: (Score:2, Insightful)
Right, that's 4 concepts which have to be explicitly explained (or passed over) before we even get to how to put a single character on the screen, or add two numbers....
Re: (Score:3)
Re: (Score:2)
To be honest, I find people who write everything like Java are a much bigger problem than people who write Java like C. Almost all coders can manage to get their heads around OOP; what's rare is one with the breadth of knowledge to understand how to write imperative (but not OO) programs well, how to write functional programs well, how to write logic programs well, and how to know when to use each of them. The guy who sits there going "zomg, Java, must write OO code" is a moron.
Re:Was Java a good choice for the AP requirement? (Score:5, Insightful)
Right, that's 4 concepts which have to be explicitly explained (or passed over) before we even get to how to put a single character on the screen, or add two numbers....
Exactly this. It's accidental complexity that has nothing to do with the fundamental tasks of programming that are supposed to be the focus of study.
A BASIC/Python "print" or a Pascal "write/writeln" is supposed to be obsolete and clunky, but System.out.println enclosed within a mandatory class that is not just a class, but a public one, with not just a main function, but a static one, and with its name exactly case-matching the filename that declares it while making sure that no other "public" class exists in said file, that is supposed to be pedagogical progress </rolls eyes>
Re:Was Java a good choice for the AP requirement? (Score:4, Insightful)
Programming is hard. News at 11.
Re: (Score:2, Insightful)
Programming can be taught hard. But compared to other academic fields, it's relatively easy.
Re: (Score:2)
OMG Computer science is Hard. woews meeee!!!
If someone can't understand the 4 basic things, I don't want them programming computers, thank you very much.
And if you can't easily explain them, I don't want you teaching anyone computer programming.
Re:Was Java a good choice for the AP requirement? (Score:5, Insightful)
As a professional who has done Java for application (enterprise/web/web service) and system/network protocol development for 12 years, I would say no. Java is not a good choice for a beginning or even intermediate level programming curriculum. As a very productive platform for developing robust systems, Java delivers.
For pedagogical purposes, especially as a starting programming language, it is atrocious. I would have preferred Python or Ruby, focusing first on procedural programming and leaving object and functional features for later (rarely does a student leverage OOP and FP cleanly without having a good grasp of procedural programming, data structures and algorithms.) Or *gasp* BASIC or a Pascal variant or even C (students need to know right off the bat what a segfault is.)
I would typically choose Java for development of robust systems. I would never use it as the language in an intro-to-programming course.
Re: (Score:3)
Ruby and Python are scripting languages. Students are better off being exposed to Java or C++ from the start so they can see if they have the knack for programming or not. Sink or swim, as it were. Putting students on scripting languages just creates the perception that they are doing true software development when they are not. If you're going to use Python or Ruby you might as well put them on VBScript or JavaScript. Or go whole hog and put them on CoffeeScript so they don't have to deal with the agony of curly braces and proper formatting.
That's bull. Scripting or otherwise doesn't make a difference for the task of learning the basics. Again, based on my 12 years of programming experience with Java, Java is not a good pedagogical choice. And C++ (where I also have work experience)? Please, that's another bad choice. Too many semantic subtleties that have nothing to do with elemental programming tasks. Plain old C is a much better pedagogical option. Like C++, no automatic memory management and plenty of segfaults and pointer management. Unlike C++,
Re: (Score:2)
Maybe the problem is you are too stupid to understand complex things?
"Too many semantic subtleties that have nothing to do with elemental programming tasks." Apparently I am right.
Since insulting seems the way to prove you are right (or let me say "your right" so that you get a chance to bash teh gramm3r), maybe I should just let you assume the answer to your question since you seem to be the pedagogical expert here.
See, your typical answer indicates with almost certainty that you are a) either an inexperienced fanboi or perhaps b) an idiot savant that might have many years of experience, and still be an idiot savant. Perhaps you might be c) a software Sheldon Cooper, but probabili
Re: (Score:2)
Unless you want your students to only use scripting languages and whitespace formatting forever, you're setting them up for failure.
This is only true if the student does not cover anything but scripting languages. That is not an argument that I made, but don't let that stop you from building a straw man.
What are they to do when they get into the real world and run into "semantic baggage" languages in the enterprise space?
Well, what is a student to do in the real world when they encounter the "semantic baggage" of the enterprise with only one or two introductory programming courses using either C++ or Java? Would you expect a student to be capable of dealing with the real world after a course or two, regardless of the programming language used?
Obvio
Re: (Score:2)
1. Slashdot is news for nerds
2. Java is lightweight
3.
4. Profit!
Re:"Java" and "lightweight" in one sentence... (Score:4, Funny)
...head explodes.
Be fair: I ordered an extra 16 GB of RAM not long ago, and the two DIMMs together only weighed maybe 50 grams.
Re: (Score:2)
MOOC (Score:2, Informative)
Nothing on the site explained what "MOOC" stood for, including the FAQ where it should have been question and answer #1. Luckily Google helped me out, but it's still something that should be front and center on the site. This is like Communications 101: define your jargon/acronym the first time it's displayed, don't just assume everyone knows what you're talking about. It's indicative that these people are so far up their own ass they just assume everyone already is on the same page as they are.
Re: (Score:3)
Re: (Score:2, Interesting)
"Nothing on the site explained what "MOOC" stood for"
and you're not going to tell us either.
I guess the MO stands for Massively Online
And C stands for Course
I can't guess what the other O is for
Some wear "O" for the rainbow
Weigh a pie
Re: (Score:1)
Maybe it stands for "MOO Cow". It is actually a Gateway computer in disguise!
MOOC: Definition (Massively Open Online Course) (Score:2)
Das Wiki has it here [wikipedia.org]:
Features associated with early MOOCs, such as open licensing of content, open structure and learning goals, and connectivism may not be present in all MOOC projects,[2] in particular with the 'openness' of many MOOCs being called into question[3] raising issues around the 'reuse' and 'remixing' of resources.[4]
The three main present biggies, Coursera, edX and Udacity, are compared in this NYT article.
huh (Score:2)
Re: (Score:2)
Eclipse requires a class or two unto itself. DrJava is easier to get started with, and lets you focus on learning Java rather than the IDE.
Re: (Score:2)
Why Java? (Score:3)
Python already has a dictator - no role for Rupert.
Lisp is illegal in Russia.
Google uses it so it must be good.
Java is maintained by a large corporation.
Java is not a functional language.
Too many third-world software designers already - first world kids should learn to become something non-exportable like plumbers, waiters, or politicians.
Smart phones!
Rupert thought it was just like Javascript, only shorter.
Teaching a language they could use would be too dangerous. Leave cracking to the Nazional Sekurit Apparatus.
Paid off.
Ugh (Score:1)
Sesame Street to launch STEM MOOC (ElMO-OC?) (Score:2)
Sesame Street Widens Its Focus [nytimes.com].
The K-12 education market .. (Score:3, Insightful)
Dwight Schultz: A-Team meets ST:Voyager (Score:4, Funny)
Am I the only one who was hoping this would be a story about crazy-ass pilot Murdock (actor Dwight Schultz) from the A-Team having built a computer laboratory/system much in the spirit of programmer Zimmerman (actor Dwight Schultz), the creator of the EMH (Emergency Medical Holographic) doctor program from Voyager?
I hope not..
Re: (Score:2)
Re: (Score:2)
Right i must have been confuzzled.
IdeoneAPI in the backend (Score:1)
Safety can be bad (Score:2) | http://news.slashdot.org/story/13/09/03/0124256/murdochs-ap-computer-science-mooc-goes-live | CC-MAIN-2015-18 | refinedweb | 2,770 | 70.33 |
New Features
- LogBrowser console
- Transformation feature provides for a fast and effective way to transform inbound and/or outbound XML messages, please see the TransformationFeature page for more information.
- JIBX databinding
- Faster startup and reduced spring configuration. The Spring support has been redone to be based on the ExtensionManagerBus. This results in much faster startup. It also means that all of the imports of META-INF/cxf/cxf-extension-*.xml are no longer needed and are deprecated.
- WSS4J has been updated from 1.5.x to 1.6. See here (not yet live) for the list of new features and upgrade notes for Apache WSS4J 1.6. Also see Colm's blog for an ongoing list of things that are happening in WSS4J 1.6. Some notable new features for CXF users include:
- SAML2 support: WSS4J 1.6 includes full support for creating, manipulating and parsing SAML2 assertions, via the Opensaml2 library. See here for more information.
- Performance work: A general code-rewrite has been done with a focus on improving performance.
- Support for Crypto trust-stores: WSS4J 1.6 separates the concept of keystore and truststores. See here and here for more information.
- WS-SecurityPolicy support for SAML tokens.
- Initial Apache Aries Blueprint support. This is a work in progress, but many of the Spring namespace handlers that are used with Spring to start/create/configure clients and services now have Blueprint versions as well, and several of the schemas have been ported to Blueprint.
Removed modules
The OSGi HTTP transport has been removed (cxf-rt-transports-http-osgi); cxf-rt-transports-http now also supports the OSGi case. As the OSGi bundles are separate anyway, there are no required changes in the container.
API Changes
- GZIP related interceptors/features have been moved out of the http module so they are usable with other transports such as JMS. As such, their package has changed from org.apache.cxf.transport.http.gzip to org.apache.cxf.transport.common.gzip
- XmlSchema has been updated from 1.4.x to 2.0. As such, any use of XmlSchema classes may have changed. In particular, XmlSchema 2.0 uses Java 5 collections, which changes how it's used. Also, many static utility methods that existed in org.apache.cxf.common.xmlschema.XmlSchemaUtils have now been merged directly into the XmlSchema APIs and are no longer needed or available.
- WSS4J has been updated from 1.5.x to 1.6. WSS4J 1.6 has dropped the requirement of JDK 1.4, and as such has been upgraded to use Java 5 collections, etc. Some API changes to be aware of include:
- SAML assertion handling has changed from an Opensaml1-specific Assertion object to an AssertionWrapper instance, which is a WSS4J-specific object that encapsulates an Assertion as well as some information corresponding to signature verification, etc.
- Some changes have been made to the WSPasswordCallback identifiers that are used in a CallbackHandler implementation. See here for more information.
- Neethi has been upgraded from 2.0.x to 3.0. Due to deficiencies and restrictions in the Neethi 2.0.x APIs, CXF has maintained a semi-fork of various parts of Neethi in the org.apache.cxf.ws.policy packages. With CXF 2.4.x and Neethi 3.0, the deficiencies in Neethi have been addressed and the forked changes have been pushed down into Neethi, so CXF can better leverage enhancements and new functionality in Neethi directly without duplicating functionality. If you write custom policies for CXF, some changes will be required. These include:
- The CXF AssertionBuilder interface has been removed. We now use the Neethi AssertionBuilders and Assertions directly.
- The "getPolicy()" method of PolicyAssertion has been removed. Policies that can contain nested policies should implement the Neethi PolicyContainingAssertion interface directly.
- Neethi has been updated to be able to process WS-Policy 1.5 policies. Thus, the Assertion interface now has an isIgnorable() method that must be implemented. An implementation returning false should be adequate and compatible with previous behavior.
- With the removal of the CXF AssertionBuilder and the implementation of the intersection algorithm in Neethi, the "buildCompatible" method that was on the CXF AssertionBuilder is no longer needed. If a policy needs a custom intersect algorithm, it can now implement the Neethi IntersectableAssertion interface.
- All locations in CXF that expected the CXF specific PolicyAssertion now expect a normal Neethi Assertion. If the Assertion needs specific logic to determine if it's been asserted, it can implement the CXF PolicyAssertion interface, otherwise the default logic will be used.
- Since Neethi has been updated to use Java 5 generics, you may need to update and casts and warnings that may occur when calling the new methods that are now typed.
- JAX-RS Search extensions: org.apache.cxf.jaxrs.ext.search.SearchContext has a new getSearchExpression method returning the raw search query; org.apache.cxf.jaxrs.ext.search.SearchCondition has its toSQL method deprecated and a new accept method added. Please see this page for more information.
- JAX-RS WADL generation: org.apache.cxf.jaxrs.ext.Description and org.apache.cxf.jaxrs.ext.xml.XMLName have been moved to org.apache.cxf.jaxrs.model.wadl package given that their purpose is to improve the WADL generation. Also, org.apache.cxf.jaxrs.model.wadl.WadlElement has been renamed to 'ElementClass'.
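As an illustrative fragment of the GZIP package move listed above (a sketch, not a full program: it assumes a CXF client object named `client` that has already been created, and should be checked against your CXF version):

```java
// CXF 2.3.x and earlier:
//   import org.apache.cxf.transport.http.gzip.GZIPInInterceptor;
// CXF 2.4:
import org.apache.cxf.transport.common.gzip.GZIPInInterceptor;
import org.apache.cxf.transport.common.gzip.GZIPOutInterceptor;

// Because the interceptors are no longer HTTP-specific, the same pair
// can now also be attached to, e.g., a JMS-based client or endpoint.
client.getInInterceptors().add(new GZIPInInterceptor());
client.getOutInterceptors().add(new GZIPOutInterceptor());
```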
Runtime Changes
- The ExtensionManagerBus (mostly used when Spring is not available) has been updated to completely support all the features including the WS-SecurityPolicy, WS-RM, etc... features. Previous WSDL documents that contained policy fragments may now behave differently as the policies will be enforced.
- The default CA certs that ship with the JDK are now not loaded by default by the WS-Security Crypto implementation, which is used for encryption/decryption and signature creation/verification.
- WSS4J 1.5.x ignored (enveloped) signatures on SAML (1.1) assertions - this is no longer the case, so deployments which do not set the correct keystore/truststore config for dealing with signature verification will fail.
- The way that UsernameTokens are processed by WSS4J has been changed. See here for more information. The callbackhandler identifier for plaintext passwords is now WSPasswordCallback.USERNAME_TOKEN, the same as for the digest case. The CallbackHandler implementation only sets the password on the callback, and never does any validation of the password.
Property Changes
- The "ws-security.ut.no-callbacks" property has been renamed to "ws-security.validate.token"; thus, in order to configure the CXF WS-Security interceptors to postpone the validation of the current (UT) token, one needs to set the "ws-security.validate.token" property to false.
Please see this section for more information.
- WSS4J 1.6 has added support for separating keystore and truststores. See here and here for more information. The changes are 100% backwards compatible (aside from not loading the default CA certs). | https://cwiki.apache.org/confluence/display/CXF20DOC/2.4+Migration+Guide | CC-MAIN-2015-35 | refinedweb | 1,136 | 51.24 |
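For example, in Spring configuration the renamed property might be set like this (a sketch; the client id, service class and address are placeholders, not values from this guide):

```xml
<jaxws:client id="helloClient"
              serviceClass="demo.HelloService"
              address="http://localhost:9000/hello">
  <jaxws:properties>
    <!-- formerly "ws-security.ut.no-callbacks" -->
    <entry key="ws-security.validate.token" value="false"/>
  </jaxws:properties>
</jaxws:client>
```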
Hello. I searched the forum but I could not find anything relevant.
My question and problem is this: I am trying to use the video library on a Windows 10 (x64) tablet with a Z8350 processor and an Intel AVStream camera.
I am using this basic code:
import processing.video.*; // basic - the video library
Capture cam; // basic - declare a camera object named cam
boolean preview = true;

void setup() {
  size(640, 480);
  String[] cameras = Capture.list();
  if (cameras.length == 0) {
    println("There are no cameras available for capture.");
    exit();
  } else {
    println("Available cameras:");
    for (int i = 0; i < cameras.length; i++) {
      println(cameras[i]);
    }
    cam = new Capture(this, cameras[0]);
    cam.start();
  }
}

void draw() {
  if (cam.available() == true) {
    cam.read();
  }
  image(cam, 0, 0);
}
The problem is I cannot get the video working on my tablet. Also, sometimes even the camera list from println(cameras[i]) is not printed (I just see a grey background).
Sometimes the camera list shows but again no image (in that case a black background is shown at the selected resolution). The cameras work OK (tested) and I also tried with a USB cam, but again no luck.
I updated to the latest version of Windows 10 and tried Processing 2.2.1 and 3.3.6, both 32- and 64-bit versions, with the same result.
Can anyone who has run into the same issue help?
thank you in advance | https://forum.processing.org/two/discussion/26640/processing-video-library-on-windows-10-tablet-with-z8350-processor-and-intel-avstream-cam-problem/p1.html | CC-MAIN-2022-27 | refinedweb | 246 | 75.71 |
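For what it's worth, a common workaround sketch for this kind of Windows driver issue is to request an explicit capture resolution and guard draw() against an uninitialized camera. The explicit-size Capture constructor and the null check are the only changes from the code above; whether they actually help on a Z8350/AVStream combination is an assumption, not a tested fix:

```java
// Hedged sketch (Processing): explicit size plus a guard in draw().
import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  String[] cameras = Capture.list();
  printArray(cameras);  // list the devices the library can see
  if (cameras.length > 0) {
    // Passing width/height sometimes helps drivers that refuse
    // the default capture mode.
    cam = new Capture(this, 640, 480, cameras[0]);
    cam.start();
  }
}

void draw() {
  // cam may still be null while the driver initializes.
  if (cam != null && cam.available()) {
    cam.read();
    image(cam, 0, 0);
  }
}
```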
never going to do for fellow EZ members what it, as West Germany, did for East Germany. Culture trumps politics and economics. If the EU was really a union it would be a forum for sharing and mutual aid rather than the pursuit of self interest that we see. If EMU were really a monetary union there would be continual transfer payments between surplus and deficits states and no one would raise an eyebrow. If you want to know what a real union looks like see the UK or US. The EU & EMU should not contain the word union because they are not unions. Rather they are misconceived entities designed to 1. challenge the hegemony of the US, and 2. enable France to suck eternally on the German teat. When Germany finally shrugs off its war guilt, the game is up.
"Germany is never going to do for fellow EZ members what it, as West Germany, did for East Germany. "
You got that one right. Especially after we now are experiencing what the real attitude of our southern friends is. There can only be a union of equals. So good bye to Italy, Greece, Spain and Portugal. And perhaps France. I would be happy to have some kind of Northern EU with Benelux, Netherlands, Scandinavia and perhaps Poland and the Baltic.
'If you want to know what a real union looks like see the UK or US'
'....the UK, US and Canada.'
Germany has taken on guarantees for €55.000 per capita for 11m Greeks. €600bn for ESM, ESF, Target 2.
That's about the same amount of support per capita for East Germans (17m inhabitants) for the unification if you put the price tag for that at 1 trillion. And that raised a lot of eyebrows.
If push comes to shove and the guarantees are drawn, Germany will have done the same for Greece that it did for East Germany but for even less to show.
It's all a demonstration on how little donations help.
We support the Economist's $64,000 question. A greater federalization of the Eurozone will enable consistent financial regulations, with a consistent measurement of the fiscal problems facing each country and how to finance them individually.
The demise of the euro will only appease anti-globalisation populists. Job creation in every country, with an emphasis on their competitive advantages, is paramount. This action will ensure that the heritage and traditions of each country remain intact.
The best solution is a break-up. Let's suppose that we were living in a United States of Europe: who actually believes that people would get equal opportunities or representation? I don't.
Everyone should go back to their own currency, and everyone should be allowed to manage their own economy.
So you would rather the EU break up and enter an era of total uncertainty and economic annihilation than a super state which would probably lead to faster recovery and a larger economy? Then you're crazy.
"who actually believes that people would get equal opportunities or representation?,i dont."
Your answer is right here my friend, am I crazy? Look at Greece: the country is about to default (if not already). What do Europeans, and especially Germans, do? They offer help, but with conditions that lead the country to a faster death. You can't recover when salaries are down 30% and taxes are up 100%. When people can't take it anymore they are threatened: vote for whatever Mr Schäuble and Frau Merkel want, or else they will stop receiving aid. The very definition of democracy!
You know I am from Greece, and I can't stop thinking: what if we were a superstate? Who is to guarantee to me that, just because I come from here, I won't be treated as a child of a lesser god?
Is it just me or are the Germans and French accomplishing by economics what they couldn't through centuries of warfare? An alliance and conquest of Europe?
It's just you. As a German I don't really think we conquered anything right now, apart from a mountain of debt and a load of problems...
The current and indeed prolonged state of the EU has been a direct consequence of the political classes abusing their borrowed powers and vacuously signing treaties without the consent of their electorate.
Whatever issue concerning the EU you wish to discuss the underlying notion of democracy, or rather lack thereof, underpins everything.
So you know we live in a representative democracy, right? If you want a direct democracy you can move to Switzerland, and thanks to the 'political elite' you can even move there without a visa! Fancy that.
Indeed I do, and as far as Switzerland goes I couldn't think of a better example to illustrate the supposed detrimental effects of non-EU membership.
And be it representative or direct democracy, my position is such that free people should have the right to help shape the laws under which they live and the power to remove them if need be.
More integration is the way forward. Europe needs a common (single) army, navy and air force; individual countries can have their own police.
It is the Spanish/Italian navy that has to bear the cost of protecting the shores of Europe; similarly, it is the Greeks who are on the border with Asia.
The people of the poorly governed countries (or banks) need a better leader, more federalism will bring that.
The 'continent' may benefit from having a 'common' armed forces, but it only benefits the continent, it does nothing for the UK and Eire.
Let's see the €Uro failure countries make it work first, just like they have the common currency. Oh wait....
So you think the UK is somehow different from France or Germany in terms of armed forces? Based on what evidence? Also, I know you would love if the euro failed, but if it did the UK economy would collapse too!
The €Uro is a continental problem, just like the illegal immigration that flows through it. Oh, I know that you prefer to have someone else pay for your problems, but they are not mine so I don't care, sort out your own problems.
The UK and Eire are in a totally different situation; they are not landlocked or tied to the continent. Neither the UK nor Eire are the main points at which illegals enter any part of the €U.
So just like your €Uro, lets see you make it work. One sec... Scrap that, your €Uro isn't working is it.
The UK economy is collapsing already. If the UK had the euro, we, not Greece, would be calling for aid first. Resolving euro problems is in our interest too. Our economies depend on one another.
Leaving the €U would actually be a financially good move for Great Britain. Not paying anything more into the €U is a very sensible stance for the UK to take.
The PMI index according to Markit data is showing growth in the UK; the financial shrinkage is probably due to this growth being achieved with cheaper/lower profits, which would equal lower tax returns for the government. UK unemployment is lower than in most other €U members.
Collapse, I think not. Readjusting is far more likely, given that GDP by PPP is still high enough to have the UK ranked 7th in the world, and is a far better indicator of a country's economic stability.
To summarize opinions:
Northern Europeans want a break-up,
while Southern Europeans want to keep the euro (keep the money flowing and the Germans paying)
Funny isn't it?
"Funny isn't it?" No, it ain't fun.
Northern Europeans want a break-up, except Germany is perhaps the most pro-European state. I'm sure your sweeping generalizations come in handy, though, when you want to criticise an institution you don't even really understand.
DEFINITELY funny from the outside $)
Germany, Denmark, Sweden etc. act in their national interest in order to sustain low rates of debt. The longer they can sustain this situation, the better. It is in the national interest because 1) it boosts exports, thus letting southern countries subsidize unproductive industrial/service jobs. 2) About the Netherlands: the main reason the Netherlands is prosperous is not its people but its location; the Netherlands is the harbour for German exports. 3) It puts the political leaders from Northern Europe in a paradox. If they agree to shared debt, it would penalize homeowners. If they split up the EU, it will hurt exports in the short-to-medium term and could lead to soaring levels of unemployment. As someone else mentioned, the best option would be to write a new treaty where stronger countries are forced out of the euro union unless they balance their exports. The only problem is that it is political suicide.
There is no more room for nationalism of any kind. People want the freedom to find their happiness in their own way, with the exception of course of lazy civil servants stealing money via taxation.
Let's have a very tiny central government, almost no regulation, no minimum salary, no right to strike, no stability in employment...
And let's give education and schools to everyone
And let's also have some social protection, with the implication that anyone getting anything from government, including subsidies of any kind, loses the right to vote, and this is extended to parents, sons, cousins, all close relatives
If you cannot pay your bills you cannot have the right to vote; you will not have the power to increase taxes
With this, politicians will not bribe people using their own money... (Tocqueville...)
With this it will be very easy to make all needed structural reforms in Greece, Spain, Portugal and in France
So you're advocating some kind of return (in terms of the UK at least) to the 19th and early 20th century. Why not go the whole hog and make taxes voluntary, with only those who pay taxes being able to vote?
The UK already has free schooling for all its citizens, so that is not an issue. I guess you must be from a backward continental place.
And all will be well, Love and peace to everyone. Dream on....
I was with you until you said 'Backwards continental place', and then you just showed how fucking ignorant you are.
No, I have friends from the continent who have enlightened me to just how backward many parts of the continent are.
The one who is 'fucking ignorant' is you.
go on then, tell me where they're from and how they're backward.
Why don't you go travel and/or make friends with people from different parts of the continent and educate yourself? But start with Portugal, then Slovakia, Bulgaria and Hungary. Be sure to have plenty of change to give the police when you get stopped, as is the normal custom in each of those countries. Of course, if you have plenty of money and stick to the tourist spots then getting round these countries will be a breeze, but you won't educate yourself.
So these people don't exist. Obv every country has its problems, if you went to inner city london I bet you'd think they were backwards too.
They exist, but I see no reason to enlighten an ignorant idiot who chooses to remain one. You have no experience of real life, nor have spent any time with real people outside your circle of clowns.
Go find out for yourself if you think I am so wrong; those mentioned above are as backward as some African countries, and they are far from being 'advanced economies' or having anything close to the safeguards one would 'think' apply throughout the club.
But before you go and get your education, it is worthwhile doing a little research.
So very, very backward.
No debt mutualization. No Eurobonds. No endless fiscal transfers. No recapitalizing parasitic big banks with taxpayers money (pay for it by clawing back the illegitimate bonuses the thieving bankers collected in the last 20 years). No federalism of any kind. No guarantees for Spanish bank deposit at Dutch taxpayers expense.
If anything, the big banks need to be forcibly broken up. And then if necessary again.
Break up the EuroSoviet and abolish the euro; back to the EEC, when we had none of today's problems. End 'TBTF' and end 'centralization of power with the EuroSoviet'.
When will the Economist and all those other pro-bank and anti-people news outlets finally accept that we the people do not want the bankers Euro. Referendum now! And prosecute the big banks for fraud.
For the Economist benefit:
1.Nationalization of the good parts of banks
2.Fire all the bank management (they're incompetent anyway) and put them on trial for theft and fraud
3.Leave the bad parts in the market so Goldman Sachs and co can take the losses for it
"Boy, boy, crazy boy, Get cool, boy!" :)
No, but really, you go first. With ING. Break it up, now. Get out of the eurozone, now. Show us how it's done in a way we also could call a success. Or just shut up. Now. And next time, maybe, smoke a little less pot before posting.
We'd love to go, but our politicians won't let us. Back in 2006 we voted for a parliamentary majority that would have put Lisbon to a referendum as well.
What happened? After the election they said 'no referendum for you', having had some institute produce a bogus report that Lisbon was so much different from the Enabling Act (the constitution).
Every poll consistently produces majorities against bailouts, against debt mutualization, against Eurobonds etc... yet the pollies keep voting for it even if in their campaigns they had said they were against them.
And if we did leave, we'd be better off. Not having to co-guarantee other countries debts would be a huge plus.
The United Arab Emirates (U.A.E.) offer an example to the E.U. of a federal structure, with their "Spain" (Dubai) and their "Germany" (Abu Dhabi) keeping enough autonomy and personality.
Malaysia also can offer an idea that could be applied, keeping different kingdoms inside the Federation. Both the King of Spain, and the President of France could be also the head of the Union. When the Chief of State is a Monarch (Netherlands, Spain, Luxemburg, Belgium...) it becomes "de facto" a 5-year term "Emperor" of Europe.
Hahahah, are you comparing the UAE to the EU? Jesus christ, just because they both have U in them doesn't mean they're similar at all. Why don't you read about the EU before you show everyone how stupid you are.
When integration proves to not work, integrate further......
Whatever happened to something so sensible as to try to move decision making as close to local communities as possible, instead of concentrating it amongst some faraway emperors?
As you say, finance is the sector that has gone the furthest in integration. It is also the sector that has grown the fastest over the last 30 years. Without really contributing much more in the way of value add than before. Instead, by far the better part of growth has come from cockamamie schemes that don't really amount to much more than collectively pretending collateral assets are worth more than they really are, and lending money based on this fantasy.
Having an ever increasing share of ever higher paid workers spend ever larger shares of GDP on making up nonsense about asset valuations isn't particularly conducive to either growth or actual wealth formation; rather it is simply wealth transfer to the finance sector. SO..., finance should shrink. A lot. Like 80%. Nobody outside the sector will suffer one iota from this. And how does one shrink a sector? Well, by having many of the firms it comprises collapse. It's a good thing, anywhere outside the City of London.
Except, oh, I don't know, all the small businesses which require a strong financial system to borrow money. But you know, once all the small businesses collapse, no one will suffer one iota except those in the business sector! Except all the people who need hired. But you know, nobody outside the working population will suffer one iota! Oh wait..
I am sure shrinking the financial valuation of banks by 80% is excessive but I think Stuki has a point. The financial industry is wasting talents, creating inequality and hence affecting motivation for work, and creating economic instability. However I know they have important basic functions. Perhaps instead of shrinking all of finance by 80%, we should shrink the secondary financial market by very high taxation for secondary trade over a short time. Ie, we force investors to hold onto their investments for 1-2 years so that a) inflation will not be so easily generated by the investors, stealing the value of money from others b) economic cycles will be smoother c) the rich will find it harder to stay rich without work d) people can concentrate on making real progress in industry, service, culture and technology.
By the way I think somehow a very heavy taxation on luxury (non-necessary goods such as expensive cars, jewelry, villas, art collections, musical instrument, employment of servants/drivers for domestic purpose, elite education, any form of unnecessary entertainment, alcohol, cigarettes, toys, party equipment, expensive cloths...) is better than taxation on the rich. Perhaps the EU should raise taxes from raising the cost of these things. On one hand it raises revenue, on the other hand it discourages people from unproductive activity at these hard times.
Do you really think the share of the "financial system" dedicated to serving "small businesses", particularly the actual operations of small businesses, are greater than 20% of the total? Is that what you really believe the armies of million$+ bonus babies in greater NYC and London are doing for their millions? Lending money to small businesses?
Actually, a lot of the private banks do just that. It is not just banks that make up the finance sector either, it is bookkeepers and accountants, financial advisor's and insurance brokers. The first two are quite vital for SME's.
The part of the "financial system" that would collapse sans perpetual bailouts and debasement, is the one that is over levered. "bookkeepers and accountants, financial advisor's and insurance brokers" doesn't have this problem. Or at least shouldn't have, and if any of them do, it's about darned time those ones collapsed as well.
The point is not that me or you or anyone else should sit around and decide how much of the financial system should collapse. But rather that any entity as devoid of a "public service" component as a big bank, that cannot survive without external subsidy, is quite obviously a drain on greater society. And hence that; as far as greater society is concerned, having them disappear is a good thing.
If a bank goes down, then let it go down, on that I agree. But then what about those that have taken out loans, like families and SMEs, who will suddenly find themselves receiving demands that their loans are repaid a lot sooner than expected.
Unless loan contracts in Europe is written to allow banks to call in the loan early, who cares what "demands" debtors receive. All they have to do to keep to the contract, is make payments as promised.
Credit lines and rolling loans could be an issue, since they are pretty much at the bank's discretion. But as long as there is a single standing bank out there, if ones financials and collateral are of sufficient quality to warrant one, one should be able to obtain one somewhere else. Of course, the over levering isn't exclusive to banks and financials, even though that's where the biggest problems are. Many SMBs are probably also operating in LaLa land regarding their credit worthiness, and are basically being kept afloat by bankers who can earn commission for lending them money, confident that taxpayers (and those who can be robbed by inflation) will pick up the pieces, should the loan turn sour. But in that case, is them going under, or having to restructure, really such a bad thing? SMBs "creating jobs" sounds all warn and fuzzy, but value destroying SMBs aren't really of any more benefit than value destroying anything else.
TE, for 2 years now, you have had similar articles, talking about choices, bailouts, remedies, etc.
Please wake up. The EU as dreamed is dead. Time to break it apart and start fresh.
If you think breaking up and starting again at this point is a good idea in a time of economic uncertainty in a crisis based on 'lack of confidence' then you're deluded.
Back to basics in a time of crisis is a solid platform to build on. The ones who are deluded are those that think otherwise.
Sometimes I hope the EU does break up so the economic catastrophe that would follow would show people like you how horrible going back to basics would be.
The problem here is not "lack of confidence." It is insolvency. Hundreds of years of bankruptcy law has evolved to recognize breakup and/or transfer of ownership as the solution in such cases. Aggregation into bigger and bigger entities only serve to create harder to analyze, cross subsidizing hairballs.
All research has shown that any pain for countries such as the UK, Eire, France and Germany would be quite small and only for a few years.
As for the others, their pain may last longer, but it would again be for the short to medium term.
The UK has long ordered that institutions be ready for a collapse, including the government already having plans to pull out British citizen in places that could turn nasty.
Your doing nothing with your scaremongering, the facts are against you.
Show me the facts! Show me the papers (economic papers) showing that somehow the UK, Germany, Ireland and France would arrive unscathed from the collapse of the euro.
I never said unscathed, I said that the pain would be sharp but brief. I can find no paper, other than that commission through €U biased sources that suggest otherwise.
Given that the UK already has contingencies for such a breakup, it would be able to weather the storm. Germany has also made preparations, as have France and Eire.
From one financial analysts from the The Bruges Group, Robert Oulds:"while the UK Treasury is mainly focusing on problems caused by the collapsing euro, it would also be beneficial if the eurozone did indeed collapse..” Despite the initial shock, Oulds says Europe as a whole would be far more prosperous in the long-term.
Let in the creative destruction and go back to the CLASSICS!
Wonder why Titanic's captain was stupid enough not to have realized he needed to berth his ship for complete rebuilding after he met an iceberg.
Total delusion that the ship was unsinkable I think you will find, the same lunacy that isJohnny exhibits.
The Euro IS the catastrophe. It is unfolding right now.
The saying is true, there are none so blind as those who refuse to see.
I really don't see why fiscal union is necessary, or why fiscal union must mean a transfer of money from one country to another. For example, just because CA is part of the US doesn't mean CA will get a bailout when they go bankrupt. Greece etc, can easily go bankrupt AND stay in the Euro.
They are selling you scare stories cousin, they want to scare the Greeks in to doing what they want so that the €U elite can keep feeding from the money trough, supplied by the ever increasing gravy train.
You are naive to think CA won't get a US bailout if it goes belly-up. If the US politicians are willing to pony up billions to bail our GM and Chrysler to buy the votes of a less than a hundred thousand unionize auto workers, you can bet they'll make the rest of the nation bail out CA to buy the votes of tens of millions of Californians.
I think it unlikely unless Dems have control of both houses and the presidency. Because Republicans are unlikely to bailout liberal CA.
I think our EU Founding Fathers actually had this in mind. Let economic integration limp along until we'd gotten ourselves into such a s--- hole that we'd have to accept some real form of federalism.
My worry is that the country that will blow it all away will be the French with their absolutely stupifying capacity for self-delusion (remember the Maginot Line; remember the French Army, stronger than the German Army in the Fall of 1939, NOT moving across the Rhine while the whole Wehrmacht was doing its thing in Poland for fear of aggravating Hitler)
In the meantime, Dear Economist, a real move towards a truly functional Federalism in Europe? Tell it to the Brits!
Federalism is for continentals, leave us Brits out of it.
I can't see how financial integration can be expected to be at all stable in the absence of sincere political integration. What this article suggests would be just one more thing to fall apart in the coming years.
The euro may be saved. But the EU cannot be saved. It was doomed even before it was born. European leaders have ignored the critical advic Scchuman and Adenuaer offered them concerning the survival of the European project.
And here is the 'unknown truth' about European integration: A British lawyer wrote about a political alliance of European nations (how it would develop, its character and future prospects even before the French Founding Fathers of the European Community (Jean Monnet and Robert Schuman) were born in 1888 and 1886. He stated with confidence that a confederation of European nations would develop through a great European crisis, and this European confederacy would become the next major political feature in history after the restoration of the Jews to Palestine. Historical and public events have proved him right. The State of Israel was created in May 1948. Two years later, in May 1950, the European Union evolved from the ashes of the Second World War with the Schuman Plan.
The lawyer warned boundaries in Europe would change, and England and Ireland would become province of Europe, and they would not be saved if UK joined this group of European nations. H e has been proved right. EU laws reign supreme in England and Ireland. EU criminals in UK cannot even be deported. He described the EU “the vile confederacy of the latter days”. He has been proved right. The EU is corrupt and anti-democratic. Its accounts have not been approved for 17 consecutive years. President Van Rompuy admitted that Europeans were misled about the euro. Everything this Briton published about EU has come to pass. It is, therefore, reasonable to suggest that what he recorded about Europe’s future would also be fulfilled.
The British and the Irish must heed the warning of this great Briton and leave the EU. The EU is a ‘corpse’ on its way to a cemetery. No one can save it. Had the passengers who perished on the Titanic had known that the ‘unsinkable’ ship would sink on its maiden voyage, would they have joined the doomed luxurious vessel?
Mosley was the first to use the expression 'European Union' in 1936, and the €U is much like his vision for it. It is one of the biggest factors in being wary of this project and wanting Great Britain out.
The best, the shortest and the most explicit description of eurobonds
is the title of the article published in finance.townhall.com by Daniel J. Mitchell:
"Eurobonds: the fiscal version of co-signing a loan for your unemployed alcoholic cousin who has a gambling addiction".
Yes and you forgot one epithet: most ignorant.
Really thoughtful stuff but for the long haul The Economist shouldn't express too much reluctance towards promoting more federalism. After all, a debt mutualization still seems like a band-aid to a crisis that doesn't address more structural and institutional shortcomings. This affects more important issues like staying competitive - something that is premised on far more complex issues that go beyond bonds and finances. Yes, the context in which the Euro is seen as a problem to most people, is a titantic hurdle to overcome but honest discussion is required about how more centralized power could potentially strengthen political and economic institutions in the long run, and maybe even strengthen democracy in the process. National boundaries always change. So do concepts of identity. There is no reason to hope for a European identity to evolve even if, like the European project, it takes decades to happen.
The Economist's choices of superstate or breakup are way too binary. There is a third option, that only the worst countries get kicked out of the Euro.
Sure Greece will likely get kick, maybe Portugal or Spain too. But that does not prevent Germany, Finland, Estonia, Austria, Luxembourg and perhaps France staying together.
Some countries could or should be kicked out, and I agree that we are heading to it.
But I think it is impossible Spain to be throw out of Euro in a short term, just because of its size and the leverage of the spanish banks.
Spain leaving would make a mass destruction of German banks - their huge credits would suddenly lose value (converted to pesetas and in a few days valuing rubbish) and very likely break some of them. The side effect for Germany and the other northern countries would be really painful.
Greece, Portugal, Ireland and Cyprus could be the possible outgoers - Spain and Italy, maybe in some years, but not now.
Bah! Just do what we did here in Canada move on. | http://www.economist.com/comment/1433966 | CC-MAIN-2015-14 | refinedweb | 5,050 | 71.14 |
Java Programming
~8100 answered questions
Parent Category: Computer Programming
The Java programming language was released in 1995 as a core component of the Java platform of Sun Microsystems. It is a general-purpose, class-based, object-oriented language that is widely used in application software and web applications.
Psychologically what is the difference or benefits of exercising as part of a class as opposed to training alone? ma…
Popularity: 161
How can you automate the posting of rotating images from your computer to a website using Java?
It is not clear what you want to do, so I'm going to guess that you want to duplicate an image-changing sequence that you have on your desktop and put it on a website. If this is the case, then this script will do just that. In the script, find the lines that say "image1", "image2", etc. and replace…
Popularity: 131
How do you get rid of Java-ByteVerify?
Answer/Remo…
Popularity: 1007
What is Java and what does it do?
Java is an object-oriented computer programming language. It is used for application as well as system software development. It is well suited to web-based applications such as servlets and XML design, i.e. applications that can run over the Internet. It can be used as a front-end tool for the back end …
Popularity: 6
Do old baseball season passes have value?
They sure do have value, but I'd have to do some research to determine their true value. What is the condition of the passes? Please contact me via email so we can discuss further.
Popularity: 40
What do you mean when you say that Java has two concepts?
Answer You are talking about the implementation point of view of the abstract class and the interface. Let's go. 1. Interfaces help with multiple inheritance: in Java you can't have a class inherited from more than one class, i.e. multiple inheritance. Interfaces help us in implementing the multiple …
Popularity: 105
Why would you want to use user defined exception handling?
Sometimes there are situations where the program is very long, which can make error debugging a long process, so Java provides a facility for user-defined exception handling. Suppose we are dividing two numbers, a/b, and the user enters 0 for b; the user wants to display an error of your …
Popularity: 87
How we can say that java is a platform independent language?
When a Java program is compiled, the result is bytecode. Bytecode is machine independent: you can take it to a computer with any OS, hardware, or processor and run it successfully there.
Popularity: 10
How does the Java programming language work?
Java uses a code-compiler, which does not create a machine-code, like normal Windows programs, but a byte-code, which the Java interpreter runs. It is like a mini-operating system, forming a layer between the operating system and the program code. This way, you can run Java programs on almost any pl…
Popularity: 87
How do you get a Java program to display a backslash in output without it interpreting it as a program command?
Follow each backslash with another backslash: System.out.println("\\ \" \\"); will display \ " \ on the screen.
Popularity: 73
What is the difference between pass by value and pass by reference in valu…
Popularity: 41
How do you read a value from .pdf file using winrunner TSL scripting?
Try the file_open command; it should work. If not possible, assume it is an image and then, using coordinates, you can get the value. bye chanti
Popularity: 78
What is the difference between catch exception e catch error err and catch throwable t?
In Java it is related to the class hierarchy of exceptions. Throwable is the root object of the hierarchy, and both Exception and Error subclass it. Methods include a "throws" clause in their signature to indicate errors of type "Exception" that can be thrown in the body of the method and r…
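A minimal sketch of how that hierarchy plays out in catch clauses (the class name and divisor logic are made up for illustration): catch blocks are tried top to bottom, so the specific subclass must come before the broader supertype, with Throwable last.

```java
// Catch ordering follows the Throwable hierarchy: most specific first.
public class CatchOrder {
    static String classify(int divisor) {
        try {
            int ignored = 10 / divisor;   // throws ArithmeticException when divisor == 0
            return "ok";
        } catch (ArithmeticException e) { // specific subclass first
            return "arithmetic";
        } catch (Exception e) {           // broader supertype later
            return "exception";
        } catch (Throwable t) {           // root of the hierarchy last
            return "throwable";
        }
    }

    public static void main(String[] args) {
        System.out.println(classify(0)); // prints "arithmetic"
        System.out.println(classify(2)); // prints "ok"
    }
}
```

Reversing the order (Throwable first) would not compile, because the later, more specific catch blocks would be unreachable.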
Popularity: 92
Why is multiple inheritance implemened through interfaces?
Interfaces have only methods declared, not defined, so that these various methods can be altered or customized, i.e. written according to need. Hence multiple inheritance is implemented through interfaces. An interface is a program unit which just declares various methods or functions, so that any user ca…
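A short sketch of the idea (the interface and class names here are hypothetical): a Java class may extend only one class, but it can implement any number of interfaces, which is how the language approximates multiple inheritance.

```java
// Two unrelated capabilities expressed as interfaces (Java 8+ default
// methods used so the example stays short).
interface Walker { default String walk() { return "walking"; } }
interface Swimmer { default String swim() { return "swimming"; } }

// Duck "inherits" behavior from both interfaces at once.
public class Duck implements Walker, Swimmer {
    public static void main(String[] args) {
        Duck d = new Duck();
        System.out.println(d.walk() + " and " + d.swim()); // walking and swimming
    }
}
```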
Popularity: 50
How would you write a program that counts the number of vowels in a string?
A simple way to do this, whatever the language used, is to step through the characters in the string one by one and compare them with a previously established set of vowels, adding one to a count value every time a character matches one of the vowels. The count value at the end will be the number of…
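The stepping-and-comparing approach above can be sketched in Java like this (the class and method names are made up for illustration):

```java
// Count vowels by walking the string and checking each character
// against a fixed vowel set.
public class VowelCount {
    static int countVowels(String s) {
        int count = 0;
        for (char c : s.toLowerCase().toCharArray()) {
            if ("aeiou".indexOf(c) >= 0) { // character matches a vowel
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countVowels("Hello World")); // prints 3
    }
}
```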
Popularity: 89
What does static variable mean?
A static variable is a variable that is allocated at compile time and that remains in memory for the entire duration the program remains in memory. Contrary to the misleading answer given below, static variables ARE variables, they are NOT constants; the value is initialised at compile time but can …
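A small sketch of the distinction (class name hypothetical): a static field is one shared copy for the whole class, and unlike a constant its value can still change at run time.

```java
// One shared, mutable counter for all instances of the class.
public class Counter {
    static int created = 0;   // static: a single copy, not per-object

    Counter() {
        created++;            // mutated at run time, so not a constant
    }

    public static void main(String[] args) {
        new Counter();
        new Counter();
        System.out.println(Counter.created); // prints 2
    }
}
```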
Popularity: 67
How can you pass 2-dimensional arrays as parameters?
Unfortunately your question is too broad. All programming languages vary in the way things are done, so I will just give a general way of doing it. You have to pass the multidimensional array into the function by including it in the call to the function. On the receiving end you have to declare another…
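In Java specifically, the general description above is simpler than in some languages, because a 2-D array is passed like any other object reference. A minimal sketch (names are hypothetical):

```java
// A two-dimensional array parameter: the callee receives a reference
// to the caller's array, not a copy.
public class Grid {
    static int sum(int[][] grid) {
        int total = 0;
        for (int[] row : grid)
            for (int v : row)
                total += v;
        return total;
    }

    public static void main(String[] args) {
        int[][] g = { {1, 2}, {3, 4} };
        System.out.println(sum(g)); // prints 10
    }
}
```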
Popularity: 46
Why do you use an interface for implementing methods Isn't it just extra work?
Answer By creating an interface, the programmer creates a method to handle objects that implement it in a generic way. An example of this would be the Runnable interface in Java. If the programmer is given a Runnable object, they do not need to know what that object really is to be able to ca…
Popularity: 87
What is encapsulation?
Answer (1) In programming, the process of combining elements to create a new entity. For example, a procedure is a type of encapsulation because it combines a series of computer instructions. Likewise, a complex data type, such as a record or class, relies on encapsulation. Object-oriented program…
Popularity: 69
Can you explain in which scenario the primitive types are used as objects? class…
Popularity: 25
Which is the layout of the toolbar in Java?
Answer The layout for toolbar is Flow Layout.
Popularity: 54
What is the function of catch Exception E?
Answer In Java, if there is a run-time error then it allows the user to explicitly handle it by catching it in the catch block. If there is any error in the try block of code, automatically the flow control will be transferred to the catch block. Here Exception e indicates any exception. Answer…
Popularity: 93
Is it possible to overload the main method in Java?
Answer Yes. The main method can be overloaded just like any other method in Java. The usual declaration for main is: public static void main(String[] args) throws Exception; When you launch a java application it looks for a static method with the name 'main', return type 'void' and a singl…
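A compact sketch of what overloading main looks like (class name hypothetical): the JVM launches only the String[] version; the other overloads are ordinary static methods you must call yourself.

```java
// main can be overloaded; only main(String[]) is the JVM entry point.
public class MainOverload {
    public static int main(int x) { return x * 2; }    // overload, called manually
    public static String main() { return "no args"; }  // another overload

    public static void main(String[] args) {           // the one the JVM invokes
        System.out.println(main(21));  // prints 42
        System.out.println(main());    // prints no args
    }
}
```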
Popularity: 63
How do you declare an N-Dimensional array using Malloc?
#include <stdlib.h>
int **array1 = malloc(nrows * sizeof(int *));
for (i = 0; i < nrows; i++)
    array1[i] = malloc(ncolumns * sizeof(int));
Popularity: 73
What is the difference between 16 bit compilers and 32 bit compilers in C?
16 bit compilers compile the program into 16-bit machine code that will run on a computer with a 16-bit processor. 16-bit machine code will run on a 32-bit processor, but 32-bit machine code will not run on a 16-bit processor. 32-bit machine code is usually faster than 16-bit machine code. -DJ Cra…
Popularity: 44
What is the difference between interface and abstract class?
They are very different. An abstract class is a class that represents an abstract concept (google define "abstract" if you're unsure) such as 'Thoughts' or 'BankAccount'. When a class is defined as abstract it cannot be used (directly) to create an object. Abstract classes are used as super-classes…
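Building on the BankAccount idea mentioned above, here is a hedged sketch (the subclass and method names are invented for illustration) of how an abstract class differs from an interface: it can hold state and concrete methods, but cannot be instantiated directly.

```java
// An abstract class mixes state, concrete methods, and abstract ones.
abstract class BankAccount {
    protected long balance = 0;              // state an interface could not hold (pre-Java 8 style)

    void deposit(long amount) { balance += amount; }  // concrete behavior

    abstract long interestFor(long amount);  // subclasses must supply this
}

public class SavingsAccount extends BankAccount {
    long interestFor(long amount) { return amount / 20; } // 5% interest

    public static void main(String[] args) {
        BankAccount a = new SavingsAccount(); // abstract type, concrete object
        a.deposit(100);
        System.out.println(a.interestFor(a.balance)); // prints 5
    }
}
```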
Popularity: 67
How do you create a user defined exception?
In Java you create a user-defined exception by extending Exception (for a checked exception) or RuntimeException (for an unchecked one), and then raising it with a throw statement, e.g. throw new MyException("message");
Popularity: 36
If you humiliated your narcissist and are now being ostracized is it safe to assume you'll never hear from him again?
I would not say you're outta the woods yet, my dear. Try and avoid him and if he calls, politely hang up on him. Just say, "sorry, don't want to talk to you. Narcissist are vindictive and rage. I never knew one to ostracize.Temporary ignoring is what they do, but they are in the same room so …
Popularity: 49
Why is Java not a pure OOP Language? ty…
Popularity: 96
Why should main be declared static and is declaring it public and void not sufficient?
Answer The static modifier means that it does not have to be instantiated to use it. Before a program runs there are technically no objects created yet, so the main method, which is the entry point for the application must be labeled static to tell the JVM that the method can be used without havi…
Popularity: 65
In ANSI C you can hide data by using private access specifier Justify?
Answer One can always declare a variable as static, which will limit its scope to the file only; this way data hiding can be achieved. For more clarity on this, please refer to 'The C Programming Language'. Answer Data hiding means only relevant data is visible to the user and all the back…
Popularity: 67
What is difference between Abstract Class and Interface?
Answer All the methods declared inside an interface are abstract, whereas an abstract class may mix abstract methods with concrete ones. In an interface we need not use the keyword abstract for the methods.
Popularity: 56
What are different ways by which you can access public member functions of an object?
Answer ObjectReference.MemberName (Java); ObjectName::MemberFunction (C++)
Popularity: 48
How are cracker programs written?
Learn wi…
Popularity: 32
What are the best ways to learn and understand Java and what books are good for this?
The best way to learn Java is to start from the basics. Get a clear picture of the JVM and JRE. Even if you write a simple program, try to understand what happens behind your code. I suggest you read more about Java at . At first, while writing your code, attach the Java source behind it, d…
Popularity: 47
What is the difference between object-oriented and procedural programming languages?
A major factor in the invention of Object-Oriented approach is to remove some of the flaws encountered with the procedural approach. Object Orientation Languages (OOL) is concerned to develop an application based on real time while Procedural Programing Languages (PPL) are more concerned with…
Popularity: 55
Can a pointer to char data-type be type casted to pointer to integer type?
Yes, it is possible in the C language:
#include <stdio.h>
int main(void) {
    char *cptr;
    int *iptr;
    iptr = (int *)cptr;
    return 0;
}
Popularity: 63
What are the major differences between Java and other popular programming languages?
There are many differences between Java and other programs, as there are also similarities: - managed code: java does not produce native code, but some byte code that will make it runnable in something called "virtual machine". The best thing about that: compile on windows and deploy that applicatio…
Popularity: 32
What is the difference between pass by reference and pass by value in C?
Answer When you pass a parameter by value, you are creating a copy of the original variable and the receiving (method/function) cannot change the original value. When you pass by reference you are passing a pointer to the original object and therefore any changes are made to the original value.
Popularity: 18
What is the difference between throw and throws in Java?
One declares it, and the other one does it. Throw is used to actually throw the exception, whereas throws is declarative for the method. They are not interchangeable.
public void myMethod(int param) throws MyException {
    if (param < 10) {
        throw new MyException("Too low!");
    }
    // Blah, Blah, Bl…
}
Popularity: 49
What is the difference between Java and J2EE?
Hi! Java is a language and J2EE is a platform which implements the Java language. Most people say "Java" when they really mean "J2SE" (Java 2 Standard Edition). In the context you usually see this discussed, the differences are between the J2SE and J2EE specifications. Both are ENVIRONMENTS, n…
Popularity: 8
What is static identifier?
A static identifier is an identifier whose value persists but is visible only in the scope in which it has been defined. If a "static" declaration is applied to an external variable, it limits the scope of that object to the rest of the source file being compiled.
Popularity: 34
How does one file a lien?
A Mechanic's lien is filed by going to the city or county recorder's office with the documentation required by the state where the property is located.To file a lien against a debtor for other purposes, the lender must sue the borrower and then enforce the judgment award as a lien against real prope…
Popularity: 12
What does 'A Runtime Error Has Occurred. Do You Wish to Debug' mean?
Sadly one of the most annoying messages IE can offer, as on the face of it it is unhelpful... What it is trying to say is either: the app you ran tried to read or push data that is incorrect (e.g. push x KB to a buffer that is only x-1 in size), or the web page you opened tried to push or read da…
Popularity: 22
Who invented the Java programming language?
James Gosling, PhD (born May 19, 1955 near Calgary, Alberta, Canada), along with other engineers and scientists, invented Java. Gosling is a famous software developer and is actually best known as the father of the Java programming language.
Popularity: 36
What is the difference between function and method?
Functions have an independent existence, meaning they are defined outside of a class; e.g. in C, main() is a function. Methods do not have an independent existence; they are always defined inside a class, e.g. main() in Java is called a method. ######## I've been studying OOP lately and had this question my…
Popularity: 24
Who invented the Java computer language?
Java programming language was developed at Sun Microsystems by James Gosling (and originally known by the name "Oak") in early 1990s.
Popularity: 9
When will you use multiple inheritance and when will you use multi level inheritance Describe with an example?
Just take the following book from the library:"Effective C++" by Scott Meyers. He has a whole chapter about pros and cons of Multiple Inheritance and why and when you should and SHOULD NOT use it. Also, the ultimate guide is : "The C++ Programming Language" by B.Stroustrup, the father of C++. Ch…
Popularity: 22
What are the specific uses of pointer to pointer?
Answer A pointer to pointer variable can hold the address of another pointer variable. The syntax is as follows: type** variable;
Example:
// function prototype
void func(int** ppInt);
int main() {
    int nvar = 2;
    int* pvar = &nvar;
    func(&pvar);
    ....
    return 0;
}
One more im…
Popularity: 9
How is JRE or JVM installed on a computer?
The JVM is part of the JRE (Java Runtime Environment) or the JDK (Java Developer Kit). Both the JDK and JRE are packages available from a variety of sources. The most common one is available from Sun (now Oracle). You simply visit the web site, and it will then download and install the JRE for you …
Popularity: 10
Why multiple inheritance is not possible in java?
Let me explain with an example. Suppose…
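Since the answer above is cut off, here is one common illustration of the problem it is presumably describing (the interface and class names are invented): if a class could extend two classes that both define greet(), a call would be ambiguous — the "diamond problem". Java forbids that for classes, and for interfaces with conflicting default methods it forces the class to resolve the tie explicitly.

```java
// If C could *extend* two classes each defining greet(), the call would
// be ambiguous. With interfaces (Java 8+ defaults), the compiler makes
// C break the tie itself.
interface A { default String greet() { return "from A"; } }
interface B { default String greet() { return "from B"; } }

public class C implements A, B {
    // Required override: without it, this class does not compile.
    public String greet() { return A.super.greet(); } // explicitly pick A's version

    public static void main(String[] args) {
        System.out.println(new C().greet()); // prints "from A"
    }
}
```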
Popularity: 35
What is Dynamic Polymorphism?
Dynamic polymorphism is a programming technique that makes objects with the same method name behave differently in different situations. In Java it is achieved through method overriding: which method runs is chosen at run time by the object's actual class, not the reference type.
Popularity: 6
How can someone get a job as a garbage collector in Philadelphia?
yes, it is
Popularity: 3
What is the advantage of polymorphism?
The biggest advantage lies in the creation of reusable code: programmers don't care about the specific objects used, just like driving a car without knowing what plugs are in the engine. Multiple forms of one object are called in the same way.
Popularity: 31
Write a program to input a string in the form of name and swap the initials of the namefor example enter a string is rohit Kumar sharma and should be printed RK Sharma?
Get the string from the user, then split the string on spaces and put the parts in a string array. Find the length of the array and take the first letter of each element except the last, appending those to a string. Then fetch the last element of the array and add it to that string. That's it.
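The steps above can be sketched in Java like this (class and method names are made up for illustration):

```java
// Split the name on spaces, keep initials of all but the last word,
// then append the surname with its first letter capitalized.
public class Initials {
    static String shorten(String name) {
        String[] parts = name.trim().split("\\s+");
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < parts.length - 1; i++) {
            sb.append(Character.toUpperCase(parts[i].charAt(0))); // initial only
        }
        String last = parts[parts.length - 1];
        sb.append(' ')
          .append(Character.toUpperCase(last.charAt(0)))
          .append(last.substring(1)); // rest of surname unchanged
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(shorten("rohit kumar sharma")); // prints "RK Sharma"
    }
}
```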
Popularity: 35
Which programming language is more robust C sharp or Java?
While I generally prefer "C" as a more robust language, I also consider anything that is released by Microsoft to be unreliable and likely to fail. Java is the product of Sun Microsystems, and has a very good reputation. While I'm not a Java programmer, I have heard nothing but good reports about Ja…
Popularity: 51
You are a beginner on programming with Java Server Pages could you recommend you some websites?
Popularity: 17
What will happen if you declared constructor in private section?
It becomes a singleton class.
Popularity: 1
What do you mean by data type in Java?
Data type: a set of values together with a set of operations Answer A category of storing information/data that may or may not be interchangeable. An int is a data type, as is boolean. Each class is also considered its own data type. In real life, "fruit" would be considered a data type, as w…
Popularity: 25
How do you inherit private member of a base class in derived class through inheritance?
Answer You can't. But you can do something similar by making the private members in the base class protected as opposed to private. This gives derived classes access to those members.
Popularity: 44
What is the definition of consistency?
The ability of material to maintain its basic properties under different conditions of temperature and pressure.
Popularity: 30, Statem…
Popularity: 23
What is an identifier?
An identifier is nothing but a data type. It may variable, content, structure or a pointer.
Popularity: 27
Why do file name and class name always coincide in Java?
Answer First of all, it only has to be the same when the class is public. And there is no explicit reason for that, it's just a convention that came along with old versions of java and people got used to it... They say it's because of the limited capabilities of the compiler to compile dependenci…
Popularity: 38
How do you empty a file using C programming?
Answer That depends on the type of file and what you mean by "empty". Answer Generally, fopen() called on an existing file, with the "w" option instead of the "a" option will truncate the file to zero length. To delete a file completely is a different process. Answer Function truncate is …
Popularity: 30
Can we overwrite static methods in Java?
Answer Static method cannot be overwritten because it belongs to the class and not to the Object Answer If you're asking about overriding or overloading, then yes. Static methods can be overridden and overloaded just like any other methods.
Popularity: 29
When do you use protected visibility specifier to a class member in C?
a class member declared as private can only be accessed by member functions and friends of that class a class member declared as protected can only be accessed by member functions and friends of that class,and by member functions and friends of derived classes
Popularity: 5
What is the difference between standard and virtual objects?
uhm this is what i think..anyone agree.. uhm, i think that the difference is that a virtual object is on the computer or on a screen like a television or a movie screen. and that the standard obejct is osmthing that the human can touch like a computer keyborad, some food, dirt, water, etc etc etc…
Popularity: 56
How exceptions are handled in java?
Exception: It is a Runtime error Exception handling if done by 1)Error detection 2)Error Reporting 3)Error Handling Is implemented by following keywords try,catch,finally,throws,throws try{} If exception occurs in try block the appropriate exception handlers that is associated with the try b…
Popularity: 10
What is infix to postfix?
#include <iostream>#include <stack>using namespace std;int prec (char ch){// Gives precedence to different operatorsswitch (ch) {case '^':return 5;case '/':return 4;case '*':return 4;case '+':return 2;case '-':return 1;default :return 0;}}bool isOperand(char ch){// Finds out is a charact…
Popularity: 32
What is Java object-oriented computer programming?
Quick answer Java is an object-oriented computer programming language that is used for application as well as system software development. It is best for web based applications such as servlets, XML design...etc., i.e. the applications that can run on the Internet. It can be used as front en…
Popularity: 70
When shall you use Multiple Inheritance?
Multiple inheritance should be applied when it is fit into the design, like any other features, not because it must be applied because it is available, or it is fun to apply. Multiple inheritance occurs when an object IS more than 1 thing, and being one of them, may have nothing to do with being an…
Popularity: 3
Where is the download for the java virtual machine?
While there are several implementation of the JVM, you can find the one from Sun at
Popularity: 34
Write a function that reads an integer and determines and prints how many digits in the integer are 7s?
// Returns number of 7s in num. int numSevens(int num) { int _num = num; int numSevens = 0; while( _num > 0 ) { if( (_num % 10) == 7 ) { ++numSevens; } _num /= 10; } return numSevens; }
Popularity: 24
What is bytecode?
The machine instructions that are interpreted by the virtual machine they are written for.
Popularity: 5
Difference between abstract class and interface?
An abstract class can have a combination of abstract methods and normal methods.Interfaces cannot implement any methods themselves, all have to be abstract.Other classes can extend only one class (abstract or not), but can implement as many interfaces as they want.
Popularity: 3
What is float?
A float is a type of object that can stay on water without sinking to the bottom. These objects are known for their buoyancy, such as buoys.
Popularity: 0
What resources are used when a thread addres…
Popularity: 4.
Popularity: 4
Write a program which reads names of students and their telephones from a file and produce a linked list ordered in alphabetical order by the surname of the student?
Answer please answer the question write a program which reads names of students and their telephones from a file and produce a linked list ordered in alphabetical order by the surname of the student.
Popularity: 18
What is the abstract of numerology?
Answer Numerology in Abstract The abstract of numeralogy is that each person can be labeled with certain qualities by their birth date and name along with any name changes. This interpretation of numbers gives life lessons, luck and other factors in a persons life. This method of life lessons is …
Popularity: 15
How to get the data coming from https protocol in java?
by adding a get request statement String itemQ= request.getParameter("t1");
Popularity: 9
Functional and structural relation between nabard and rrb?
According to NABARD ACT 1982 NABARD was set up to provide refinance to banks for all kind of agricultural investment and small scale industries and other allied activities.Erlier RBI use to the operations of Agricultural Refinance and Development Corporation for RRB. Now NABARD has taken over all th…
Popularity: 1
Why do supertankers float?
because of the mass .it has a lot of air resistance.
Popularity: 1
Types of polymorphism?
run time ,, compile time polymorphism
Popularity: 2
What is a logic error in a programs code?
It is a generic term for screwing the program code up. It means that somewhere in the coding it is not doing what it is supposed to do. Say you wanted to add the amount of apples and then print the amount for inventory. The code says that it is adding the apples then sending a zero amount for the am…
Popularity: 14
What in java is not serializable?
Objects that do not implement Serializable.
Popularity: 36
Java is robust explain?
Java is robust because it is highly supported language, meaning that unlike C you cannot crash your computer with a bad program. Also, another factor in its robustness is its portability across many Operating systems, with is supported by the Java Virtual Machine.
Popularity: 36
What is Singleton class?
"A class containing a static variable that stores a unique, and inaccesible to external classes (private), intance of itself. The static variable is accessed by a static method, with public access, usually called getIntance. The static variable is initiated by the static getInstance method that …
Popularity: 25
Why do you declare a string as char asterisk?
Declaring strings as char* is generally faster than declaring an array of type char. Consider the following: include<stdio.h> int main (void) { char c1[12] = "Hello world"; char c2[] = "Hello world"; char* c3 = "Hello world"; // ... return 0; } The string literal, "Hello worl…
Popularity: 1
What are metadata?
Metadata are data about data. There are two kinds of metadata: "data about data content", and "data about data containers". Consider a file with a picture. The picture or the pixels inside the file are data. A description of the picture, such as "3 cute kittens, April 2012" are "data about data c…
Popularity: 28
1
2
3
...
81
> | http://www.answers.com/Q/FAQ/2861 | CC-MAIN-2017-09 | refinedweb | 4,444 | 54.32 |
This.
When you create a new .NET project using a template, it always uses the same URLs, defined in
Unfortunately, the MacBook had a driver installed that was already bound to port 5000, so whenever the .NET Core project attempted to start, the port would conflict, and they'd the see error above. Not a great experience!
In this post I show one way to resolve the problem by randomising the ports ASP.NET Core uses when it starts the application. I'll also show how you can work out which port the application has selected from inside your app.
Randomly selecting a free port in ASP.NET Core
In my previous post, I showed some of the ways you can set the URLs for your ASP.NET Core application. Unfortunately, all of those approaches still require that you choose a port to use. When you're developing locally, you might not care about that, just run the application!
You can achieve exactly this by using the special port
0 when setting the URL to use. For example, to bind to a random http and https port on the loopback (localhost) address, run your application using the following command:
dotnet run --urls "http://[::1]:0;https://[::1]:0"
This will randomly select a pair of ports that aren't currently in use, for example:
info: Microsoft.Hosting.Lifetime[0] Now listening on: http://[::1]:54213 info: Microsoft.Hosting.Lifetime[0] Now listening on: https://[::1]:54214 info: Microsoft.Hosting.Lifetime[0] Application started. Press Ctrl+C to shut down.
Alternatively, instead of binding to the loopback address, you can bind to any IP address (using a random port) with the following command:
dotnet run --urls "http://*:0"
This binds to all IPv4 and IPv6 addresses on a random port.
The
*isn't actually special, you just need to use something that isn't a valid IPv4 or IPv6 IP address (or
localhost). Even a hostname is treated the same as
*i.e. it binds to all IP addresses on the machine.
The downside of choosing at random port at runtime is that you get a different pair of ports every time you run the application. That may or may not be a problem for you.
When is this useful?
On the face of it, having your application listen on a different URL every time you restart it doesn't sound very useful. It would be incredibly irritating to have to type a new URL into your browser (instead of just hitting refresh) every time you restart the app. So why would you do this?
The one time I use this approach is when building worker services that run background tasks in Kubernetes.
But wait, isn't the whole point of worker services that they don't run Kestrel and expose URLs?
Well, yes, but due to the issues in the 2.x implementation of worker services, I typically still use a full
WebHost based ASP.NET Core app, instead of a generic
Host app. Now, in ASP.NET Core 3.0, those problems have been resolved, but I still don't use the generic host…
The problem is, I'm running applications in Kubernetes. An important part of that is having liveness/health checks, that check that the application hasn't crashed. The typical approach is to expose an HTTP or TCP endpoint that the Kubernetes infrastructure can call, to verify the application hasn't crashed.
Exposing an HTTP or TCP endpoint…that means, you guessed it, Kestrel!
An HTTP/TCP health check endpoint is very common for applications, but there are other options. For example you could use a command that checks for the presence of a file, or some other mechanism. I'd be interested to know in the comments if you're using a different mechanism for health checks of your worker services!
When the application is running in Kubernetes, the application obviously needs to use a known URL, so I don't use random port selection running when it's running in production. But when developing locally on my dev machine, I don't care about the port at all. Running locally, I only care that the background service is running, not the health check endpoint. So for those services, the random port selection works perfectly.
How do I find out which port was selected?
For the scenario I've described above, it really doesn't matter which port is selected, as it's not going to be used. But in some cases you may need to determine that at runtime.
You can find out which port (and IP Address) your app is listening on using the
IServerAddressesFeature, using the following approach:
var server = services.GetRequiredService<IServer>(); var addressFeature = server.Features.Get<IServerAddressesFeature>(); foreach(var address in addressFeature.Addresses) { _log.LogInformation("Listing on address: " + address); }
Note that Kestrel logs this information by default on startup, so you shouldn't need to log it yourself. You might need it for other purposes though, to register with Consul for example, so logging is just a simple example.
The question is, where should you write that code? Depending on where you put it, you can get very different answers.
For example, if you put that code in a hosted service in ASP.NET Core 3.0, then the
Addresses collection on
addressFeature will be null! That's because in ASP.NET Core 3.0, the hosted services are started before the middleware pipeline and Kestrel are configured. So that's no good.
You might consider placing it inside
Startup.Configure(), where you can easily access the server features on
IApplicationBuilder:
public class Startup { public void Configure(IApplicationBuilder app, ILogger<Startup> log) { // IApplicationBuilder exposes an IFeatureCollection property, ServerFeatures var addressFeature = app.ServerFeatures.Get<IServerAddressesFeature>(); foreach(var address in addressFeature.Addresses) { _log.LogInformation("Listing on address: " + address); } } // ... other configuration }
Unfortunately, that doesn't work either. In this case,
Addresses isn't empty, but it contains the values you provided with the
--urls command, or using the
ASPNETCORE_URLS variable, with the port set to 0:
Listing on address: http://*:0 Listing on address: http://[::1]:0
That's not very useful either, we want to know which ports are chosen!
The only safe place to put the code is somewhere that will run after the application has been completely configured, and Kestrel is handling requests. The obvious place is in an MVC controller, or in middleware.
The following middleware shows how you could create a simple endpoint that returns the addresses being used as a comma delimited string:
public class ServerAddressesMiddleware { private readonly IFeatureCollection _features; public ServerAddressesMiddleware(RequestDelegate _, IServer server) { _features = server.Features; } public async Task Invoke(HttpContext context) { // fetch the addresses var addressFeature = _features.Get<IServerAddressesFeature>(); var addresses = addressFeature.Addresses; // Write the addresses as a comma separated list await context.Response.WriteAsync(string.Join(",", addresses)); } }
We can add this middleware as an endpoint:
public class Startup { // This method gets called by the runtime. Use this method to configure the HTTP request pipeline. public void Configure(IApplicationBuilder app) { app.UseRouting(); app.UseEndpoints(endpoints => { // Create the address endpoint, consisting of our middleware var addressEndpoint = endpoints .CreateApplicationBuilder() .UseMiddleware<ServerAddressesMiddleware>() .Build(); // Register the endpoint endpoints.MapGet("/addresses", addressEndpoint); }); } }
Now when you hit the
/addresses endpoint, you'll finally get the actual addresses your application is listening on:
Of course, middleware is clearly not the place to be handling this sort of requirement, as you would need to know the URL to call before you call the URL that tells you what URL to call! 🤪 The point is just that this information isn't available until after you can handle requests!
So where can we put this code?
One option is to hook into the
IHostApplicationLifetime lifetime events. These events are triggered at various points in your application's lifetime, and give the option of running a synchronous callback.
For example, the following code registers a callback that waits for Kestrel to be fully configured, and then logs the addresses:
public class Startup { public void Configure(IApplicationBuilder app, IHostApplicationLifetime lifetime, ILogger<Startup> logger) { // Register a callback to run after the app is fuly configured lifetime.ApplicationStarted.Register( ()=> LogAddresses(app.ServerFeatures, logger)); // other config } // Called after configuration is complete static void LogAddresses(IFeatureCollection features, ILogger logger) { var addressFeature = features.Get<IServerAddressesFeature>(); // Do something with the addresses foreach(var addresses in addressFeature.Addresses) { logger.LogInformation("Listening on address: " + addresses); } } }
This approach gives you access to your application's URLs at one of the earliest points they're available in your application's lifetime. Just be aware that the callback can't be async, so you can't do anything especially fancy there!
Summary
In this post I described how to use the "magic port 0" to tell your ASP.NET Core application to choose a random port to listen on. I use this approach locally when creating background services that I don't need to make HTTP requests to (but which I want to expose an HTTP endpoint for liveness checks in Kubernetes).
I also showed how you can find out the actual URLs your application is listening on at runtime using the
IServerAddressesFeature. I showed that you need to be careful when you call this feature - calling it too early in your application's startup could give you either an empty list of addresses, the requested list of addresses (i.e. the "port 0" addresses), or the actual addresses. Make sure to only use this feature after application configuration is complete, for example from middleware, from an MVC controller, or in the
IHostApplicationLifetime.ApplicationStarted callback. | https://andrewlock.net/how-to-automatically-choose-a-free-port-in-asp-net-core/ | CC-MAIN-2020-24 | refinedweb | 1,599 | 56.05 |
bean object
bean object i have to retrieve data from the database and want to store in a variable using rs.getString and that variable i have to use in dropdown in jsp page.
1)Bean.java:
package form;
import java.sql.*;
import
JSP bean set property
JSP bean set property
... you a code that help in describing an
example from JSP bean set property...:useBean> -
The < jsp:use Bean>
instantiate a bean class
Form processing using Bean
Form processing using Bean
In this section, we will create a JSP form using bean ,which will use a class
file for processing. The standard way of handling...;beanformprocess2.jsp" to retrieve the data
from bean..
<jsp
JSP
access application data stored in JavaBeans components. The jsp expression language allows a page author to access a bean using simple syntax such as $(name). Before JSP 2.0, we could use only a scriptlet, JSP expression, or a custom
Use Of Form Bean In JSP
Use Of Form Bean In JSP
... about the
procedure of handling sessions by using Java Bean. This section provides...
or data using session through the Java Bean.
Program Summary:
There are
Connect from database using JSP Bean file
Connect from database using JSP Bean file....
<jsp:useBean id=?bean name?
class=?bean class? scope... that defines the bean.
<jsp:setProperty name = ?id?
property
jsp - JSP-Servlet
/loginbean.shtml
http...:// a table in oracle using jsp
and the table name is entered in text feild of jsp page FILE UPLOAD-DOWNLOAD code USING JSP
jsp
jsp code for jsp automatic genration using html
Writing Calculator Stateless Session Bean
'
bean.
Writing JSP and Web/Ear component
Our JSP file access the session bean...Writing Calculator Stateless Session Bean... Bean for multiplying the values entered by user. We will use ant
build tool
JSP
JSP How to create CAPTCHA creation by using JSP?
I need an code for CAPTCHA creation by using JSP...
You can create a captcha validation using jsp. Have a look at the following tutorial:
Captcha Creation
UseBean In JSP
then it instantiates
the bean.
JavaBeans is a class that is written using Java...
declares the JavaBean in a JSP page. Using the jsp:useBean you can access... of the scope attribute of <jsp:useBean>
tag specifies that the given bean can
jsp how to assign javascript varible to java method in jsp without using servlet
jsp
jsp how to create multiple tables in oracle 9i using jsp program
Java Bean Properties
Java Bean Properties What are the properties of a normal java Bean(Not EJB)
HI Friend,
Please visit here:
Thanks
Using Beans in JSP. A brief introduction to JSP and Java Beans.
Getting a Property value in jsp
GetProperties()
{}
}
In the above example, we are using bean with <jsp... of
accessing properties of bean by using getProperty tag which automatically sends... a Property Value</H1>
<jsp:useBean
JSP
; Hi Friend,
Please visit the following links:
Thanks
Is JSF using JSP?
Is JSF using JSP? Is JSF using JSP
jsp
jsp p>in my project i have following jsp in this jsp the pagesize..." prefix="bean"%></p>
<p><%
Log log = LogFactory.getLog...()%>" class="inactiveFuncLn" target="bodyFrame"><bean:message bundle hai good morning all
jsp beginner
myself is sathishkumar i am developing a web application jsp. in this application i generate id card.how... an id of some format using the following code.
public class GenerateSerialNumber...
{
response.sendRedirect("/examples/jsp/login.jsp");
}
}
catch
Implementing Bean with script let in JSP
Implementing Bean with script let in JSP...;
This application illustrates how to create a bean class and how to
implement it with script let of jsp for inserting the data in mysql table.
In this example we create sir i am trying to connect the jsp with oracle connectivity... are using oracle oci driver,you have to use:
Connection connection... are using oracle thin driver,you have to use:
Connection connection ques: how to insert data into database using mysql
jsp
;" import = "java.io.*" errorPage = "" %>
<jsp:useBean id = "formHandler... = "java.io.*" errorPage = "" %>
<jsp:useBean id = "formHandler" class...;
<td align="left" valign="middle"><jsp:getProperty
jsp
jsp Hi,please send me login page code using jsp
1)login.jsp:
<html>
<script>
function validate(){
var username=document.form.user.value;
var password=document.form.pass.value;
if(username==""){
alert... displays all of the items selected. The selection of items is made using checkboxes
jsp
jsp how to calculate mark..using radio button?????? Hello,
Please specify some more details.
Thanks
java bean code - EJB
java bean code simple code for java beans Hi Friend... the Presentation logic. Internally, a bean is just an instance of a class.
Java Bean Code:
public class EmployeeBean{
public int id;
public
tree using jsp code
tree using jsp code i want to draw a tree structure of a family hierarchy using jsp code
Jsp
Jsp Hi Sir,
I want to get the values of selected row from dynamic table and insert into mysql table using servlet.Give me a solution as soon as possible.
Regards,
Santhosh
JSP
how can we disable el JSP how can we disable el
You can disable using isELIgnored attribute of the page directive:
<%@ page isELIgnored ="true|false" %>
JSP
JSP What will happened when we are using sendRedirect ().
hi friend,
when sendRedirect() method is used it transfers the control, temporary, to the other resources.
For detail description please go through
JSP
JSP I am now using MySql database
i have one error when connectivity code run that is ...
driver not found
I want to get driver or driver class
or
Plz know me that which class i have to use Like
report generation using jsp
report generation using jsp report generation coding using jsp
.
We will try to understand the above steps in more detail using the figure
JSP
to understand the above steps in more detail using the figure below
jsp
retrieve or update or delete data in database. I am using Netbeans I'm attempting to run the program , I got the following error.I am using
Apache Tomcat/5.0.28 , jdk1.6
HTTP Status 500 -
type Exception report
description The server encountered an internal error
jsp
jsp ques: how to insert data into database using mysql
//index.jsp
<%--
Document : index
Created on : May 20, 2013, 1:20:04 PM
Author : ignite178
--%>
<%@page contentType="text/html
A Java Program by using JSP
A Java Program by using JSP how to draw lines by using JSP plz show me the solution by using program
Error in using java beans - JSP-Servlet
Error in using java beans I am getting the following error when I run the jsp code.
type Exception report
description The server...: Unable to load class for JSP
ScatterPlot using jsp
ScatterPlot using jsp hi,
can anybody provide me code for ScatterPlot using jsp.
thanks
generate charts using JSP
generate charts using JSP any one know coding for generate bar chart or pie chart using JSP
using Bean and Servlet In JSP |
Record user login and
logout timing In JSP... in JSP File |
Alphabetical DropDown Menu In JSP |
Using Bean
Counter... to Open JSP
| Add and
Element Using Javascript in JSP |
Java bean | http://www.roseindia.net/tutorialhelp/comment/100043 | CC-MAIN-2014-52 | refinedweb | 1,196 | 56.25 |
I have a deeply nested configuration hassle.
The problem happens to be in machine learning, where an end-user calling a cross-validation routine, may, or may not specify any of various parameters (e.g. "randomSeed" = 17)
Either way, the parameters then have to be passed first to the cross-validation algorithm, and then on to a first machine learning algorithm. The machine learning algorithm, must be able to set and pass on other parameters, all without the initial user knowing.
Most all of the consumers in the chain of parameter users expect a java Map interface to be doing the look-up from.
Flattening the keys into one library is unattractive for performance reasons -- both CPU and memory -- (the 'root key-name' space) will be used without modification many thousands of times, and each time a number of additional parameters need to be specified before the bundle is passed along.
A decent analog is how the PATH variable works, each element in the path being a directory (key-namespace). When a query is made against the PATH variable (eg. you type 'emacs' at the command line), it looks in each directory (unnamed namespace of keys) for that file-name (specified value) in order, until it either finds it, or fails to find it. If it finds it, you get to execute the specific contents of the executable file it found (get the value of the parameter set). If you have a PATH variable from another, you can append a new directory (anonymous key-space ) in front of it as you pass that PATH variable setting along to a new end-user, without modifying the previous directories (preferences).
Given the name-space on the configuration parameters is effectively flat, a solution like Python's ChainMap would be perfect (eg example usage) but I'm finding no equivalent solution in Java?
Over the weekend I went ahead and created a
ChainMap implementation as well; thanks to Java 8 it's a surprisingly small class. My implementation is slightly different than yours; it doesn't attempt to mirror Python's behavior and instead follows the
Map interface's specifications. Notably:
.containsValue()doesn't match values that are masked by earlier maps.
.put()returns the previous value of the chain map, even if that value was in a later map.
.remove()removes the key from all maps, not just the first map or the visible entry. From the Javadoc: "The map will not contain a mapping for the specified key once the call returns."
.clear()clears all maps, not just the top map.
.equals()and
.hashCode()on the basis of its entry set, so that it is equal to other
Mapimplementations.
I also did not implement push/pop behavior as it felt like an anti-pattern;
ChainMap is already an O(1) view into a series of maps, you can simply construct additional
ChainMaps with the maps you want as needed.
Obviously, if your implementation works for your use case, that's great. But it violates the
Map contract in several places; I'd strongly suggest removing
implements Map<K, V> and just let it be a standalone class.
Many of the class's methods are nice one-liners, e.g.:
@Override public int size() { return keySet().size(); } @Override public boolean isEmpty() { return !chain.stream().filter(map -> !map.isEmpty()).findFirst().isPresent(); } @Override public boolean containsKey(Object key) { return chain.stream().filter(map -> map.containsKey(key)).findFirst().isPresent(); } @Override public boolean containsValue(Object value) { return entrySet().stream() .filter(e -> value == e.getValue() || (value != null && value.equals(e.getValue()))) .findFirst().isPresent(); } @Override public V get(Object key) { return chain.stream().filter(map -> map.containsKey(key)) .findFirst().map(map -> map.get(key)).orElse(null); }
I've written some tests to verify the class's behavior as well. Additional test cases are welcome.
I also extended your idea of using
Maps.asMap() to create an immutable view of a collection of maps; if you don't need mutation this will work nicely. (As I learned, you have to use the three-argument form of
.reduce() to get the generics to behave).
public static <K, V> Map<K, V> immutableChainView( Iterable<? extends Map<? extends K, ? extends V>> maps) { return StreamSupport.stream(maps.spliterator(), false).reduce( (Map<K,V>)ImmutableMap.<K,V>of(), (a, b) -> Maps.asMap(Sets.union(a.keySet(), b.keySet()), k -> a.containsKey(k) ? a.get(k) : b.get(k)), (a, b) -> Maps.asMap(Sets.union(a.keySet(), b.keySet()), k -> a.containsKey(k) ? a.get(k) : b.get(k))); } | https://codedump.io/share/G7qrs7lBw2Ig/1/python39s-chainmap-for-java | CC-MAIN-2018-09 | refinedweb | 754 | 55.84 |
by Josh Juneau
Build data-driven applications for the enterprise using the PrimeFaces JavaServer Faces UI framework.
Published April 2014
PrimeFaces, a popular JavaServer Faces (JSF) UI framework, can be used to quickly develop sophisticated applications for the enterprise or for standard websites. This article focuses on how to efficiently build data-driven applications for the enterprise using PrimeFaces.
In this article, we'll be developing an enterprise application, making use of PrimeFaces to create a user-friendly, robust experience. The application we will be developing is called AcmePools, and it is for a swimming pool installation company named Acme Pools. It is our job to develop forms to input customer, job, and pool information, as well as provide the ability to easily retrieve and update the data, as needed.
Note: You can find the source code for the AcmePools application on GitHub.
Traditional JSF applications are easy to create and configure. One of the biggest boons of JSF has always been easy configuration, and integrating PrimeFaces into a JSF application is no different. The only requirement is to add the PrimeFaces library to a standard JSF application—that's it. There are some minor optional configurations for utilizing additional features within PrimeFaces, such as file upload and custom templating. However, to get started right away, no additional configuration is necessary. NetBeans IDE 8 makes it even easier, with the ability to generate PrimeFaces application pages from entity classes. This article demonstrates how to generate a PrimeFaces application using the NetBeans IDE 8 features, and then progressively customize the application making it more sophisticated.
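Once the library is on the classpath, PrimeFaces components become available simply by declaring the p: namespace in a Facelets page; no web.xml or faces-config.xml changes are needed for basic use. A minimal sketch (the page content and component choices here are illustrative, not part of the AcmePools application):

```xml
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://xmlns.jcp.org/jsf/html"
      xmlns:p="http://primefaces.org/ui">
    <h:head>
        <title>PrimeFaces Quick Check</title>
    </h:head>
    <h:body>
        <h:form>
            <!-- Any p: component confirms the library is wired up -->
            <p:growl id="messages"/>
            <p:commandButton value="Hello PrimeFaces" update="messages"/>
        </h:form>
    </h:body>
</html>
```

If the page renders with the PrimeFaces look and feel, the integration is complete.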
For the context of this article, we'll be using a NetBeans IDE 8 Maven-based web project. To get started, generate the database tables and sequences that will be used by the AcmePools application. You can do this within NetBeans by starting the default database that comes with NetBeans: Java DB, which is Oracle's supported distribution of the Apache Derby open source database. Then, in the Services window, expand the Databases tree, right-click Java DB, and select Start Server.
For this article, all SQL code will be executed within the sample schema, which is configured and ready to go; we'll be adding only a couple of additional tables. Double-click the sample schema database connection within the Databases tree to connect to the database. Right-click the connection and select Execute Command, which will open up a SQL session with the database. Copy and paste the SQL code that is contained within the
create_database.sql file that is part of the source code for this article, and execute it using the green arrow icon in the SQL editor (see Figure 1).
Figure 1: Creating the database
Next, create a new Maven web application project (see Figure 2).
Figure 2: Creating a Maven-based web application in NetBeans
Enter
AcmePools for the project name, and choose a project location on your file system. Enter
com.acme for Group ID, enter
com.acme.acmepools for Package Name, and click Next.
In the next dialog box, select the application server of your choice. Be sure to use an application server such as GlassFish 4.0 or WildFly, which are compatible with Java EE 7.
Next, set the Java EE Version to Java EE 7. Click Finish to create the application project.
To manually add PrimeFaces as a dependency, expand the newly created project, right-click the Dependencies item and choose Add Dependency (see Figure 3).
Figure 3: Adding a PrimeFaces Maven dependency
In the Add Dependency dialog box, type
PrimeFaces into the Query field. Expand the org.primefaces item within the search results, and choose 4.0 [jar] - central or 5.0 [jar] - central, and then click Add.
Next, enable the JSF framework within the Maven web application project. This can be done by right-clicking the project, choosing Properties, and then clicking the Frameworks category and adding JavaServer Faces (Figure 4). This automatically adds the appropriate configuration to the project's
web.xml file and creates an
index.xhtml welcome file as a starting point for your JSF application.
Figure 4: Adding the JSF framework to your project
Since PrimeFaces has been added as a Maven dependency, any new XHTML files that are created will be ready to use with PrimeFaces, because the appropriate namespace will be added by default.
PrimeFaces makes the development of user-input forms simple. We all know that it is pertinent to validate user input and provide appropriate notifications to the user when something goes wrong. PrimeFaces contains a number of input components to help you develop robust data-input forms. NetBeans IDE 8.0 provides support for easy development of CRUD (create, read, update, delete) applications using NetBeans. This example will demonstrate how to use this support to create the initial input forms and data tables for the AcmePools application.
To begin, create a new package within the project named
com.acme.acmepools.entity. Take advantage of the NetBeans IDE by using the Entity Classes from Database feature (see Figure 5). To do this, right-click the newly created package, select New, and then select Entity Classes from Database. Then select the appropriate data source and choose the CUSTOMER, POOL_CUSTOMER, JOB, and POOL tables.
Note: Be sure to deselect the Include Referenced Classes checkbox, because we do not wish to generate pages from the other associated database tables.
Figure 5: Creating entity classes from the database
Click Next and retain the default selections, making sure that you create a persistence unit. Click Next again, and then click Finish. Each of the database tables should now have an associated entity class.
Now that the entity classes have been created, generate the JSF views by right-clicking the project's Web Pages folder and choosing New and then JSF Pages from Entity Classes. When the dialog box opens, select each of the newly created entity classes, and then choose Next.
On the next screen, retain all of the default values with the exception of the Choose Templates option; for that option, select PrimeFaces rather than the default. Click Finish, and NetBeans will create three separate folders within the Web Pages folder, each named accordingly for their associated entity classes. Within each folder, the following JSF pages are created:
Create.xhtml,
Edit.xhtml,
List.xhtml, and
View.xhtml. NetBeans also automatically generates JSF controllers and Enterprise JavaBeans (EJB) beans for each of the entity classes, and the newly generated pages are wired up to the controllers and ready for use.
Note: At this point, take a moment to organize the code by separating the EJB session beans, controller, and entity classes into their own packages. Use NetBeans IDE to move and refactor the classes once you've generated the new package structure. The example application uses the packages
com.acme.acmepools.session,
com.acme.acmepools.jsf, and
com.acme.acmepools.entity, respectively.
At this point, you can build the application, and then you can deploy it by right-clicking the project and choosing Run. Initially, all that will be displayed is the layout that is shown in Figure 6.
Figure 6: Default
index.xhtml page
Note: It is important to note that automatic primary key generation is not enabled by default. Specify the
@GeneratedValue and
@SequenceGenerator annotations within your entity classes, along with a database sequence, to configure automatic primary key generation.
Clicking one of the links in the
index.xhtml view navigates to the
List.xhtml view for the chosen entity.
First things first, though. Let's make the views more recognizable. Open the
bundle.properties file, which in NetBeans is located within the project's
"Other Sources" -> src/main/resources -> <default package> package. Each of the properties within the file corresponds to an entity view within the application. Change the properties accordingly so that you can recognize which view you have selected. For instance, update the
ViewCustomerTitle property to read
View Customer, and update the
ListCustomerTitle property to read
List Customer.
Although the wizard generates these forms for you, some customization is required to make them more user-friendly. For instance, if you open the
PoolCustomer List.xhtml view and then click the Create button, you'll see lists for
CustomerId and
PoolId objects. By default, the names are not recognizable, but you can alter the code to ensure recognizable values are displayed by editing the
poolCustomer -> Create.xhtml markup and changing the
f:selectItem attributes within the PrimeFaces
SelectOneMenu components to display a label. Listing 1 shows the code for enhancing these components.
<p:outputLabel value="#{bundle.CreatePoolCustomerLabel_customerId}" for="customerId" />
<p:selectOneMenu id="customerId" value="#{poolCustomerController.selected.customerId}">
    <f:selectItems value="#{customerController.items}" var="customerIdItem"
                   itemValue="#{customerIdItem}"
                   itemLabel="#{customerIdItem.name}"/>
</p:selectOneMenu>
<p:outputLabel value="#{bundle.CreatePoolCustomerLabel_poolId}" for="poolId" />
<p:selectOneMenu id="poolId" value="#{poolCustomerController.selected.poolId}">
    <f:selectItems value="#{poolController.items}" var="poolIdItem"
                   itemValue="#{poolIdItem}"
                   itemLabel="#{poolIdItem.style}"/>
</p:selectOneMenu>
Listing 1: Adding the
itemLabel attribute to
<f:selectItems>
After adding the
itemLabel attributes, the lists will be more easily readable, as shown in Figure 7.
Figure 7: Custom labels for
SelectOneMenu items
You can customize other pages and components as needed, making the application easier to use.
The default index page for the NetBeans-generated PrimeFaces application is very plain. To create a more informative and useful home page, begin by applying the application template to the
index.xhtml page. To do so, add the Facelets, PrimeFaces, and JSF Core namespaces into the view, and make use of
<ui:composition> and
<ui:define> to generate a view that adheres to the template:
xmlns:p="http://primefaces.org/ui" xmlns:f="http://java.sun.com/jsf/core" xmlns:ui="http://java.sun.com/jsf/facelets"
Let's also add a more useful data set to the body of the index page by adding the customer listing inside the
<ui:define section. To do so, add a PrimeFaces
DataTable component to the view. The key attributes of the
p:dataTable element for creating a simple data table are
value and
var. The
value attribute should specify a collection of data residing within the managed bean controller, which will be displayed within the table. The
var attribute is used to reference each separate record within the table. In Listing 2,
var is set to
item, and by doing so we can reference each separate table record in more detail.
<?xml version='1.0' encoding='UTF-8' ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
          "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets"
      xmlns:h="http://java.sun.com/jsf/html"
      xmlns:f="http://java.sun.com/jsf/core"
      xmlns:p="http://primefaces.org/ui">
    <ui:composition template="/template.xhtml">
        <ui:define name="title">
            <h:outputText value="Acme Pools"></h:outputText>
        </ui:define>
        <ui:define name="body">
            <h:form id="indexForm">
                <p:panel header="Customer Listing">
                    <p:dataTable id="datalist"
                                 value="#{customerController.items}"
                                 var="item">
                        <p:column>
                            <f:facet name="header">
                                <h:outputText value="Name"/>
                            </f:facet>
                            <h:outputText value="#{item.name}"/>
                        </p:column>
                        <p:column>
                            <f:facet name="header">
                                <h:outputText value="Address"/>
                            </f:facet>
                            <h:outputText value="#{item.addressline1}"/>
                        </p:column>
                        <p:column>
                            <f:facet name="header">
                                <h:outputText value="City"/>
                            </f:facet>
                            <h:outputText value="#{item.city}"/>
                        </p:column>
                    </p:dataTable>
                </p:panel>
            </h:form>
        </ui:define>
    </ui:composition>
</html>
Listing 2: Adding Facelets template and PrimeFaces
DataTable to
index.xhtml
To inspect the code a bit further, look in the
CustomerController class, and find the
items property (see Listing 3). If you are using the NetBeans IDE, you can hold down the Command key (for Mac OS X) or the Control key (for Microsoft Windows) and click the
items reference within the
#{customerController.items} expression to be automatically redirected to the property within the controller.
public List<Customer> getItems() {
    if (items == null) {
        items = getFacade().findAll();
    }
    return items;
}
Listing 3: Managed bean property for populating the
Customer data table
After inspecting the code, you can see that a
List<Customer> is used to populate the
p:dataTable component within the view. The new index page looks like Figure 8.
Figure 8: Newly constructed index page
As seen in the previous section, one of the most efficient ways to display records of data is to list each of them within a table. Have you ever worked on a web application that required you to click a row in a table in order to edit the data, which then took you to a separate edit page? Gone are the days of multiple page navigations to accommodate simple tasks. The pages that were generated for this application by default via NetBeans contain the functionality to show the content of different views as modal dialog boxes, allowing users to stay within a single page to add or change data. This is made possible using the PrimeFaces JavaScript API to bind actions to buttons, applying those actions to specified PrimeFaces components.
For instance, the Create button on the
PoolCustomer List.xhtml view is coded as follows:
<p:commandButton id="createButton" icon="ui-icon-plus"
                 value="#{bundle.Create}"
                 actionListener="#{poolCustomerController.prepareCreate}"
                 update=":PoolCustomerCreateForm"
                 oncomplete="PoolCustomerCreateDialog.show()"/>
Listing 4 shows the code for the
prepareCreate method of the
PoolCustomerController class. The code generates a new
PoolCustomer object, which can then be populated by the user and inserted into the database. This method could be customized to suit your application needs.
public PoolCustomer prepareCreate() {
    selected = new PoolCustomer();
    initializeEmbeddableKey();
    return selected;
}
Listing 4: The
prepareCreate method of the
PoolCustomerController class
The
oncomplete attribute contains a PrimeFaces JavaScript action to open the PrimeFaces component identified as
PoolCustomerCreateDialog, which is located within the
PoolCustomer Create.xhtml view. To associate an identifier with a PrimeFaces component, specify a value for the component's
widgetVar attribute. When the button is clicked, the
prepareCreate method of the
PoolCustomerController class is invoked, and the creation dialog box is displayed when the method is finished. The
update attribute specifies that the
PoolCustomerCreateForm should be asynchronously updated.
The PrimeFaces
DataTable component can be customized very easily, making it easy to develop a highly sophisticated table for displaying and updating application data. For example, the PrimeFaces
DataTable component provides various options for editing without leaving the table view. If you take a look at the List views that were generated for the AcmePools project by NetBeans, you will see that the
DataTable components have been customized to add selection listeners. For instance, Listing 5 shows the
DataTable that is included within the
customer/List.xhtml view.
<p:dataTable id="datalist" value="#{customerController.items}" var="item"
             selectionMode="single"
             selection="#{customerController.selected}"
             rowKey="#{item.id}">
    <p:ajax event="rowSelect" update="createButton viewButton editButton deleteButton"/>
    <p:ajax event="rowUnselect" update="createButton viewButton editButton deleteButton"/>
    ...
</p:dataTable>
Listing 5:
DataTable selection
The
selectionMode attribute indicates that a single row will be selected; this could also be set to
"multiple" to indicate that many rows can be selected at once. The
selection attribute is used to specify the object within the managed bean controller to which the selected record item will be assigned. In this case, it is a
Customer object, as seen within the
CustomerController class:
private Customer selected;
...
public Customer getSelected() {
    return selected;
}
PrimeFaces
<p:ajax> elements can be embedded inside other PrimeFaces components to apply asynchronous functionality. In this case, when a row is either selected or deselected, the buttons within the view are updated. In the application, if you select a row and then scroll to the bottom of the page, you have the opportunity to edit, view, or delete the record. In Figure 9, a record has been selected and then the Edit button has been chosen.
Figure 9: Edit Customer dialog box
Note: When using NetBeans IDE 8, you can see the PrimeFaces documentation for any component by using the autocompletion feature within the editor (see Figure 10).
Figure 10: NetBeans IDE's PrimeFaces code completion functionality
The NetBeans IDE wizard has generated a fully functional CRUD application, but what if your customers had hundreds of record to edit? This could take a while to do if each row had to be selected, a button had to be clicked, and then edits needed to be made. The PrimeFaces
DataTable cell editing feature makes it easier to edit the data contained within a table.
To add cell editing capability to the
customer/List.xhtml table, add the
editable and
editMode attributes to the
dataTable element, as follows:
<p:dataTable id="datalist" value="#{customerController.items}" var="item"
             editable="true" editMode="cell">
Next, as shown in Listing 6, use the
p:cellEditor element to mark each row that you wish to make editable, and also indicate how the output is to be displayed (via
f:facet name="output") and indicate how to treat input (via
f:facet name="input"):
<p:column>
    <f:facet name="header">
        <h:outputText value="Address"/>
    </f:facet>
    <p:cellEditor>
        <f:facet name="output">
            <h:outputText value="#{item.addressline1}"/>
        </f:facet>
        <f:facet name="input">
            <p:inputText value="#{item.addressline1}"/>
        </f:facet>
    </p:cellEditor>
</p:column>
Listing 6: Marking rows as editable
In Listing 6, the customer
addressline1 column will be editable, and when the cell is selected, the field will change to a
p:inputText component, as shown in Figure 11. This makes editing within
DataTable components very easy.
Figure 11: Editable table cell
To persist the edit, you'll need to do something with the data after it is edited. To invoke an action after a cell has been edited, embed the
<p:ajax> element inside the
DataTable. The following
<p:ajax> element can be embedded inside the
customer/List.xhtml DataTable, and then the
onCellEdit method of
CustomerController will be invoked when a cell is edited.
<p:ajax event="cellEdit" listener="#{customerController.onCellEdit}"/>
The
onCellEdit method can perform any action. The example in Listing 7 saves the contents of the edited table row to the database.

public void onCellEdit(CellEditEvent event) {
    Object oldValue = event.getOldValue();
    Object newValue = event.getNewValue();
    if (newValue != null && !newValue.equals(oldValue)) {
        // Persist the edited row; the row index maps into the items list
        Customer customer = items.get(event.getRowIndex());
        getFacade().edit(customer);
    }
}

Listing 7: Example of the
onCellEdit method
Rather than using single-cell editing, it is also possible to add a bit more control to editing functionality by creating a row editor. A row editor includes both submit and cancel functionality, and it enables editing for all the editable fields of a selected row. To add row editing to a
DataTable, omit the
editMode attribute entirely, and add the following column to either the front or the end of the table:
<p:column>
    <p:rowEditor />
</p:column>
After adding the
<p:rowEditor> element to the front of the
customer/List.xhtml table, the New Enterprises row will resemble Figure 12.
Figure 12: Row editing capability
To persist the updates, add an event listener for row edits by embedding a
<p:ajax> element into the
DataTable, as shown below. In this case, two listeners have been added: one for when a row edit is submitted and another for when a row edit is canceled.
<p:ajax event="rowEdit" listener="#{customerController.onRowEdit}"/>
<p:ajax event="rowEditCancel" listener="#{customerController.onRowEditCancel}"/>
Using the tactics demonstrated in this section, you can develop sophisticated
DataTable components for displaying and editing your enterprise data.
Since asynchronous editing is a valuable feature, it would also be nice to include some feedback for users, so they know whether their edits were successful. PrimeFaces makes it easy to include informative messaging. There are a couple different messaging solutions offered by PrimeFaces: growl and messages. In this article, we'll take a look at the messages component.
To make use of messages, add the
<p:messages> element to all the relevant views that are enclosed within a form. For instance, Listing 8 shows the component added to the Customer List view.
...
<h:form id="CustomerListForm">
    <p:messages id="messages" showDetail="true" autoUpdate="true"/>
    <br/>
...
Listing 8: Using the
<p:messages> element
As with the other components, there are several attributes that can be specified. To display a message within the component, add a
FacesMessage to the current instance of the
FacesContext. Listing 9 demonstrates how to add a message when a cell is edited within the view.

FacesMessage msg = new FacesMessage(FacesMessage.SEVERITY_INFO,
        "Cell Edited", "The customer record has been updated.");
FacesContext.getCurrentInstance().addMessage(null, msg);

Listing 9: Adding a message to a view
After editing a cell, the message should be displayed within the view, as shown in Figure 13.
Figure 13: PrimeFaces messages component
PrimeFaces 5 includes significant improvements, including new
DataTable features such as frozen cells, draggable rows, and a column toggle that enables you to hide specified columns. Moreover, new components have been added to help you produce even more robust applications.
One of the most important new features is the addition of PrimeFaces mobile, which is a mobile component library based upon the jQuery Mobile framework. This feature provides the ability to include mobile views for your application pages, allowing the application to scale across devices of all sizes. The PrimeFaces mobile feature also includes performance-based features, such as lazy loading. The best part is that this feature allows mobile and desktop user interfaces to share business logic, making it easy to develop applications for the enterprise that have the ability to span all devices.
The JSF framework has proven to be an efficient development framework for solutions of all sizes. PrimeFaces can be included in JSF applications to significantly increase the options available for your applications. PrimeFaces includes components that provide increased functionality compared to the standard JSF component library. It also includes solutions to increase developer productivity, such as
<p:ajax>, which makes it easy to wire components to business logic using less code. This article covered only a handful of features that PrimeFaces has to offer. For more, please visit primefaces.org.
Josh Juneau (juneau001@gmail.com) works as an application developer, system analyst, and database administrator. He primarily develops using Java and other Java Virtual Machine (JVM) languages. Josh is a technical writer for Oracle Technology Network and Java Magazine. He coauthored The Definitive Guide to Jython and PL/SQL Recipes (both Apress 2010) and Java 7 Recipes (Apress 2011). He recently authored Java EE 7 Recipes and Introducing Java EE 7 (Apress 2013). Josh is currently authoring the upcoming Apress title Java 8 Recipes, which will be published later this year.
Join the Java community conversation on Facebook, Twitter, and the Oracle Java Blog! | http://www.oracle.com/technetwork/articles/java/java-primefaces-2191907.html | CC-MAIN-2016-40 | refinedweb | 3,466 | 53.41 |
CraZeE (Member, 349 posts)
Everything posted by CraZeE
A program of particle fire
CraZeE replied to ncat's topic in 2D and 3D Art

I'm running on a Radeon Mobility X300 - 72 fps. Just curious, how ARE you doing your rendering for the particles? I'm asking because I worked on a particle engine a few years back and it gives a decent 100+ fps on my old GeForce4 running on a Pentium 2 :O Unless you're doing something unique, I believe you should be getting much better framerates for what seems like a simple implementation. Enlighten us :)
Particles in a large world
CraZeE replied to MGB's topic in General and Gameplay Programming

By "large worlds", I'm assuming you're worried about either particles in the distance OR a huge number of systems across different locations in the world. Those are two somewhat separate problems that I would treat differently.

1. As distance increases, the detail level will not be perceptible, so I'd recommend performing some form of LOD to minimize processing. This applies to both of the aforementioned scenarios.

2. Precision may not be an issue unless you're performing per-particle transformation in world coordinates. Excluding live systems in the immediate vicinity of the visible area, systems located far away could be rendered to a texture. That way precision is not an issue, and you just need to render the texture onto the world, like impostors.
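The distance-based LOD idea in point 1 can be sketched as a simple budget function. This is a minimal illustration, not production code: the function name, thresholds, and divisors below are all made up for the example.

```cpp
#include <algorithm>

// Hypothetical LOD policy: how many particles to simulate/draw for a
// system, given its distance from the camera. Thresholds are arbitrary.
int particleBudget(float distance, int maxParticles) {
    if (distance < 50.0f)  return maxParticles;              // full detail nearby
    if (distance < 200.0f) return maxParticles / 4;          // reduced mid-range
    if (distance < 500.0f) return std::max(1, maxParticles / 16);
    return 0;  // beyond this, render an impostor texture instead
}
```

The same shape of function can drive update frequency instead of particle count, which is often a cheaper win.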
- Just came off the top of my head... IF you're having a problem accessing functions in a namespace due to declaration order, then it is NOT a namespace issue. Functions, in general, are required to be at least declared before use. If your code calls a function (in a namespace) and the function prototype hasn't been declared at the point of use, then it's going to be a standard compile error. Either way, it's not the namespace that's the problem; it's the order you declare the function in the namespace with respect to the caller. Hope that made sense.
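To show what I mean about declaration order inside a namespace, here's a tiny sketch (the namespace and function names are invented for the example):

```cpp
#include <string>

namespace util {
    // Declare first...
    std::string greet();

    // ...so other functions in the namespace can call it,
    // even though the definition appears later.
    std::string greetTwice() { return greet() + greet(); }
}

// Reopening the namespace later to supply the definition is fine.
namespace util {
    std::string greet() { return "hi"; }
}
```

Without the declaration of `greet()` above `greetTwice()`, the compiler would reject the call, regardless of the namespace.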
- Namespace forward declaration? Honestly, I haven't heard of this, nor the need for it. Namespaces are special compared to classes. Class/type declarations must be complete within a translation unit. As in:

// In A.h
class A {
    // class contents...
    void firstFunction(void);
};

You can't have this after you do the above:

// In A-2.h
class A {
    // class content 'extension'
    void secondFunction(void);
};

The above just won't work, and the compiler will complain about type redefinition, etc. Namespaces don't have this limitation. You can open and close a namespace (almost) anytime and anywhere, with each reopening appended into ONE namespace in the background. So why do you have this namespace issue again?
Confusion: Libraries, .lib and .DLL
CraZeE replied to Barking_Mad's topic in For Beginners Forum

Whoa~ reading up on multithreading is useful, but it is overkill for your situation. Here's the breakdown as best as I can explain. Most programs link against a C runtime library (CRT). The runtime library itself exists in a few forms:

1. Single-Threaded
2. Single-Threaded Debug
3. Multi-Threaded
4. Multi-Threaded Debug
5. Multi-Threaded DLL
6. Multi-Threaded DLL Debug

Now, without getting too far into the details of each variant, here's the important news: you *must* ensure that all components in a project are compiled against the same runtime library. So if you make an EXE project that links to a LIB file and a DLL (just for example), you must make sure they're all configured to link to the same type of runtime library. In the case of bundled or third-party LIBs/DLLs, the vendors usually provide different variants of their libraries, OR they should tell you explicitly which runtime their library links against. To change this setting, check the project properties: select C/C++ -> Code Generation -> Runtime Library.

As for what each variant does, that's a bit more complicated. The easiest to understand is debug vs. non-debug. When you're building a debug configuration, you should use the debug runtime. Easy! Why? Because the debug runtimes have the debug symbols and information that let you debug your code properly. Single- vs. multi-threaded I can't really explain easily, but I can help with Multi-Threaded vs. Multi-Threaded DLL. The DLL version generates smaller output files, but the target system must have the runtime libraries installed. Which means, if you create an installer for your app, you want to make sure that:

1. You check that the user has the runtimes installed
2. If not, you bundle the runtime DLL with your app

Point 2 makes it a moot point to use the DLL runtimes, especially when developing on .NET 2003 and above, because their runtime files may not match the defaults bundled with the OS. If you want to avoid all these 'DLL not found' errors, link against the static multithreaded runtime. You get a larger output file, but chances are it'll still be smaller than bundling the runtimes with your installer :)
What you would want in your particle engine
CraZeE replied to Morpheus011's topic in General and Gameplay Programming

How do you handle motion/path updates for particles? Is it hardcoded? If it is, I would recommend implementing some way to provide the particle system with a 'behavior' function. That way, particle behavior is not hardwired into the system, although the original behavior can still be replicated this way. This is handy because you can just swap the 'behavior' and the particle/system will adapt to it. Imagine a fire system in a windless environment; when you introduce wind, the behavior is updated just by providing a new function. The same idea can be applied to any modifiable element of the system. I created a system once that had a pluggable motion model and emission area. It promotes extensibility, though it seriously depends on how much extensibility you'd like to have.
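The swappable-behavior idea can be sketched with std::function. This is just an illustration of the pattern; all the names here (Particle, Behavior, drift, withWind) are made up for the example.

```cpp
#include <functional>

struct Particle { float x, y, vx, vy; };

// A behavior is just a function applied to each particle per update.
using Behavior = std::function<void(Particle&, float)>;

struct ParticleSystem {
    Behavior behavior;
    void update(Particle& p, float dt) { if (behavior) behavior(p, dt); }
};

// Two interchangeable behaviors: plain drift, and drift plus wind.
inline void drift(Particle& p, float dt) { p.x += p.vx * dt; p.y += p.vy * dt; }

inline Behavior withWind(float wx) {
    return [wx](Particle& p, float dt) { drift(p, dt); p.x += wx * dt; };
}
```

Introducing wind is then a one-line swap: assign `withWind(2.0f)` to the system's behavior and the existing update loop doesn't change.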
OMG I can not cast base pointer to derived pointer?
CraZeE replied to derek7's topic in General and Gameplay Programming

Generally speaking, casting to a base class is always possible... but you're trying to cast a base-class pointer to a derived-class pointer. Casting to the base class is valid because the base class, in its entirety, is part of the derived class. Casting down to a derived class is different: with a polymorphic base class, dynamic_cast compiles fine, but it is checked at runtime and returns a null pointer if the object isn't actually an instance of the derived class. Imagine:

class A {
    // Simplified for brevity
public:
    virtual int DoSomething();
};

class B : public A {
public:
    virtual int DoSomething();
    virtual int DoSomethingElse();
};

// Usage possibilities:
A* toBase = dynamic_cast<A*>( new B() );   // OK (the cast isn't even needed here)
toBase->DoSomething();
//toBase->DoSomethingElse();  // <-- compile-time error: not declared in A

B* toDerived = dynamic_cast<B*>( new A() );  // Compiles, but yields NULL
// Why? The object really is an A, so there is no 'DoSomethingElse' in it.
// If the cast simply succeeded, the following would be calling a
// non-existent member. As it is, toDerived is NULL, so this call is
// undefined behavior:
toDerived->DoSomethingElse();

Not sure if my explanation helps, but that's my 2 cents.
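To make the runtime check concrete, here's a compilable sketch (the type and function names are my own, not from the original post):

```cpp
struct Base {
    virtual ~Base() {}
    virtual int doSomething() { return 1; }
};

struct Derived : Base {
    int doSomething() override { return 2; }
    int doSomethingElse() { return 3; }
};

// Downcast helper: dynamic_cast checks the object's actual type at
// runtime and yields nullptr when the object is not really a Derived.
Derived* tryDowncast(Base* base) { return dynamic_cast<Derived*>(base); }
```

Checking the result against nullptr before use is what keeps the downcast safe.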
Can I use CALLBACK functions in class ?
CraZeE replied to 3ddreams's topic in General and Gameplay Programming

Yes, as Muncher puts it, most system API callbacks have at least one void* parameter that can be used for arbitrary data. In most cases, especially in an object-based application, you can (and should) piggyback 'this' onto that parameter. Just keep note of this limitation and its solution when making your own callback mechanisms in the future; you can apply the same logic when designing your own callbacks ;)
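Here's what the "piggyback 'this'" trick looks like in practice. The C-style API below is hypothetical, standing in for whatever system API you're using; the trampoline pattern is the part that carries over.

```cpp
// Hypothetical C-style API that accepts a callback plus a user-data pointer.
typedef void (*Callback)(int event, void* userData);

void fireEvent(Callback cb, void* userData) { cb(42, userData); }

class Listener {
public:
    int lastEvent = 0;

    // Static trampoline: recover 'this' from the void* parameter,
    // then forward to the real member function.
    static void trampoline(int event, void* userData) {
        static_cast<Listener*>(userData)->onEvent(event);
    }

    void onEvent(int event) { lastEvent = event; }
};
```

Registering `&Listener::trampoline` with `this` as the user data gets you a member-function callback through a plain-C interface.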
why does everything microsoft make sooo complicated (C++ stuff)
CraZeE replied to nullsquared's topic in GDNet Lounge

Quote: I do feel that the VS2005 compiler is extremely picky. On VC6 and any other compiler and IDE combo, I could have: MessageBox(NULL,"Hello","Hello",MB_OK); On VC2005, MessageBox(NULL,L"Hello",L"Hello",MB_OK); That L parameter came out of nowhere. It's annoying, especially if you've never used the compiler and don't know it's even supposed to be there.

Erm.. I think you need to know your IDE more before comparing it with others. The 'L' prefix has been around for a LONG time, not just in VS2005. I've seen it in .NET and .NET 2003 (can't confirm for VC6, but it's standard C++ in any case). What it stands for has already been explained by JohnB.

Quote: Is it just me or was that a blank message...?

The first parameter is a handle to the owner window. So the MessageBox call specified in the example has a title of 'Hello' and content of 'Hello' with an OK button, and with NULL it has no owner window (I think of it as being linked to the desktop).
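To make the narrow-vs-wide distinction concrete, here's a small standalone sketch in plain standard C++ (no Win32 headers; the helper names are made up). The reason VS2005 projects suddenly demand the L prefix is that they enable UNICODE by default, which makes the Win32 text APIs expect wide (wchar_t) strings.

```cpp
#include <cstddef>
#include <string>

// "x"  is an array of char (narrow);  L"x" is an array of wchar_t (wide).
std::size_t narrowLen(const char* s)  { return std::string(s).size(); }
std::size_t wideLen(const wchar_t* s) { return std::wstring(s).size(); }
```

Both literals spell the same five characters; only the element type differs, which is exactly what the compiler is being picky about.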
C++ Console Colored Text or Alternative?
CraZeE replied to Jemburula's topic in For Beginners Forum

You didn't mention the OS you're running on, so it's a gamble. If you're using Windows (highly likely), then there's a collection of console functions for text formatting, such as SetConsoleTextAttribute. However, these use the Win32 API and might be a bit complicated at first. For exact details, check out MSDN.
Operator overloading class members
CraZeE replied to c0uchm0nster's topic in For Beginners Forum

Out of curiosity, what's the reason for the operators to be private? If you're going to use them from outside of the class, then chances are you won't have permission to access private methods.
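For illustration, here's a minimal sketch of the access rule (the class and its members are invented for the example): a private operator is still usable from inside the class, just not from outside callers.

```cpp
class Meters {
public:
    explicit Meters(int v) : value(v) {}

    // Public members (and friends) may still use the private operator.
    Meters doubled() const { return *this + *this; }
    int raw() const { return value; }

private:
    // Private: outside code cannot write m1 + m2 at all.
    Meters operator+(const Meters& o) const { return Meters(value + o.value); }
    int value;
};
```

So a private operator only makes sense when it's an internal implementation detail; anything meant for client code needs to be public (or exposed via a friend function).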
creating a DLL
CraZeE replied to BloodLust666's topic in General and Gameplay Programming

I'm not very used to anything besides a standard DLL (so COM is not my expertise). Here's what I (hopefully) know about DLLs:

Pros
====
1. Dynamic loading (duh!) - Since the client app and the DLL are not linked at compile time (though that's possible too), you can update the DLL and/or client application without recompiling the other.
2. Reuse - You can more easily integrate a DLL into another project, avoiding linkage issues and compile settings needed to support a static library.
3. You can share a library between otherwise incompatible languages (bridging Delphi <-> C++ <-> VB, etc.)

Cons
====
1. VERY tricky to handle passing of class objects between a DLL and the client app
2. Requires a well-defined interface to ensure client code is not affected by DLL updates
3. Calling-convention issues must be handled wisely
4. Harder to debug from the client app's point of view

Erm.. feel free to add as you find more.
Is it possible to define some kind of function around #ifndef FILENAME_H_,....
CraZeE replied to johnnyBravo's topic in General and Gameplay Programming

Yes, include guards should be in the header, assuming that's what you want to achieve. But on a second note, no, you CAN'T define a preprocessor directive within another preprocessor directive. Meaning you can't do this:

#define #ifndef WHATEVER \
    #define WHATEVER \
#endif

It just won't work.. or at least I never got it to work :P
Sprite without glOrtho
CraZeE replied to Dark Rain's topic in Graphics and GPU Programming

Er.. you mentioned 'having them face the camera', but that would require rotation. So how did that work if this other rotation doesn't? Anyway, it looks like either a transformation-order or transform-state problem. An existing code snippet would be handy for debugging, though; right now it's just a random guess.
Silly visual C++ 2003 problem
CraZeE replied to The C modest god's topic in General and Gameplay Programming

Unfortunately, errors are reported by the compiler with a line reference. Since the compiler does not run in real time, the line info doesn't get updated in real time. It's a minor annoyance, but you'll get used to it. I don't think it can be fixed (easily), because that would require constant validation of code at runtime, and VS.NET 2003 can be quite slow as it is.
- Oh.. truly sorry about that, Diablo. As deavik said, use glPushMatrix() before the transformation in the particle render, and glPopMatrix() right after you draw the particle; all this is in the render function itself. I forgot to add those lines in the sample fix I gave you, though I did use them myself *slaps forehead*

Anyway, about push and pop... it's just a stack (read a book on data structures). Transforms in OpenGL (and most rendering APIs) use matrices to store the transform info. But there are times you want to 'bookmark' the current matrix so that you can make changes and not worry about resetting it afterward. For instance, you might want to do two transforms, then a third, before reverting to the state prior to the third. You could do it the difficult way:

1. transform 1
2. transform 2
3. transform 3
4. clear transform (load identity)
5. transform 1
6. transform 2

This is long-winded. What the push/pop matrix stack does is store the current setting. The above can be done via push/pop as:

1. transform 1
2. transform 2
3. push matrix (glPushMatrix) <-- remember steps 1 and 2
4. transform 3 <-- temporarily do this...
5. pop matrix (glPopMatrix) <-- reset back to original (discard transform 3)

It might look like it saves only one step in the above example, but when you start having complex transformations, push/pop are lifesavers.
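The bookmark/restore idea above can be demonstrated without OpenGL at all. Here's a toy stand-in where the "matrix" is just a single translation offset; the class and member names are invented for the example, but the push/pop semantics mirror glPushMatrix/glPopMatrix.

```cpp
#include <stack>

// Toy version of the OpenGL matrix stack: the "matrix" is reduced to a
// single translation offset, but the bookmark/restore idea is the same.
struct TransformStack {
    float current = 0.0f;
    std::stack<float> saved;

    void translate(float d) { current += d; }
    void push() { saved.push(current); }                 // bookmark the state
    void pop()  { current = saved.top(); saved.pop(); }  // restore it
};
```

After transform 1 and 2, push(), apply transform 3, then pop(): you're back to the combined state of transforms 1 and 2 without re-applying them.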
- ok, found ur problem. it is related to ur order of transformation.. more precisely ur reversal of transofrmation. take note of opengl tranformation order (example:) 1. translate 1 2. rotate 1 3. rotate 2 Opengl will process this in reverse order. Meaning, it'll rotate1, rotate2 then translate1. In your example, u want rotation to occur relative to ORIGIN, before u translate to the final particle position. Although your code doesn't show this, this is wat actually happened (in opengl order): 1. rev rotatey 2. rev rotatex 3. translate to position this causes all ur particles to be positioned FIRST, then rotated. Hence their rotation occurs relative to a now-offset origin. What you really want is the particles to rotate in their center BEFORE u translate. Eg: 1. translate to position 2. rev rotatex 3. rev rotatey unfortunately, the way your code is structured... u need to modify a bit. Not sure if this is the best way, but this works for me (quick hack) // LESSON10.cpp //undo rotations so particles are perpendicular to viewer // I disabled these two lines //glRotatef(-sceneroty,0,1.0f,0); //glRotatef(-lookupdown,1.0f,0,0); // Added depth mask AND i passed the inverse rotation values into // the particle itself glDepthMask(FALSE); //render particles individually myPart[0]->render(-lookupdown, -sceneroty); myPart[1]->render(-lookupdown, -sceneroty); myPart[2]->render(-lookupdown, -sceneroty); myPart[3]->render(-lookupdown, -sceneroty); myPart[4]->render(-lookupdown, -sceneroty); glDepthMask(TRUE); // JTD_Particle::render( float rotx, float roty ) // Added new transformation code here glTranslatef(x, y, z); // Moves particle to correct position glRotatef(rotx, 1.0f, 0.0f, 0.0f); // Reverses the camera rotation glRotatef(roty, 0.0f, 1.0f, 0.0f); glBegin(GL_TRIANGLE_STRIP); // NOTE: All x,y,z references have been omitted and done by transform glTexCoord2d(1,1); glVertex3f(+0.05f,+0.05f,0); // Top Right glTexCoord2d(0,1); glVertex3f(-0.05f,+0.05f,0); // Top Left 
glTexCoord2d(1,0); glVertex3f(+0.05f,-0.05f,0); // Bottom Right glTexCoord2d(0,0); glVertex3f(-0.05f,-0.05f,0); // Bottom Left glEnd(); thats quite a bit of change. this is just to get it to look correct, though you shud plan ur changes in the future to avoid hacks like this. order of transformation is tricky and (in your case), you cant fully do it from outside of the particle logic. And remember, always compose your objects relative to their origin. Particles are usually at their center, but other elements may have it differently. Think of it as that the origin is ALWAYS your point of operation. If something is intentionally rendered offset from origin WITHOUT translation, then there better be a reason for it.
- btw, just fyi... enabling/disabling z-buffer is done as: glEnable(GL_DEPTH_TEST) glDisable(GL_DEPTH_TEST) this completely stops performing Z-tests altogether. For the solution above, you want to keep the existing Z-buffer data intact, but dont want to add additional changes. This way, the particles will still be occluded by nearer objects, but they wont overlap each other ;) To enable/disable Z-write, you do: glDepthMask(TRUE)/glDepthMask(FALSE) default: TRUE.
- i can hint on the z-sorting part. actual z-sorting requires a good datastructure that automates object sorting; something i doubt you will be looking into just yet. Instead, for particles, you can follow this trick: 1. render all OPAQUE (non transparent, non-particle) objects in the scene first 2. disable Z-write (which is NOT the same as disabling Z-buffer entirely) 3. render you particles 4. re-enable Z-write For overlapping particles itself, you will need to work on the blending state. Play around with the blend_src and blend_dest flags for the alpha processing to find a combination that suits. Havent touched OGL in a while, so i cant remember the constants straight out of my head. As for your 'moving' particles, it looks like some incorrect reversal of transformation. I havent looked at the code yet, but chances are you are not rotating the particles relative to their center, instead you're doing it from some other point of reference. Will get back to you when and if i have finished looking at ur source. chiaoz.
Pure Virtual functions
CraZeE replied to Mantear's topic in General and Gameplay Programmingpure virtual functions (pvf for short.. i'm lazy) are useful for definining interfaces. Its the closest substitute C++ has to Java's 'interface' and C#'s 'abstract class' keyword. You get the following differences from a standard virtual function: 1. ONE pvf is all it takes to make a class abstract/non-instantiable 2. pvf MUST be implemented in derived classes to make them instantiable. Derived classes are also abstract classes unless an implementation is provided for all inherited pvf. in many C++ applications, object management is important. One way to track objects is to store all your objects in a list. However, if u take an array (for instance), an array is single-typed. An integer array only KEEPS integers and refuses to accept anything else. HOw then? Thats where pure virtuals come in handy. You can keep an array of pointers to the abstract class and automagically you can store ANY derived class within that array. So how is that different from a standard virtual function?... Not much, aside from the fact that it enforces that all types of a certain abstract base class WILL implement a custom behavior. Virtual functions allow you to fallback on a default implementation if you choose to do so.
struct within a struct question.
CraZeE replied to red-dragonX's topic in For Beginners's Forumyour syntax for accessing the struct-in-a-struct is correct. However, when using at runtime, ensure that the struct-in-a-struct is allocated. You're using a pointer, so an allocation is required before u can use that second level of indirection.
C++ Validation Tools?
CraZeE replied to MENTAL's topic in General and Gameplay Programmingafaik, if u use pure virtual functions.. a derived class MUST provide an implementation, else the compiler will give you an error. You theoretically never get to a runtime error, unless the implementation itself is erroneous. That's a whole different problem altogether :P
CALLBACK TimerProc
CraZeE replied to tranduy's topic in For Beginners's Forumu could try using Sleep in ur home-grown 'timer' loop, but Sleep suspends ur active thread. In a GUI application, that would mean even your GUI will not update until the Sleep duration is over. Multi-threading is needed for this to work elegantly. Another option is to use SetTimer (i think thats wat its called). Check out MSDN for a reference on setting up system level timers that work via callbacks. It might be a more elegant and compatible way to get around this issue.
FireBlade Particle Engine
CraZeE replied to KyRo1989's topic in Graphics and GPU Programmingi'm getting a DLL linking error in a language i can't read :P. The rar doesnt seem to include some necessary DLL's; either that or i'm sorely forgetting something. how? how?
floating text above players head
CraZeE replied to kubert's topic in General and Gameplay Programmingbillboarding for text is probably not worth the trouble, especially if there's gonna be a lot. And working with text in 3d *may* not be wat you want because there are issues with scaling based on depth. Unless you intend text in the far background to be small and indecipherable, i believe you'd want Torment-style text that are a fixed font size irrespective of location and depth of the character. (sorry for bad example, couldnt think of a real 3D game with text right now). just unproject the character position from 3d into screen coordinates to get the x-y coordinates, and render the text as needed in (wat i assume) is ortho mode for font rendering.
- Advertisement | https://www.gamedev.net/profile/14035-crazee/?do=content&all_activity=1&page=0 | CC-MAIN-2019-18 | refinedweb | 3,538 | 65.22 |
Peter Kellner
- Total activity 27
- Last activity
- Member since
- Following 0 users
- Followed by 0 users
- Votes 0
- Subscriptions 10
Peter Kellner created a post,
How To Make Resharper Less Chatty When Typing in codeI'm doing video based training and recording my screen with camtasia. I find that when I'm coding I'm completely happy with all the intelisense resharper gives me but when people watch my video th...
Peter Kellner created a post,
Installed EAP but does not seem to have updated?I installed the EAP 6.1 (it shows in win7 programs and features as JB Resharper 6.1 SDK, build 331) but when I run resharper/about it still shows all my 6.0 license stuff and not mention of 6.1. M...
Peter Kellner created a post,
resharper 4.1 doesn't work with intellisense and MVCI'm new to MVC and just installed preview 5 of MVC (as blogged on scottgu's site). With resharper turned on, it does not find the namespace Microsoft.Web.Mvc even thought it is in my bin directory...
Peter Kellner created a post,
turning off resharper for a given file?I'm using LINQ, and when I have a file that has LINQ code, it is almost not editable with resharper on. Is there someway I can disable resharper for just that file and not the full application? H...
Peter Kellner created a post,
ith resharper 3.1, I don't get intellisense out of vs2008 for LINQhow can I get the default intellisense for LINQ command?right now, they look like what I'm attaching.Attachment(s):resharper1.jpg
Peter Kellner created a post,
any info on vs2008 RTM for resharper?any info on vs2008 RTM for resharper? | https://resharper-support.jetbrains.com/hc/en-us/profiles/1374039441-Peter-Kellner | CC-MAIN-2019-43 | refinedweb | 292 | 65.73 |
Dolby Codec Missing in CC 2018, MTS/AC3 No Audiormc2800 Oct 18, 2017 10:37 PM
Hi, I just updated to CC 2018. When I opened my CC 2017 project in CC 2018 I found that all my MTS video files were offline. I discovered that the issue stems from CC 2018 missing the Dolby Codec. More specifically, it is unable to read the AC3 audio codec. This is affecting all my 2018 products that use audio: Media Encoder, Premiere, After Effects, Audition. However, CC 2017 works totally fine.
First of all, I'm on a Mac (High Sierra) and have tried all the solutions that worked for people who had a similar issues after upgrading to CC 2017 last year. This what I have done so far:
1. Uninstalled and reinstalled Premiere CC 2018
2. Reset Premiere 2018 preferences by holding Shift+Option while the software loads
3. Reset my computer's PRAM
4. Deleted the Media Cache and Peak Files folders in User>Library>Application Support>Adobe>Common
5. Deleted the Dolby Codec in Users>Shared>AdobeInstalledCodecs
Loading CC 2018 and importing my video with the AC3 audio results in video with no audio track. Importing a AC3 audio file gives me a "this file contains an unsupported audio format" message.
When I load them in CC 2017 I instantly get a message "Dolby Codec must be installed to use this feature. Clicking OK will install and enable this codec for immediate use." I hit OK, everything works and I see the codec is back in the AdobeInstalledCodecs folder. The option to reinstall the Dolby Codec never appears in CC 2018.
In CC 2017, I also notice that when I export I have the option to use Dolby Digital as an Audio Export setting. This option isn't available in CC 2018.
Anyone know why my CC 2018 can't seem to read the Dolby Codec?
1. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No Audiogiorgiod35066082 Oct 24, 2017 7:26 AM (in response to rmc2800)
hi rmc,
as far as I know Adobe Premiere will not support anymore Dolby codec.
The only way, I think, is a thirdy part codec like Minnetonka or other.
Hope this will help though it is not a good new.......
Regard
Giorgio
-
3. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No Audioerolkarasakal Oct 20, 2017 6:58 AM (in response to rmc2800)
Merhaba ; ben Erol (London Yapım Studios)
Selamlar Görüyorumki Bu konuyla ilgili şansızlık yaşayan bir tek 3-4 kişiyiz.
CC 2018'e güncellendim. CC 2017 projemizi CC 2018'de açtığımda, tüm MTS video dosyalarım çevrimdışı bulundu.bunun nedeni CC 2018'den Dolby Codec'i eksikliğinden kaynaklanıyor.AC3 .MTS ses codec bileşenini seslerini okuyamıyor. Bu şekil ses kullanan tüm 2018 kurulumlarını etkiliyor ve açmıyor. Premiere Pro cc,After Effects Cc, Media Encoder cc,Audition. Ama nedense CC 2017 de tamamen iyi çalışıyor hayırdır bunun içindemi para talep edeceksiniz bizden..
(High Sierra) iMAC kullanıyorum ve bütün bilinen yöntemler yollar hacklemeler hepsini denedim ama olmuyor bilen yomu acaba
Video dosyasını sequence yüklediğim zaman Violet renginde oluyor ve hiç bir şekilde ses dosyası gelmiyor lütfen update yapın ve sorun giderin yada fit verin.
Kimse varmı bilen acaba neden ses dosyası problemi yaşadığımızı ve düzeltme yöntemi olan?
4. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No Audiokanikas17063107
Oct 23, 2017 12:51 PM (in response to rmc2800)
Moving to Premiere Pro CC
5. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No AudioSeppia Oct 23, 2017 2:21 PM (in response to rmc2800)1 person found this helpful
Would this by any chance solve your issue (providing you have a backup of the Dolby encoding framework) ?
6. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No Audiormc2800 Oct 23, 2017 6:19 PM (in response to Seppia)
For me, no, unfortunately it didn't work. My "dolbycodec.framework" was never deleted. I didn't erase any of my CC 2017 apps when updating to CC 2018. I even opened my time machine backup and found that "dolbycodec.framework" file from a September backup and overwrote my current one.
The strange thing is that a friend of mine has CC 2018, Mac High Sierra like me, and tested one of my MTS video files on his Premiere 2018. For him, it imported fine with audio and video. We've been trying to figure it out together and can't seem to figure out the mystery of why it won't work for me.
7. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No Audiogiorgiod35066082 Oct 24, 2017 7:15 AM (in response to rmc2800)
Adobe CC2018 uses native O.S. support for dolby.
This means that you can import Dolby if your O.S. manage it.
But if you want encode (output) dolby audio, you need external supports like Minnetonka or TMPGenc or others.
Try to "google" into Adobe support and you will find usefull informations.
Hope this will help
Giorgio
8. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No Audiostudionoir Oct 25, 2017 3:24 PM (in response to rmc2800)
I am hunting the Apple ProRes 422 in the update as well... Anyone see it? Am I just missing it?
9. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No Audiostudionoir Oct 25, 2017 4:09 PM (in response to studionoir)
Found it! Never mind!
10. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No Audiormc2800 Oct 26, 2017 1:36 PM (in response to rmc2800)
11. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No AudioKombi Life Oct 26, 2017 1:44 PM (in response to rmc2800)
At least they finally acknowledged it - still don't see why they couldn't make the previous version available that does include the dolby support
12. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No AudioMarcus Batley Oct 26, 2017 2:45 PM (in response to rmc2800)
If you install it though it does give you the 2017 AME too, which has the Dolby options. Then you can run 2018 & 2017 alongside each other. The ProRes codecs are found under the GoPro presets at the Video Codecs dropdown...
13. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No AudioProDesignTools Dec 8, 2017 6:54 AM (in response to Marcus Batley)1 person found this helpful
Sorry for the difficulty! Try one of these suggestions if you've lost Dolby functionality:
1. Your simplest and best bet is to recover your CC 2017 CC 2017, at least until December 31st, and press ahead with CC 2018. Meanwhile, for Mac users, macOS upgrades are always free... Windows 8.1 or above and Mac OS 10.11 or above contain native support for Dolby decoding functions. Again, rename the applicable files and reimport them on your new setup. Then clear your media cache.
5. Going forward, when upgrading Creative Cloud apps, use 'Advanced Options' in the Desktop app to retain older CC versions rather than the default behavior, which is removing them.
Hope that helps!
14. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No Audiohye seokk51620128 Dec 7, 2017 3:59 PM (in response to kanikas17063107)
no — give us a solution.... we are all facing our deadlines
don't tell us to download win10 —
Moderator Note: Warning! Do not use profanity. It is against our guidelines.
15. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No Audiormc2800 Oct 29, 2017 2:09 PM (in response to rmc2800)
I solved my own problem, although I am not sure exactly what did it. I mentioned that I cleared all my Media Cache and Peak files, which didn't solve the issue. Yesterday it did.
The one thing I did differently this time is that I went through my plugins folder: Library>Application Support>Adobe>Common>Plug-Ins>7.0 and purged all my 3rd party plugins that I no longer use. I had quite a few of them. I also deleted the 3rd party program FX Factory. I did this, cleared my media cache and peak files, imported my MTS file in CC 2018 and it imported with video and audio.
16. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No Audiodantond4896878 Oct 30, 2017 9:03 PM (in response to ProDesignTools)
This is absolutely ridiculous.
17. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No Audioandersmarshall Nov 5, 2017 12:31 PM (in response to rmc2800)
Hi rmc, this is an issue with High Sierra not playing nicely with AME. I know because my laptop, running Sierra, transcodes these files fine. They can even be imported to Premiere 2018 on my High Sierra desktop with no audio issues whatsoever.
Wish Adobe would test on the newest software - on one of the two huge platforms they serve - to ensure things like this work out of the box. And of course we can't have the original 2017 files with the Dolby codecs working.
18. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No AudioR Neil Haugen Nov 5, 2017 2:51 PM (in response to andersmarshall)
From having met some of the engineers at NAB the last five years ... that's a Mac-centric group, with also pretty decent PC experience ... and running every Mac rig you can think of.
So ... my guess is it's run on goodly variety of Macs before shipping and yes, newest OS. Of course, even the newest Mac "tower" has what, a five-year-old design on the mobo? They have announced they're going to develop a new mobo but I'm not holding my breath.
Neil
19. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No Audioandersmarshall Nov 5, 2017 4:45 PM (in response to R Neil Haugen)
Eh, I'm running a two year old machine on newest OS. And this problem seems
widespread among Mac users, enough to have a few decent threads about it.
Tells me testing mightve overlooked certain configurations. Oh well,
hopefully a fix comes soon!
On Sun, Nov 5, 2017 at 5:51 PM R Neil Haugen <forums_noreply@adobe.com>
20. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No AudioR Neil Haugen Nov 5, 2017 8:03 PM (in response to andersmarshall)
Yea, but ... have you thought that you and some others may be having it, but thousands of others with identical or nearly identical may not? Which with so many of the issues like this is the case.
So when it's only a small subset of a small subset of the total user base, finding the 'cure' can be a bit of a puzzler. And that's why it doesn't get caught to begin with ... they may easily have had a rig just like yours but didn't exhibit this behavior.
Neil
21. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No Audiosergior78589444 Nov 12, 2017 5:49 PM (in response to rmc2800)3 people found this helpful
I got a solution on mac
I have 2 Mac I update just one to the new CC 2018 and have the same problem that all of u
after fight for days format install and do everything
what I do was,
Close all adobe aplications
copy the dolbycodec.framework from my other computer to the updated Mac on
/users/username/shared/AdobeinstalledCodecs
Open premiere
import MTS
and works my both versions of premiere CC 2017 and CC 2018 Worked Fine
NOTE: premiere CC 2018 works but a bit slow importing files MTS, after been imported everthing works perfect
I don't know if my file works in other computers if smb need leave a mail
Sorry if my English is not perfect
22. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No Audiomammanera Nov 13, 2017 2:04 AM (in response to sergior78589444)
Hello, sergior78589444,
could you provide the framework file for who can't get it anymore?
Thanks!
23. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No Audiomammanera Nov 13, 2017 9:36 AM (in response to ProDesignTools)
Thanks; any chance to do the same in macOS?
24. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No AudioProDesignTools Dec 7, 2017 3:34 PM (in response to mammanera)
Sorry, we can't do that.
People can't really share licensed software components outside of an official distribution; it's not legal, nor a good idea.
See if you can get it from one of your previous system backups.
Or, perhaps you can use Windows Restore or Apple Time Machine to bring back the previous state of your computer's applications before CC 2018 was installed.
And check the #13 reply above for more suggestions on solutions for this issue.
25. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No Audiomammanera Nov 13, 2017 10:40 AM (in response to ProDesignTools)
it's not legal, nor a good idea
Maybe it is legal to throw awaya vital component that a lot of customers rely on, but is it a good idea? Do you find it fair? We'r all working here, nobody has time or money to waste. Now we're forced to find a third party solution for a thing we had already paid for, all because the Dolby encoder disappeared in a blink of an eye without notification. This thing doesn't make any sense.
26. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No Audioandersmarshall Nov 13, 2017 10:47 AM (in response to mammanera)
Have to agree. Maybe we could get special permission from staff on this
forum? Since this solution is highlighted as a fix in another
thread, I don't think it's unreasonable to ask for this file.
27. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No AudioR Neil Haugen Nov 13, 2017 11:08 AM (in response to andersmarshall)2 people found this helpful
Folks, you have two companies involved here ... Adobe was just a licensee of a codec owned by Dolby Laboratories. Dolby owns ac3 outright. And has the right to license all usage.
Adobe can't give away something owned by someone else ... period.
By the way, neither company has said anything publicly about this, not one little word even. So it seems likely there's a breakdown in a usable agreement over licensing that property ... the Dolby ac3 codec.
And ain't none of us know what the breakdown was. Or probably will ever have a clue.
Is it a royal pain to many of us users? Oh, yea. Royal.
So is Apple never allowing their ProRes codec to be created on a PC. I get by fine with Cineform & DNxHD/R, but for many, it would be handy if they could export in ProRes on a PC. Nunh-unh. Not no way not no how.
Neil
28. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No AudioProDesignTools Nov 13, 2017 1:51 PM (in response to mammanera)2 people found this helpful
Nobody here can legally share those files, exactly for the reasons that Neil mentioned.
Furthermore, it's not a good idea because you should never install software files from unauthorized or untrusted sources onto your computer, at least if you care at all about your own security and privacy.
What happened is indeed unfortunate and was unexpected, and many of us here have spent a great deal of time trying to help out fellow users. The only external sources that could legitimately and safely provide those files are Adobe or Dolby, and clearly they are no longer able to.
This is why the suggested solution, if possible, was to pull those files from another system you have, or from your own computer's backups, or use Windows Restore or Time Machine to roll back the application state of your machine.
(And there are other possible solutions suggested on the related Forums thread shared above.)
Going forward, please consider retaining your previous versions of tools when upgrading, rather than the default behavior to remove them. With virtually any CC/CS Adobe app, you can have multiple versions installed on the same machine... (For video customers especially, this is considered a good practice.)
However, you have to explicitly tell the CC Desktop app to do this by unchecking the 'Remove old versions' box under the advanced settings of the update dialogue.
29. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No Audiomammanera Nov 14, 2017 12:20 AM (in response to ProDesignTools)
what if I already updated and removed my preferences? This is ridiculous
30. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No Audiomammanera Nov 14, 2017 12:34 AM (in response to R Neil Haugen)1 person found this helpful
How is this our problem? Adobe is one of the biggest software houses on this planet, and we're paying it every month. We should just have the same instrument we used to have, or at least a notification and some alternatives (also: a discount for a third party app, since this thing has been removed under our feet).
31. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No AudioProDesignTools Dec 7, 2017 3:35 PM (in response to mammanera)
Well, we seem to be going around in circles... Have you actually read through the thread above, which offers several possible solutions that could work?
If you can't restore the previous functionality you had using the CC 2017 direct download links, or by using backups or a system rollback or from another computer, and if you don't want to upgrade to Windows 10 for free, then have you tried using third-party tools such as Handbrake? (also free)
32. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No Audiomammanera Nov 14, 2017 6:07 AM (in response to ProDesignTools)
If you recommend to update to Windows 10, you did not read everything either. I'm on a Mac.
33. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No AudioProDesignTools Nov 14, 2017 6:15 AM (in response to mammanera)
Mac OS upgrades are also free, if you wish:
macOS - How to Upgrade - Apple
Mac OS v. 10.11 and above have native Dolby decoding support:
Adobe Creative Cloud apps use native OS support for Dolby
34. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No Audiosergior78589444 Nov 14, 2017 9:19 AM (in response to ProDesignTools)
Even if u update 10.13 mac os and have native Dolby Decoding support u cant play audio in premiere cc
35. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No AudioR Neil Haugen Nov 14, 2017 9:21 AM (in response to sergior78589444)
?
I know quite a few Mac folk doing just fine ...
Neil
36. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No Audiomammanera Nov 15, 2017 6:24 AM (in response to ProDesignTools)
As I stated several times, I need encoding support, not decoding.
Adobe is forcing us to buy additional software after we paid for the CC for years, and without even giving a warning about the upcoming removal of the feature. I'm sure everybody can see I'm not crazy if I complain about this. Adobe makes the best software, but is not cheap: if I lose a feature, I expect a quick, painless solution.
37. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No AudioProDesignTools Nov 15, 2017 7:17 AM (in response to mammanera)
You didn't state that several times. And if you continue to need a particular functionality that was unexpectedly removed, then continuing to use CC 2017 is one recommended option.
The situation is definitely unfortunate but it's really unusual for Adobe – in the past when there's been a significant change in support or requirements for a major tool, they'll give plenty of notice well in advance.
So something happened here that was exceptional and out of the norm, and apologies for the trouble you're going though.
That said, as fellow users on these forums, we're here to help try to find solutions and workarounds. Many of us who are unaffected are spending a lot of our own personal time here doing that.
38. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No AudioPaul_Miller Nov 26, 2017 2:14 PM (in response to rmc2800)
I have a two year old iMac that has Adobe CC installed since I purchased it. My Premiere imports .MTS files with after updating to Premiere 2018.
Just installed High Sierra on a newer Mac Book Pro and then installed Adobe CC for the first time on this Mac Book and Premiere won't import .MTS files with audio. This is what I get. Clicking OK doesn't install the codec or solve the problem.
I'm going to attempt to copy the dolby codec from my other Mac and see if that fixes it.
39. Re: Dolby Codec Missing in CC 2018, MTS/AC3 No AudioCreativeSmiles Nov 27, 2017 2:11 AM (in response to rmc2800)
I'm confused because I don't have this issue with 2017 version as everyone says, but I do have the problem with 2018 ... but I have Win 10, version 1709 OS Build 16299.64 ... so my 2018 version should be allowing me to input ac3 audio... RIGHT? but no go. I've google it to death ... can't figure this out. | https://forums.adobe.com/thread/2397175 | CC-MAIN-2018-09 | refinedweb | 3,589 | 69.92 |
For the first time since 20 year (i.e. ever), the ROOT team plans to break backward compatibility for crucial interfaces - once. This new major version of ROOT will make ROOT much simpler and safer to use: we want to increase clarity and usability. If you are a physicist, please read on - this is about your ROOT.
The ROOT team will be releasing parts of ROOT 7 throughout the coming years.
Previews will gradually sneak into the ROOT sources, in the
namespace ROOT::Experimental for those parts that are not yet cast in stone, and in the
namespace ROOT for those that are.
We will use standard C++ types, standard interface behavior (e.g. with respect to ownership and thread safety), good documentation and tests: we are trying to be nice!
Feedback
The main point of the meeting and this page is to solicit your feedback. Most of it has been taken care of in the code already.
Building ROOT 7
Pre-requisites
Support for the c++14 standard is required. Usage of g++ >= 5 or clang >= 3.4 is recommended.
Relevant cmake variables
The
CMAKE_CXX_STANDARD cmake variables must be set to at least
14.
Building from source would look similar to this:
$ mkdir root7_build $ cd root7_build $ cmake -DCMAKE_CXX_STANDARD=14 path/to/root/source $ cmake --build . -- -j4
Examples
See the relevant tutorials, for instance for drawing and styling the new histograms.
The new interfaces are not about shortening your code - but about robustness. Here are a few examples of what can go wrong with the ROOT6 interfaces:
- #include "TFile.h"
- #include "TH2.h"
- #include "TTreeReader.h"
- #include "TTreeReaderArray.h"
- #include "TTree.h"
-
- // Another function. Who knows what it does in a month from now.
- void someOtherFunction();
-
- void fill(TTree* tree) {
- // Create the file before so it can own the histograms.
- TFile* file = TFile::Open("jetmuontag.root", "RECRAETE");
-
- // Create the histograms (cannot mix fixed and variable size bins)
- const double muonPtBins[] = {0., 1., 10., 100.};
- // The axis titles might have been changed. Impossible to see.
- TH2* hMuPtTag = new TH2F("hMuPtTag", "muon pT versus tag value;tag value;muon pT [GeV]",
- 4, muonPtBins, 10, 0., 1.);
- TH1* hJetEt = new TH1F("hJetEt", "jet ET versus tag value;jet ET [GeV];tag value",
- 10, 0., 1000.);
-
- // Set up reading from the TTree
- TTreeReader reader(tree);
- TTreeReaderArray<float> jetEt(reader, "jet.ET");
- TTreeReaderArray<float> muPt(reader, "jet.lead_mu.pT");
- TTreeReaderArray<float> tag(reader, "jet.tag");
-
- // Fill the histograms
- while (reader.Next()) {
- for (int iJet = 0; iJet < jetEt.size(); ++iJet) {
- hMuPtTag->Fill(muPt[iJet], tag[iJet]);
- hJetEtTag->Fill(jetEt[iJet], tag[iJet]);
- }
- }
-
- someOtherFunction();
-
- // Store the result. Ideally using file->Write() but very few people do that.
- hMuPtTag->Write();
- hJetEtTag->Write();
-
- delete file; // but not the histograms!
- }
Here are the problems:
- Constructing the TFile, "RECRAETE" is misspelled.
- The axis titles of hMuPtTag are inverted.
- The number of muonPtBins is wrong.
- The histogram hJetEtTag is filled with weights of *tag; that might be a leftover y coordinate.
- You get two copies of the histogram in the file: one from histogram->Write(), one because the histogram is already associated with the file and will be written again on file destruction.
- The call to someOtherFunction() might change gDirectory, and thus the histograms might not be written to jetmuontag.root.
To track these problems down you'd have to spend your time debugging them. Instead, the new interfaces will simply not allow this to happen: no debugging needed! | https://root.cern.ch/root-7 | CC-MAIN-2020-05 | refinedweb | 560 | 68.97 |
Python Deep Copy and Shallow Copy with Examples
Do you ever wonder how one can fill in data once and then simply make a copy of it for convenience? Not possible for a textbook, of course, but Python has its own magic, and yes, it does this for its coders.
How?
Copies in Python are made by functions that produce a carbon copy of an existing object. This lets the coder fall back to the original object if the copy is changed by mistake.
There are basically two types of copies: shallow copy and deep copy. But there is a major difference in how the two work. What is it, and where is each used?
Everything is explained in the article. So, let’s get started.
Python Copy Module
The copy module is used when changes are needed in a duplicate of some data but not in the original. The duplicate can be manipulated freely by the coder without the change ever appearing in the original object in the main source file.
Some of the attributes are:
1. copy.copy(x)
It returns a shallow copy of x.
2. copy.deepcopy(x)
It returns a deep copy of x.
3. exception copy.Error
It is raised for module-specific errors, i.e., when an object cannot be copied.
A shallow copy creates a separate object that stores the same information as the original. The original object itself is never manipulated, but changes can surely be made in the new copied object.
The distinction between shallow and deep copies shows up in subtle debugging problems. Recursive objects face problems with a naive, element-by-element shallow copy, but this is resolved with deep copy, which keeps track of the objects it has already copied.
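The recursion case can be seen directly with a list that contains itself (a minimal, runnable sketch):

```python
import copy

a = [1, 2]
a.append(a)        # the list now contains itself

b = copy.deepcopy(a)

print(b is a)      # False: a new list was built
print(b[2] is b)   # True: the cycle points at the copy, with no infinite recursion
```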
For functions and classes, the module performs no real copying: both the deep and the shallow operations return the original object unchanged in the main code itself. This is generally compatible with the way these objects are treated by the pickle module.
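For instance, the rule that functions are returned unchanged can be checked with a throwaway function (greet is just an example name):

```python
import copy

def greet():
    return "hi"

# Both kinds of copy hand back the very same function object.
print(copy.copy(greet) is greet)      # True
print(copy.deepcopy(greet) is greet)  # True
```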
How to Perform Shallow and Deep Copy in Python?
Shallow Copy Syntax in Python:
import copy
copy.copy(objectvariable_name)
Deep Copy Syntax in Python:
import copy
copy.deepcopy(objectvariable_name)
Python Copy Example
# method to print the list
thisistech = [1, 2, 3], ['#', "*", "-"]
newtech = [0, 9, 8, 7]
# the function returns the list as it is
print('Old List:', thisistech)

Output:

Old List: ([1, 2, 3], ['#', '*', '-'])
Python Deep Copy
A deep copy in Python creates a new object and recursively inserts copies of the objects found in the original. In other words, it copies an object into another, fully independent object. Changes made to the copy are not reflected in the original, and vice versa.
Python Deep Copy Example
>>> import copy
>>> list1 = [1, 0, [7, 9], 6]
>>> list2 = copy.deepcopy(list1)  # Making a deep copy
>>> print(list1)

Output:

[1, 0, [7, 9], 6]
Addition of nested objects: Unlike a shallow copy, a deep copy also duplicates nested lists, so a nested list inside the copy can be changed without affecting the original. With a shallow copy, the nested lists are shared between the two objects.
Code:
import copy

old__list = [[1, 1, 1], [2, 2, 2], [5, 5, 5]]
new__list = copy.deepcopy(old__list)
new__list[2].append(9)   # change a nested list in the copy only
print("Old list:", old__list)
Output:
Old list: [[1, 1, 1], [2, 2, 2], [5, 5, 5]]
Issues With Python Deep Copy
It is possible for recursive objects, i.e., compound objects that directly or indirectly reference themselves, to cause a recursive loop. It is also possible for a deep copy to copy too much, such as administrative data that is intended to be shared even between copies.
To deal with these problems, deepcopy():
1. Keeps a memo dictionary of objects already copied during the current copying pass, so that objects referenced more than once (or referencing themselves) are copied only once.
2. Lets a class override the copying operation by defining the special methods __copy__() and __deepcopy__().
For classes that do not provide their own copy hooks, the copy module falls back on the registered pickle functions from the copyreg module.
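To make the override hooks and the memo concrete, here is a sketch; the Node class is made up for illustration and is not part of the standard library:

```python
import copy

class Node:
    def __init__(self, value, children=None):
        self.value = value
        self.children = children if children is not None else []

    def __copy__(self):
        # Shallow: a new Node, but the same children list object is shared.
        return Node(self.value, self.children)

    def __deepcopy__(self, memo):
        # Register the copy in the memo *before* copying children, so any
        # cycle back to self resolves to the copy instead of recursing forever.
        new = Node(copy.deepcopy(self.value, memo))
        memo[id(self)] = new
        new.children = [copy.deepcopy(c, memo) for c in self.children]
        return new

root = Node(1)
root.children.append(root)          # the node contains itself
clone = copy.deepcopy(root)

print(clone is root)                # False: a new object
print(clone.children[0] is clone)   # True: the cycle is reproduced safely
```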
Copying arbitrary Python objects: Arbitrary objects, such as instances of a user-defined class, can be copied as well.
Code:
import copy

class Point:
    def __init__(self, A, B):
        self.P = A
        self.Q = B

A = int(input("a: "))
B = int(input("b: "))
point1 = Point(A, B)
point2 = copy.copy(point1)
print(point2.P, point2.Q)
Output:
Python Shallow Copy
While using a shallow copy in Python, the coder creates a new object, but instead of recursively copying the objects found in the original, Python only copies references to them into the new object. Because of this, changes the coder makes to a shared nested object through the copy do reflect in the original at runtime.
Python Shallow Copy Example:
>>> import copy
>>> list_1 = [1, 0, [7, 9], 6]
>>> list_2 = copy.copy(list_1)  # Making a shallow copy
>>> print(list_1)

Output:

[1, 0, [7, 9], 6]
Shallow copying dictionaries: the built-in dict.copy() method provides a shallow copy of a dictionary.
Code:
>>> dict_1 = {'a': 0, 'b': 2, 'c': [0, 2, 3]}
>>> dict_2 = dict_1.copy()
>>> dict_2['c'].append(9)   # the nested list is shared
>>> print(dict_1)

Output:

{'a': 0, 'b': 2, 'c': [0, 2, 3, 9]}
Adding a new element such as [0, 0] to the original list after a shallow copy: a top-level element appended to the original does not appear in the copy, because the outer list of the copy is a new object.
Code:
import copy

old_list0 = [[1, 1], [2, 2]]
new_list1 = copy.copy(old_list0)
old_list0.append([0, 0])
print("New list:", new_list1)

Output:

New list: [[1, 1], [2, 2]]
Changing an existing nested object after a shallow copy: the nested lists are shared, so a change made through the original list is visible in the copy.
Code:
import copy

old_list0 = [[1, 1, 1], [2, 2]]
new_list1 = copy.copy(old_list0)
old_list0[1][1] = 'pq'   # change inside a shared nested list
print("New list:", new_list1)

Output:

New list: [[1, 1, 1], [2, 'pq']]
Difference Between Python Deep Copy and Shallow Copy
Having discussed what shallow and deep copies are and why we create copies, it’s time to learn about the difference between them. Actually, there are just two core differences and they’re linked with each other:
- Deep copy stores copies of an object's values all the way down, whereas shallow copy stores references to the original nested objects in memory.
- Changes made to a deep copy are not reflected in the original object, but changes made through the shared nested objects of a shallow copy are, at runtime.
Example for Python Deep Copy vs Shallow Copy
Code 1:
import copy

result_p = [[100, 90], [30, 40]]
result_q = copy.deepcopy(result_p)
result_q[0][0] = 10   # change the copy only
print("Original List: ")
print(result_p)
print("Deep Copy:")
print(result_q)

Output:

Original List: 
[[100, 90], [30, 40]]
Deep Copy:
[[10, 90], [30, 40]]
Code 2:
import copy result_p = [[85, 82], [ 88, 90]] print("Original List: ") print(result_p)
Output:
[[85, 82], [88, 90]]
>>>
Conclusion
As the article concludes, we got to know the two different types of copies that Python offers and their correct usage. Copying is easy, but deciding where to use which kind is the tough part.
doubt - Java Beginners
c;
private String driver = "com.mysql.jdbc.Driver";
private String user = "amar";
private String pass = "amar123";
private String url = "jdbc:mysql://192.168.10.211:3306/";
private String db = "amar";
public void beginners doubt!
java beginners doubt! How to write clone()in java strings
Hi Friend.. Doubt on + - Java Beginners
Hi Friend.. Doubt on + Hi friend...
import java.io.*;
class Plus
{
public static void main(String args[])
{
int a=10;
int b= 25...
}
}
Hi friend,
The Java language provides special support
String Array - Java Beginners
String Array Thanks for the help you had provided,,, Your solution did worked... I worked on the previous solution given by you and it worked.... I... again,,, and I'll come back to you , if I had other problem regarding JAVA
Doubt on Data Types - Java Beginners
Doubt on Data Types Hi Friend doubt on DataTypes...
How should i......
---------------------------------------------------
import java.io.*;
class Ft
{
public static void main(String args[])
{
float f...;import java.io.*;
class Ft
{
public static void main(String args[])
{
float f
Hi..Again Doubt .. - Java Beginners
Hi..Again Doubt .. Thank u for ur Very Good Response...Really great..
i have completed that..
If i click the RadioButton,,ActionListenr should get... bg;
MainMenu menu;
Vote(Frame frame,String str)
{
setLayout(null);
doubt in the following code of java - Java Beginners
doubt in the following code of java Hi frends,
actually i want... which can accept all the size of matrix........and one more doubt, i want...*;
public class TableDemo extends JFrame{
public static void main(String []args
Hi Friend ..Doubt on Exceptions - Java Beginners
Hi Friend ..Doubt on Exceptions Hi Friend...
Can u please send some Example program for Exceptions..
I want program for ArrayIndexOutOfbounds...
{
public static void main(String[] args)
{
String strAr[] = new String[4
"Doubt on Swing" - Java Beginners
"Doubt on Swing" Hi Friend....
Thanks for ur goog Response..
i need to create a GUI Like...
pic1.gif RadioButton
pic2.gif RadioButton
Pic3.gif RadioButton
If we have select d appropriate radio
sorting an array of string with duplicate values - Java Beginners
sorting an array of string Example to sort array string
sorting an array of string with duplicate values - Java Beginners
sorting an array of string with duplicate values I have a sort method which sorts an array of strings. But if there are duplicates in the array it would not sort properly
Doubt
User request form how to submit the details and how to go the next page after submitting.please clarify my doubt I don't know how to submit details...('?');
var params = new Array();
if (idx != -1) {
var pairs = document.URL.substring
Doubt
How to load page how to submit the details and how to go the next page after submitting.please clarify my doubt I don't know how to submit details...('?');
var params = new Array();
if (idx != -1) {
var pairs = document.URL.substring
Doubt
Submit and process form how to submit the details and how to go the next page after submitting.please clarify my doubt I don't know how to submit...('?');
var params = new Array();
if (idx != -1) {
var pairs
Doubt
load next page after submitting how to submit the details and how to go the next page after submitting.please clarify my doubt I don't know how... = document.URL.indexOf('?');
var params = new Array();
if (idx != -1) {
var pairs
doubt
doubt what is the difference between public static void main(String[] args)
public static void main(String args
I'v a doubt - Java Beginners
I'v a doubt Hai to all,
How to break the mysql jar file, and import into the java file by without using any editiors.
With regards,
Terrance. J
doubt in inheritance program - Java Beginners
doubt in inheritance program how will we get the result 6
2 5 in the inheritance program in the given example i got 6 &2 but i am confused about 5
String in Java - Java Beginners
]);
}
}
}
----------------------------------------------------
I am sending simple code of String array. If you are beginner in java... :
Thanks...String in Java hiiiii
i initialise a String array,str[] with a size
big doubt
;html>
<body>
<%@page language="java" import="java.sql.*"%>
<%@page language="java" import="java.io.*"%>
<%
/* String path = request.getContextPath();
String basePath
Hi ..doubt on DATE - Java Beginners
Hi ..doubt on DATE Hi Friend...Thank u for ur valuable response..
IS IT POSSIBLE FOR US?
I need to display the total Number od days by Each month...
WAIT I WILL SHOW THE OUTPUT:
---------------------------------
ENTER
Hi...doubt on Packages - Java Beginners
Hi...doubt on Packages Does import.javax.mail.* is already Existing Package in java..
I have downloaded one program on Password Authentication... ..Explain me. Hi friend,
Package javax.mail
The Java Mail API allows
doubt on synchronized block in java
doubt on synchronized block in java Hi ! some people are feeling... am a beginner.I am learning java with out any teacher.I need your valuable....
I think you got my doubt.
I request you to clarify my doubt based on below
String doubt replace function
String doubt replace function What is the output and why of below :
String s1 = "Hello";
if(s1.replace('H','H')== "Hello")
System.out.println("yes");
else
System.out.println("No");
if(s1.replace("H", "H")== "Hello
doubt in ejb3 - EJB
doubt in ejb3 hi i am new to ejb3 .i have written simple code which... {
String Username;
Integer Password;
public UserEntityBean( String username... = password;
// this.author = author;
}
public String getUsername
array - Java Beginners
" + maxCount);
}
public static void main(String [] args){
Scanner input=new
plz try to clear my doubt for shuffling multi-dimensional array
plz try to clear my doubt for shuffling multi-dimensional array hi... want to shuffle the ful entire multi-simensional array means wat v want to do... final int size = 5;
private int[][] array = new int[size][size];
private
Pass the array please.. - Java Beginners
Pass the array please.. hi!
i'm having problem... them in an array. When finished receiving the numbers, the program should pass the array to a method called averageNumbers. This method should average the numbers
array split string
array split string array split string
class StringSplitExample {
public static void main(String[] args) {
String st = "Hello...]);
}
}
}
Split String Example in Java
static void Main(string[] args
String Array
Java String Array
... how to make use of string array. In the java programming tutorial
string, which are widly used in java program for a sequence of character. String
Merge Sort String Array in Java
Merge Sort String Array in Java Hello,
I am trying to implement a merge sort algorithm that sorts an array of Strings. I have seen numerous examples of merge-sorting integers but i can not understand how to this with String
String array sort
String array sort Hi here is my code. If i run this code I am... language="java"%>
<%@ page session="true"%>
<%
Connection... result_set=null;
String route_number[]=new String[1000];
String
java doubt
use throws keyword. Point to note here is that the Java compiler very well knows
array manipulation - Java Beginners
example at: manipulation We'll say that a value is "everywhere" in an array if for every pair of adjacent elements in the array, at least one of the pair
Validation doubt
have got that and implemented in my code but i have a doubt in that.
As we try to put string values its not allowing to do tht it gives us message its right... think i am able to tell u what i want to and u have got whats my doubt.
plz give
array - Java Beginners
array WAP to perform a merge sort operation. Hi Friend,
Please visit the following link:
Hope that it will be helpful for you.
Thanks
Sorting String arrays in java - Java Beginners
InterfaceStringArray
{
private String[] arr; //ref to array arr
private int... String[size];
nElems = 0; //create array
}
public int getSize()
{
return...Sorting String arrays in java I have to make an interface
On string - Java Beginners
StringUtilsResources: - string -Display reverse String using string reverse method in Java I wanted to display reverse String using string.reverse() method in Java
Array - Java Beginners
Array how to declare array of hindi characters in java
Hi Friend...Doubt on Uni Cast & Multi Cast - Java Beginners
Hi Friend...Doubt on Uni Cast & Multi Cast Hi Friend..
Can u plz send some Details about the Uni CAst & Multi cast in JAVA... Hi... one-to-many connections, an example for multicast is the java eventhandling
Array in Java - Java Beginners
Array in Java Please help me with the following question. Thank you.
Write a program that reads numbers from the keyboard into an array of type int[]. You may assume that there will be 50 or fewer entries in the array. Your"
array password - Java Beginners
array password i had create a GUI of encryption program that used the array password. my question is can we do the password change program? i mean we change the older password with the new password
Java Array Values to Global Varibles - Java Beginners
Java Array Values to Global Varibles I am working on a program that provides users with 3 loan options. If global variables rate and periods...
}
}
public static void main(String args[]) throws IOException
java array - Java Beginners
java array 1.) Consider the method headings:
void funcOne(int[] alpha, int size)
int funcSum(int x,int y)
void funcTwo(int[] alpha, int[] beta...];
int num;
Write Java statements that do the following:
a. Call
Example - Array to String
Java: Example - Array to String
Here is a simple, but slow, program to concatenate all
of the strings in an array, each separated by a specifed string...()
// Convert an array of strings to one string.
// Put the 'separator' string
Java insertion sort with string array
Java insertion sort with string array
In this tutorial, you will learn how to sort array of strings using string
array with Insertion Sort. For this, we have created a method that accepts string array and size and
returns the sorted | http://www.roseindia.net/tutorialhelp/comment/88769 | CC-MAIN-2014-10 | refinedweb | 1,770 | 66.54 |
There are tons of Vector Tile sources available in the wild but it appears that I can only access Vector Tiles that are hosted on AGOL/Portal. Is this true? Please provide support for the Mapbox Vector Tile spec in ArcGIS Pro. There is already GDAL driver support: MVT: Mapbox Vector Tiles
Thank you!
Can you share details about how you're trying to add vector tiles to your map in Pro? Is the expectation to go to Add data from path:
How do you plan to use the vector tiles in Pro?
In the above image I added Mapbox basemaps to ArcGIS Online webmaps and am using those in Pro.
Thanks Kory. Let me give this a go via the URI path option. I was not aware of that route.
Wait - that was a question on the Pro side:) What you would do is get the URL from the Mapbox site for Third Party > ArcGIS Online and follow the instructions:
In an ArcGIS Online map Add > Add Layer from Web > A Tile Layer and use the URL you got from Mapbox.
Save the webmap and use it in Pro.
Hi Kory,
Thanks for the reply. The above option you noted is how to add an existing MapBox map to either Pro or AGOL as an image tiled layer and not a vector tile layer. I am trying to find out how I can directly stream in 3rd party vector tiles directly into Pro. My use case is that I want to be able to style the data, etc in Pro but I need to access 3rd party vector tile servers outside of AGOL or Portal. Thanks.
Cheers,
Blair
Thanks for the clarification, Blair. Could you please update the Idea's title and description to make the requirement clear? "My use case is that I want to be able to style the data, etc in Pro but I need to access 3rd party vector tile servers outside of AGOL or Portal."
Thank you.
I'm sure you've already seen this, but maybe for others who stumble upon this thread: Design custom basemaps with the new ArcGIS Vector Tile Style Editor
This integration would be very good to have!
Hi, I don't know if this fits your needs, but you can import MVT data with a script tool; here is the source of an example. In my case the MVT data is on disk, but it looks like the driver supports a service as well. Note also that in my case the tiles have the extension '.mvt' and not '.pbf', so you can edit that setting.
# MVTImporter.py - Import mapBox vector tiles to GeoPackage
# coding: utf-8
# For ogr methods see: and surf the osgeo.ogr module
import arcpy
import os
from osgeo import ogr
arcpy.env.overwriteOutput = True
# Get zoom level folder
zoomFolder = arcpy.GetParameterAsText(0)
# Copy each layer to a new GeoPackage
level = os.path.basename(zoomFolder)
outgpkgPath = os.path.join(zoomFolder,
'ZoomLevel{}.gpkg'.format(level))
srcName = 'ZoomLevel{}GPKG'.format(level)
mvtDriver = ogr.GetDriverByName('MVT')
mvtDriver.SetMetadataItem('TILE_EXTENSION','mvt')
mvtDataSource = mvtDriver.Open(zoomFolder)
gpkgDriver = ogr.GetDriverByName('GPKG')
gpkgDataSource=gpkgDriver.CreateDataSource(outgpkgPath)
for i in range(mvtDataSource.GetLayerCount()):
    name = mvtDataSource.GetLayerByIndex(i).GetName()
    arcpy.AddMessage('Copying layer {}'.format(name))
    gpkgDataSource.CopyLayer(mvtDataSource.GetLayer(name),
                             name,
                             ['OVERWRITE=YES'])
gpkgDataSource.FlushCache()
gpkgDataSource.SyncToDisk()
gpkgDataSource.Destroy()
mvtDataSource.Destroy()
# Set output parameter
arcpy.SetParameter(1,outgpkgPath)
arcpy.AddMessage('Done') | https://community.esri.com/t5/arcgis-pro-ideas/i-want-to-be-able-to-style-the-data-etc-in-pro-but-i-need-to/idi-p/931036 | CC-MAIN-2021-25 | refinedweb | 565 | 57.57 |