Created on 2010-08-09 01:35 by dil, last changed 2010-08-12 17:29 by ezio.melotti. This issue is now closed.

An error was introduced in 2.6.6 rc1 in the socket flush() call, where del is called on the unbound 'view' variable. Associated commit and diff:

Tail end of the runtime stack trace:

  File "/usr/lib/python2.6/socket.py", line 318, in write
    self.flush()
  File "/usr/lib/python2.6/socket.py", line 302, in flush
    del view, data  # explicit free
UnboundLocalError: local variable 'view' referenced before assignment

Thank you very much for testing the alpha and making a report. I've added Ezio to nosy since he backported the patch, and Barry since he's release manager.

Thanks for the report and for figuring out the cause! Indeed the "view" added after the "del" shouldn't be there, so I will remove it. It would also be great if you could provide a minimal script that reproduces the problem, so that it can be turned into a test. None of the tests in the current test suite seem to go through that code path, so the error passed unnoticed.

spiv wrote a script to repro the issue in the downstream Ubuntu bug:

I have a reproduction script on the Ubuntu bug report I just filed for this issue: <> Pasting here for convenience:

"""
import socket
import threading

sock_a, sock_b = socket.socketpair()
sock_a = socket.socket(_sock=sock_a)

def read_byte_then_close(s):
    data = s.recv(1)
    s.close()

t = threading.Thread(target=read_byte_then_close, args=(sock_b,))
t.start()

file_a = sock_a.makefile('w')
file_a.writelines(['a'*8192]*1000)
file_a.flush()

t.join()
"""

It's not quite good enough to add to the test suite yet IMO, but it's a starting point.

Chatting with Taggnostr on IRC I've trimmed that reproduction down to something much cleaner (no magic numbers or threads involved):

import socket

sock_a, sock_b = socket.socketpair()
sock_a = socket.socket(_sock=sock_a)
sock_b.close()

file_a = sock_a.makefile('w')
file_a.write('x')
file_a.flush()

Here is the patch. Patch accepted, please apply for 2.6.6. Done in r83964.
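For readers who want to see the failure mode in isolation, here is a minimal, self-contained sketch of the same pattern. This is not the stdlib code; the function and variable names are invented to mirror the shape of the broken backport, where the binding of 'view' ended up after the del that references it.

```python
# A minimal sketch (not the actual stdlib code) of the bug pattern fixed
# in r83964: "del" is evaluated on a local name that nothing on this
# control-flow path has assigned yet, so the call raises
# UnboundLocalError.
def buggy_flush():
    data = "x" * 512
    # "del view" makes 'view' a local name, but no assignment to it has
    # run yet -- mirroring the broken "del view, data  # explicit free".
    del view, data
    view = memoryview(b"never reached")  # the binding sits after the del

try:
    buggy_flush()
except UnboundLocalError as exc:
    print("reproduced:", exc)
```

Running the sketch reproduces the same UnboundLocalError the traceback above shows, without needing sockets at all.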
https://bugs.python.org/issue9543
Procedures encapsulate a set of commands, and they introduce a local scope for variables. The commands described are: proc, global, and upvar. Procedures parameterize a commonly used sequence of commands. In addition, each procedure has a new local scope for variables. The scope of a variable is the range of commands over which it is defined. Originally, Tcl had one global scope for shared variables, local scopes within procedures, and one global scope for procedures. Tcl 8.0 added namespaces that provide new scopes for procedures and global variables. For simple applications you can ignore namespaces and just use the global scope. Namespaces are described in Chapter 14. The proc Command A Tcl procedure is defined with ...
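Tcl's model of one global scope plus a fresh local scope per procedure maps closely onto other languages; as a cross-language illustration (not from the book), here is the same idea sketched in Python, where the global declaration plays the role of Tcl's global command:

```python
# One global scope for shared variables, as in pre-namespace Tcl.
counter = 0

def bump():
    # Like writing "global counter" inside a Tcl proc: without the
    # declaration, the assignment below would create a brand-new local
    # variable instead of updating the shared one.
    global counter
    counter += 1

def local_only():
    counter = 99  # a fresh local; the global counter is untouched
    return counter

bump()
bump()
print(counter, local_only(), counter)
```

The two bump() calls update the shared variable, while local_only() only ever touches its own local.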
https://www.oreilly.com/library/view/practical-programming-in/0130385603/ch07.html
C programming language libraries provide some standard functions which can be used on different platforms like Linux and Windows. In this tutorial, we will learn how to use the fscanf() function, its return value and parameters, with examples.

Declaration and Parameters

The fscanf() function accepts a file stream of type FILE* and format specifiers as a char string. The format specifiers are important because the given file will be read in this format; for example "%s %s %s" means 3 strings separated by whitespace.

int fscanf(FILE *stream, const char *format, ...)

Return Value

fscanf() returns data through the pointers provided as parameters. As a function, it also reports its operational status as an int: it returns the number of input items successfully matched and assigned, or EOF if input ends before the first conversion. With a single %s conversion, a successful call returns 1.

Read Example

We will start with a simple example where we read data from the file named test.txt with the fscanf() function in the %s %s %s format. Our data file will be named test.txt:

NAME AGE CITY
ismail 34 ankara
ali 5 canakkale
elif 9 istanbul

We will name the source file fscanf_example.c:

#include <stdio.h>

int main() {
    FILE* ptr = fopen("test.txt", "r");
    char buf[100];
    while (fscanf(ptr, "%*s %*s %*s ", buf) == 1)
        printf("%s\n", buf);
    fclose(ptr);
    return 0;
}

We compile it with the following gcc command.

$ gcc -o fscanf_example fscanf_example.c

And the binary fscanf_example can be run like below.

$ ./fscanf_example

Read To The EOF (End Of File)

As examined in the previous example, we can read to the end of the file with the fscanf() function by using its return value. If the return value is equal to 1, a data item could be read and the file has not yet reached the end.

while (fscanf(ptr, "%*s %*s %*s ", buf) == 1)
    printf("%s\n", buf);

1 thought on "How To Read Input with fscanf() function In C Programming Language?"

Hi Ismail, I read your example and found an issue with the code. In the fscanf call you used a "*" for all 3 conversions, which means you skip every value. But the running example extracts the AGE field, so I think you should change the 2nd "%*s" to "%s".
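For readers following along in another language, the corrected behavior the commenter describes ("keep only the second whitespace-separated field of each row") can be sketched in Python; the data below copies the article's test.txt, and the field-splitting stands in for the "%*s %s %*s" format:

```python
# Sketch: extract the second whitespace-separated field (AGE) from each
# row of the test.txt data shown in the article.
rows = """NAME AGE CITY
ismail 34 ankara
ali 5 canakkale
elif 9 istanbul"""

ages = []
for line in rows.splitlines():
    fields = line.split()        # like "%s %s %s": whitespace-separated
    if len(fields) == 3:
        ages.append(fields[1])   # keep the middle field, like "%*s %s %*s"

print(ages)  # -> ['AGE', '34', '5', '9']
```

Note that, as in the corrected C version, the header row's "AGE" is extracted too, since nothing skips the first line.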
https://www.poftut.com/how-to-read-input-with-fscanf-function-in-c-programming-language/
Line drawing function

This page shows how advanced functionality can easily be built in Python. In this exercise, we will build a new function that draws a line. This function can then be linked to a FreeCAD command, and that command can be called by any element of the interface, like a menu item or a toolbar button.

The main script

First we will make a script containing all our functionality. We will make a getpoint function, then a line function that, when executed, calls the getpoint function twice and then draws the line. Here is our script:

import FreeCADGui, Part

def getpoint():
    "this function returns (x,y,z) coordinates from a mouse click"
    view = FreeCADGui.ActiveDocument.ActiveView
    result = []  # mutable container so the nested callback can store the
                 # point (a plain assignment would only rebind a local
                 # variable inside the callback)

    def getposition(info):
        down = (info["State"] == "DOWN")
        pos = info["Position"]
        if down:
            result.append(view.getPoint(pos[0], pos[1]))

    callback = view.addEventCallback("SoMouseButtonEvent", getposition)
    while not result:
        pass  # busy-wait until the callback has stored a point
    view.removeEventCallback("SoMouseButtonEvent", callback)
    point = result[0]
    return (point.x, point.y, point.z)

def line():
    "this function creates a line"
    x1, y1, z1 = getpoint()
    x2, y2, z2 = getpoint()
    newline = Part.makeLine((x1, y1, z1), (x2, y2, z2))
    Part.show(newline)

Explanation

import FreeCADGui, Part

In Python, when you want to use functions from another module, you need to import it first. In our case, we will need functions from the Part module, for creating the line, and from the Gui module (FreeCADGui), for manipulating the 3D view.

def getpoint():

Here we define our first function. It is a good habit, when creating functions and classes, to start them immediately with a string describing what the function does. When you use this function from the FreeCAD Python interpreter, this string will appear as a tooltip.
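The trickiest detail in getpoint() is the closure: a nested callback must hand a value back to its enclosing function. Here is a FreeCAD-free sketch of that pattern with invented names, so it can be tried in any Python interpreter:

```python
# Sketch of the callback-stores-a-result pattern used by getpoint(),
# without FreeCAD: a mutable container lets the nested callback hand a
# value back to the enclosing function (a plain assignment inside the
# callback would only rebind a callback-local name).
def get_value(fire_event):
    result = []

    def callback(info):
        if info["State"] == "DOWN":
            result.append(info["Position"])

    fire_event(callback)  # stands in for view.addEventCallback(...)
    return result[0] if result else None

# Simulate one mouse-button-down event:
value = get_value(lambda cb: cb({"State": "DOWN", "Position": (10, 20)}))
print(value)  # -> (10, 20)
```

The same effect could be achieved with Python's nonlocal keyword; the list-based version is shown because it also works in the Python 2 era this wiki page dates from.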
https://wiki.freecadweb.org/index.php?title=Line_drawing_function&oldid=638
I've changed what I had from before for this portion:

Random r = new Random();
for (float a=r.nextFloat(); x>0.01f; x=r.nextFloat()) // the local variable a is never read
...

I am given an error message at a.append. I've taken in what you've said and changed it to a Float; the a is accepted but the append now states that it is a float and not a Float, but I changed it to... Ok, I understand. I expect my class to plot a random list of floats given in an array. I was given FloatListDemo to base my FloatList off of so that they would work together. I'm... I don't understand how this will help me understand why the countFloats and append don't work. Example code using FloatList:

package lab6;
import java.util.Random;
/**
 * Demonstrates use of the classes FloatList and ArrayPlot. You need not

package lab6;
import java.sql.Array;
import java.util.Random;
/**
 * A list of floats that can grow to be as long as necessary.
 * @author Melissa, Colorado School of Mines.
 */
public...

Ohh! I didn't call it into the main of the program, yes?

package lab5;
public class ArrayFirst {
    public static void main(String[] args) {
        for (int i=0; i<args.length; ++i)
            System.out.println(args[i]);
        float[] a = {0.1f, 1.2f, 2.3f, 3.4f};

helloworld922: I actually got it solved, thank you though :D

Hello, I'm working on a new project about arrays, yay! I'm given a length of 4 and set that to a. I'm then supposed to make a different int b = a and make another c = the reverse of b, but I... Thanks, I'll modify it. Uh.... I meant for a program.... I have to construct a program for a robot that moves around, whose position and battery life are reported back to the user.

public class Robot {
    public Robot(double x, double y) {
        x = 0.0;
        y = 0.0;
    }
    public double getX() {
        return X;
    }

Hi! So, I've been working on it and I think I did some things right... But now my question, my main concern, is: how do I get the robot to turn/move? Hi! I'm new to this programming stuff and I'm currently struggling. I was given a project to construct a Robot with a fully charged battery and to make it move with certain buttons. I was given...
http://www.javaprogrammingforums.com/search.php?s=a0425616641cae013385e11e105132c4&searchid=477095
According to Trever Adams:
> Is Linux Linux? Or are we going to pull a Linux is Linux but this and
> that syscall are not available on this or that platform so developers
> need to know which platform they are on?

You ought to check over the syscall lists for various architectures. They're not as unified as you seem to think.

> [If] you are going to have a major syscall on several platforms (I
> imagine Sparc/UltraSparc isn't the only one), you need to have it on
> all.

Not necessarily. Sometimes libc is the right place for that kind of adaptation:

int getpagesize() {
#ifdef i386
    return 4096;
#else
    return sys_getpagesize();
#endif
}

That said ... I happen to like sysconf() infinitely more than a unique system call for just one system characteristic.
--
Chip Salzenberg - a.k.a. - <chip@perlsupport.com>
"... under cover of afternoon in the biggest car in the county?!" //MST3K
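Both approaches Chip contrasts are visible from Python's standard library; a small sketch (POSIX assumed) comparing the generic sysconf() query with the libc getpagesize() shim:

```python
import mmap
import os
import resource

# sysconf(): one generic query interface for many system characteristics,
# rather than one syscall per characteristic -- the approach preferred
# in the message above.
page_via_sysconf = os.sysconf("SC_PAGE_SIZE")

# resource.getpagesize() wraps the libc getpagesize() routine, i.e. the
# kind of per-characteristic adaptation sketched in the C snippet above.
page_via_libc = resource.getpagesize()

print(page_via_sysconf == page_via_libc == mmap.PAGESIZE)
```

On a POSIX system all three values agree; the difference is purely in how the question is asked.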
http://lkml.org/lkml/1998/10/28/112
GETGID(2)                 NetBSD System Calls Manual                 GETGID(2)

NAME
     getgid, getegid -- get group process identification

LIBRARY
     Standard C Library (libc, -lc)

SYNOPSIS
     #include <unistd.h>

     gid_t getgid(void);
     gid_t getegid(void);

DESCRIPTION
     The getgid() function returns the real group ID of the calling process;
     the getegid() function returns the effective group ID of the calling
     process.

ERRORS
     The getgid() and getegid() functions are always successful, and no
     return value is reserved to indicate an error.

SEE ALSO
     getuid(2), setgid(2), setgroups(2), setregid(2)

STANDARDS
     getgid() and getegid() conform to ISO/IEC 9945-1:1990 (``POSIX.1'').

HISTORY
     The getgid() function appeared in Version 4 AT&T UNIX.

NetBSD 9.99                      April 3, 2010                     NetBSD 9.99
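As a side note (not part of the man page), Python's os module exposes thin wrappers over these same calls on POSIX systems, which makes the described semantics easy to poke at interactively:

```python
import os

# Thin wrappers over the getgid(2)/getegid(2) calls described above.
real_gid = os.getgid()
effective_gid = os.getegid()

# As the ERRORS section notes, these calls cannot fail, so no error
# handling is needed; both simply return non-negative group IDs.
print(real_gid, effective_gid)
```

The two values differ only when the process runs with a set-group-ID executable or has otherwise changed its effective group.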
https://man.netbsd.org/getgid.2
So I thought I'd knock up a quick video before heading off to my second 'Fighting with Marv' session. In today's video I talk off-the-cuff about Steem/Steemit Communities. I cover: why I'm bullish on Communities; how I see Communities potentially working (based on what I've read and heard); how Communities will improve the user experience on Steemit; how Communities can be monetised; how Community moderators will need to strike the right balance between censorship and openness, between charging commissions and enticing contributions, between accepting ads and putting off subscribers, in order to be successful; how Steem Communities could bring new capital into the Steem eco-system; and how Communities and SMTs are different sides of the same coin. I contrast the organic Communities model of value building on the Steem blockchain with the SMT model. It will be fun to see how the different approaches fare. Communities - I'm waiting for them. The way they could work and become monetised sounds like how I would see them, and I know them from other communities I have been running and moderating. Question: where did you get the details from, or where did you read about how it could work? I would be highly interested in how it really will be organised and rolled out. Let us all hope it will be an open process instead of pre-selected community owners. I am simply speculating at the moment - also, seeing them arrive is very important for the future. (Upvoting my own comment hoping to get more details from others on the Community roll-out - maybe @andrarchy @ned @acidyo @timcliff know more about that?) There is a good post from Timcliff on Communities recounting a conversation he had with Sneak. My understanding from Andrarchy is that Communities should be ready soon. I believe (through conversations I've had) there will be namespaces held in reserve, that will potentially be seen as having a Steemit Inc stamp of approval, when they are dished out to their moderators.
However I don't think that matters too much as anyone can create a Community and people will gravitate to the best communities over time. Sometimes being seen as the 'anointed' Community, can be a handicap especially if another Community is delivering the goods in terms of quality and engagement. Thanks!! :) Thanks mate - sounds interesting the #hivemind "thing" - will re-read both pieces - @andrarchy comment and @timcliff post - a few times later, can't wait for these communities to come and ideally to handle a few. Curious to see what else Hivemind has to offer in terms of opportunities the bad thing in steemit is that just people with a lot of steem power rule, I am on steemit almost 6 months and I don't see any support from whales, I have 250 steem power earned in this six months, I did a lot to grow but nothing help. I understand that just big investors can earn from steemit or if they are supported by them. Will someone support me, I think just couple upvotes make my posts famous and I will be always famous and earn at least a few dollars but not cents. Well - i can not speak for yourself but if you need support try relationship building to people that share common interest but a sentence like your last one probably is not going to help a lot....... I was trying to build relations believe me, I even made giveaway and contests and I never usually ask for upvotes, it's my first time because I am tired, I also tried to moderate and still moderate my group with 18000 memebers and made it just for steemit, for people to resteem to be resteemed, I gave a lot of tips, I share all the good that I can but nothing is working because I don't want to invest in steem because I see that the price is going always down in bitcoin and I don't want to lose my btc. 
here is my group by the way : I also advertise steemit everywhere, always share my posts on: reddit, twitter, facebook, google plus, linkedin and in steemit chat + discordapp. It seems that the idea of resteem-to-be-resteemed is not interesting, or maybe I just didn't find the right people. What do you think now? Hmmm, I am not a fan of anything like resteem-for-resteem or similar. Resteeming is currently not a thing I consider as bringing you more votes or following; maybe the 18k people in your group are the wrong ones. What you say you did for Steemians sounds good to me, so I am confused you don't have more support. I think the fact that I have almost 1800 followers is already something. Where can I find that support? Any idea? Thumbs up. Great vlog - Communities, I can't wait to see what comes from it all, though I firmly believe that people should start (pre-)building communities and establishing their presence and what it is they want to do before the function comes out, and as you said, SMTs and Communities go hand in hand. Either way it will be an awesome time figuring it all out, and it gives more resources to new and existing users. I know that when the function drops I will be using it for the Vloggers Guild I'm creating. Thanks for the talk and discussion, looking forward to the next one. sighs Flags.. Such a messy topic at times. Have a good day. nods I agree people should start building now. The principles are the same now or when Communities/SMTs arrive. I.e. can you engage and on-board people and build a "Community"? The functionality is a tool to make it easier all round, but the basic building blocks are here now and will only add to the credibility of 'influencers' claiming to be able to build successful Communities/SMTs in the future. Exactly - most people think they need the 'Communities' function in order to form a Community, when in reality all it takes is true social interaction and a want to see something grow in an organic direction.
The theory for monetization is that the more connections you can create between nodes (people), and the larger the user base, the more wealth gets created. This according to Metcalfe's law. The current problem I think with Steem is that it is just too small and not diverse enough. It is not diverse in social norms, or morals, or ways of thinking. I have seen people get censored for having the wrong views. If communities are going to work then we cannot have whales from outside the community censoring post from within the community. If people within a certain community have controversial views which are fringe then this should be respected unless something clearly illegal is taking place. I have my doubts that the moderators and the whales will allow communities to be independent. References Thanks just getting my head around this and so this is useful! So many videos on Youtube about how to get going! Suppose I just need to blog! Will enjoy the communities when they get going, but for now just going to build my blogs... I am praying that my steem investment turns around eventually. I turned 1 bitcoin in to .25 of a bitcoin in 4 or 5 months here. There is a disconnect between investors and content creators here. Without actually investors content creators wont make crap. Great explanation and video. Maybe someone can enlighten me here, as I am fairly new to steemit. I don't see communities as a good thing at all. With a platform that is a cross between instagram and reddit, only two things will come of it. 1: Communities will just turn into "feature" pages similar to instagram. People spamming the tag in hopes they get featured, in order to gain more followers themselves. Leaving 1 or 2 people in charge of who actually gets to see content. That doesn't sound like the community based platform everyone hopes steemit will be. 
2: They become like "subreddits": "photography" will be a community, controlled by a couple of mods, who decide what gets to be posted about photography, and what rules apply to those posts. All I see is a group of people being able to take a certain "theme/sub" hostage. Just like Reddit: "oh sorry, no blogspam," "oh sorry, no watermarks," "oh sorry, I am a mod and I don't like your post for whatever reason." Effectively shutting people out of being able to get more eyes on their own content. Let's be honest here. The whole purpose of this platform is to "blog" or create original content about certain topics, and be rewarded for doing so. Not specific people deciding what gets to be posted, and what type of content should be seen. That, or a battle over who gets the dominant community first, shutting others out, and then gaining ultimate power because of it. If people are complaining about whales abusing their power now, wait till people become in charge of communities. Wait till those communities start to "sell out," wait for the content to be on lock-down because of the issues I already stated above. I mean, hopefully these things don't happen, and again maybe my understanding is lacking, but.... Edit: I voted my comment so it might have a better chance of being answered or discussed. Thanks. I still don't get why Steem is not $5.00 - do other social media platforms pay you to post and are as user friendly? Ridiculous. Good post, upped and following. Shared to my upcoming dedicated Steemit Facebook & Twitter Group👍 Bitcoin 8100 Bulls leading the charge... Your thoughts are wrong, it will not happen here lol Well done, thanks for the information! I believe you thanks Great video on explaining the building of multiple communities and the integration of relationships.
Good post bro..💞 Follow and upvote now..i will too ok do it with me already did it pls I REALLY need to up my Steem Power and idk what to do Great post and thank you again for adding further great content to the Steemit martial arts community; with quality posts like this the community is set to grow and grow, and those with passion and truth will prevail. Liked, followed and resteemed. I'm working on a series called 'Lessons from the mat' regarding the benefits I and others have gained from BJJ that transfer over to other aspects of life. I'd really appreciate your thoughts/input osss steemit gonna blow away facebook and twitter Good topics. There are many things to learn about Steemit. It's like being in a rabbit hole with lots of ways to go. Thanks for the update and 411 on the communities. I was not aware of much that you mentioned although I had heard about them. It is exciting to see the newer changes as they come about. I am also looking forward to the "community" aspect of this site... thus far, it seems to be a hodgepodge, with the landing page being the trending posts. Nice post.... It has a future, that's for sure! good video with nice posted great work !! upvoter!! Good post COOL Nice content and pretty interesting. Thanks. Congratulations @nanzo-scoop! Your post was mentioned in the hit parade in the following categories: Ohh Brother U R Great Leader of Steemit I would love to follow you are there specific steps on how to get enough Steem Power? even if it takes time. I like the platform I have been hearing about the communities for a while now. I am very excited about it as it will really help bloggers find their niche audience. I love that idea as it is difficult to filter anything at the moment. Great video, I enjoyed your thoughts quite a lot. You touched on ways the communities will help that I did not realise...
informative nice post and pic Don't forget to upvote, follow & resteem @nurdinnakaturi Ohhhh yes I really am bullish on this platform as well. I think this is a good way to make an investment in social media, and I love doing it, which is all that matters. Just followed you. Please follow me & vote my posts plz plz boss A great piece @nanzo-scoop, kudos to you - this video format really fits you!! Looking forward to the communities and SMTs in 2018! :) Nice post gam panyoet Thinking this currency has the potential to go to $5 - $10; the only problem I see is that I have heard the Steem supply doubles every two years or so? (Correct me if I am wrong.) However, I suppose as the user base increases that issue is kind of eliminated, but I still think it keeps the price stable. The circulation can often remove the volatility. nope, your information is outdated This is terrible. 2x every 2 years. Bad news after bad news it's also wrong information. By the way, it's not "news" if it's not "new"... the inflation rate of Steem has always been public. At some point it was very high, but that is not the case anymore. hay After hardfork 16 inflation was set to 9.5% pa, decreasing by 0.5% pa until it reaches something like 5%. I think your information is outdated. no This is important information to be discussing. Will follow. Great post here! follow me Good All very valid points, love your blogs man :) follow me good post Nice post, please follow me @sagheer46 and upvote my posts... I promise to do the same for you thanks for this post, I gave you all that I have, please give back Keep spirit and success always bro :) Good post; the only thing I don't get is the word "Communities" - I thought Steemit itself was one whole community, no matter how difficult it is sometimes. Are you just speaking to those "in the know"?? I'd like to hear please. Happy Steeming! Communities will be to Steemit what subreddits are to Reddit. It's just a label for sub-sections. I know nothing! OK got it.
Seems like it will bring a little more order to the whole community. Currently, no matter how wonderful the thought of it is, too many posting all in almost one go doesn't make for any kind of order. If the communities are broad-based enough, yet also specific to their category, it would probably work better and be less confusing to newcomers. Keep up the good work. Happy Steeming! Steem is actually quite an interesting social experiment in how a "real world" Steem environment would play out without a ruling body/government. Alliances would form, those with power would wield their strength against scammers/abusers of the environment, some people would keep to themselves and not get involved at all, others would pretend to care about the environment but do nothing about it, a ruling body may/would/will form. . . Steemit truly is a reflection of how things could play out in a decentralized world. . . I have noticed all this in my short time here, as there seem to be mini wars happening (which I agree with, as many are standing up against community/reward pool abuse). I think Steemit should make a 'like' button for votes and comments. Communities won't be around until 2019? Did I hear that correctly? I think they have the potential to really be beneficial too, but is the ETA really that far away? Communities have indeed a huge potential and I also can't wait until they are rolled out. The cleaner community is a great example of how abuse can be fought more effectively. great video post 😍A very informative post. Great job. Keep it up! 😍 Great informative piece, I'm excited for what happens in 2018 😊 follow me Have done, and commented on your last post :D Hi there sir, well you state quite a lot of viable points; we are going into 2018 and I bet a lot of things should and will happen on the platform, as you have stated. Thank you for the vlog. Thanks Dear @nanzo-scoop Thumbs up Great Vlog, Communities. It's Wonderful... Thanks for the information!
Thanks for sharing very good information. I appreciate your post @nanzo-scoop, I post an article on Steemit, Read it May be helpful:- Congratulations, your post received one of the top 10 most powerful upvotes in the last 12 hours. You received an upvote from @thejohalfiles valued at 103.18 SBD, based on the pending payout at the time the data was extracted. If you do not wish to receive these messages in future, reply with the word "stop". Congratulations @nanzo-scoop, this post is the. Good idea to build now - we all should build the future on Steemit now Thank you communities will take it forward the next step in evolving !! follow me good information. we all should with goolam Good job.....😊👍 Very god job nice video steemit is dead A great video as usual. It open my eyes widely to see our Steemit world differently. Great work! Interesting....) Nice. But I'm confused about it. Nice one. Thanks a lot. Nice..... good job love it !! Glad you are thinking of this. I love Steemit, but I feel like so few of the people (AI?) who comment on posts have actually read them. Communities seem cool, looking forward to learning more. Any ways we can make this experiment work, I am all for! videos that can give appreciation to all who watch it, good work Nice vidio Interesting article. Visit CoinCheckup.com | SteemIT - Daily Technical Analysis thanks for your better information nice..video Sounds similar to google plus! Thanks for providing some insight on the concept concerning the formation communities and their various purposes and possible benefits. It will be very interesting to see how this concept develops over time and whether Steemit becomes a more attractive and robust platform as a result. ha a so nice @abdulsalim well written... Kindly check my article hi i am from pakistan and I upvote you.Wow thank you for this information. I think you are doing a best work on steemit.Really this blog is very information to other.GOOD LUCK✌✌ plz follow me.. 
Spamming comments is frowned upon by the community..
https://steemit.com/vlog/@nanzo-scoop/vlog-why-i-m-bullish-on-steemit-communities-for-2018-and-models-for-monetisation
sub coolsub {
    require Mod;
    ...
}

print "Mod loaded at 1" if %{Mod::};
require Mod;
print "Mod loaded at 2" if %{Mod::};

print "Mod loaded at 1" if %{Mod::};
use Mod;
print "Mod loaded at 2" if %{Mod::};

print "Mod loaded at 1" if %{Mod::};
sub require_it { require Mod; }
print "Mod loaded at 2" if %{Mod::};
require_it();
print "Mod loaded at 3" if %{Mod::};

print "Mod loaded at 1" if %{Mod::};
sub use_it { use Mod; }
print "Mod loaded at 2" if %{Mod::};
use_it();
print "Mod loaded at 3" if %{Mod::};

What really happens here? The source file for module 'Mod' is searched, read and compiled at run time, as opposed to use, which leads to compilation of the module at compile time of the script which contains that 'use' statement. Nothing special about scoping here. The namespace of the module 'Mod' is defined after successful compilation, and it has its own scope. The visibility of that namespace is not confined to the block which contains the 'require'; it is available to all Perl code executed thereafter, no matter if it's in the same scope or even the same file.

There is one time when requiring a module will limit its scope: when using threads:

C:\test>perl -Mthreads -wle"async{require CGI; print 'T:'.%{CGI::}}->detach; sleep 2; print 'M:'.%{CGI::};"
T:61/128
M:0

The above shows that a module required in a thread is not visible in other threads. But if you use it anywhere, it will be visible (and consume space) in all threads:

C:\test>perl -Mthreads -wle"async{ use CGI; print 'T:'.%{CGI::} }->detach; sleep 2; print 'M:'.%{CGI::};"
T:62/128
M:62/128

The two main differences between use and require are that use loads the module at compile time and calls its import method, while require loads it at run time and never calls import. Typical usages include:

sub rarely_called {
    require Expensive::Module;
    ...
}

sub not_backwards_compatible {
    require 5.010_000;
}

require for pragmata probably won't do what you think, since import is not run.
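The deferred-loading idiom above (require inside a sub) has a close analogue in other languages; as a hedged cross-language sketch, an import inside a Python function likewise delays loading the module until the first call:

```python
import sys

def rarely_called():
    # Like Perl's "require Expensive::Module" inside a sub: the module
    # is loaded only when this function first runs, not at program
    # start. (json here merely stands in for an expensive module.)
    import json
    return json.dumps({"loaded": True})

# The module may or may not already be in sys.modules, depending on
# what else the process imported; after the call it definitely is.
print(rarely_called())  # -> {"loaded": true}
print("json" in sys.modules)
```

As with require, repeated calls are cheap: Python caches the loaded module in sys.modules, just as Perl records it in %INC.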
http://www.perlmonks.org/?node_id=664843
CC-MAIN-2014-23
refinedweb
329
64.54
Nice, can you build some demo about BlendModes?
Posted by Farrukh Obaid on March 10, 2009 at 08:58 PM IST #

Thanks Farrukh. You may refer to the "Effects Playground" sample. Click on "Open Image 2" to load the second image. Now select the "Blend" option. It will show controls for different modes such as Multiply, Overlay, Difference etc. Hope this helps.
Posted by Rakesh Menon on March 11, 2009 at 08:18 AM IST #

@Farrukh I have posted a new blog on BlendMode.
Posted by Rakesh Menon on March 12, 2009 at 11:00 AM IST #

Hi Rakesh. Do you know how I can refer to a relative image url in the css file? Example:

    public class MyClass extends Group {
        id: "goBack";
        var image: Image {}
        ...
        init {
            content = [
                ...
                ImageView { image: image; }
                ...
            ]
        }
    }

in the css:

    ...
    #goBack.image {
        url: url({__DIR__}/images/img.png)
    }
    ...

Thank you in advance.
Posted by Antonio on May 19, 2009 at 01:24 PM IST #

@Antonio I haven't tried using image, so not sure if that will work. Need to try..
Posted by Rakesh Menon on June 11, 2009 at 08:15 AM IST #
https://blogs.oracle.com/rakeshmenonp/en_US/entry/javafx_sudoku_css_support
CC-MAIN-2014-15
refinedweb
185
76.11
Andrew, Al,

Please consider adding this series to your trees.

I've been using these patches for a while on my laptop to mount fuse filesystems as user, without any suid-root helpers. The setup is as follows:

- link /proc/mounts to /etc/mtab
- patch util-linux-ng with
- remove suid from mount, umount and fusermount
- add a line to /etc/fstab to bind mount ~/mnt onto itself owned by the user
- add 'fs.types.fuse.usermount_safe = 1' to /etc/sysctl.conf

Apart from '/dev/sda2' being replaced with '/dev/root' in 'mount' and 'df' outputs, I haven't experienced any problems.

Thanks,
Miklos

v8 -> v9:
- new patch: copy mount ownership when cloning the mount namespace

v7 -> v8:
- extend documentation of allow_usermount sysctl tunable
- describe new unprivileged mounting in fuse.txt

v6 -> v7:
- add '/proc/sys/fs/types/<type>'

v3 -> v4:
- simplify interface as much as possible, now only a single option ("user=UID") is used to control everything
- no longer allow/deny mounting based on file/directory permissions, that approach does not always make sense

--
http://lkml.org/lkml/2008/3/17/275
CC-MAIN-2017-04
refinedweb
170
51.07
Description: Avro should work with Hadoop's newer org.apache.hadoop.mapreduce API, in addition to the older org.apache.hadoop.mapred API.

Is there a specific use case where this is failing for you, or is it just the use of deprecated APIs that is a problem? I suppose that integrating Avro with another library that is on the newer API could be an issue.

I'm also interested in using the newer mapreduce API with Avro, so I'm trying to write an AvroWritable and some input and output format classes that know how to deal with the schemas. I should have a patch next week, but the idea is:

- Introduce new classes AvroKey and AvroValue that implement Writable.
- Users can call AvroJob.setInputKeySchema(), AvroJob.setInputValueSchema(), AvroJob.setMapOutputKeySchema(), AvroJob.setMapOutputValueSchema(), AvroJob.setReduceOutputKeySchema(), AvroJob.setReduceOutputValueSchema() as needed.
- Provide AvroContainerFileInputFormat/AvroContainerFileOutputFormat, AvroSequenceFileInputFormat and AvroSequenceFileOutputFormat that read and write the schemas for the data appropriately. The schema in the sequence files can be stored in the header's metadata.
- Users can write Mappers and Reducers as they normally would.

Note that this differs slightly from the org.apache.avro.mapred.* way of doing things: I don't plan to supply special AvroMapper and AvroReducer base classes or a new Serialization, since the AvroKey/AvroValue classes are Writable just like any other Hadoop key/value type.

FYI: Wrapping Avro serialization 'inside' of Writable will work, but there will be some non-trivial performance cost to that. Writable requires more fine-grained reads and writes from the underlying stream, preventing optimal buffering for Avro.

Thanks for the info, Scott. Trying to avoid putting avro serialization 'inside' of Writables, I came up with this patch that tries to keep features/changes to a bare minimum.
Let me know what you think. I have one small issue with this: mapred.AvroSerialization has new protected methods added to it so that mapreduce.AvroSerialization may inherit from it. This makes it a bit problematic to use this patch without fully patching the Avro tree. If mapreduce.AvroSerialization was completely separated from mapred.AvroSerialization, then this patch could be used alongside the existing 1.4.1. I will attempt to rework this patch to do this, but it may not be until next week or so.

I'd actually prefer it if the implementations shared more rather than less, so that fixes and improvements would not need to be made twice. For example, AVRO-669 made significant changes to the mapred code that would also be useful for the mapreduce version. So it might be nice if both versions of AvroJob shared a common base class, with shared setters and getters, e.g., getInputKeyDatumReader(), etc., to minimize replication of logic.

I can definitely see that. My goal is to be able to use the mapreduce API for Avro alongside the current stable release of Avro. If you have a suggestion of where these mapred/mapreduce base classes should live, package-structure-wise (org.apache.avro.mapreduce.common?), I'll work on it, and rework this patch to apply to current trunk also.

I don't have a strong opinion about where the base classes should live. Perhaps they can just live in the mapred package? Thanks for working on this!

There's some code at for working with the new MapReduce API

Coincidentally, we just announced the release of our avro mapreduce code today as well:

Garrett, I just glanced at this and it looks great! You've factored things so that much of the code is shared between the 'mapred' and 'mapreduce' implementations. The stuff in the 'file' and 'io' packages should probably be renamed. Currently the 'io' and 'file' packages are in the main avro jar, which does not require Hadoop. I think it's best not to split packages across multiple jars, and these classes depend on Hadoop, so they probably belong in the avro-mapred jar. Perhaps they should be renamed 'org.apache.avro.mapred.{io,file}'? Also, do you intend this code to be contributed to Apache Avro? (I ask as a legal formality.) Thanks.

Yes, I intend this code to be contributed to Apache Avro. When I get some free cycles, I'll upload a patch with the {io,file} packages renamed. But anyone else should feel free if they have time first.

Here's a first pass at renaming the packages. Tests pass.

I'll take a closer look next week.

Garrett, this code looks great! Thanks for contributing it. I renamed all of the packages to reside under org.apache.avro.mapred. So that package now has subpackages named io, file, util and mapreduce. That's consistent with other Avro modules, where classes are under org.apache.avro.<module>. The only exception is org.apache.hadoop.io.AvroSequenceFile. This is in a Hadoop package so that it can access some package-private parts of SequenceFile. This is fragile, as SequenceFile could change these non-public APIs. We should probably file an issue with Hadoop to make these items protected so that SequenceFile can be subclassed in a supported way. I plan to improve the javadoc a bit (adding package.html files to new packages) and move versions for new dependencies from mapred/pom.xml into the parent pom. Then I think this should be ready to commit.

> I renamed all of the packages to reside under org.apache.avro.mapred. So that package now has subpackages named io, file, util and mapreduce.

Keeping the package org.apache.avro.mapreduce would be more consistent with Hadoop, which has the mapred/mapreduce distinction. I see a few choices:

1. org.apache.avro.{mapred,mapreduce,io,file,util}. This is what the code on github does. This would make the avro-mapred module contain things outside the org.apache.avro.mapred package, and splits Avro's io, file and util packages across multiple modules.
2. org.apache.avro.mapred.{mapreduce,io,file,util}. This is what my patch does. This is back-compatible and consistent with the module name, but places mapreduce under mapred, which is different than the Hadoop layout.
3. org.apache.avro.hadoop.{mapred,mapreduce,io,file,util}. We'd rename the module to be avro-hadoop. This would be incompatible but consistent with Hadoop. For back-compatibility we might leave the mapred classes in their current package.
4. org.apache.avro.{mapred,mapreduce,mapred.io,mapred.file,mapred.util}. This is back-compatible but includes a package that's not under the package of the module name.

Tom, are you advocating for (4)? I'd be okay with that, I guess. I'm also leaning towards moving AvroSequenceFile under org.apache.avro and adding just a shim base class into org.apache.hadoop.io that subclasses SequenceFile and makes public the bits we need. That way, if we get Hadoop to expose these bits, the Avro API would not change.

Re #1: it's OK to have multiple packages in a single maven module, but not good to have a package split across modules, as it causes problems for OSGi and, in the future, Java 8 modules.
Re #2: This is OK, but a little confusing. Also, if we ever wanted to break apart the mapred module into two or three (e.g. avro-hadoop, avro-mapred, avro-mapreduce, with the common stuff in avro-hadoop and the two APIs in the others) it will be less consistent.
Re #3: This is fairly clean, but is incompatible.
Re #4: This is decent, but I would propose: org.apache.avro.{hadoop,mapreduce,mapred,hadoop.io,hadoop.file,hadoop.util}. Then the current module would have o.a.a.{hadoop,mapreduce,mapred} and children packages. A future split could divide on these cleanly.
One reason to split in the future is that some users may want Hadoop stuff that is not related to mapreduce: sequence files, Avro data file access via FileSystem+Path, etc. If we split the module, avoiding moving classes around is important.

Is it possible to move AvroSequenceFile under o.a.a? All classes in that package need to be in the base avro maven module, and cannot depend on any Hadoop APIs.

We also need to consider whether we need to produce two otherwise identical modules in a build: one 0.23.x+ compatible and one for the 0.20 / 0.22 / 1.0 users. My understanding is that one needs to compile against 0.23.x to work properly there. Organizing the modules so that it is possible to produce an Avro release that supports multiple Hadoop variants would be useful.

> Is it possible to move AvroSequenceFile under o.a.a?

I discussed that above. We could move it, but we'd still need a shim in o.a.h.io, since the subclass accesses package-private bits.

> if we need to produce two otherwise identical modules in a build – one 0.23.x + compatible and one for the 0.20 / 0.22 / 1.0 users

The nested Context classes in mapreduce's Mapper and Reducer went from abstract classes to interfaces (MAPREDUCE-954), requiring re-compilation of code that references these. But the mapreduce support added here does not reference these. So I think we're spared.

> I discussed that above. We could move it, but we'd still need a shim in o.a.h.io, since the subclass accesses package-private bits.

Let me clarify: Is it possible for AvroSequenceFile to not reference anything in o.a.hadoop.** or o.a.a.{mapreduce, hadoop, mapred}.**? It has:

    import org.apache.avro.mapred.AvroKey;
    import org.apache.avro.mapred.AvroValue;

which would indicate to me that it must be in o.a.a.mapred. If it is in o.a.a, it must not reference any classes that don't exist in the base avro module that encompasses o.a.a (lang/java/avro). Ideally, anything in the .io, .util, and .file packages does not reference the .mapred or .mapreduce packages, so that this can be packaged as a standalone Hadoop dependency down the road. I have not looked at all those yet to see what the package dependencies are.

> Ideally, anything in the .io, .util, and .file packages does not reference the .mapred or .mapreduce packages [ ... ]

Much in these packages references AvroKey and AvroValue and/or AvroJob. These uses aren't mapreduce-specific and could be refactored away, e.g., by moving AvroKey and AvroValue from o.a.a.mapred to o.a.a.hadoop.io, but that would be incompatible. SortedKeyValueFile is the Avro equivalent of Hadoop's MapFile. Arguably it should be moved into o.a.a.io. It depends on AvroKeyValue, which might also be moved to the core. AvroKeyValue is very similar in functionality to o.a.a.mapred.Pair. Perhaps SortedKeyValueFile should be switched to use Pair and both moved to the core.

I have implemented a SequenceFile shim and it works. There's now just a tiny class that needs to be in o.a.h.io: a base class that exposes two package-private nested classes from within SequenceFile. I've re-arranged the classes per Scott's #4 variant, but can revert that. We need to decide how much refactoring we want to do here. Finally, I note that io.SeekableHadoopInput replicates functionality that's already in mapred.FsInput, so we should replace the former with the latter in the new code.

I looked at this again today. AvroKeyValue is similar to Pair but is implemented quite differently. Rather than itself having 'key' and 'value' fields, it wraps a GenericData.Record that has those two fields. This is exposed in its APIs. Converting its uses to Pair would thus be a major undertaking. Rather, I think we might just tolerate these two similar classes in the project.

I'm also no longer convinced that it's worth trying to move SortedKeyValueFile into Avro's core. The reader & writer constructors are Hadoop Path-based, and changing this would require inventing a new abstract file interface, since the implementation manipulates the file names. So I just implemented the one other change contemplated in my previous comment (replacing SeekableHadoopInput with the existing FsInput). Here's a new patch with that.

Does anyone object to committing this?

No objections, but I have not had time for a deep review and won't for more than a week. I don't think we need to hold this up for my full review; I can always create another ticket for later changes.

Quick question... it appears that this integration against the mapreduce API only supports deflate compression – is that right?

Thanks for getting this in. The old mapred API is being un-deprecated for 0.21 and is not going away soon. The new mapreduce API is not yet finished. However, we will eventually need to support the newer API.
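The "store the schema in the file header's metadata" idea discussed above can be illustrated with a deliberately simplified container format in Python. This toy layout (a length-prefixed JSON header followed by JSON records) is an assumption for illustration only, not the real Avro container or SequenceFile format:

```python
import io
import json

def write_container(schema, records):
    """Write a toy self-describing file: header (with schema) then records."""
    buf = io.BytesIO()
    header = json.dumps({"schema": schema}).encode()
    buf.write(len(header).to_bytes(4, "big"))  # 4-byte header length prefix
    buf.write(header)                          # schema travels with the data
    buf.write(json.dumps(records).encode())    # the records themselves
    return buf.getvalue()

def read_container(blob):
    """Recover the schema from the header, then decode the records."""
    n = int.from_bytes(blob[:4], "big")
    header = json.loads(blob[4:4 + n])
    records = json.loads(blob[4 + n:])
    return header["schema"], records

schema = {"type": "record", "name": "User",
          "fields": [{"name": "id", "type": "int"}]}
blob = write_container(schema, [{"id": 1}, {"id": 2}])
s, recs = read_container(blob)
print(s == schema, recs)  # readers need no out-of-band schema knowledge
```

The point mirrors the design above: because the schema is embedded in the header, any reader can interpret the records without being told the schema separately.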
https://issues.apache.org/jira/browse/AVRO-593?attachmentOrder=asc
CC-MAIN-2017-34
refinedweb
2,152
67.96
Returns the base type of an attribute.

    #include "slapi-plugin.h"
    char *slapi_attr_basetype( char *type, char *buf, size_t bufsiz );

This function takes the following parameters:

type: Attribute type from which you wish to get the base type.
buf: Buffer to hold the returned base type.
bufsiz: Size of the buffer.

This function returns NULL if the base type fits in the buffer. If the base type is longer than the buffer, the function allocates memory for the base type and returns a pointer to it.

This function returns the base type of an attribute (for example, if given cn;lang-jp, it returns cn). You should free the returned base type when you are finished by calling slapi_ch_free().

See also: slapi_attr_types_equivalent()
http://docs.oracle.com/cd/E19424-01/820-4810/aaidf/index.html
CC-MAIN-2013-48
refinedweb
115
74.29
Support Vector Machines Tutorial – Learn to implement SVM in Python

Support Vector Machines Tutorial – I am trying to make this a comprehensive plus interactive tutorial, so that you can understand the concepts of SVM easily.

A few days ago, I met a child whose father was buying fruits from a fruit seller. That child wanted to eat a strawberry but got confused between two similar-looking fruits. After observing for a while, he understood which one was a strawberry and picked one from the basket. A support vector machine works just like that child: it looks at data and sorts it into one of two categories. Still confused? Read the article below to understand SVM in detail with lots of examples.

Introduction to Support Vector Machines

SVMs are among the most popular classification algorithms in machine learning. Their mathematical background is quintessential in building the foundational block for the geometrical distinction between the two classes. We will see how support vector machines work by observing their implementation in Python and, finally, we will look at some of their important applications.

What is SVM?

A Support Vector Machine classifies data using the observations that lie closest to the class boundary, called support vectors. We then find the ideal hyperplane that differentiates between the two classes. These support vectors are the coordinate representations of individual observations. SVM is a frontier method for segregating the two classes.

How does SVM work?

The basic principle behind the working of support vector machines is simple: create a hyperplane that separates the dataset into classes. Let us start with a sample problem. Suppose that for a given dataset, you have to classify red triangles from blue circles. Your goal is to create a line that classifies the data into two classes, creating a distinction between red triangles and blue circles.
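That goal, classifying a point by which side of a line it falls on, can be sketched in a few lines of plain Python. The line coefficients and sample points below are made-up toy values, not the tutorial's dataset:

```python
# Classify 2-D points by which side of the line w.x + b = 0 they fall on.
# w, b and the sample points are arbitrary toy values for illustration.
w = (1.0, -1.0)   # line: x - y = 0, i.e. the diagonal
b = 0.0

def classify(point):
    score = w[0] * point[0] + w[1] * point[1] + b
    return "red triangle" if score > 0 else "blue circle"

print(classify((3.0, 1.0)))  # below the diagonal: positive side
print(classify((1.0, 3.0)))  # above the diagonal: negative side
```

Everything SVM adds on top of this is about choosing w and b well, which the next paragraphs turn to.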
While one can hypothesize a clear line that separates the two classes, there can be many lines that do this job. Therefore, there is not a single line you can agree on that performs this task. Let us visualize some of the lines that can differentiate between the two classes as follows –

In the above visualizations, we have a green line and a red line. Which one do you think would better differentiate the data into two classes? If you choose the red line, then it is the ideal line that partitions the two classes properly. However, we still have not established that it is the universal line that would classify our data most efficiently.

The green line cannot be the ideal line, as it lies too close to the red class. Therefore, it does not provide the proper generalization which is our end goal.

According to SVM, we have to find the points that lie closest to both classes. These points are known as support vectors. In the next step, we find the proximity between our dividing plane and the support vectors. The distance between the points and the dividing line is known as the margin. The aim of an SVM algorithm is to maximize this margin. When the margin reaches its maximum, the hyperplane becomes the optimal one.

The SVM model tries to enlarge the distance between the two classes by creating a well-defined decision boundary. In the above case, our hyperplane divided the data. While our data was in 2 dimensions, the hyperplane was of 1 dimension. For higher dimensions, say an n-dimensional Euclidean space, we have an (n-1)-dimensional subset that divides the space into two disconnected components.

Next in this SVM tutorial, we will see how to implement SVM in Python. Before moving on, I recommend revising your Python concepts.

How to implement SVM in Python?

In the first step, we will import the important libraries that we will be using in the implementation of SVM in our project.
Code:

    import pandas as pd
    import numpy as np  # DataFlair
    import matplotlib.pyplot as plt
    from matplotlib.colors import ListedColormap
    from sklearn import datasets
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    %pylab inline

In the second step of the implementation of SVM in Python, we will use the iris dataset that is available with the load_iris() method. We will only make use of the petal length and width in this analysis.

Code:

    pylab.rcParams['figure.figsize'] = (10, 6)
    iris_data = datasets.load_iris()

    # We'll use the petal length and width only for this analysis
    X = iris_data.data[:, [2, 3]]
    y = iris_data.target

    # Input the iris data into the pandas dataframe
    iris_dataframe = pd.DataFrame(iris_data.data[:, [2, 3]],
                                  columns=iris_data.feature_names[2:])

    # View the first 5 rows of the data
    print(iris_dataframe.head())

    # Print the unique labels of the dataset
    print('\n' + 'Unique Labels contained in this data are ' + str(np.unique(y)))

In the next step, we will split our data into training and test sets using the train_test_split() function as follows –

Code:

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    print('The training set contains {} samples and the test set contains {} samples'.format(X_train.shape[0], X_test.shape[0]))

Let us now visualize our data. We observe that one of the classes is linearly separable.
Code:

    markers = ('x', 's', 'o')
    colors = ('red', 'blue', 'green')
    cmap = ListedColormap(colors[:len(np.unique(y_test))])
    for idx, cl in enumerate(np.unique(y)):
        plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1],
                    c=cmap(idx), marker=markers[idx], label=cl)

Then, we will perform scaling on our data. Scaling ensures that all of our data values lie in a common range, so that there are no extreme values.

Code:

    standard_scaler = StandardScaler()  # DataFlair
    standard_scaler.fit(X_train)
    X_train_standard = standard_scaler.transform(X_train)
    X_test_standard = standard_scaler.transform(X_test)
    print('The first five rows after standardisation look like this:\n')
    print(pd.DataFrame(X_train_standard, columns=iris_dataframe.columns).head())

After we have pre-processed our data, the next step is the implementation of the SVM model. We will make use of the SVC class provided by the sklearn library. In this instance, we select the 'rbf' kernel.

Code:

    # DataFlair
    SVM = SVC(kernel='rbf', random_state=0, gamma=.10, C=1.0)
    SVM.fit(X_train_standard, y_train)
    print('Accuracy of our SVM model on the training data is {:.2f} out of 1'.format(SVM.score(X_train_standard, y_train)))
    print('Accuracy of our SVM model on the test data is {:.2f} out of 1'.format(SVM.score(X_test_standard, y_test)))

After we have achieved our accuracy, the best course of action is to visualize our SVM model.
We can do this by creating a function called decision_plot() and passing values to it as follows –

Code:

    import warnings

    def versiontuple(version):
        return tuple(map(int, (version.split("."))))

    def decision_plot(X, y, classifier, test_idx=None, resolution=0.02):
        # setup marker generator and color map
        markers = ('s', 'x', 'o', '^', 'v')
        colors = ('red', 'blue', 'green', 'gray', 'cyan')
        cmap = ListedColormap(colors[:len(np.unique(y))])

        # plot the decision surface
        x1min, x1max = X[:, 0].min() - 1, X[:, 0].max() + 1
        x2min, x2max = X[:, 1].min() - 1, X[:, 1].max() + 1
        xx1, xx2 = np.meshgrid(np.arange(x1min, x1max, resolution),
                               np.arange(x2min, x2max, resolution))
        Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
        Z = Z.reshape(xx1.shape)
        plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
        plt.xlim(xx1.min(), xx1.max())
        plt.ylim(xx2.min(), xx2.max())
        for idx, cl in enumerate(np.unique(y)):
            plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1], alpha=0.8,
                        c=cmap(idx), marker=markers[idx], label=cl)

Code:

    decision_plot(X_test_standard, y_test, SVM)

Advantages and Disadvantages of Support Vector Machine

Advantages of SVM

- Guaranteed Optimality: Owing to the nature of convex optimization, the solution will always be a global minimum, not a local minimum.
- Abundance of Implementations: We can access it conveniently, be it from Python or Matlab.
- SVM can be used for linearly separable as well as non-linearly separable data. Linearly separable data uses a hard margin, whereas non-linearly separable data calls for a soft margin.
- SVMs are compatible with semi-supervised learning models and can be used in areas where the data is labeled as well as unlabeled. This only requires an extra condition on the minimization problem and is known as the Transductive SVM.
- Feature mapping used to be quite a load on the computational complexity of the overall training performance of the model. However, with the help of the kernel trick, SVM can carry out the feature mapping using a simple dot product.

Disadvantages of SVM

- SVM is incapable of handling text structures. This leads to loss of sequential information and, thereby, worse performance.
- Vanilla SVM cannot return a probabilistic confidence value the way logistic regression does. This does not provide much explanation, as confidence of prediction is important in several applications.
- Choice of the kernel is perhaps the biggest limitation of the support vector machine. With so many kernels available, it becomes difficult to choose the right one for the data.

How to Tune SVM Parameters?

Kernel

The kernel in the SVM is responsible for transforming the input data into the required format. Some of the kernels used in SVM are linear, polynomial and radial basis function (RBF). For creating a non-linear hyperplane, we use the RBF and polynomial functions. For complex applications, one should use more advanced kernels to separate classes that are nonlinear in nature. With this transformation, one can obtain accurate classifiers.

Regularization

We can control regularization by adjusting scikit-learn's C parameter. C denotes a penalty parameter representing an error or any form of misclassification. With this misclassification, one can understand how much error is actually bearable. Through this, you can balance the trade-off between the misclassified terms and the decision boundary. With a smaller C value, we obtain a hyperplane with a larger margin (more misclassifications tolerated), and with a larger C value, we obtain a hyperplane with a smaller margin.

Gamma

A lower value of gamma will create a loose fit of the training dataset. On the contrary, a high value of gamma will allow the model to fit the training data more closely.
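Gamma's effect follows directly from the RBF formula k(x, z) = exp(-gamma * ||x - z||^2). Here is a plain-Python sketch with toy vectors; it mirrors the standard formula, not sklearn's internal implementation:

```python
import math

def rbf_kernel(x, z, gamma=0.10):
    """RBF (Gaussian) kernel: exp(-gamma * squared Euclidean distance)."""
    sq_dist = sum((xi - zi) ** 2 for xi, zi in zip(x, z))
    return math.exp(-gamma * sq_dist)

a = (1.0, 2.0)
c = (4.0, 6.0)                      # a point far from a (toy values)

print(rbf_kernel(a, a))             # identical points: similarity 1.0
print(rbf_kernel(a, c))             # far apart: similarity near 0
print(rbf_kernel(a, c, gamma=1.0))  # larger gamma shrinks similarity faster
```

With a larger gamma, similarity decays faster with distance, so only nearby points influence the decision surface, exactly the behavior described in the surrounding paragraphs.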
A low value of gamma considers only the nearby points in the calculation of the separating plane, whereas a high value of gamma considers all the data points to calculate the final separation line.

Applications of SVM

Some of the areas where support vector machines are used are as follows –

Face Detection: SVMs are capable of classifying images of persons in an environment by creating a square box that separates the face from the rest.

Text and hypertext categorization: SVMs can be used for document classification, in the sense that they perform text and hypertext categorization. Based on the score generated, a comparison is performed with the threshold value.

Bioinformatics: In the field of bioinformatics, SVMs are used for protein and genomic classification. They can classify the genetic structure of patients based on their biological problems.

Handwriting recognition: Another area where support vector machines are used for visual recognition is handwriting recognition.

Summary

In this article, we studied Support Vector Machines. We learned how these SVM algorithms work and also implemented them with a real-life example. We also discussed the various applications of SVMs in our daily lives. Hope you now understand the complete theory of SVM. What do you want to learn next? Comment below. DataFlair will surely help you. Till then, keep exploring the Machine Learning tutorials. Happy learning 😊

Dear sir, I want to know how to create a .mat file for feature extraction, e.g. sample.mat containing label and class. I am not getting how to create it. Will you help me?
The performance of an SVM classifier is dependent on the nature of the data provided. If the data is unbalanced, then the classifier will suffer. Furthermore, SVMs cannot handle multi-label data. This means that any data with more than two labels cannot be handled by the SVM. It is also unable to handle a large amount of data. There are various kernels of SVMs like LS-SVM (Least Squared SVMs), Lib SVMs that provide solutions to some of the challenges faced by the SVMs. You can also reconstruct a kernelized SVM as a linear SVM to handle large data. Hope, it helps you! What could be the possible reasons for performance of SVM model is inferior to ELM model for estimation of hydraulic conductivity by using soil parameters. Hello Satish, ELM is modeled after Artificial Neural Networks. It has been proven through experimentation that an ELM model is more computationally efficient on larger dataset than an SVM Classifier. While SVM can provide greater accuracy in some cases, it is expensive to deploy as compared to ELM. Furthermore, ELM can be applied quickly to the new data which is not possible with SVM. Regards, DataFlair sir how can we implement SVM in apache spark Hi, I hope I’m asking this properly but would you be able to provide an in-depth tutorial about SVM with regards to it’s mathematical concepts. No need to do all the numeric calculations by hand but just each and every concepts e.g. fitting lines, calculating margins, or which algorithm automates the best fit? I’m not sure how gradient descent can be of use for SVM. Lastly, is there any scenario we could expect that how our data could under/overfit? Thank you so much for your reply.
https://data-flair.training/blogs/svm-support-vector-machine-tutorial/
CC-MAIN-2020-16
refinedweb
2,399
57.67
We are still refining and enriching our ADF Faces application, and in particular the Tree based pages. One of the more advanced user requests we have to deal with is the desire to be able to search the tree in its entirety. That is: provide a search criterium, press a button and get the tree displayed with only the nodes that satisfy the search criterium – and all their ancestors, in order to make it still a tree. The end result would have to look a little like this:

In this article I will discuss how I implemented this search-the-tree feature. When I started to analyze the requirement, I concluded that there are at least three approaches, each with their own merits and their own disadvantages:

- Pass the search criterium to the Model and have it apply directly to the queries under the ViewObjects that feed the tree. While this has the advantage of hiding the details mostly from the application and only querying from the database what we really need, it would give us a rather complex set of queries – as each node does not only have to check whether it satisfies the query conditions itself, but also whether any of its children do! Furthermore, this approach would lead to a rather specialized solution, not one that is easy to reuse.
- Extend the FacesModel that the FacesCtrlHierBinding class sets up and have it wrap the tree model in a layer that filters the nodes that satisfy the query. This approach is neither efficient nor uncomplicated.
- Apply the search criteria in the Render Phase – just not displaying the nodes we do not want there. While this is a relatively simple approach with the added benefit of being generic and easily reusable, it is hardly efficient, as we will have to query each and every node in the entire tree from the database before deciding whether we want it or not. For large data-sets, trees with thousands or more nodes when fully expanded, this will be a costly and rather underperforming solution.
Besides, it requires the use of client side JavaScript – not ideal. However, of these three approaches, given the tree I am looking at – which is relatively small even when fully expanded – and the ViewObjects – which are already quite complex as it is – I decided on the third option. The tree I start out with for this article – the one I want to add search capabilities to – is the HRM tree (based on EMP and DEPT) shown here: I want to be able to just type in a String and see all nodes whose label contains that String. If I type a Y, I want to see all nodes that have a Y in their label – and their ancestors to preserve the proper tree format. The steps to achieve this functionality:

- Add an InputItem for the Search Criterium as well as CommandButtons for the Search to start and end
- Add a Managed Bean with a doSearch method that is linked through an EL expression to the action attribute of the Search button, as well as an undoSearch method that is bound to the action property of the End Search button
- Bind the tree to the managed bean, to make the component and its TreeModel available when the doSearch method is invoked
- Bind the shortDesc attribute with an EL expression to a RenderNode property in the managed bean. Implement the getRenderNode() method – have it verify whether the current node either satisfies the query criteria or is in the TreePath collection

In somewhat more detail:

1. Add an InputItem for the Search Criterium as well as CommandButtons for the Search to start and end

Add something like the following JSF fragment to the tree-page:

<afh:rowLayout > <af:panelGroup> <af:panelHorizontal> <af:inputText <af:objectSpacer <af:commandButton <af:objectSpacer <af:commandButton <af:objectSpacer </af:panelHorizontal> <af:tree <f:facet <af:switcher

2. Add a Managed Bean with a doSearch method

The initial implementation of the Managed Bean is like this:

package nl.amis.view;
...
public class MyTreeBean {
    public MyTreeBean() {
    }

    String searchCriterium;
    private CoreTree tree;
    boolean inSearch = false;

    public String undoSearch() {
        inSearch = false;
        return null;
    }

    public void setSearchCriterium(String searchCriterium) {
        this.searchCriterium = searchCriterium;
    }

    public String getSearchCriterium() {
        return searchCriterium;
    }

    public void setTree(CoreTree tree) {
        this.tree = tree;
        if (isShowExpanded()) {
            expandAll(null);
        }
    }

    public CoreTree getTree() {
        return tree;
    }
    ...
}

The bean needs to be configured in the faces-config.xml file of course:

<managed-bean>
  <managed-bean-name>HrmTreeTree</managed-bean-name>
  <managed-bean-class>nl.amis.view.MyTreeBean</managed-bean-class>
  <managed-bean-scope>session</managed-bean-scope>
</managed-bean>

3. Bind the tree to the managed bean

See the JSF fragment under 1. where the tree has its binding attribute refer to the HrmTreeTree.tree property.

4.

public String doSearch() {
    JUCtrlHierNodeBinding rootnode = (JUCtrlHierNodeBinding)getSelectedNode();
    inSearch = true;
    getTree().setTreeState(new PathSet(false));
    List path = new ArrayList();
    // now inspect nodes, find the ones that satisfy the query conditions,
    // add them to the Qualifying Nodes set
    // add their parents to the TreeState to make them expanded
    List children = rootnode.getParent().getChildren();
    for (int i = 0; i < children.size(); i++) {
        path.add(0, Integer.toString(i));
        if (inspectNode((JUCtrlHierNodeBinding)children.get(i), path)) {
            // add node to treestate
            getTree().getTreeState().getKeySet().add(path);
        }
        path.remove(0);
    }
    return "";
}

private boolean doesNodeQualify(JUCtrlHierNodeBinding node) {
    return ((String)node.getAttribute("NodeLabel")).toLowerCase().indexOf(searchCriterium.toLowerCase()) > -1;
}

private boolean inspectNode(JUCtrlHierNodeBinding node, List treePath) {
    boolean expandParent = false;
    // check if node satisfies query condition
    // if it does, add to qualifying nodes and set expandParent to true
    if (doesNodeQualify(node)) {
        expandParent = true;
    }
    //next: inspect children; expandParent = expandParent || inspectNode (children)
    boolean expandNode = false;
    if ((node.getChildren() != null) && (node.getChildren().size() > 0)) {
        List children = node.getChildren();
        expandNode = false;
        for (int i = 0; i < children.size(); i++) {
            treePath.add(Integer.toString(i));
            // note: not the short circuit OR (||): once we have expandNode == true,
            // we would not process the children of subsequent nodes if we were to use the || operator
            expandNode = expandNode | inspectNode((JUCtrlHierNodeBinding)children.get(i), treePath);
            treePath.remove(treePath.size() - 1);
        } //for
        if (expandNode) {
            // add node to treestate
            getTree().getTreeState().getKeySet().add(treePath);
        }
    }
    return expandParent || expandNode;
}

5. Bind the shortDesc attribute with an EL expression to a RenderNode property in the managed bean

<f:facet <af:commandLink <af:setActionListener

Implement the getRenderNode() method – have it verify whether the current node either satisfies the query criteria or is in the TreePath collection:

public boolean getRenderNode() {
    JUCtrlHierNodeBinding currentNode = (JUCtrlHierNodeBinding)getTree().getRowData();
    if (inSearch) {
        return (getTree().getTreeState().isContained() || doesNodeQualify(currentNode));
    } else
        return true;
}

6. Have a JavaScript function hide the nodes that should not be displayed

function hideUnselected() {
    links = document.getElementsByTagName('a');
    for (var i=0; i<links.length; i++) {
        if (links[i].title.indexOf('queryQualified=false') != -1) {
            links[i].parentNode.parentNode.parentNode.className = "invisible";
        }
    } // for
}

Add the style invisible to one of the CSS stylesheets linked in with the tree page. Add the JavaScript functions to a JS library imported by the tree-page.
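The ancestor-expansion recursion used by doSearch() and inspectNode() can be illustrated outside ADF. The sketch below is a minimal Python illustration (the Node class and the sample labels are invented for the example; they are not part of the ADF API). It mirrors the rule that every ancestor of a matching node must be marked for expansion, including the deliberate avoidance of short-circuit evaluation so every child gets inspected:

```python
# Illustration only: the ancestor-expansion logic of inspectNode(),
# on a plain tree instead of JUCtrlHierNodeBinding.

class Node:
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []

def collect_expanded(node, criterion, expanded):
    """Return True if this node or any descendant matches the criterion;
    record every node that must be expanded so the matches stay visible."""
    matches = criterion.lower() in node.label.lower()
    child_matched = False
    for child in node.children:
        # the recursive call comes first on purpose, like the article's `|`
        # operator: every child is inspected even after the first match
        child_matched = collect_expanded(child, criterion, expanded) or child_matched
    if child_matched:
        expanded.add(node.label)
    return matches or child_matched

tree = Node("HRM", [
    Node("SALES", [Node("ALLEN"), Node("BLAKE")]),
    Node("RESEARCH", [Node("FORD"), Node("SMITH")]),
])

expanded = set()
collect_expanded(tree, "AL", expanded)
```

With the criterion "AL", only ALLEN matches, so SALES and the root HRM are recorded for expansion; the RESEARCH branch stays collapsed.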
Note: the search button has an onclick event handler that calls the JS function searchPressed():

function searchPressed() {
    links = document.getElementsByTagName('a');
    for (var i=0; i<links.length; i++) {
        if (links[i].title.indexOf('search') != -1) {
            doEventDispatch(links[i]);
        }
    }
}

This function in turn tries to locate a link with title equal to search. This link is defined in the node facet for the root-nodes of the tree. The reason for this complex construction is that we need access to the Nodes in the doSearch method and for some reason that only seems to work from actions invoked from within the tree itself. The command link is defined in the JSF page as follows:

<f:facet <h:panelGroup> <af:outputText <af:commandLink <af:setActionListener </af:commandLink> </h:panelGroup> </f:facet>

What’s Next?

It is very simple to extend the search facilities to a more elaborate query engine. We can have support for wild cards and regular expressions, have the user search for a specific type of nodes instead of all nodes, allow the user to search on other node attributes than just the label as we have been doing here. The main things to work on are: add query properties in the TreeBean and the Tree page and link them together, and implement a more advanced doesNodeQualify() method to leverage the more complex query criteria.

6 thoughts on “Search ADF Faces Tree – show only nodes satisfying Query Criteria”

Please send source code to email address. Thanks and Regards

Please send source code to email address. Thanks and Regards Tran

Please send source code to email address. Tnx

Michael, an outside chance that you might see this but can you drop me an email at my firstname dot last name at oracle dot com. Thanks Grant

Lucas, I enjoyed your article. It seems your code has a greater understanding of the adf faces tree than either the javadoc for treeModel or the Oracle ADF developer’s guide, or anything else I found on the internet.
Can you point me to something which will help me learn more details about this tree and other hidden details about ADF? Also I noticed that some of the code you mentioned in your sample code was not viewable. I would very much like to see it. It has been a long time since I have seen you. I hope you are doing well. Michael Fons

I created an ADF Tree; when the program is started, I would like all nodes to be automatically expanded. Could you tell me how to adjust the property, or do I need to write some program code? Thanks!! Jimmy Best Regards
https://technology.amis.nl/it/search-adf-faces-tree-show-only-nodes-satisfying-query-criteria/
CC-MAIN-2021-17
refinedweb
1,587
50.87
BUILD 2012 – Not just for Windows anymore

November 5, 2012

Last week marked the second BUILD conference. In 2011, BUILD replaced the Microsoft PDC conference in an event that was so heavily Windows 8 focused that it was even hosted at buildwindows.com. While the URL didn’t change for 2012, the focus sure did, as this event also marked the latest round of big release news for Windows Azure. In this post (which I’m publishing directly from MS Word 2013 btw), I’m going to give a quick rundown of the Windows Azure related announcements. Think of this as your Cliff Notes version of the conference.

Windows Azure Service Bus for Windows Server – V1 Released

Previously released as a beta/preview back in June, this on-premise flavor of the Windows Azure Service Bus is now fully released and available for download. Admittedly, it’s strictly for brokered messaging for now. But it’s still a substantial step towards providing feature parity between public and private cloud solutions. Now we just need to hope that shops that opt to run this will run it as internal SaaS and not set up multiple silos. Don’t get me wrong. It’s nice to know we have the flexibility to do silos, but I’m hoping we learn from what we’ve seen in the public cloud and don’t fall back to old patterns. One thing to keep in mind with this… It’s now possible for multiple versions of the Service Bus API to be running within an organization. To date, the public service has only had two major API versions. But going forward, we may need to be able to juggle even more. And while there will be a push to keep the hosted and on-premises versions at similar versions, there’s nothing requiring someone hosting it on-premises to always upgrade to the latest version. So as solution developers/architects, we’ll want to be prepared to be accommodating here.
Windows Azure Mobile Services – Windows Phone 8 Support

With Windows Phone 8 being formally launched the day before the BUILD conference, it only makes sense that we’d see related announcements. And a key one of those was the addition of Windows Phone 8 support to Windows Azure Mobile Services. This announcement makes Windows Phone 8 the third supported platform for Mobile Services, alongside Windows Store and iOS apps. This added to an announcement earlier in the month which expanded support for items like sending email and different identity providers. So the Mobile Services team is definitely burning the midnight oil to get new features out to this great platform.

New Windows Azure Storage Scalability Targets

New scale targets have been announced for storage accounts created after June 7th, 2012. This change has been enabled by the new “flat network” topology that’s being deployed into the Windows Azure datacenters. In a nutshell, it allows the TPS (transactions per second) scale targets to be increased by 4x and the upper limit of a storage account to be raised to 200 TB (2x). This new topology will continue to be rolled out through the end of the year, but will only affect storage accounts created after June 7th, 2012, as mentioned above. These scale target improvements (which BTW are separate from the published Azure Storage SLA) will really help reduce the amount of ‘sharding’ that needs to be done for those with higher throughput requirements.

New 1.8 SDK – Windows Server 2012, .NET 4.5, and new Storage Client

BUILD also marked the launch of the new 1.8 Windows Azure SDK. This release is IMHO the most significant update to the SDK since the 1.3 version was launched almost 2 years ago. You could write a blog post about any one of the key features, but since they are all so closely related and this is supposed to be a highlight post, I’m going to bundle it up. The new SDK introduces the new “OS Family 3” to Windows Azure Cloud Services, giving us support for Windows Server 2012.
Now when you combine this with the added support for .NET 4.5 and IIS 8, we can start taking advantage of technology like Web Sockets. Unfortunately Web Sockets are not enabled by default, so there is some work you’ll need to do to take advantage of it. You may also need to tweak the internal Windows Firewall. A few older Guest OS’s were also deprecated, so you may want to refer to the latest update of the compatibility matrix. The single biggest, and subsequently most confusing, piece of this release has to do with the new 2.0 Storage Client. Now this update includes some great features, including support for a preview release of the storage client toolkit for Windows Runtime (Windows Store) apps. However, there are some SIGNIFICANT changes to the client, so I’d recommend you review the list of Breaking Changes and Known Issues before you decide to start converting over. Fortunately, all the new features are in a new set of namespaces (Windows.AzureStorage.StorageClient has become simply Windows.AzureStorage.Storage). So this does allow you to mix and match old functionality with the new. But forewarned is forearmed, as they say. So read up before you just dive into the new client headlong. For more details on some of the known issues with this SDK and the workarounds, refer to the October 2012 release notes, and you can learn about all the changes to the Visual Studio tools by checking out “What’s New in the Windows Azure Tools”.

HDInsight – Hadoop on Windows Azure

Technically, this was released the week before BUILD, but I’m going to touch on it nonetheless. A preview of HDInsight has been launched that allows you to help test out the new Apache™ Hadoop® on Windows Azure service. This will feature support for common frameworks such as Pig and Hive, and it also includes a local developer installation of the HDInsight Server and an SDK for writing jobs with .NET and Visual Studio. It’s exciting to see Microsoft embracing these highly popular open source initiatives.
So if you’re doing anything with big data, you may want to run over and check out the blog post for additional details.

Windows Azure – coming to China

Doug Hauger also announced that Microsoft has reached an agreement (Memorandum of Understanding, aka an agreement to start negotiations) which will license Windows Azure technologies to 21Vianet. This will in turn allow them to offer Windows Azure in China from local datacenters. While not yet a fully “done deal”, it’s a significant first step. So here’s hoping the discussions are concluded quickly and that this is just the first of many such deals we’ll see struck in the coming year. So all you Aussies, hold out hope! :)

Other news

This was just the beginning. The Windows Azure team ran down a slew of other slightly less high-profile but equally important announcements on the team blog. Items like a preview of the Windows Azure Store, GA (general availability) for the Windows Azure dedicated, distributed in-memory cache feature launched back in June with the 1.7 SDK, and finally the launch of the Visual Studio Team Foundation Service, which has been in preview for the last year.

In closing…

All in all, it was a GREAT week in the cloud. Or as James Staten put it on ZDNet, “You’re running out of excuses to not try Microsoft Windows Azure”. And this has just been the highlights. If you’d like to learn more, I highly recommend you run over and check out the session recordings from BUILD 2012 or talk to your local Microsoft representative.

PS – Don’t forget to snag your own copy of the great new Windows Azure poster!
https://brentdacodemonkey.wordpress.com/2012/11/
CC-MAIN-2016-44
refinedweb
1,317
69.52
Running ROS nodes for visualization

Viewing images on the remote computer is the next step to setting up the TurtleBot. Two ROS tools can be used to visualize the rgb and depth camera images. Image Viewer and rviz are used in the following sections to view the image streams published by the Kinect sensor.

Visual data using Image Viewer

A ROS node can allow us to view images that come from the rgb camera on Kinect. The camera_nodelet_manager node implements a basic camera capture program using OpenCV to handle publishing ROS image messages as a topic. This node publishes the camera images in the /camera namespace. Three terminal windows will be required to launch the base and camera nodes on TurtleBot and launch the Image Viewer node on the remote ...

Get ROS Robotics By Example now with O’Reilly online learning.
https://www.oreilly.com/library/view/ros-robotics-by/9781782175193/ch04s04.html
CC-MAIN-2020-45
refinedweb
157
54.02
Technical Support On-Line Manuals RL-ARM User's Guide (MDK v4)

#include <rtl.h>

void os_mbx_init (
    OS_ID mailbox,   /* The mailbox to initialize */
    U16 mbx_size );  /* Number of bytes in the mailbox */

The os_mbx_init function initializes the mailbox object identified by the function argument. The argument mbx_size specifies the size of the mailbox, in bytes. However, the number of message entries in the mailbox is defined by the os_mbx_declare macro. The os_mbx_init function is in the RL-RTX library. The prototype is defined in rtl.h.

Note

None.

os_mbx_check, os_mbx_declare, os_mbx_send, os_mbx_wait

#include <rtl.h>

/* Declare a mailbox for 20 messages. */
os_mbx_declare (mailbox1, 20);

__task void task1 (void) {
  ..
  os_mbx_init (&mailbox1, sizeof(mailbox1));
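As a language-neutral illustration of the semantics only (this is not the RTX API; the class and method names below are invented), a mailbox with a fixed number of message entries behaves roughly like a bounded FIFO queue: sending fails when all entries are occupied (where os_mbx_send would block or time out), and waiting retrieves messages in order:

```python
from collections import deque

class Mailbox:
    """Toy model of a fixed-capacity mailbox: the entry count is fixed
    at initialization, like os_mbx_declare/os_mbx_init fix it in RTX."""

    def __init__(self, entries):
        self.entries = entries        # number of message slots
        self.queue = deque()

    def send(self, msg):
        if len(self.queue) >= self.entries:
            return False              # full: the RTOS call would block or time out
        self.queue.append(msg)
        return True

    def wait(self):
        if not self.queue:
            return None               # empty: the RTOS call would block or time out
        return self.queue.popleft()   # FIFO delivery

mbx = Mailbox(entries=20)             # analogous to os_mbx_declare (mailbox1, 20)
mbx.send("hello")
```

The key distinction the manual makes survives even in this toy: the capacity (number of entries) is a property fixed when the mailbox is created, separate from the byte size of the backing storage that mbx_size describes in C.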
https://www.keil.com/support/man/docs/rlarm/rlarm_os_mbx_init.htm
CC-MAIN-2020-34
refinedweb
111
52.56
Hello, does it complement or replace Visual Assist?

I think no, because VA offers heuristic approximations and I think this extension doesn’t.

If you’re referring to Visual Assist as a whole, no. Visual Assist offers numerous features beyond these sorts of “quick fixes.” However, in terms of the individual features, particularly the Add missing #include feature, it would intend to wholly substitute for the equivalent feature in Visual Assist. Let me know if that helps clarify your question.

I like the features that add missing includes or adding the full namespace. I like the hints it gives. I think that having an option to add a missing semicolon is overkill. In general, pretty nice. I wish some were built into Visual Studio but not all of them.

I suggest replacing “Search MSDN” with “Search cppreference”.

I can certainly agree with the sentiment, though it probably wouldn’t work well for help on compiler errors, which are specific to our compiler =) Definitely put it up on UserVoice! visualstudio.uservoice.com

We have since updated the extension with a custom online help option; check out the update!

fyi, the query string for cppreference would be{0}
https://blogs.msdn.microsoft.com/vcblog/2016/04/06/be-sure-to-try-out-the-cpp-quick-fixes-extension/
CC-MAIN-2018-34
refinedweb
200
58.38
Tom Rini wrote:
> On Mon, Mar 26, 2001 at 09:50:53AM -0500, Jeff Garzik wrote:
> > PPC guys: this is a gratuitous renaming change that is not required.
> > If you have been following the "CML1 cleanup patch" thread, you see that
> > Eric is blindly dictating policy when he says that CONFIG_[0-9] needs to
> > be cleaned up.
> The counter point to this is what does "CONFIG_6xx" or 8xx mean? It's as bad
> as CONFIG_Mxxx imho :)

No argument.. :) I definitely encourage namespace cleanup in 2.5 -- but
please don't change an identifier just because it begins with a numeric
prefix... Change it because it needs to be changed.

Best regards,

	Jeff

--
Jeff Garzik     | May you have warm words on a cold evening,
Building 1024   | a full moon on a dark night,
MandrakeSoft    | and a smooth road all the way to your door.
http://lkml.org/lkml/2001/3/26/145
CC-MAIN-2014-49
refinedweb
170
72.66
In this article we will discuss a number of ways to retrieve, show, and update data with ASP.NET forms using ADO.NET. We will also get a clear idea of the most common server controls in ASP.NET. In particular, this article covers ASP.NET server controls, the ADO.NET DataSource, creating Templated DataBound Controls, ASP.NET forms, and using data with controls. We will review each of the preceding server controls in detail. We will learn how to raise and handle some of the controls' events, how to retrieve attributes from a control programmatically on the server, and finally we will demonstrate how to bind a control to a data source. Finally, we will work through a complete example of retrieving, showing, and updating data by using ASP.NET server controls. Typically, when working with information or data from users, we need to validate the data they provide us. For instance, if we have a field in our form where the user must enter his date of birth, we will probably want to make sure that he enters a valid date. In this article we will not be validating the data collected, but we will use several server controls to handle data.

Constructing Data with ASP.NET and ADO.NET

An ASP.NET server control derives directly or indirectly from System.Web.UI.Control. This base class belongs to the System.Web.UI namespace, which contains the elements common to all ASP.NET server controls. Three commonly used controls – Page, UserControl, and LiteralControl – derive from System.Web.UI.Control. Developers generally do not instantiate Page or derive from it directly; Page is important because every ASP.NET page is compiled to a Page control by the ASP.NET page framework. Control developers also usually do not work with UserControl. User controls are developed using the same programming model as ASP.NET pages and are saved as .ascx text files. Because it allows text to be encapsulated as a control, control developers use LiteralControl widely.
IIS needs to be properly configured with FrontPage extensions for ASP.NET pages to execute. The required version of FrontPage extensions needs to be installed during the .NET SDK install or the Visual Studio .NET install. To verify this, open the Internet Information Services Manager, right-click the default Web site, and select Check Server Extensions from the All Tasks menu. By extending an existing Web server control, by combining existing Web server controls, or by creating a control that derives from the base class System.Web.UI.WebControls.WebControl, we can develop a custom Web server control. The following list describes typical scenarios in which we are likely to develop our own controls, with links to other topics for further information.

- We have created an ASP.NET page that provides a user interface that we want to reuse in another application. We would like to create a server control that encapsulates the user interface (UI), but we do not want to write additional code. ASP.NET allows us to save our page as a user control without writing a single additional line of code. For details, see Web Forms User Controls.
- We would like to develop a compiled control that combines the functionality of two or more existing controls. For example, we need a control that encapsulates a button and a text box. We can do this using control composition, as described in Developing a Composite Control.
- An existing ASP.NET server control almost meets our requirements but lacks some required features. We can customize an existing control by deriving from it and overriding its properties, methods, or events.
- None of the existing ASP.NET server controls (or their combinations) meets our requirements. In that case, we can create a custom control by deriving from one of the base control classes. These classes provide all the plumbing needed by an ASP.NET server control, thus allowing us to focus on programming the features we need.
To get started, see Developing Custom Controls: Key Concepts and Developing a Simple ASP.NET Server Control. Many custom controls involve a combination of scenarios, where we combine custom controls that we have designed with existing ASP.NET server controls. The ASP.NET server controls that provide a user interface are organized into two namespaces: System.Web.UI.HtmlControls and System.Web.UI.WebControls. While the Web server controls are richer and more abstract, the HTML server controls map directly to HTML elements. Most Web server controls derive directly or indirectly from the base class System.Web.UI.WebControls.WebControl. On the other hand, the four controls in the upper-right corner (Literal, PlaceHolder, Repeater, and Xml) derive from System.Web.UI.Control. The controls on the left map to HTML elements. The controls in the center are for validating form input; the controls in the center also provide rich functionality, such as the Calendar and the AdRotator controls. The controls which provide data binding support are on the right. Before we start to talk about the server controls, let's get some additional information about DataBound controls and take a look at how we can develop a Templated DataBound control.
https://www.developerfusion.com/article/4410/in-depth-aspnet-using-adonet/
CC-MAIN-2018-17
refinedweb
874
50.73
This article is in the Product Showcase section for our sponsors at CodeProject.

I know people don't often have time to read complete articles, so you can get a very quick overview of what I'm saying here if you just scan down and read only the fix. The solution to this problem finally turned out to take about one week (but at least it taught us a lot about memory usage in .NET). Before we continue, as a very brief introduction to .NET memory management, just bear in mind two simple rules:

To understand this example, it may be necessary to know a little bit about the technologies we use, so I'll try to keep this as short and simple as possible:

var uow = new UnitOfWork();
var customers = new XPCollection<Customer>();

var uow = new UnitOfWork();
var newCustomer = new Customer(uow) { Name = "Florian" };
uow.CommitChanges();

Unfortunately, we had to accept that even after running an import with just a couple of hundred objects, the available RAM was still all filled up, and we still got an OutOfMemory Exception.

int i = 0;
foreach(var objectToImport in m_importPlugin.GetRecords())
{
    //... do stuff like matching with existing records to avoid duplicate records and
    //copy records into the database
    DoMatching();
    CopyToDatabase(objectToImport);

    //replace unitofwork objects to get rid of the objects we don’t want to be referenced
    //any more
    if (i++ == 100)
    {
        ReplaceUnitOfWorks();
        //*********** EVERY TIME WE HIT THE FOLLOWING LINE WE SHOULD HAVE THE SAME
        //MEMORY OCCUPATION
        MessageBox.Show("Now take memory profiler snapshot");
    }
}

I used a memory profiler that comes with a handy Visual Studio integration, so you can start it right from your IDE. While running this stripped-down code, I waited until my alert popped up, took a memory snapshot, cleared the alert, and waited for the next one, so I had two memory snapshots to compare.
As the header suggests, I started my search for the problem by looking at the data-type with the biggest growth in memory consumption. A mysterious class called RBTree<K>+Node<int>[] seemed to have grown most in memory usage since my last snapshot, and as I wanted to know more about the instances of this type that occupied my precious memory, I clicked on the Instance List button. As expected, there were many more UnitOfWork objects alive than there should have been. All I needed to do was to introduce some code into the ReplaceUnitOfWorks() method to reset the Dictionary that was keeping my UnitOfWork objects alive.

public void ReplaceUnitOfWorks()
{
    //HERE IS THE FIX:
    ExtKey.ClearFrozenContextDictionary();

    //HERE IS THE CODE THAT HAS ALREADY BEEN THERE:
    this.UnitOfWork = new UnitOfWork();
}

...

//Somewhere else:
public class ExtKey
{
    public static void ClearFrozenContextDictionary()
    {
        m_frozenProviderContexts = new Dictionary<Session, ProviderType>();
    }
}

Eureka! The memory problem was solved! Here are a few tips and best practices that I've picked up when working on this memory problem. All of them are just my own personal opinion, so feel free to write comments if you disagree with any of them: I hope there is someone out there I've helped by describing this special case of memory profiling a .NET application. Please feel free to post comments, criticisms, and ideas.
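The root cause here generalizes beyond XPO: any long-lived container that holds strong references keeps its entries reachable, and therefore ineligible for garbage collection, no matter how "done" the rest of the program is with them. The Python sketch below (class and dictionary names are invented; .NET's garbage collector differs in mechanics, but not in this reachability rule) reproduces the pattern and the fix:

```python
import gc
import weakref

class UnitOfWork:
    """Stand-in for the article's XPO UnitOfWork."""
    pass

# A long-lived, static-like dictionary, playing the role of ExtKey's
# m_frozenProviderContexts cache:
frozen_contexts = {}

uow = UnitOfWork()
frozen_contexts[id(uow)] = uow   # the cache silently keeps a strong reference

tracker = weakref.ref(uow)       # weak reference: does NOT keep uow alive
del uow                          # "replacing" the unit of work...
gc.collect()
assert tracker() is not None     # ...but the dictionary still keeps it alive

frozen_contexts.clear()          # the fix: reset the long-lived dictionary
gc.collect()
assert tracker() is None         # now the object has been collected
```

The weak reference acts like the profiler's instance list: it lets us observe whether the object is still alive without itself extending the object's lifetime.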
https://www.codeproject.com/articles/55404/ever-had-to-tackle-an-outofmemory-exception
CC-MAIN-2017-13
refinedweb
548
56.69
Ok, here's an interesting patch based on the current 'next' (since it very intimately requires the new in-memory index format). What it does is to create a hash index of every single file added to the index. Right now that hash index isn't actually used for much: I implemented a "cache_name_exists()" function that uses it to efficiently look up a filename in the index without having to do the O(logn) binary search, but quite frankly, that's not why this patch is interesting. No, the whole and only reason to create the hash of the filenames in the index is that by modifying the hash function, you can fairly easily do things like making it always hash equivalent names into the same bucket. That, in turn, means that suddenly questions like "does this name exist in the index under an _equivalent_ name?" become much much cheaper. Guiding principles behind this patch:

- it shouldn't be too costly. In fact, my primary goal here was to actually speed up "git commit" with a fully populated kernel tree, by being faster at checking whether a file already existed in the index. I did succeed, but only barely:

Best before:

	[torvalds@woody linux]$ time git commit > /dev/null
	real	0m0.255s
	user	0m0.168s
	sys	0m0.088s

Best after:

	[torvalds@woody linux]$ time ~/git/git commit > /dev/null
	real	0m0.233s
	user	0m0.144s
	sys	0m0.088s

so some things are actually faster (~8%). Caveat: that's really the best case. Other things are invariably going to be slower:

Before:

	[torvalds@woody linux]$ time git ls-files > /dev/null
	real	0m0.016s
	user	0m0.016s
	sys	0m0.000s

After:

	[torvalds@woody linux]$ time ~/git/git ls-files > /dev/null
	real	0m0.021s
	user	0m0.012s
	sys	0m0.008s

and while the thing has really gotten relatively much slower, we're still talking
Pick your poison - this patch has the advantage that it will _likely_ speed up the cases that are complex and expensive more than it slows down the cases that are already so fast that nobody cares. But if you look at relative speedups/slowdowns, it doesn't look so good. - It should be simple and clean The code may be a bit subtle (the reasons I do hash removal the way I do etc), but it re-uses the existing hash.c files, so it really is fairly small and straightforward apart from a few odd details. Now, this patch on its own doesn't really do much, but I think it's worth looking at, if only because if done correctly, the name hashing really can make an improvement to the whole issue of "do we have a filename that looks like this in the index already". And at least it gets real testing by being used even by default (ie there is a real use-case for it even without any insane filesystems). NOTE NOTE NOTE! The current hash is a joke. I'm ashamed of it, I'm just not ashamed of it enough to really care. I took all the numbers out of my nether regions - I'm sure it's good enough that it works in practice, but the whole point was that you can make a really much fancier hash that hashes characters not directly, but by their upper-case value or something like that, and thus you get a case-insensitive hash, while still keeping the name and the index itself totally case sensitive. And let's face it, it was kind of fun. These things are all _soo_ much simpler than all the issues you have to do in the kernel, so this is just a complete toy compared to all the things we do inside Linux to do the same thing with pluggable hashes on a per-path-component basis etc. (User space developers are weenies. 
One of the most fun parts of git development for me has been how easy everything is ;)

		Linus

---
 cache.h      |    6 +++
 dir.c        |    2 +-
 read-cache.c |   97 ++++++++++++++++++++++++++++++++++++++++++++++++++++------
 3 files changed, 94 insertions(+), 11 deletions(-)

diff --git a/cache.h b/cache.h
index 3a47cdc..409738c 100644
--- a/cache.h
+++ b/cache.h
@@ -3,6 +3,7 @@
 
 #include "git-compat-util.h"
 #include "strbuf.h"
+#include "hash.h"
 
 #include SHA1_HEADER
 #include <zlib.h>
@@ -109,6 +110,7 @@ struct ondisk_cache_entry {
 };
 
 struct cache_entry {
+	struct cache_entry *next;
 	unsigned int ce_ctime;
 	unsigned int ce_mtime;
 	unsigned int ce_dev;
@@ -131,6 +133,7 @@ struct cache_entry {
 #define CE_UPDATE   (0x10000)
 #define CE_REMOVE   (0x20000)
 #define CE_UPTODATE (0x40000)
+#define CE_UNHASHED (0x80000)
 
 static inline unsigned create_ce_flags(size_t len, unsigned stage)
 {
@@ -188,6 +191,7 @@ struct index_state {
 	struct cache_tree *cache_tree;
 	time_t timestamp;
 	void *alloc;
+	struct hash_table name_hash;
 };
 
 extern struct index_state the_index;
@@ -211,6 +215,7 @@ extern struct index_state the_index;
 #define refresh_cache(flags) refresh_index(&the_index, (flags), NULL, NULL)
 #define ce_match_stat(ce, st, options) ie_match_stat(&the_index, (ce), (st), (options))
 #define ce_modified(ce, st, options) ie_modified(&the_index, (ce), (st), (options))
+#define cache_name_exists(name, namelen) index_name_exists(&the_index, (name), (namelen))
 #endif
 
 enum object_type {
@@ -297,6 +302,7 @@ extern int read_index_from(struct index_state *, const char *path);
 extern int write_index(struct index_state *, int newfd);
 extern int discard_index(struct index_state *);
 extern int verify_path(const char *path);
+extern int index_name_exists(struct index_state *istate, const char *name, int namelen);
 extern int index_name_pos(struct index_state *, const char *name, int namelen);
 #define ADD_CACHE_OK_TO_ADD 1		/* Ok to add */
 #define ADD_CACHE_OK_TO_REPLACE 2	/* Ok to replace file/directory */
diff --git a/dir.c b/dir.c
index 1b9cc7a..6543105 100644
--- a/dir.c
+++ b/dir.c
@@ -346,7 +346,7 @@ static struct dir_entry *dir_entry_new(const char *pathname, int len)
 
 struct dir_entry *dir_add_name(struct dir_struct *dir, const char *pathname, int len)
 {
-	if (cache_name_pos(pathname, len) >= 0)
+	if (cache_name_exists(pathname, len))
 		return NULL;
 
 	ALLOC_GROW(dir->entries, dir->nr+1, dir->alloc);
diff --git a/read-cache.c b/read-cache.c
index 8ba8f0f..33a8ca5 100644
--- a/read-cache.c
+++ b/read-cache.c
@@ -23,6 +23,70 @@
 
 struct index_state the_index;
 
+static unsigned int hash_name(const char *name, int namelen)
+{
+	unsigned int hash = 0x123;
+
+	do {
+		unsigned char c = *name++;
+		hash = hash*101 + c;
+	} while (--namelen);
+	return hash;
+}
+
+static void set_index_entry(struct index_state *istate, int nr, struct cache_entry *ce)
+{
+	void **pos;
+	unsigned int hash = hash_name(ce->name, ce_namelen(ce));
+
+	istate->cache[nr] = ce;
+	pos = insert_hash(hash, ce, &istate->name_hash);
+	if (pos) {
+		ce->next = *pos;
+		*pos = ce;
+	}
+}
+
+/*
+ * ...
+ */
+void remove_hash_entry(struct index_state *istate, struct cache_entry *ce)
+{
+	ce->ce_flags |= CE_UNHASHED;
+}
+
+static void replace_index_entry(struct index_state *istate, int nr, struct cache_entry *ce)
+{
+	struct cache_entry *old = istate->cache[nr];
+
+	if (ce != old) {
+		remove_hash_entry(istate, old);
+		set_index_entry(istate, nr, ce);
+	}
+	istate->cache_changed = 1;
+}
+
+int index_name_exists(struct index_state *istate, const char *name, int namelen)
+{
+	unsigned int hash = hash_name(name, namelen);
+	struct cache_entry *ce = lookup_hash(hash, &istate->name_hash);
+
+	while (ce) {
+		if (!(ce->ce_flags & CE_UNHASHED)) {
+			if (!cache_name_compare(name, namelen, ce->name, ce->ce_flags))
+				return 1;
+		}
+		ce = ce->next;
+	}
+	return 0;
+}
+
 /*
  * This only updates the "non-critical" parts of the directory
  * cache, ie the parts that aren't tracked by GIT, and only used
@@ -323,6 +387,9 @@ int index_name_pos(struct index_state *istate, const char *name, int namelen)
 /* Remove entry, return true if there are more entries to go.. */
 int remove_index_entry_at(struct index_state *istate, int pos)
 {
+	struct cache_entry *ce = istate->cache[pos];
+
+	remove_hash_entry(istate, ce);
 	istate->cache_changed = 1;
 	istate->cache_nr--;
 	if (pos >= istate->cache_nr)
@@ -697,8 +764,7 @@ static int add_index_entry_with_check(struct index_state *istate, struct cache_e
 
 	/* existing match? Just replace it. */
 	if (pos >= 0) {
-		istate->cache_changed = 1;
-		istate->cache[pos] = ce;
+		replace_index_entry(istate, pos, ce);
 		return 0;
 	}
 	pos = -pos-1;
@@ -758,7 +824,7 @@ int add_index_entry(struct index_state *istate, struct cache_entry *ce, int opti
 		memmove(istate->cache + pos + 1, istate->cache + pos,
 			(istate->cache_nr - pos - 1) * sizeof(ce));
-	istate->cache[pos] = ce;
+	set_index_entry(istate, pos, ce);
 	istate->cache_changed = 1;
 	return 0;
 }
@@ -887,11 +953,8 @@ int refresh_index(struct index_state *istate, unsigned int flags, const char **p
 			has_errors = 1;
 			continue;
 		}
-		istate->cache_changed = 1;
-		/* You can NOT just free istate->cache[i] here, since it
-		 * might not be necessarily malloc()ed but can also come
-		 * from mmap(). */
-		istate->cache[i] = new;
+
+		replace_index_entry(istate, i, new);
 	}
 	return has_errors;
 }
@@ -966,6 +1029,20 @@ static void convert_from_disk(struct ondisk_cache_entry *ondisk, struct cache_en
 	memcpy(ce->name, ondisk->name, len + 1);
 }
 
+static inline size_t estimate_cache_size(size_t ondisk_size, unsigned int entries)
+{
+	long per_entry;
+
+	per_entry = sizeof(struct cache_entry) - sizeof(struct ondisk_cache_entry);
+
+	/*
+	 * Alignment can cause differences. This should be "alignof", but
+	 * since that's a gcc'ism, just use the size of a pointer.
+	 */
+	per_entry += sizeof(void *);
+	return ondisk_size + entries*per_entry;
+}
+
 /* remember to discard_cache() before reading a different cache! */
 int read_index_from(struct index_state *istate, const char *path)
 {
@@ -1016,7 +1093,7 @@ int read_index_from(struct index_state *istate, const char *path)
	 * has room for a few more flags, we can allocate using the same
	 * index size
	 */
-	istate->alloc = xmalloc(mmap_size);
+	istate->alloc = xmalloc(estimate_cache_size(mmap_size, istate->cache_nr));
 
 	src_offset = sizeof(*hdr);
 	dst_offset = 0;
@@ -1027,7 +1104,7 @@ int read_index_from(struct index_state *istate, const char *path)
 		disk_ce = (struct ondisk_cache_entry *)((char *)mmap + src_offset);
 		ce = (struct cache_entry *)((char *)istate->alloc + dst_offset);
 		convert_from_disk(disk_ce, ce);
-		istate->cache[i] = ce;
+		set_index_entry(istate, i, ce);
 		src_offset += ondisk_ce_size(ce);
 		dst_offset += ce_size(ce);
http://article.gmane.org/gmane.comp.version-control.git/71478
Alex Kipman - Inside a MS Build Bug Triage meeting - Posted: Oct 25, 2004 at 4:01PM - 58,658

reaction was "Why do I want to watch a movie of something I go through personally that's probably the most unpleasant thing I have to do?" The same applies to watching a video of bug triage. Though perhaps the target audience in this case isn't other software product managers.

It's really cool, I agree. Yes, there are issues of non-disclosure for the sake of competition, but the Channel9 Team should continue to walk that thin line!

As a tester whose life's work is basically shredded before his very eyes on an almost daily basis* by a triage team, I find that for the most part, the whole process of triage is one of those things that is usually best left to the imagination, kind of like sausage making.

*Note: This doesn't actually happen... Usually.

I presume this was 10 mins for Scoble and his camera before they actually had their meeting, because if that was some of the real thing then I am suddenly no longer surprised it takes 3-4 years for an MS product to go through a revision. Also, loved the two geeks on the sofa hiding behind 3 kilos each of 'mobile' hardware. hah

We do go a bit faster in real life, but not much. We usually triage three times a week (1 hour blocks), and we usually cover ~20 bugs a triage session. During peak time we tend to meet every day, and as ZBB approaches we could even impromptu meet every time a bug is opened.

I'm surprised you thought we were spending too much time on each bug. If anything, sometimes I feel we spend too little time. From our perspective, triage is what sets the quality bar across a given product. Does a given bug meet our current bar, does it have reproducible repro steps, is this an instance of a more generic bug, does this really need to be fixed right now, is this a bug at all? What context do we need to add to ensure the dev can be productive right away?
Is this a family of bugs, and can we link them together so the dev thinks about the entire family at once, thus not having to consistently context switch… etc etc etc. Spending a few minutes up front as a group working through these issues saves a bundle of time from a developer perspective, since they only look at triaged bugs. When they look at a bug they know it meets the bar, they have enough information in it to be productive right away, and they don't have to worry about searching all over the place for similar bugs to fix while in this code. From my perspective this saves time when a dev actually gets the bug. Amortized across all our devs and all the open issues we have, the time spent on triage is minimal and in my opinion very well spent. But that's just my $.02.

All sounds like a lot of sense to me. I guess I'm coming from a different background (investment banking), so for me products and development cycles are smaller, and standard procedure for going through a bug/request list is heads down in a meeting room, burn through the list at double-time, somebody barking out the IDs + description, each party arguing over the priority. To complete the picture, the office becomes a glass room, the desktop PC is linked to an OHP/Magicboard for all to see, and all the laptop kit becomes O2 XDAs. Oh, and the dress code is raised a few hefty notches sartorially.

Appreciate you're being friendly and polite... but clearly you're entitled to more than your $.02 worth: you run the MSBuild triage meeting... Good luck with the product; I'm looking forward to using it, as I'm making do with a copy of the BuildManager from the Patterns&Practises people with my own hacked-in code on top.

Hi, I must say that I got interested in command-line tools in the build process quite recently, after having worked for the past 5 months on a J2EE project, living and dogfooding the latest Eclipse WTP builds (I'm a .NET-first person, don't worry).
There were things that just simply weren't in the IDE, and it seemed natural to automate things outside the IDE. As Ant was already integrated inside Eclipse I decided I'd give it a go, and although some results were easy to achieve, others were obscure and strange; also, the whole extension model is still incomprehensible to me. Why namespaces and a schema were not chosen is a mystery to me.

Anyhow, I had this thing in my faculty to use Maven, and as I'm one of those get-on-the-bandwagon-first people, I decided to go with Maven 2, which had just been released. Wow, an eye-opener. Although still rough around the edges, it gives a plugin-oriented way to manage the whole project lifecycle, all from a command line.

This brings me to my question: in the beginning people were saying MSBuild is like Ant for .NET. Is it so, or does it (or will it ever) cover aspects that Maven/Maven 2 handles, or would that be entering too much into VSTS territory? thx

As usual, media failure, so I couldn't watch the video. Anyone help please?
https://channel9.msdn.com/Blogs/TheChannel9Team/Alex-Kipman-Inside-a-MS-Build-Bug-Triage-meeting?format=auto
1 reply on 1 page. Most recent reply: Aug 25, 2008 11:38 AM by David Goodger

I am pretty happy with the publication process on Artima. The reason why I am so happy is that the user interface for making a post in the blog is dead simple. You log in, you post your article the first time, you repost it if you need to correct something, and that's it. You don't need to specify keywords or a category. You don't need to set a publication date. You don't need to specify a citation. You cannot post pictures, only link to pictures. The system does not keep track of your revisions, or at least the interface for viewing the revisions is not exposed to the blog author.

This absence of features is in my opinion the best feature of the Artima blogging platform. I personally keep my articles in a Subversion repository, so I can see the history of revisions with my own familiar tools (i.e. the command line interface and Trac), and it would not make sense for the blogging platform to duplicate (badly) that functionality. I have my own website where I can copy my pictures with a simple scp command, so I don't need and I don't want to be forced to make a manual upload of a file (I hate upload forms).

When I write an article containing snippets of code, I just write the code with the text of the article contained in the docstring (for articles about Python) or in a top-level comment (for articles about Scheme and other languages). I have a tool that extracts the text from the script and converts it into HTML/LaTeX or other formats. Since Artima does support reStructuredText (I love reStructuredText; everything I write is in that format) I don't even need to convert it before posting it. That was a very welcome surprise.

This morning I looked at the structure of the edit page for the posts. It is so simple and plain that it took me just 10 minutes to write a script to post my articles with Twill. Now I don't need to cut and paste from my editor (Emacs) to the browser.
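The extraction tool mentioned above is not shown in the post. As a rough sketch of how such a tool could work (my assumption, using the standard ast module; the author's actual script may differ), the article text can be pulled out of a module's docstring like this:

```python
import ast


def extract_article_text(source):
    """Return the top-level docstring of a Python source string,
    i.e. the article text in the workflow described above."""
    tree = ast.parse(source)
    return ast.get_docstring(tree) or ""


if __name__ == "__main__":
    sample = '"""My article text goes here."""\nx = 1\n'
    print(extract_article_text(sample))  # prints: My article text goes here.
```

Converting the extracted reStructuredText to HTML or LaTeX would then be a job for docutils, which is a separate step.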
I do have a Makefile which extracts the reStructuredText and posts it to the blog. Everything is so incredibly simple compared to the publication process I was used to.

In the past I have published many articles on Stacktrace, which is an Italian webzine about programming and Internet technology. Stacktrace uses Django as its underlying technology. Django is a framework which was created exactly for publishing articles on the Web, so it should do that job well, you would think. Perhaps it does a good job for non-technical writers, the people it was written for. But for developers, especially GUI-impaired ones like myself, it made publication much more complex than needed. The edit page (I mean the admin page) was so complex that I renounced from the start any idea of scripting it. Moreover, I could not submit reStructuredText directly: I had first to convert my source file into HTML and then post-process the output by stripping many tags inserted by reST. I did so by writing an HTML parser for that, spending at least a full morning on the job.

I know that I am not being fair with Django here and that there are reStructuredText plugins for Django: unfortunately, the editorial board decided to accept only plain HTML submissions. But this is beside the point. The point is that neither Django nor the Artima blogging platform was intended to be used as I wanted to use it: nevertheless, the simple no-fuss, no-nonsense interface of Artima could be perverted much more easily than the complex interface of Django. Simplicity has its advantages. Always.

Feel free to comment with your thoughts. Did you experience the same less-is-more feeling? In what circumstances?
---

Here is the script I cooked up for posting my articles, for the curious guys among you (obviously, you are supposed to change <USERNAME> and <PASSWORD> with your credentials):

$ cat post.py
"""
A script to post articles on my blog
"""
import sys
from twill import commands as c

if __name__ == '__main__':
    try:
        rstfile, thread = sys.argv[1:]
    except ValueError:
        sys.exit('Usage: post <rstfile> <artima-thread-number>')
    text = file(rstfile).read()
    c.go('')
    c.formvalue('1', 'username', '<USERNAME>')
    c.formvalue('1', 'password', '<PASSWORD>')
    c.submit()
    c.go('' % thread)
    c.formvalue('1', 'body', text)
    c.submit()
http://www.artima.com/forums/flat.jsp?forum=106&thread=236286&start=0
Catching an exception while using a Python 'with' statement

    from __future__ import with_statement

    try:
        with open("a.txt") as f:
            print f.readlines()
    except EnvironmentError:  # parent of IOError, OSError *and* WindowsError where available
        print 'oops'

If you want different handling for errors from the open call vs the working code you could do:

    try:
        f = open('foo.txt')
    except IOError:
        print('error')
    else:
        with f:
            print f.readlines()

The best "Pythonic" way to do this, exploiting the with statement, is listed as Example #6 in PEP 343, which gives the background of the statement.

The with statement has been available without the __future__ import since Python 2.6. You can get it as early as Python 2.5 (but at this point it's time to upgrade!) with:

    from __future__ import with_statement

Here's the closest thing to correct that you have. You're almost there, but with doesn't have an except clause:

    with open("a.txt") as f:
        print(f.readlines())
    except:  # <- with doesn't have an except clause.
        print('oops')

A context manager's __exit__ method, if it returns False, will reraise the error when it finishes. If it returns True, it will suppress it. The open builtin's __exit__ doesn't return True, so you just need to nest it in a try, except block:

    try:
        with open("a.txt") as f:
            print(f.readlines())
    except Exception as error:
        print('oops')

And standard boilerplate: don't use a bare except:, which catches BaseException and every other possible exception and warning. Be at least as specific as Exception, and for this error, perhaps catch IOError. Only catch errors you're prepared to handle. So in this case, you'd do:

    try:
        with open("a.txt") as f:
            print(f.readlines())
    except IOError as error:
        print('oops')
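To see the __exit__ return-value rule in action, here is a small illustrative context manager (hypothetical; it is not from the answers above, and `Suppress` is an invented name) whose __exit__ returns True for selected exception types and therefore swallows them:

```python
class Suppress:
    """Minimal context manager demonstrating __exit__'s return value:
    returning True swallows the exception, False (or None) re-raises it."""

    def __init__(self, *exc_types):
        self.exc_types = exc_types

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # True only when the raised exception matches a suppressed type.
        return exc_type is not None and issubclass(exc_type, self.exc_types)


with Suppress(FileNotFoundError):
    open("definitely-missing.txt")  # the error is swallowed by __exit__
print("still running")  # prints: still running
```

This is essentially what the standard library's contextlib.suppress provides, and it is the mechanism the answer above alludes to when it notes that open's __exit__ does not return True.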
https://codehunter.cc/a/python/catching-an-exception-while-using-a-python-with-statement
In order to learn how to program effectively in the C or C++ language, a student must learn how these languages manage the input and output of data to and from the screen and keyboard. The C language provides several functions that give different levels of input and output capability. These functions are, in most cases, implemented as routines that call lower-level input/output functions.

The input and output functions in C are built around the concept of a set of standard data streams being connected from each executing program to the basic input/output devices. These standard data streams or files are opened by the operating system and are available to every C and assembler program to use without having to open or close the files. These standard files or streams are called:

- stdin : connected to the keyboard
- stdout : connected to the screen
- stderr : connected to the screen

The following two data streams are also available on MS-DOS based computers, but not on UNIX or other multi-user based operating systems.

- stdaux : connected to the first serial communications port
- stdprn : connected to the first parallel printer port

The input/output functions fall into two categories: formatted display and read functions, and non-formatted display and read functions. The following are descriptions of the formatted display and read functions.

**int printf( const char *format [,argument, ...] );**

where format is composed of literal text, escape sequences used as carriage control, and format specifiers for conversion of data in the arguments to a display format. printf() returns the number of characters (bytes) printed. In the event of error, printf() returns EOF.

Example:

#include "stdio.h"

int main()
{
    char name[30];

    printf("\nEnter your name:");
    gets(name);
    printf("\nHello %s", name);
    return 0;
}

The printf() function has the capability to manage conversion control.
General form

**%[-][width][.precision]format**

where:

**%** : marks the start of the conversion control string
**-** : specifies that the data is to be printed left-justified
**width** : the width of the field, or number of spaces to allot on the display
**.precision** : the precision (number of digits after the decimal point) of the output to be displayed
**format** : the format specifier desired

#include "stdio.h"

int main()
{
    printf("/%d/\n", 336);
    printf("/%2d/\n", 336);
    printf("/%10d/\n", 336);
    printf("/%-10d/\n", 336);
    return 0;
}

results:

/336/
/336/
/       336/
/336       /

#include "stdio.h"

int main()
{
    printf("/%f/\n", 1234.56);
    printf("/%e/\n", 1234.56);
    printf("/%4.f/\n", 1234.56);
    printf("/%3.1f/\n", 1234.56);
    printf("/%10.3f/\n", 1234.56);
    printf("/%10.3e/\n", 1234.56);
    return 0;
}

results:

/1234.560000/
/1.234560e+03/
/1235/
/1234.6/
/  1234.560/
/ 1.235e+03/

**int scanf( const char *format [,address, ...] );**

where format is a list of format specifiers indicating the format and type of data to be read from the keyboard and stored in the corresponding address. There must be the same number of format specifiers and addresses as there are input fields. scanf returns the number of input fields successfully scanned, converted, and stored. The return value does not include scanned fields that were not stored. If scanf attempts to read end-of-file, the return value is EOF. If no fields were stored, the return value is 0.

#include "stdio.h"

int main()
{
    char last_name[30];
    int age, ret;

    printf("\nEnter Last_name and age");
    scanf("%s %d%c", last_name, &age, &ret);
    return 0;
}

The scanf() function scans the data input through the keyboard and by default delimits values by whitespace. Whitespace is defined as being a TAB, a blank, or the newline character ('\n'). Therefore, data that is input with the intention of having embedded blanks as part of the data value will be broken into several values and distributed among the input variables specified in the scanf() statement. The result will more than likely not be what was desired.
In the above example, notice that although the prompt asks for the input of a name value and an age value, the scanf() function is told to read values for three arguments. The argument ret is an integer variable, but scanf() is reading a %c or character value from the keyboard (strictly speaking, ret should be declared as a char, since %c stores a single byte). The ret variable will hold the newline character input by the user pressing the RETURN or ENTER key on the keyboard. If the newline character is not extracted from the keyboard buffer, the newline will be picked up as the first argument of the next input statement, which could be a scanf(), a gets() or a getchar() function.

The format string is a character string that contains three types of specifiers: whitespace characters, non-whitespace characters, and format specifiers. The format specifiers have the following form:

**%[*] [width] [h|l|L] type-character**

Each format begins with the percent character, %, after which come the following, in this order:

- An optional assignment-suppression character, [*]. This states that the value being read will not be assigned to an argument, but will be dropped.
- An optional width specifier, [width]. This designates the maximum number of characters to be read that compose the value for the associated argument. Encountering whitespace before the width is satisfied terminates the input of this value and moves to the next.
- An optional argument-type modifier, [h|l|L]. This modifies the type-character specifier to accept format for a type of:
  h = short int
  l = long int, if the type-character specifies an integer conversion
  l = double, if the type-character specifies a floating-point conversion
  L = long double, which is valid only with floating-point conversions

NOTE: Simple data objects must be passed by reference in order for scanf() to be able to store data in the correct memory location. To pass by reference means to pass the memory address of a variable.
The & operator in front of a variable name signifies that the address of the following variable is to be obtained.

The following are format specifiers which apply only to printf() and scanf().

Format Specifiers for printf and scanf

Type Character   Input Argument   Format of Output
%d               integer          signed decimal int
%i               integer          signed decimal int
%o               integer          unsigned octal int
%u               integer          unsigned decimal int
%x               integer          unsigned hex int (a,b,c,d,e,f)
%X               integer          unsigned hex int (A,B,C,D,E,F)
%f               floating point   signed value of the form [-]dddd.dddd
%e               floating point   signed value of the form [-]d.dddde[+/-]ddd
%g               floating point   signed value in either e or f form; trailing zeros and the decimal point are printed only if necessary
%E               floating point   same as e, but with E for the exponent
%G               floating point   same as g, but with E for the exponent if e format is used
%c               character        single character
%s               string pointer   prints characters until a null terminator is reached or the precision is reached
%%               none             the % character is printed
%n               pointer to int   stores (in the location pointed to by the input argument) a count of the characters written so far
%p               pointer          prints the input argument as a memory address

NOTE: with scanf(), an h, l or L can be used with each of the numeric formats in order to modify the format; h = short, l and L = long.

**int puts( const char *s );**

puts() displays a string literal or a stored character string on the screen. The function automatically appends a carriage return and line feed at the end of the display. The string can contain escape sequences but not format specifiers. On successful completion, puts() returns a nonnegative value. Otherwise, it returns a value of EOF.

#include "stdio.h"

int main()
{
    char name[30];

    printf("\nEnter your name:");
    gets(name);
    printf("\nHello ");
    puts(name);
    return 0;
}

**char *gets( char *s );**

gets() reads characters from the keyboard and stores them in a passed character array. The reading of the keyboard is terminated when the '\n' (RETURN/ENTER) key is pressed. (Note: gets() cannot limit the number of characters read and can overrun its buffer; modern C programs should use fgets() instead.)
On success, gets() returns the string argument s; it returns NULL on end-of-file or error.

#include "stdio.h"

int main()
{
    char name[30];

    printf("\nEnter your name:");
    gets(name);
    printf("\nHello %s", name);
    return 0;
}

**int putchar( int c );**

putchar() writes a character to the stdout data stream. On success, putchar() returns the character c. On error, putchar() returns EOF.

**int putch( int c );**

putch() writes the character directly to the screen. This function is available only on PC based compilers. On success, putch() returns the character printed, c. On error, it returns EOF.

#include "stdio.h"

int main()
{
    int c;

    c = 'A';
    putchar(c);
    putch(c);
    return 0;
}

result: AA

**int getchar( void );**
**int getch( void );**
**int getche( void );**

getchar() reads a single character from the input data stream, but does not return the character to the program until the '\n' (RETURN/ENTER) key is pressed.

getch() reads, without echoing, a single character from the keyboard and immediately returns that character to the program; available only on PC compilers.

getche() reads, with echo, a single character from the keyboard and immediately returns that character to the program; available only on PC compilers.

#include "stdio.h"

int main()
{
    int ch;

    printf("\nContinue(Y/N)?");
    ch = getchar();
    return 0;
}

The result:

Continue(Y/N)?Y <RETURN>

NOTE: The '\n' (RETURN/ENTER) key must be pressed after the response in order for the character to be stored in 'ch'. Also, the character pressed is automatically displayed on the screen.

#include "stdio.h"

int main()
{
    int ch;

    printf("\nContinue(Y/N)?");
    ch = getch();
    return 0;
}

The result:

Continue(Y/N)? (Y pressed)

NOTE: Upon pressing the 'Y' or 'N' key the character is immediately stored in 'ch', but the character pressed is not automatically shown on the screen. This is available only with PC based compilers.
#include "stdio.h"

int main()
{
    int ch;

    printf("\nContinue(Y/N)?");
    ch = getche();
    return 0;
}

The result:

Continue(Y/N)?N

NOTE: Upon pressing the 'Y' or 'N' key the character is immediately stored in 'ch' and also is echoed on the screen. This is available only with PC based compilers.

Like C, C++ has no built-in facilities for I/O. Instead, you must rely on a library of functions for performing I/O. In ANSI C, the I/O functions are a part of the standard library, but C++ (at the time of this writing) does not have any standard library yet. Of course, you can call the ANSI C library routines in C++, but for I/O, C++ release 2.0 and above provides an alternative to printf and scanf. C++ comes with the iostream library, which handles I/O through a class of objects.

The C++ iostream library is an object-oriented implementation of the abstraction of a stream as a flow of bytes from a source (producer) to a sink (consumer). The iostream library includes input streams (the istream class), output streams (the ostream class), and streams (the iostream class) that can handle both input and output operations. The istream class provides the functionality of scanf and fscanf, and ostream includes capabilities similar to those of printf and fprintf.

Like the predefined C streams stdin, stdout, and stderr, the iostream library includes four predefined streams:

**cin** is an input stream connected to the standard input. It is analogous to C's stdin.

**cout** is an output stream connected to the standard output and is analogous to stdout in C.

**cerr** is an output stream set up to provide unbuffered output to the standard error device. This is the same as C's stderr.

**clog** is like cerr, but it is a fully buffered stream like cin and cout.

To use the iostream library, your C++ program must include the header file "iostream.h". This file contains the definitions of the classes that implement the stream objects and provides the buffering. The file "iostream.h" is analogous to "stdio.h" in ANSI C.
Instead of defining member functions that perform I/O, the iostream library provides an operator notation for input as well as output. It uses C++'s ability to overload operators and defines << and >> as the output and input operators, respectively. When you see the << and >> operators in use, you will realize their appropriateness. For example, consider the following program that prints some variables to the cout stream, which is usually connected to standard output:

#include "iostream.h"

int main()
{
    int count = 2;
    double result = 5.4;
    char *id = "Trying out iostream: ";

    cout << id;
    cout << "count = " << count << '\n';
    cout << "result = " << result << endl;
    return 0;
}

When you run this program, it prints the following:

Trying out iostream: count = 2
result = 5.4

You can make three observations from this example:

#. The **<<** operator is a good choice to represent the output operation, because it points in the direction of data movement which, in this case, is toward the cout stream.

#. You can concatenate multiple **<<** operators in a single line, all writing to the same stream.

#. You use the same syntax to print all the basic data types on a stream. The **<<** operator automatically converts the internal representation of the variable into a textual representation. Contrast this with the need to use different format strings for printing different data types using printf.

Accepting input from the standard input is also equally easy.
Here is a small example that combines both input and output:

#include "iostream.h"

int main()
{
    int count;
    float price;
    char *prompt = "Enter count (int) and unit price (float): ";

    // display the prompt string
    cout << prompt;

    // read from standard input
    cin >> count >> price;

    // display total cost
    cout << count << " at " << price << " will cost: ";
    cout << (price * count) << endl;
    return 0;
}

When you run the program and enter the input shown in boldface, the program interacts as follows:

Enter count (int) and unit price (float): **5 2.5**
5 at 2.5 will cost: 12.5

Ignoring, for the moment, items that you do not recognize, notice how easy it is to read values into variables from the cin stream. You simply send the data from cin to the variables using the >> operator. Like the **<<** operator, you can also concatenate multiple **>>** operators. The >> operator automatically converts the strings into the internal representations of the variables according to their types. The simple syntax of input from cin is in sharp contrast with ANSI C's rather complicated scanf function, which serves the same purpose but needs proper format strings and the addresses of variables as arguments. Also, cin has the same limitations on input of string-type data as scanf.

Among the new items in the last example, you may have noticed the identifier endl in the last line. This is a special function known as a manipulator. Manipulators are functions written in such a way that, by placing a manipulator in the chain of **<<** operators, you can alter the state of the stream. The endl manipulator sends a newline to the stream, forcing the cursor to the beginning of a new line.

The following table summarizes some of the manipulators available in the iostream package.
The manipulators that take arguments are declared in the file iomanip.h; the rest are in iostream.h.

Available Manipulators in C++

| Manipulator | Sample Usage | Effect |
| --- | --- | --- |
| dec | cout << dec << intvar; or cin >> dec >> intvar; | Converts integers into decimal digits. Similar to the %d format in C. |
| hex | cout << hex << intvar; or cin >> hex >> intvar; | Hexadecimal conversion, as in ANSI C's %x format. |
| oct | cout << oct << intvar; or cin >> oct >> intvar; | Octal conversion (%o in C). |
| ws | cin >> ws; | Discards whitespace characters in the input stream. |
| endl | cout << endl; | Sends a newline to the ostream and flushes the buffer. |
| ends | cout << ends; | Inserts a null character into a string. |
| flush | cout << flush; | Flushes the ostream's buffer. |
| resetiosflags(long) | cout << resetiosflags(ios::dec); or cin >> resetiosflags(ios::hex); | Resets the format bits specified by the long integer argument. |
| setbase(int) | cout << setbase(10); or cin >> setbase(8); | Sets the base of conversion to the integer argument (must be 0, 8, 10, or 16). Zero sets the base to the default. |
| setfill(int) | cout << setfill('.'); or cin >> setfill(' '); | Sets the fill character used to pad fields (the width comes from setw). |
| setiosflags(long) | cout << setiosflags(ios::dec); or cin >> setiosflags(ios::hex); | Sets the format bits specified by the long integer argument. |
| setprecision(int) | cout << setprecision(6); or cin >> setprecision(15); | Sets the precision of floating-point conversions to the specified number of digits. |
| setw(int) | cout << setw(6) << var; or cin >> setw(24) >> buf; | Sets the width of a field to the specified number of characters. |

You can use the manipulators for some simple formatted I/O. Formatting refers to the process of converting between the internal binary representation of a variable and its character string representation. For example, if a 16-bit integer variable holds the bit pattern 0000 0000 0110 0100, its character string representation in the decimal number system is 100, and 64 in hexadecimal.
If the base of conversion is octal, the representation will be 144. You can display all three forms using the following output statements:

```cpp
#include "iostream.h"

int main()
{
    int i = 100;
    cout << dec << i << endl;
    cout << hex << i << endl;
    cout << oct << i << endl;
    return 0;
}
```

This produces the following output:

    100
    64
    144

What if you want to use a fixed field width of six characters to display each value? You can do this by using the setw manipulator as follows:

```cpp
#include "iostream.h"
#include "iomanip.h"

int main()
{
    int i = 100;
    cout << setw(6) << dec << i << endl;
    cout << setw(6) << hex << i << endl;
    cout << setw(6) << oct << i << endl;
    return 0;
}
```

This produces the following output:

       100
        64
       144

Here each value is displayed in a six-character field, aligned at the right and padded with blanks at the left. You can change both the padding and the alignment. To change the padding character, you can use the setfill manipulator. For example, insert a setfill call just before the cout statements shown above:

```cpp
#include "iostream.h"
#include "iomanip.h"

int main()
{
    int i = 100;
    cout << setfill('.');
    cout << setw(6) << dec << i << endl;
    cout << setw(6) << hex << i << endl;
    cout << setw(6) << oct << i << endl;
    return 0;
}
```

This produces the following output:

    ...100
    ....64
    ...144

The default alignment of fixed-width output fields is to pad on the left, resulting in right-justified output. The justification information is stored in a bit pattern called the format bits in a class named ios, which forms the basis of all stream classes. You can set or reset specific bits by using the setiosflags and resetiosflags manipulators, respectively.
Following is a sample use of these manipulators:

```cpp
#include "iostream.h"
#include "iomanip.h"

int main()
{
    int i = 100;
    cout << setfill('.');

    // left-justified labels followed by right-justified values
    cout << setiosflags(ios::left);      // left justification
    cout << setw(20) << "Decimal";
    cout << resetiosflags(ios::left);    // turn off left justification
    cout << setw(6) << dec << i << endl;

    cout << setiosflags(ios::left);
    cout << setw(20) << "Hexadecimal";
    cout << resetiosflags(ios::left);
    cout << setw(6) << hex << i << endl;

    cout << setiosflags(ios::left);
    cout << setw(20) << "Octal";
    cout << resetiosflags(ios::left);
    cout << setw(6) << oct << i << endl;
    return 0;
}
```

This produces the following output:

    Decimal................100
    Hexadecimal.............64
    Octal..................144

This output amply illustrates how the setiosflags and resetiosflags manipulators work and how they should be used. All you need to know are the names of the enumerated formatting flags so that you can use them as arguments to setiosflags and resetiosflags. To use any of the format flags in the following list, insert the manipulator setiosflags with the name of the flag as the argument. Use resetiosflags with the same argument to revert to the format state in effect before you used setiosflags.

Format Flags (meaning when the flag is set)

- ios::skipws -- Skips whitespace on input.
- ios::left -- Left-justifies output within the specified field width.
- ios::right -- Right-justifies output.
- ios::scientific -- Uses scientific notation for floating-point numbers (such as -1.23e+02).
- ios::fixed -- Uses decimal notation for floating-point numbers (such as -123.45).
- ios::dec -- Uses decimal notation for integers.
- ios::hex -- Uses hexadecimal notation for integers.
- ios::oct -- Uses octal notation for integers.
- ios::uppercase -- Uses uppercase letters in output (such as F4 in hexadecimal, 1.23E+02).
- ios::showbase -- Indicates the base of the number system in the output (a 0x prefix for hexadecimal and a 0 prefix for octal).
- ios::showpoint -- Includes a decimal point in floating-point output (for example, -123.).
- ios::showpos -- Shows a positive sign when displaying positive values.
- ios::internal -- Pads after the sign or base indicator.
- ios::unitbuf -- Flushes all streams after insertion.
- ios::stdio -- Flushes stdout and stderr after insertion.

In addition to the above methods of manipulating output and input, formatting can also be controlled by calling member functions of the cout and cin stream objects. This can be done with the following set of functions:

Functions Instead of Manipulators

- setf -- Sets a formatting flag.
- unsetf -- Undoes a flag set by setf.
- width -- Reads/sets the field width.
- fill -- Reads/sets the padding character.
- precision -- Reads/sets the digits of precision.

```cpp
#include "iostream.h"

int main()
{
    cout.setf(ios::right | ios::showpoint | ios::fixed);
    cout.precision(2);
    cout.width(20);
    cout << 500000.0 << endl;
    return 0;
}
```

The correspondence between iostream.h methods (functions) and iomanip.h manipulators is as follows:

Correspondence between iostream and iomanip

| iomanip.h | iostream.h |
| --- | --- |
| setiosflags(...) | setf(...) |
| resetiosflags(...) | unsetf(...) |
| setbase(10) | setf(ios::dec) |
| setbase(8) | setf(ios::oct) |
| setbase(16) | setf(ios::hex) |
| setfill('.') | fill('.') |
| setprecision(2) | precision(2) |
| setw(20) | width(20) |
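The equivalence in the table above can be checked directly. The following sketch is not from this chapter: it uses the modern standard headers (`<iomanip>`, `<sstream>`) rather than the pre-standard iostream.h used here, and the function names are ours. It formats the same value once with manipulators and once with the corresponding member functions.

```cpp
#include <cassert>
#include <iomanip>
#include <ios>
#include <sstream>
#include <string>

// Format 100 in hexadecimal, width 8, padded with '.', using manipulators.
std::string with_manipulators() {
    std::ostringstream out;
    out << std::setfill('.') << std::setw(8) << std::hex << 100;
    return out.str();
}

// The same formatting through the equivalent ostream member functions.
std::string with_member_functions() {
    std::ostringstream out;
    out.fill('.');                                 // setfill('.')
    out.width(8);                                  // setw(8)
    out.setf(std::ios::hex, std::ios::basefield);  // setbase(16)
    out << 100;
    return out.str();
}
// Both functions return "......64" (100 decimal is 64 hexadecimal).
```

Because width() and setw apply only to the next insertion, both paths produce an identical eight-character field.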
https://docs.aakashlabs.org/apl/cphelp/chap03.html
If you play a lot of games, you probably noticed at some point in time that the version number or build number of the game is often presented clearly on the screen. This is often not by accident and is instead a way to show users that they are using the correct version of your game. Not only that, but it can help from a development perspective as well. For example, have you ever built a game and had a bunch of different builds floating around on your computer? Imagine trying to figure out which build is correct without seeing any build or version information!

In this tutorial we're going to see how to very easily extract the build information defined in the project settings of a Unity game.

If you came here expecting something complex, think again, because what we're about to do won't take much. In fact, to get the version number you really only need the following line:

```csharp
Debug.Log(Application.version);
```

However, printing the version number to your console isn't really going to help anyone. Instead, you might want to create a game object with a Text component attached. Then when the application loads, you can set the game object to display the version information on the screen.

Start by creating a VersionNumber.cs file within your Assets directory. In the VersionNumber.cs file, add the following C# code:

```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;

public class VersionNumber : MonoBehaviour
{
    private Text _versionNumberText;

    void Start()
    {
        _versionNumberText = GetComponent<Text>();
        _versionNumberText.text = "VERSION: " + Application.version;
    }
}
```

Like previously mentioned, the above code will get a reference to the Text component for the game object this script is attached to. After, it will set the text for that component to the application version. This will work for all build targets.

Attach the VersionNumber.cs script to one of your game objects with a Text component and watch it work.
I'd like to say it is more complicated than this, but it isn't. A video version of this tutorial accompanies the original post.
https://www.thepolyglotdeveloper.com/2022/02/extract-version-information-game-unity-csharp/
In-Depth

True artificial intelligence is years away. But we can demonstrate the challenges of AI decision-making using C# to derive solutions for the classic sliding tiles puzzle.

[Editor's Note: Several readers have been asking about a sample file, Heap.cs, that was missing from the Code Download. It's there now; apologies for the inconvenience to the author and to our readers.]

Artificial intelligence refers to the process by which we develop a nonliving rational agent. A rational agent is an entity capable of perceiving its environment and acting rationally on the perceptions obtained from it. Rationality in this sense is given by the appropriateness of the agent's decision-making, considered appropriate if it is aimed toward maximizing some desired outcome. Humans are seen as the finest example of living rational agents -- we receive perceptions from our environment all the time and react to them by making decisions that usually go in favor of our standard of living.

The human mind is still the most capable, complex and sophisticated intelligence in the known universe. No artificial intelligence in the near future is likely to achieve the power that the human mind currently possesses. If we have such a sophisticated, complex, evolved mind, why do we need to create artificial intelligences?

For some domain-specific environments, artificial intelligences provide a better reaction to perceptions than the reaction provided by humans. Artificial intelligences present a feasible, sometimes optimal outcome to extremely complicated problems by taking advantage of their facility to compute millions of instructions in short periods of time. Even though we have a more elaborate intelligence in terms of reasoning, computers actually defeat us in the field of calculations. For calculation-related environments we create artificial intelligences that can speed up the process of obtaining a maximized outcome.
The calculator, that simple machine we frequently use in our daily life, is a clear example of how we exploit an artificial intelligence in a calculation environment; we lack the ability to correctly execute algebraic operations in short periods of time, so we created the calculator to ease our life. We can demonstrate another example of artificial intelligence, using a rational agent represented by a C# program for solving a sliding tiles puzzle. The rationality of the agent will be provided by an A* search algorithm tied to several heuristics that will help guide the search toward a maximized outcome.

The Puzzle

The Sliding Tiles Puzzle was created by chess player and puzzle maker Sam Loyd (1841-1911) in the 1870s. The puzzle consists of an N x M board shown in Figure 1, where each cell could be represented as a number, a letter, an image or basically anything you can think of. The task in the sliding tiles puzzle is to rearrange the tiles of some starting state so that all of them end up arranged matching another configuration known as the goal state (see Figure 2). To be able to reach the goal state a sequence of moves is required; one move consists of swapping the empty tile with some other tile. The solution to the previous example would be obtained as shown in Figure 3.

A logical question probably pops into the reader's mind when thinking of the starting state: could I truly reach the goal state from this initial state? Sam Loyd once offered $1,000 to anyone who could solve the puzzle in Figure 4. Although several people claimed they found a solution, the reality is that this puzzle has no solution. So, how do we know if a given initial configuration is solvable? It has been proved that solvability depends on a parity condition: for a board of odd width, a configuration is solvable exactly when its number of inversions is even; for a board of even width (with the standard goal state), the inversion count plus the row of the empty tile, counted from the bottom, must be odd. An inversion occurs when some tile precedes another tile with a lower value; the goal state has zero inversions.
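The inversion count behind this solvability test is straightforward to compute. The article's code is C#; the following is a small C++ sketch of the same idea (the function names are ours, not the article's), with the parity rule shown applying to odd-width boards such as 3 x 3:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Count inversions in a board flattened row by row; 0 stands for the
// empty tile and is skipped, as the solvability test requires.
int count_inversions(const std::vector<int>& tiles) {
    int inversions = 0;
    for (std::size_t i = 0; i < tiles.size(); ++i) {
        if (tiles[i] == 0) continue;                 // skip the empty tile
        for (std::size_t j = i + 1; j < tiles.size(); ++j)
            if (tiles[j] != 0 && tiles[i] > tiles[j])
                ++inversions;                        // pair (a, b): a before b, a > b
    }
    return inversions;
}

// For odd board widths (such as 3 x 3) a configuration is solvable exactly
// when the inversion count is even; even widths also involve the row of
// the empty tile, as described in the text.
bool solvable_odd_width(const std::vector<int>& tiles) {
    return count_inversions(tiles) % 2 == 0;
}
```

For example, the 3 x 3 analogue of Loyd's board (goal state with tiles 7 and 8 swapped) has exactly one inversion, so the parity test reports it as unsolvable.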
For instance, considering a 4 x 4 board, the number 12 at the upper left corner will have a total of 11 inversions, as numbers 1 through 11 come after it. In general, an inversion is a pair of tiles (a, b) such that a appears before b, but a is greater than b.

The A* Algorithm

The agent we'll be developing searches the state space (the universe of all possible board configurations) using an A* informed search (Hart, Nilsson and Raphael, 1968). The search is informed because it selects the next step considering domain knowledge, which is represented by the value of a function associated with every state of the problem (see Figure 5). The possible moves are up, down, left and right, each corresponding to moving the empty tile in that direction.

The key element in the A* resides in the value assigned to every state, as it gives the possibility of exponentially reducing the time invested in finding the goal state from a given initial state (see Figure 6). The function outputting the value bound to every state s can be decomposed as:

f(s) = g(s) + h(s)

where g is the cost of reaching s from the initial state, usually translated as the number of moves executed from the initial state up to s, and h is the rational component of the agent, the component estimating the cost of the paths starting at s; this component is known as the heuristic. The heuristic function is the manner in which we attach our empirical rules to the agent -- therefore, the better the rules we add, the sooner we'll reach the goal state. The function MisplacedTiles(s), which outputs the number of misplaced tiles for some state s, can be considered a feasible heuristic, since it always provides insight on how far we are from reaching the goal state.

An important note when creating or including a heuristic is its admissibility, necessary in order to guarantee that the A* algorithm will be complete (it finds a solution) and optimal (it finds an optimal solution).
A heuristic is admissible if it never overestimates the minimum cost of reaching the goal state from some node s; its value must remain lower than or equal to the cost of the shortest path from s to the goal state. The Misplaced Tiles heuristic is admissible, since every tile out of place must be moved at least once in order to arrange all tiles into the goal state.

As depicted in the last figure, the set of states can be modeled as a graph where the children of a state node are obtained by moving the empty tile in all possible directions. Since the problem can be modeled as a graph, it's logical to use a graph search algorithm for locating the goal state. The basic skeleton of the A* is that of a Breadth First Search. The difference lies in the selection of the next node to enqueue or expand. In a BFS all nodes possess value 1, therefore it does not matter which node to expand -- in the A* we'll always expand the node that maximizes the desired outcome, which in the particular case of the Sliding Tiles Puzzle is the node with minimum cost (shortest path to reach the goal state). Figure 7 illustrates the execution of the A* algorithm using Misplaced Tiles as the heuristic function.

It's important to note that when calculating any heuristic we never take the empty tile into account -- if we do, then we could be overestimating the real cost of the shortest path to the goal state, which makes the heuristic non-admissible. Consider what would have happened in the prior example if we had taken the empty tile into account, as shown in Figure 8. In the previous node we are just one step from reaching the goal state, but the heuristic claims that we are at least two steps away (8 and empty misplaced). It's clearly an overestimation of the real cost, and thus non-admissible.

To start analyzing part of the code, let us examine the StateNode class in Listing 1 that we'll be using to represent states or configurations on the board.
Listing 1: StateNode Class

```csharp
class StateNode<T> : IComparable<StateNode<T>> where T : IComparable
{
    public double Value { get; set; }
    public T[,] State { get; private set; }
    public int EmptyCol { get; private set; }
    public int EmptyRow { get; private set; }
    public int Depth { get; set; }
    public string StrRepresentation { get; set; }

    public StateNode() { }

    public StateNode(T[,] state, int emptyRow, int emptyCol, int depth)
    {
        if (state.GetLength(0) != state.GetLength(1))
            throw new Exception("Number of columns and rows must be the same");

        State = state.Clone() as T[,];
        EmptyRow = emptyRow;
        EmptyCol = emptyCol;
        Depth = depth;

        for (var i = 0; i < State.GetLength(0); i++)
        {
            for (var j = 0; j < State.GetLength(1); j++)
                StrRepresentation += State[i, j] + ",";
        }
    }

    public int Size
    {
        get { return State.GetLength(0); }
    }

    public void Print()
    {
        for (var i = 0; i < State.GetLength(0); i++)
        {
            for (var j = 0; j < State.GetLength(1); j++)
                Console.Write(State[i, j] + ",");
            Console.WriteLine();
        }
        Console.WriteLine();
    }

    public int CompareTo(StateNode<T> other)
    {
        if (Value > other.Value) return 1;
        if (Value < other.Value) return -1;
        return 0;
    }
}
```

The first point to notice in the StateNode class is its generic feature. We make it generic since we want to have a sliding tiles puzzle with numbers, letters, images and so on, and the generic type T could embody any of those types. We also require that the type T be comparable; this can prove useful -- even necessary -- if you want to implement a solvability test. Remember that you'll need to calculate the number of inversions through element comparisons. Also note that we are assuming the board has the same width and height (n = m) and that the empty tile is "0". A description of each field and property (inferred from the code above) follows:

- Value: the f value of the node, used to order nodes in the priority queue.
- State: the board configuration as a two-dimensional array.
- EmptyRow, EmptyCol: the position of the empty tile on the board.
- Depth: the number of moves made from the initial state (the g component).
- StrRepresentation: a CSV string representation of the board, used for hashing visited states.

The Print() method prints the tile values on the board in a CSV fashion, and the Size property returns the size of the board.
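StateNode is comparable precisely so that it can live in a priority queue; the article's PriorityQueue comes from the Heap.cs helper mentioned in the editor's note. As an illustration of what such a helper has to provide, here is a minimal binary min-heap sketch in C++ (names ours): Pop() always yields the smallest element, which for A* means the node with the lowest f value.

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Minimal binary min-heap backing a priority queue. Elements only need
// operator<, mirroring StateNode's CompareTo.
template <typename T>
class MinHeap {
public:
    void Push(T item) {
        data_.push_back(std::move(item));
        std::size_t i = data_.size() - 1;
        while (i > 0 && data_[i] < data_[(i - 1) / 2]) {  // sift up
            std::swap(data_[i], data_[(i - 1) / 2]);
            i = (i - 1) / 2;
        }
    }

    // Precondition: the heap is not empty.
    T Pop() {
        T top = data_.front();
        data_.front() = data_.back();
        data_.pop_back();
        std::size_t i = 0;
        for (;;) {                                        // sift down
            std::size_t l = 2 * i + 1, r = l + 1, m = i;
            if (l < data_.size() && data_[l] < data_[m]) m = l;
            if (r < data_.size() && data_[r] < data_[m]) m = r;
            if (m == i) break;
            std::swap(data_[i], data_[m]);
            i = m;
        }
        return top;
    }

    std::size_t Count() const { return data_.size(); }

private:
    std::vector<T> data_;
};
```

Both Push and Pop run in O(log n), which is what keeps the repeated "expand the minimum-cost node" step of A* cheap.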
Since we inherit from IComparable<StateNode<T>>, we need to implement the CompareTo method; its logic is straightforward. The AStar class is presented in Listing 2.

Listing 2: AStar Class

```csharp
class AStar<T> where T : IComparable
{
    public int StatesVisited { get; set; }
    public Dictionary<string, int> PatternDatabase { get; set; }

    private readonly StateNode<T> _goal;
    private T Empty { get; set; }
    private readonly PriorityQueue<StateNode<T>> _queue;
    private readonly HashSet<string> _hash;

    public AStar(StateNode<T> initial, StateNode<T> goal, T empty)
    {
        _queue = new PriorityQueue<StateNode<T>>(new[] { initial });
        _goal = goal;
        Empty = empty;
        _hash = new HashSet<string>();
    }

    public StateNode<T> Execute() ...
    private void ExpandNodes(StateNode<T> node) ...
    private double Heuristic(StateNode<T> node) ...
    private int MisplacedTiles(StateNode<T> node) ...
    private int ManhattanDistance(StateNode<T> node) ...
    private int LinearConflicts(StateNode<T> node) ...
    private int DatabasePattern(StateNode<T> node) ...
    private int FindConflicts(T[,] state, int i, int dimension) ...
    private int InConflict(int index, T a, T b, int indexA, int indexB, int dimension) ...
}
```

Once again, a description of each field and property (inferred from the code) follows:

- StatesVisited: the number of nodes popped from the queue during the search.
- PatternDatabase: a dictionary mapping board patterns to precomputed move counts, used by the pattern database heuristic.
- _goal: the goal state to reach.
- Empty: the value that represents the empty tile.
- _queue: the priority queue of states, ordered by node value.
- _hash: the set of string representations of states already visited.

The sliding tiles puzzle is a cyclic game: starting from some state, after a sequence of moves we could end up in the same state again and get caught in an infinite loop. We avoid that by saving every visited state in the hash. Now let us review the Execute() and ExpandNodes(StateNode<T> node) methods in Listings 3 and 4. The rest are all related to heuristics, and we'll leave them for the end, when we start running the algorithm and making comparisons.
Listing 3: Execute Method

```csharp
public StateNode<T> Execute()
{
    _hash.Add(_queue.Min().StrRepresentation);

    while (_queue.Count > 0)
    {
        var current = _queue.Pop();
        StatesVisited++;

        if (current.StrRepresentation.Equals(_goal.StrRepresentation))
            return current;

        ExpandNodes(current);
    }

    return null;
}
```

The Execute method is pretty simple; it resembles the body of the BFS algorithm. We add the string representation of the starting state and then enter a loop that ends when _queue holds zero nodes. The current variable always holds the node with minimum value, and this is the node to be expanded. The expansion consists of all possible moves from the current state.

Listing 4: ExpandNodes Method

```csharp
private void ExpandNodes(StateNode<T> node)
{
    var col = node.EmptyCol;
    var row = node.EmptyRow;

    // Note: the bodies of the four moves were garbled in the source of this
    // article; they are reconstructed here from the description in the text
    // (clone the state, swap the empty tile with the chosen neighbor, then
    // enqueue the new node if it has not been seen before), folded into a
    // single helper. The enqueue method name is an assumption about the
    // article's PriorityQueue helper.

    if (row > 0) TryMove(node, row - 1, col);              // Up
    if (row < node.Size - 1) TryMove(node, row + 1, col);  // Down
    if (col > 0) TryMove(node, row, col - 1);              // Left
    if (col < node.Size - 1) TryMove(node, row, col + 1);  // Right
}

private void TryMove(StateNode<T> node, int newRow, int newCol)
{
    var newState = node.State.Clone() as T[,];
    var temp = newState[newRow, newCol];
    newState[newRow, newCol] = newState[node.EmptyRow, node.EmptyCol];
    newState[node.EmptyRow, node.EmptyCol] = temp;

    var newNode = new StateNode<T>(newState, newRow, newCol, node.Depth + 1);
    if (_hash.Contains(newNode.StrRepresentation)) return;

    newNode.Value = newNode.Depth + Heuristic(newNode);
    _queue.Add(newNode);   // enqueue method name assumed from the PriorityQueue helper
    _hash.Add(newNode.StrRepresentation);
}
```

In each if statement we check whether the attempted move is possible; if it is, we clone the current state (remember that arrays are reference types) and then swap the empty tile with the tile in the position that corresponds to that move. If the string representation of the newNode hasn't been added to the hash, we enqueue the newNode and add it to the hash.

After having described the basic skeleton of the A* algorithm, we can now focus on the main component of the search: the heuristics (Listing 5).
Listing 5: Heuristic Method

```csharp
private double Heuristic(StateNode<T> node)
{
    // Swap in MisplacedTiles(node), ManhattanDistance(node) and so on
    // to change the heuristic used by the search.
    return DatabasePattern(node);
}

private int MisplacedTiles(StateNode<T> node)
{
    var result = 0;
    for (var i = 0; i < node.State.GetLength(0); i++)
    {
        for (var j = 0; j < node.State.GetLength(1); j++)
            if (!node.State[i, j].Equals(_goal.State[i, j]) &&
                !node.State[i, j].Equals(Empty))
                result++;
    }
    return result;
}
```

Heuristic #1: Misplaced Tiles

The first heuristic we'll analyze has already been explained along this article and is probably the simplest heuristic for this problem: the number of tiles out of place. Listing 6 shows the laboratory we have set up for testing our heuristics.

Listing 6: Heuristics Lab, Misplaced Tiles

```csharp
static void Main()
{
    var initWorstConfig3x3 = new[,]
    {
        { 8, 6, 7 },
        { 2, 5, 4 },
        { 3, 0, 1 }
    };

    var initConfig4x4 = new[,]
    {
        { 5, 10, 14, 7 },
        { 8, 3, 6, 1 },
        { 15, 0, 12, 9 },
        { 2, 11, 4, 13 }
    };

    var finalConfig3x3 = new[,]
    {
        { 1, 2, 3 },
        { 4, 5, 6 },
        { 7, 8, 0 }
    };

    var finalConfig4x4 = new[,]
    {
        { 1, 2, 3, 4 },
        { 5, 6, 7, 8 },
        { 9, 10, 11, 12 },
        { 13, 14, 15, 0 }
    };

    var initialState = new StateNode<int>(initWorstConfig3x3, 2, 1, 0);
    var finalState = new StateNode<int>(finalConfig3x3, 2, 2, 0);

    var watch = new Stopwatch();
    var aStar = new AStar<int>(initialState, finalState, 0)
    {
        PatternDatabase = FillPatternDatabase()
    };

    watch.Start();
    var node = aStar.Execute();
    watch.Stop();

    Console.WriteLine("Node at depth {0}", node.Depth);
    Console.WriteLine("States visited {0}", aStar.StatesVisited);
    Console.WriteLine("Elapsed {0} milliseconds", watch.ElapsedMilliseconds);
    Console.Read();
}
```

For testing our heuristics we'll be using one of the worst configurations for a 3 x 3 board, shown in Figure 9; it requires 31 moves to be completed. The results obtained are shown in Figure 10. The A* algorithm with the Misplaced Tiles heuristic takes about 2.5 seconds to find the goal state. Let us attempt to find a cleverer heuristic that will lower the running time and the number of nodes visited.
Heuristic #2: Manhattan Distance

The Manhattan Distance or Block Distance between points A = (x1, y1) and B = (x2, y2) is defined as the sum of the absolute differences of their corresponding coordinates:

MD = |x1 - x2| + |y1 - y2|

As a heuristic, the Manhattan Distance in Listing 7 is admissible, since for each tile it returns the minimum number of steps that will be required to move that tile into its goal position.

Listing 7: Heuristic #2, the Manhattan Distance

```csharp
private int ManhattanDistance(StateNode<T> node)
{
    var result = 0;
    for (var i = 0; i < node.State.GetLength(0); i++)
    {
        for (var j = 0; j < node.State.GetLength(1); j++)
        {
            var elem = node.State[i, j];
            if (elem.Equals(Empty))
                continue;

            // Variable to break the outer loop and
            // avoid unnecessary processing
            var found = false;

            // Loop to find the element in the goal state and add its MD
            for (var h = 0; h < _goal.State.GetLength(0); h++)
            {
                for (var k = 0; k < _goal.State.GetLength(1); k++)
                {
                    if (_goal.State[h, k].Equals(elem))
                    {
                        result += Math.Abs(h - i) + Math.Abs(j - k);
                        found = true;
                        break;
                    }
                }
                if (found)
                    break;
            }
        }
    }
    return result;
}
```

The results of applying A* + MD are shown in Figure 11. The reduction in time and nodes visited is substantial; we are providing better information to guide the search, hence the goal is found much more quickly.

Heuristic #3: Linear Conflict

The Linear Conflict heuristic provides information on necessary moves that are not counted by the Manhattan Distance. Two tiles tj and tk are in a linear conflict if they are both in the same line (row or column), their goal positions are also in that line, and they are reversed relative to those goal positions -- for example, tj is to the right of tk while the goal position of tj is to the left of the goal position of tk. Figure 12 shows tiles 3 and 1 in their corresponding row but reversed. To get them to their goal positions we must move one of them down and then up again; these moves are not counted by the Manhattan Distance. A tile cannot appear in more than one conflict, as solving a given conflict might imply the resolution of other conflicts in the same row or column.
Hence, if tile 1 is related to tile 3 in a conflict, then it cannot also be related to a conflict with tile 2, as this could overestimate the length of the shortest path to a goal state and render our heuristic non-admissible. The methods implementing this heuristic are presented in Listing 8.

Listing 8: LinearConflicts Method

```csharp
private int LinearConflicts(StateNode<T> node)
{
    var result = 0;
    var state = node.State;

    // Row conflicts
    for (var i = 0; i < state.GetLength(0); i++)
        result += FindConflicts(state, i, 1);

    // Column conflicts
    for (var i = 0; i < state.GetLength(1); i++)
        result += FindConflicts(state, i, 0);

    return result;
}

private int FindConflicts(T[,] state, int i, int dimension)
{
    var result = 0;
    var tilesRelated = new List<int>();

    // Loop over each pair of elements in the row/column
    for (var h = 0; h < state.GetLength(dimension) - 1 && !tilesRelated.Contains(h); h++)
    {
        for (var k = h + 1; k < state.GetLength(dimension) && !tilesRelated.Contains(h); k++)
        {
            // Avoid the empty tile
            if (dimension == 1 && state[i, h].Equals(Empty)) continue;
            if (dimension == 0 && state[h, i].Equals(Empty)) continue;
            if (dimension == 1 && state[i, k].Equals(Empty)) continue;
            if (dimension == 0 && state[k, i].Equals(Empty)) continue;

            var moves = dimension == 1
                ? InConflict(i, state[i, h], state[i, k], h, k, dimension)
                : InConflict(i, state[h, i], state[k, i], h, k, dimension);

            if (moves == 0) continue;

            result += 2;
            tilesRelated.AddRange(new List<int> { h, k });
            break;
        }
    }

    return result;
}

private int InConflict(int index, T a, T b, int indexA, int indexB, int dimension)
{
    var indexGoalA = -1;
    var indexGoalB = -1;

    for (var c = 0; c < _goal.State.GetLength(dimension); c++)
    {
        if (dimension == 1 && _goal.State[index, c].Equals(a))
            indexGoalA = c;
        else if (dimension == 1 && _goal.State[index, c].Equals(b))
            indexGoalB = c;
        else if (dimension == 0 && _goal.State[c, index].Equals(a))
            indexGoalA = c;
        else if (dimension == 0 && _goal.State[c, index].Equals(b))
            indexGoalB = c;
    }

    return (indexGoalA >= 0 && indexGoalB >= 0) &&
           ((indexA < indexB && indexGoalA > indexGoalB) ||
            (indexA > indexB && indexGoalA < indexGoalB))
        ? 2
        : 0;
}
```

To test the Linear Conflict heuristic, we'll use the 4 x 4 board in Figure 13, which requires 55 moves to reach the goal state. The value of a node s will now be f(s) = depth(s) + md(s) + lc(s). We can combine both heuristics because the moves they represent do not intersect, and consequently we will not be overestimating. Figure 14 shows the results. After approximately two minutes the algorithm provided a result; the same board was tested using only the Manhattan Distance, and after a 5-minute wait no result had been obtained.

Heuristic #4: Pattern Database

The Pattern Database heuristic in Figure 15 is backed by a database of different states of the game; each state is associated with the minimum number of moves necessary to take a pattern (a subset of tiles) to its goal position. In this case we built a small pattern database by running a BFS backwards starting at the goal state (3 x 3). The results were saved in a .txt file of only 60,000 entries. The pattern chosen for the database is known as the fringe -- it contains the tiles from the top row and the leftmost column.
The pattern database heuristic function in Listing 9 is computed by a table look-up -- in this case, a dictionary lookup over the 60,000 stored patterns. Philosophically, it resembles the divide-and-conquer and dynamic-programming techniques.

Listing 9: Pattern Database Heuristic Function

```csharp
private int DatabasePattern(StateNode<T> node)
{
    var pattern = node.StrRepresentation
        .Replace('5', '?')
        .Replace('6', '?')
        .Replace('7', '?')
        .Replace('8', '?');

    if (PatternDatabase.ContainsKey(pattern))
        return PatternDatabase[pattern];

    return ManhattanDistance(node);
}
```

Figure 16 shows the results. The more entries we add to the database, the lower the time it will take to find the goal state. In this case the compromise between memory and time favors using more memory to get a better running time. This is how it usually works -- you use more memory in order to reduce the execution time. The pattern database heuristic represents the definitive alternative when you want to solve 4 x 4 puzzles, or m x n puzzles where m and n are greater than 3.

The intention of this article was to awaken the reader's interest in the amazing world of artificial intelligence and to provide an easy guide into some of the different mechanisms for solving interesting puzzles like these. Build your own pattern database and start solving sliding tiles puzzles in less than 50 milliseconds.
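The whole pipeline described above -- priority queue ordered by f(s) = g(s) + h(s), duplicate detection on a string key, and the Manhattan Distance heuristic -- can be distilled into a compact sketch. The following is a C++ rendering for a 3 x 3 board rather than the article's C# (all names are ours); unlike the article it re-enqueues a state when a shorter path to it is found, which guarantees the returned depth is optimal:

```cpp
#include <array>
#include <cassert>
#include <cstdlib>
#include <queue>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

using Board = std::array<int, 9>;   // row-major 3 x 3; 0 is the empty tile

// Sum of each tile's distance to its goal cell; the empty tile is skipped.
static int manhattan(const Board& b) {
    int d = 0;
    for (int i = 0; i < 9; ++i) {
        if (b[i] == 0) continue;
        int g = b[i] - 1;                       // tile t belongs at index t - 1
        d += std::abs(i / 3 - g / 3) + std::abs(i % 3 - g % 3);
    }
    return d;
}

static std::string key(const Board& b) {
    std::string s;
    for (int v : b) s += char('0' + v);
    return s;
}

// Returns the length of a shortest solution, or -1 if the reachable state
// space is exhausted (unsolvable configuration).
int solve(const Board& start) {
    const Board goal = {1, 2, 3, 4, 5, 6, 7, 8, 0};
    struct Node { Board b; int depth; int f; };
    auto cmp = [](const Node& x, const Node& y) { return x.f > y.f; };
    std::priority_queue<Node, std::vector<Node>, decltype(cmp)> open(cmp);
    std::unordered_map<std::string, int> best;  // state -> best depth seen
    open.push({start, 0, manhattan(start)});
    best[key(start)] = 0;
    const int dr[4] = {-1, 1, 0, 0}, dc[4] = {0, 0, -1, 1};
    while (!open.empty()) {
        Node cur = open.top();
        open.pop();
        if (cur.b == goal) return cur.depth;
        int z = 0;
        while (cur.b[z] != 0) ++z;              // locate the empty tile
        int r = z / 3, c = z % 3;
        for (int m = 0; m < 4; ++m) {           // up, down, left, right
            int nr = r + dr[m], nc = c + dc[m];
            if (nr < 0 || nr > 2 || nc < 0 || nc > 2) continue;
            Board nb = cur.b;
            std::swap(nb[z], nb[nr * 3 + nc]);
            std::string k = key(nb);
            auto it = best.find(k);
            if (it != best.end() && it->second <= cur.depth + 1) continue;
            best[k] = cur.depth + 1;
            open.push({nb, cur.depth + 1, cur.depth + 1 + manhattan(nb)});
        }
    }
    return -1;
}
```

Run on the article's worst 3 x 3 configuration, this sketch reproduces the 31-move optimal solution length quoted in the text.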
https://visualstudiomagazine.com/articles/2015/10/30/sliding-tiles-c-sharp-ai.aspx
Details

- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 10.3.3.0
- Component/s: Documentation
- Labels: None

Issue Links

- breaks DERBY-5905 Derby html documentation doesn't render properly and prints garbage on Internet Explorer - Closed
- is part of DERBY-5135 Derby documentation needs accessibility improvements - Closed
- is related to DERBY-5349 Clean docs build fails to pick up customized map2htmtoc.xsl - Closed
- is related to DERBY-5359 Missing xmlns attribute for html element in docs - Closed

Attaching an initial patch, DERBY-4408.diff, that fixes the DOCTYPE and META tag issues for the lib/index.html file, resulting in content similar to that for other topics; the attached index.html file is the output file for Getting Started. I didn't remove the IBM copyright in this patch. I'll try to figure out how to get the DOCTYPE into the toc.html file, but that'll be harder.

I think I figured out how to fix the toc file, by comparing the map2htmtoc.xsl file that generates it with the dita2htmlImpl.xsl file that generates ordinary pages and adding the needed code. I'm attaching a new version of the DERBY-4408.diff patch that includes the changes to map2htmtoc.xsl as well as to the index file, along with the generated toc.html file from Getting Started. Hope this works for your accessibility tool, Myrna.

I cannot check any time soon, so I think it is best to go ahead, and if I need further adjustments, we can tackle them later. Thanks for working on this issue!

I took a look at the patch and noticed some small issues with the generated HTML files. I'm not sure if fixing all of these is within the scope of this bug report, but here goes.

- I think the <?xml ...?> line needs to be the first line in the file (both index.html and toc.html) in order to be well-formed XML.
- The doctype for index.html should be XHTML 1.0 Frameset, not XHTML 1.0 Transitional, since the <frameset> element isn't defined in the transitional DTD. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Frameset//EN" "">
- The patch makes the doctype of toc.html XHTML 1.0 Transitional. However, the XHTML DTDs require the tags to be in lower case, whereas the document uses upper case <UL>..</UL> and <LI>..</LI>. We should either use a doctype that allows upper case tags (for example HTML 4.01 Transitional), or change the case of those tags to match the doctype. I think I prefer the latter.
- The html element needs the attribute xmlns="" in order to pass as valid XHTML on (assuming the above issues are fixed first).

This is great information, Knut – I am pretty clueless about these DTDs. The <?xml ...?> line comes after the license comment in all our files, and has ever since 10.2, when we first put the Apache license in the frameset output files. I can try tweaking the order in which the template calls are invoked. It should not be hard to change the case of the list elements in the toc. I will look into that, too. Thanks again.

I'll try to figure all this out and hopefully file another patch later on. Some of this is not so hard, but there are a couple of problems. First, the Apache license is inserted after all the other processing, in the following code within the html.dita target of the build.xml file.

<filterchain> <concatfilter prepend="$ /lib/apache-license-html.html"/> </filterchain>

So I'll have to bring the license insertion into the XSL files somehow. Also, getting the xmlns attribute into the html element is not trivial. If I simply insert it into the html element in the stylesheet –

<html xmlns="">

then in the toc frame, I get empty xmlns attributes in the meta and ul tags:

<html xmlns="" lang="en-us" xml: <head> <meta xmlns="" content="text/html; charset=utf-8" http-
<ul xmlns=""> Whereas if I do the same thing in the stylesheet for the non-toc pages, I get the empty attributes in the head and body elements only: <html xmlns="" lang="en-us" xml: <head xmlns=""> ... <body xmlns="" id="tgsactivity4"><a name="tgsactivity4"><!-- --></a> However, if I try to create a template to set the attribute, it is completely ignored. <xsl:call-template ... <xsl:template <xsl:attribute</xsl:attribute> </xsl:template> The "ant -verbose" command says, Warning! Illegal value used for attribute name: name Looks like the namespace attribute is the one attribute you can't set. At it says that "the HTML validator at w3.org does not complain when the xmlns attribute is missing in an XHTML document. This is because the namespace "xmlns=" is default, and will be added to the <html> tag even if you do not include it." This suggests that we may not really need to add this after all? This is a little scary, but there is a simple solution to the placement of the xml declaration – specify the attribute omit-xml-declaration="yes" in the xsl:output tag, and put the xml declaration at the top of the prepended license file. I removed the declaration from the index.html file too. Attaching DERBY-4408-2.diff, DERBY-4408.stat, and DERBY-4408.zip. The following files have changes: M lib/dita2htmlImpl.xsl M lib/map2htmtoc.xsl M lib/apache-license-html.html M lib/index.html The zip file contains the changed index and toc and a sample topic page from Getting Started. This revised patch not only adds the doctype and charset to the toc and index files, but also lowercases the elements in the toc and places the xml declaration at the top of each file. I agree that making the xml declaration part of the license template file feels a bit ugly. What about generating the license headers in the XSL scripts instead? Something along the lines of the attached insert-header.diff patch. That seems to insert the license header at the correct place in the html files. 
Forgot to say that I ran the html files in DERBY-4408.zip through, and there were no errors except the aforementioned missing xmlns attribute, so the DERBY-4408-2.diff patch looks like a good improvement. Thanks, Knut! That does work. Part of the patch was rejected when I applied it (the part with the xsl:output tag) so the omit-xml-declaration attribute stayed in. I fixed that. I needed to put the xml declaration and license in the index.html file, too. I don't think we need to put the license into the -single html files, since the full license is near the top of the file anyway – that's why it wasn't there before. With the frames version, the license was in only one topic rather than every file. However, it can do no harm. What do you think? Attaching another patch that incorporates Knut's changes: DERBY-4408-3.diff, DERBY-4408-3.stat, and DERBY-4408-3.zip. The changes are to the following files: M build.xml M lib/dita2htmlImpl.xsl M lib/fo2html.xsl M lib/map2htmtoc.xsl M lib/index.html The zip contains files from the Tools Guide this time. I did not include the one-page HTML file since it's so big, but it begins like this now: <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional/=utf-8">< title>Derby Tools and Utilities Guide</title><META content="text/css" http-<style type="text/css"> ... Thanks again for your help, Knut. Attaching yet another patch, DERBY-4408-4.diff, that adds an overwrite attribute to the build.xml file's copy command for the map2htmtoc.xml file to resolve DERBY-5349. Hope this works. Thanks, Kim! The latest patch looks great. As to the problem with applying the patch, I think it's caused by lib/dita2htmlImpl.xsl and lib/map2htmtoc.xsl not having the svn:eol-style property set. The files even have a mix of unix line endings and windows line endings. We should fix that, but first let's get this patch in, so that we don't need to regenerate it. Thanks, Knut, for the review and for catching the problem of the line endings. 
I'll commit the patch Monday and we can take it from there. Committed patch DERBY-4408-4.diff to documentation trunk at revision 1150706. Merged to 10.8 doc branch at revision 1150712. I'm not resolving the issue because of the problems with the svn:eol-style property for lib/dita2htmlImpl.xsl and lib/map2htmtoc.xsl. I can file a patch that changes the eol-style for those two files (and also for index.html, which doesn't currently have it set). But should I run dos2unix or some such on the files first, to remove the current mix of line endings? I think it's sufficient to run "svn propset svn:eol-style native" on the files. In similar exercises in the past, the line endings in the files weren't updated until the files were committed, so you shouldn't be alarmed if you still see incorrect endings after the svn propset command. Thanks, Knut, for that info. Attaching DERBY-4408-5.diff and DERBY-4408-5.stat, with changes to the following: M lib/dita2htmlImpl.xsl M lib/map2htmtoc.xsl M lib/index.html Think that'll do it? Yes, but the property should be named svn:eol-style (note the svn: prefix). I see now that I get a warning about inconsistent line ending style when I try to set that property, and the property isn't actually set unless I say svn propset --force, or run dos2unix on the file first, as you suggested. That was careless of me! I see the same warning now – thanks. I guess running dos2unix to get the endings consistent first is the best bet? Yes, that ought to do the trick. +1 Thanks again, Knut. Well, now the patch is a lot bigger! And the changes look like this (both to files and properties): MM lib/dita2htmlImpl.xsl MM lib/map2htmtoc.xsl MM lib/index.html Looks good. +1. Thanks again for all your help, Knut. Committed patch DERBY-4408-6.diff to documentation trunk at revision 1151090. Merged to 10.8 doc branch at revision 1151098. Set properties on doc trunk files at revision 1151099. Set properties on 10.8 doc branch files at revision 1151103. 
Those files are generated by DITA, so I don't know if there's much we can do. For the record, I tried to generate docs with the newest DITA (1.4.1, whereas the build scripts currently use 1.1.2.1) and can verify that DOCTYPE and META are still missing in index.html. However, the toc.html file has a DOCTYPE tag with that version of DITA.
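The three recurring problems in this thread (the <?xml ...?> declaration must come first, a DOCTYPE must be present, and XHTML tags must be lower case) can be sketched as a small standalone checker. This is illustrative only, not part of the Derby build, and check_xhtml is a made-up helper name:

```python
# A tiny illustrative checker (not part of the Derby build) for the three
# problems discussed above: the <?xml ...?> declaration must be the very
# first thing in the file, a DOCTYPE must be present, and XHTML tags must
# be lower case.
import re

def check_xhtml(text):
    problems = []
    if not text.lstrip().startswith("<?xml"):
        problems.append("xml declaration is not the first thing in the file")
    if "<!DOCTYPE" not in text:
        problems.append("missing DOCTYPE")
    if re.search(r"</?[A-Z]+[\s>]", text):
        problems.append("upper-case tags are not valid XHTML")
    return problems

bad = "<HTML><UL><LI>item</LI></UL></HTML>"
good = ('<?xml version="1.0"?>\n'
        '<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Frameset//EN"\n'
        ' "http://www.w3.org/TR/xhtml1/DTD/xhtml1-frameset.dtd">\n'
        '<html xmlns="http://www.w3.org/1999/xhtml"><ul><li>x</li></ul></html>')
```

Running something like check_xhtml over the generated index.html and toc.html would flag the kinds of issues the patches above fix.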
https://issues.apache.org/jira/browse/DERBY-4408
CC-MAIN-2016-40
refinedweb
1,840
77.84
by chromatic

Rails fans are understandably proud of the magic metaprogramming facilities of Ruby, the database introspection capabilities of ActiveRecord, and the fact that the most basic model class is only two lines long (at least in every tutorial I've seen). I say that's two lines too many. Here's how to have zero-line model modules in Perl -- as many as you want. (If you have a complete CRUD application, you can use the same idea to generate RESTful controllers, too.)

Active Record doesn't need to use eval at all, since Ruby has such niceties as define_method, const_missing, and method_missing. However, define_method captures scope, so when classes are reloaded in development mode you have a memory obesity issue that eval doesn't suffer from. You can do

class Prufrock < ActiveRecord::Base; end

or

const_set some_model_name, Class.new(ActiveRecord::Base)

or even

module Models
  def self.const_missing(name)
    const_set name, Class.new(ActiveRecord::Base)
  end
end

Models::Prufrock
Models::WhateverElse

Cheers!

were you trying to show this to be more convenient than Ruby/RoR? Man, I don't know. As someone who does both perl and ruby, this may not be the best example! :)
http://archive.oreilly.com/pub/post/get_rid_of_activerecord_situps.html
CC-MAIN-2017-22
refinedweb
236
58.99
catalogue

2. Problem solving analysis
2.1 matrix representation of calendar of each month
2.2 handling of holidays and compensatory public holidays
2.3 find the maximum rectangle
2.3.2 sliding window search

1. Problem description

2. Problem solving analysis

2.1 matrix representation of calendar of each month

In python, this can be implemented with monthcalendar() of the calendar module. See another blog: Common and interesting usage of Python calendar module. The following are calendars printed in two ways. The first statement sets Sunday as the first day of the week.

import numpy as np
import calendar
calendar.setfirstweekday(calendar.SUNDAY)
calendar.prmonth(2014,4)
print(np.array(calendar.monthcalendar(2014,4)))

The printing effect is as follows:

2.2 handling of holidays and compensatory public holidays

First consider holiday data. Holiday data is stored in the file as follows:

You can read it line by line as strings, then use the datetime module to transform each line into a datetime.datetime object, extract the year, month and day information, and then set the corresponding elements of the matrix obtained in the previous section to 0 (indicating non-working days) based on that information. Next, consider the data of public holidays that become working days due to compensatory leave. The storage format is the same as above and can be handled in the same way, except that this time the corresponding element is set to a non-0 value (for example, 1).

In the following code, the readfile() function reads data from the above two files, extracts the date information, recovers the year, month and day, and then stores them in a dict type variable, with (year, month) as the key and a list of the days in the corresponding year and month as the value. The design uses the datetime module for this processing.
For a brief introduction to the usage of the datetime module, see the blog: Date time processing in Python: a practical example of the datetime package

2.3 find the maximum rectangle

2.3.1 brute-force search

First of all, the problem of finding the maximum rectangle can certainly be solved by brute-force search. For example, how many rectangles are there in a 2 * 2 matrix with the top left lattice (0,0) of the matrix as the top left corner? Exactly four. For a general n*m matrix, there are n*m rectangles with the leftmost upper lattice (0,0) of the matrix as the leftmost upper corner. Scan these n*m rectangles, exclude the rectangles with a 0 element inside (or set their area to 0, which is simpler), and find the maximum area of the remaining rectangles, that is, the maximum area of "the rectangles with the leftmost upper corner lattice (0,0) of the matrix as the leftmost upper corner". Next, similarly, we can find the maximum area of the rectangles with lattice (0,1), (0,2), ..., (1,0), (1,1), ... as the top left corner. Then find the maximum of these maximum values to obtain the maximum rectangular area without a 0 element in the current matrix.

What is the complexity of such a brute-force search? For simplicity, consider that the original matrix is square and the size is n*n. First, scan the lattices that can serve as the top left corner of a rectangle: n*n of them. Secondly, the number of possible rectangles corresponding to each top-left-corner candidate depends on its coordinates. Assuming its coordinates are (i,j), the number of possible rectangles is (n-i)*(n-j). In this way, the total number of rectangles whose area needs to be evaluated is:

sum over i,j of (n-i)*(n-j) = (1 + 2 + ... + n)^2 = (n*(n+1)/2)^2, i.e. O(n^4)

This scheme can only be thought of as a benchmark reference, not implemented.

2.3.2 sliding window search

Brute-force search is based on the grid (considering a grid as the upper left corner of the rectangle).
You can also consider the sliding window scheme from another angle, and consider sliding rectangular boxes of different sizes and shapes over the calendar matrix. Because the maximum rectangular area is required, the sliding-window areas for scanning are arranged in order from large to small. In this way, the first sliding position that does not contain 0 yields the maximum rectangular area required by the original problem.

Because rectangular boxes of multiple shapes can have the same area (for example, the area of rectangular boxes of 4 * 2 and 2 * 4 is 8), first build a dictionary, with the area as the key and the corresponding list of possible shapes as the value. The code is as follows:

# 2. Construct the dictionary for rectangulars area-shape pair
area_shape = dict()
for i in range(1,6):
    for j in range(1,8):
        if i*j not in area_shape:
            area_shape[i*j] = []
        area_shape[i*j].append((i,j))

With the above preparations, the processing flow for a month is as follows:

Note 1: when resetting the values of the matrix elements corresponding to holidays and extra workdays, the position of each date in the matrix needs to be determined from the date information. First, you need to determine the weekday (day of the week) corresponding to the first day of the current month, so that you can determine the position of the first day of the current month in the matrix, and then you can deduce the position of any specified date in the matrix. This processing corresponds to the following code (the processing of extra workday is the same):

3.
Code and test

# -*- coding: utf-8 -*-
"""
Created on Thu Nov 11 09:35:28 2021

@author: chenxy
"""
import sys
import time
from datetime import datetime
import math
# import random
from typing import List
from collections import deque
import itertools as it
import numpy as np
import calendar

# Set SUNDAY to the first weekday
calendar.setfirstweekday(calendar.SUNDAY)
calendar.prmonth(2014,4)
print(np.array(calendar.monthcalendar(2014,4)))

def readfile(filename:str)->dict:
    '''
    Read holiday file and extra-workday file

    Parameters
    ----------
    filename : string

    Returns
    -------
    A dictionary to store the data
    '''
    print('Read {0} line by line, and store the holidays into a dictionary...'.format(filename))
    dat = dict()
    f = open(filename,'r')
    if f.mode == 'r':
        f_lines = f.readlines()
        for line in f_lines:
            # print(line,end='')
            date_object = datetime.strptime(line[:10], "%Y/%m/%d")  # Strip the last '\n' in line
            # print("date_object ={}-{}-{}".format(date_object.year,date_object.month,date_object.day))
            y,m,d = date_object.year,date_object.month,date_object.day
            if (y,m) not in dat:
                dat[(y,m)] = []
            dat[(y,m)].append(d)
    f.close()
    return dat

# 1. Read the data file
h = readfile('q62-holiday.txt')
e = readfile('q62-extra-workday.txt')

# 2. Construct the dictionary for rectangulars area-shape pair
area_shape = dict()
for i in range(1,6):
    for j in range(1,8):
        if i*j not in area_shape:
            area_shape[i*j] = []
        area_shape[i*j].append((i,j))

# 3. loop over year/month to find the maximum rectangular of each month
max_area = dict()
for y in range(2014,2015):
    for m in range(4,7):
        # calendar.prmonth(y,m)
        c = np.array(calendar.monthcalendar(y,m))
        # Set the first and the last column to 0
        c[:,0] = 0
        c[:,6] = 0
        # print('The original month calendar:\n',c)

        # find the first weekday of the current month
        fst_wkday, num_days = calendar.monthrange(y, m)
        fst_wkday = (fst_wkday + 1)%7  # Because the SUNDAY is set to the first weekday

        # Set extra-workday to 100--any positive value is OK
        if (y,m) in e:
            extras = e[(y,m)]
            for eday in extras:
                # Find the position of the current extra workday in month calendar matrix
                i = (eday + fst_wkday - 1)//7
                j = (eday + fst_wkday - 1)%7
                c[i,j] = 100
        # print('The month calendar after holidays and extra workdays setting:\n',c)

        # Search for the maximum rectangular only covering workday
        found = False
        for a in range(35,0,-1):
            # print(a)
            if a in area_shape:
                ij_list = area_shape[a]
                for (i,j) in ij_list:
                    for i0 in range(5-i+1):
                        for j0 in range(7-j+1):
                            rect = c[i0:i0+i,j0:j0+j]
                            # print(a,i,j,i0,j0, rect)
                            if np.all(rect):
                                max_area[(y,m)] = a
                                found = True
                                break
                        if found:
                            break
                    if found:
                        break
                if found:
                    break
            if found:
                break

print(max_area)

Operation result:

{(2014, 4): 16, (2014, 5): 20, (2014, 6): 16}

4. Postscript

Because I am not familiar with the processing of dates and calendars, I spent some time learning the two modules calendar and datetime in python. The problem of finding the maximum rectangle, which should be the core algorithm of this problem, is dwarfed by the date handling.

Previous: Q61: do not cross a stroke
Next: Q63: Maze rendezvous
For the general catalogue of this series, see: Programmer's interesting algorithm: detailed analysis and Python complete solution
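As a quick standalone sanity check of the day-to-(row, column) arithmetic used in step 3 of the solution above (this check is mine, not part of the original post):

```python
# Verify the index arithmetic i=(day+fst_wkday-1)//7, j=(day+fst_wkday-1)%7
# used in the solution, for April 2014 with Sunday as the first weekday.
import calendar

calendar.setfirstweekday(calendar.SUNDAY)
c = calendar.monthcalendar(2014, 4)       # list of week rows, 0 = padding

fst_wkday, num_days = calendar.monthrange(2014, 4)
fst_wkday = (fst_wkday + 1) % 7           # monthrange() reports Monday as 0

for day in (1, 15, 30):
    i = (day + fst_wkday - 1) // 7        # row in the month matrix
    j = (day + fst_wkday - 1) % 7         # column in the month matrix
    assert c[i][j] == day                 # the formula lands on the right cell
```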
https://programmer.group/programmer-s-algorithm-fun-q62-the-largest-rectangle-in-the-calendar.html
CC-MAIN-2021-49
refinedweb
1,483
50.06
Does there exist any way to make the command line arguments available to other functions without passing them as arguments to the function?

Which of the following observations is the strongest argument in favor of the hypothesis that protein structure and function are correlated?
Denatured proteins do not function normally.

What mark is used to separate arguments in a function?
A comma (,).

Function using reference variables as arguments to swap the values of a pair of integers?

#include <iostream>
using namespace std;

void swap(int &i, int &j) {
    int temp = i;
    i = j;
    j = temp;
}

int main() {
    int i, j;
    cin >> i >> j;
    swap(i, j);
    cout << i << " " << j << endl;
    return 0;
}

Functions with no arguments and no return values?
Are usable. Example:

void hello (void) { puts ("Hello"); }

If you want that any wildcard characters in the command line arguments should be appropriately expanded, are you required to make any special provision? If yes, which?
Yes. You have to compile the program like: tcc myprog wildargs.obj

Write a function that will scan a character string passed as an argument and convert all lowercase letters into uppercase letters?

/* Write a function that will scan a character string passed as an argument
   & convert all lowercase characters into their uppercase equivalents */
#include <stdio.h>
#include <ctype.h>

int str_upp(char c[])
{
    int i;
    for (i = 0; c[i] != '\0'; i++)
        c[i] = toupper(c[i]);
    return i;
}

The facts that a function assumes to be true of the arguments that it receives are called?
Preconditions.

What happens when a C program passes an array as a function argument?
When an array name is passed as a function argument, the address of the first element is passed to the function. In a way, this is implicit call by reference. The receiving function can treat that address as a pointer, or as an array name, and it can manipulate the actual calling argument if desired.

What do you mean by command line argument?
Command-line arguments are values supplied on the command line when the program is invoked, i.e. at run time; they are not written into the program itself.

What passes arrays as arguments to a function?
By reference: the array name decays to a pointer to its first element, so the function receives the array's address rather than a copy.

What are the arguments that God does not exist?
Please do not include comments that do not answer this question. Opinions from contributors:

Opinion: Humans have always been very curious about their origins…. With the advent of science, not only human origins can be investigated but also the origins of the Earth and Universe. Obviously why the world seemed so wonderful and why anything should bother to exist seemed inexplicable to them. For some psychological or evolutionary reason, it seemed logical to the ancient peoples that the world and themselves should have come into existence through the work of some powerful being (invisible, yet powerful). They couldn't find such a force or being. But there were other advantages to the notion of a god than just explaining the improbability of nature. The notion of a spirit watching over them seemed comforting to some. Children were told about their god who would protect them. If a child asked where did the world come from, a ready-made answer could easily be found. Over generations of story telling, a culture obviously becomes convinced of a real god. It is merely a matter of culture - where one was raised and in what time - which particular brand of nonsense infects the child brain, as Dawkins puts it - that determines which religion an individual will follow.

Opinion: Scientific Arguments: These arguments are based on science; specifically, the large number of cases where what the scientific method has near-proven about our world and our universe is largely incompatible with religious dogma. There are far more scientific contradictions of God than philosophical ones. The bedrock of the so-called intelligent design movement is that matter cannot come from nothing.
One of the many reasons intelligent design isn't normally allowed in the class room is that physics shows that matter does indeed spontaneously materialize, and that the true evidence of a universe with a God, would be one in which nothing existed. In fact, it has been said by Nobel Prize winning scientists that because there is material in the Universe that is proof God doesn't exist. 'Intelligent Design', and most quasi-scientific religious arguments, are based on the Argument from Improbability. It usually manifests itself as something akin to the following: "Phenomenon X is unbelievably complex. All of its parts work together in perfect order. This could not have spontaneously self-generated?" Opinion In reference to a biological system: The discovery of Evolution. No sane person has ever suggested that a tree, or a bacterium, or a fish, or a person, came about by chance. The idea is absolutely ludicrous. The religious people claim that evolution is a theory of chance, and indeed, if the two alternatives were 'it generated itself by chance' and 'it was created' then intelligent design may carry some weight. But it does not, because nobody is suggesting chance as an alternative to design. The two opposing theories are intelligent design and evolution by natural selection. The theory of evolution is one of stunning simplicity - there are very, very slight changes to an organism in each generation, and they are small enough changes that anybody could accept they had come about by chance. Some of these very small changes will be advantageous, and increase an organism's survival chances, thereby causing the genes for themselves to become more prevalent in the gene pool. Over a vast timescale of millions of years, the effects of these tiny changes add up to become greatly noticeable, and giving us the wealth of diverse life we have today. Intelligent design immediately raises a huge question: if everything complex was designed, then who designed the designer? 
If God has 'always existed', then why could not life have 'always existed'? Ditto the spontaneous self-generation of God. Opinion Science has provided much more accurate and verifiable explanations for the observable world and universe; so good, it has been said, that had we had these scientific explanations to begin with, religion would have never taken root in the first place. Opinion Most religions claim that their God is a loving God, and that he loves and cares for his people. Certainly, mainstream Christianity, Islam and Judaism all preach this. However, there is the rather obvious problem that the world includes a lot of suffering, and evil. Religions attempt to overcome the problem of evil by attributing evil to Satan, however, if God were indeed a sovereign and all-powerful God, his authority would surely preside over all things including Satan,. Opinion It has been pointed out by Richard Dawkins that if you are a Christian, you have been told that Christianity is correct. You believe this. You also think you know that all other religions are completely incorrect and belief in them would be heretical. If you were a Muslim or Jew or Hindu you would think that you know that your respective religion were truly and undeniably the correct one and believe passionately that Christianity were incorrect. As you see, the idea of God is simply an opinion, with no actual truth in any statement about him anywhere. There cannot be a truth if all other religions in the world think the exact opposite. And their religion isn't true either as every other religion in the world other than themselves is against their doctrines too. God is in the eye of the beholder as it were. Most of the evidence 'for' God (even ignoring the fact that it is largely pseudo-scientific ramble) is evidence 'for' Yahweh, 'for' Allah, 'for' Baal and Jupiter and every other creator being that has ever been postulated. 
So it does not go anywhere towards proving one particular set of fantastical beliefs.

Opinion: Rebuttal of Pascal's Wager. Pascal's wager, simplified, is this: Believe in God, and if you're right, you are rewarded with heaven. If you're wrong, you get nothing. Don't believe in God, and if you're right, you get nothing. If you're wrong, you get punished with Hell. Therefore, it makes more sense to believe in God. This is clearly fallacious on two counts: firstly, that faked belief in God (I know that I personally could never 'believe' in something for the sake of a bet) is unlikely to win you his favour, and secondly that it would be ludicrously easy to worship the wrong God, since there are thousands of them that have been proposed, and hundreds of belief systems that are currently followed.

God said "they [humans] will live no longer than 120 years", yet somebody lived to 122 years. Therefore, God's word is not correct, though it is said to be perfect. A perfect God's word must be correct. Therefore, God cannot be perfect, and therefore cannot exist.

Can there be at least some solution to determine the number of arguments passed to a variable argument list function?

Yes. Use a control parameter before the variable argument list that determines how many arguments will follow. E.g., the printf() function uses a string parameter as the control parameter. The number of escape sequences in the string determines the number of arguments that are expected to follow, and each escape sequence in the string is expanded from left to right according to the arguments that follow. A simpler approach is to pass the number of arguments that will follow as a named argument. E.g.,

void print_nums(int n, ...)
{
    // n tells you how many numbers are to be expected in the va_list.
}

Answer: Yes, there can be a solution.

#1: explicitly specifying:

extern void variadic_1 (int n, ...);
variadic_1 (3, "first", "second", "third");

#2: using a terminator:

extern void variadic_2 (...);
variadic_2 ("first", "second", "third", NULL);

Which command line tools can you use to test the network functionality?
ping.exe, tracert.exe, netstat.exe and so on.
http://www.answers.com/Q/Does_there_exist_any_way_to_make_the_command_line_arguments_available_to_other_functions_without_passing_them_as_arguments_to_the_function&updated=1&waNoAnsSet=1
CC-MAIN-2017-17
refinedweb
1,666
61.36
hello, can somebody help me with the coding to make the usb weather board from sparkfun () work with the rn-xv wifly module () ? thanks

"hello, can somebody help me with the coding"

This implies that you are going to do some of the work. So, what have you done so far? If you meant "hello, can somebody do the coding for me", you need to post this over in Gigs and Collaboration, and offer some compensation.

well, I have some programming skills, I just need some advice. I have the sketch () for the board and was trying to use the GitHub - harlequin-tech/WiFlyHQ: WiFly RN-XV Arduino Library for the wifly module, but after loading the header file from the wiflyhq lib and defining the variables for wifi I get something like $%^*#@()$@ in the console and then it stops … this is the code added to the sketch above:

#include <WiFlyHQ.h>
const char mySSID[] = "myssid";
const char myPassword[] = "my-wpa-password";
https://forum.arduino.cc/t/usb-weather-board-rn-xv-wifly-module/126934
CC-MAIN-2021-43
refinedweb
160
62.51
27 June 2013 16:14 [Source: ICIS news]

CAMPINAS, Within the last 12 months, from June 2012 to May 2013, apparent consumption of chemicals rose 2.6%, Abiquim said. According to statistics technical director Fatima Giovanna Coviello Ferreira, domestic demand is still being met mainly by imports, whose volumes had a 32.8% increase year over year for the first five months of 2013. Imports of intermediates for fertilizers and basic petrochemicals increased 51.6% and 27.1% in May, respectively, Abiquim said. Urea and methanol imports rose 78.7% and 25% in volume, respectively, the organisation said. Polyethylene (PE) imports increased 20.6% in the period,
http://www.icis.com/Articles/2013/06/27/9682639/brazil-chemical-production-increases-2.94-in-may.html
CC-MAIN-2014-41
refinedweb
109
62.95
# Multithreading in Photon

**What this article is about**

In this article, we will talk about multithreading in the backend:

* how it is implemented
* how it is used
* what can be done
* what we invented ourselves

All these questions are relevant only if you develop something for the server side - modify the Server SDK code, write your own plugin, or even start some server application from scratch.

#### What is Photon?

*Photon* or *Photon Engine* is a well-known solution for implementing multiplayer games. Using one of its client libraries, developers (or even a single developer) implement data exchange between players. The client library establishes a connection to the backend, which can be the *Photon Cloud* or the developer's own servers.

#### How does Photon solve the issue of multithreading?

The Photon server application accepts requests from multiple client connections at the same time. I will call such connections ***peers***. These requests form queues, one for each peer. If the peers are connected to the same room, their queues are merged into one - the room queue.

There are up to several thousand such rooms, and their request queues are processed in parallel.

As a basis for the implementation of task queues in Photon, the Retlang library was used, which was developed on the basis of the Jetlang library.

#### Why don't we use Task and async/await

It's because of the following considerations:

1. Photon Server development started before the appearance of these features.
2. The number of tasks that are performed by fibers is huge - tens of thousands per second. Therefore, there was no point in adding another abstraction which, as it seems to me, also causes GC (Garbage Collector) pressure. The fiber abstraction is much more lightweight, so to speak.
3. For sure, there is a *TaskScheduler* that does the same thing as fibers (no doubt I would have learned about it in the comments), but in general, we did not want to reinvent the wheel.

#### What is a Fiber?
A fiber is a class that implements a command queue. The commands are queued and executed **one after the other** - FIFO. We can say that the multiple-writers/single-reader pattern is implemented here. Once again, I want to draw attention to the fact that the commands are executed in the order in which they were received, i.e., one after the other. This is the basis for the safety of data access in a multithreaded environment.

Although in *Photon* we use only one fiber type, namely *PoolFiber*, the library provides five types. All of them implement the *IFiber* interface. Here is a short description of each:

* ***ThreadFiber*** - an **IFiber** backed by a dedicated thread. Use for frequent or performance-sensitive operations.
* ***PoolFiber*** - an **IFiber** backed by the .NET thread pool. Note: execution is still sequential and only executes on one pool thread at a time. Use for infrequent, less performance-sensitive executions, or when one desires to not raise the thread count.
* ***FormFiber***/***DispatchFiber*** - an **IFiber** backed by a **WinForms**/**WPF** message pump. The **FormFiber**/**DispatchFiber** entirely removes the need to call Invoke or BeginInvoke to communicate with a window from a different thread.
* ***StubFiber*** - useful for deterministic testing. Fine-grained control is given over execution to make **testing races simple**. Executes all actions on the caller thread.

#### About PoolFiber

Let's talk about task execution in PoolFiber. Even though it uses a thread pool, the tasks in it are still executed sequentially and only one thread is used at a time. It works like this:

1. We enqueue a task in the fiber and it starts to be executed. To do this, *ThreadPool.QueueUserWorkItem* is called. At some point, one thread is selected from the pool and it performs this task.
2.
If, while the first task was running, we enqueued several more tasks, then at the end of the first task, all the new ones are taken from the queue and *ThreadPool.QueueUserWorkItem* is called again, so that all these tasks are sent for execution. A new thread from the pool will be selected for them. When it finishes, if there are tasks in the queue, everything repeats from the beginning. That is, each time a new batch of tasks is executed by a new thread from the pool, but ***only*** ONE at a time. Therefore, if all the tasks for working with a game room are placed in its fiber, you can safely access the room data from them (the tasks). If an object is accessed from tasks running in different fibers, synchronization is required.

#### Why PoolFiber

*Photon* uses *PoolFiber* everywhere. First of all, simply because it does not create additional threads, so anyone who needs a fiber can have their own. By the way, we modified it a little and now it can't be stopped, i.e., *PoolFiber.Stop* will not stop the execution of the current tasks. It was important for us.

You can enqueue tasks in the fiber from any thread. All this is thread-safe. A task that is currently being executed can also enqueue new tasks in the fiber in which it is being executed.

There are three ways to set a task in a fiber:

1. put the task in the queue
2. put a task in the queue that will be executed after a certain interval
3. put a task in the queue that will be executed regularly

It looks something like this:

```
// enqueue a task
fiber.Enqueue(() => { /* some action code */ });

// schedule a task to be executed in 10 seconds
var scheduledAction = fiber.Schedule(() => { /* some action code */ }, 10_000);
...
// stop the timer
scheduledAction.Dispose();

// schedule a task to be executed in 10 seconds and repeated every 5 seconds
var scheduledAction = fiber.Schedule(() => { /* some action code */ }, 10_000, 5_000);
...
// stop the timer
scheduledAction.Dispose();
```

For tasks that run at some interval, it is important to keep the reference to the object returned by *fiber.Schedule*. This is the only way to stop the execution of such a task.

#### Executors

Now about the executors. These are the classes that actually execute the tasks. They implement the Execute(Action a) and Execute(List<Action> a) methods. *PoolFiber* uses the second one. That is, the tasks reach the executor in a batch. What happens to them next depends on the executor. At first, we used the *DefaultExecutor* class. All it does is:

```
public void Execute(List<Action> toExecute)
{
    foreach (var action in toExecute)
    {
        Execute(action);
    }
}

public void Execute(Action toExecute)
{
    if (_running)
    {
        toExecute();
    }
}
```

#### What else did we invent ourselves

**BeforeAfterExecutor**

Later, we added another executor to solve our logging problems. It is called *BeforeAfterExecutor*. It "wraps" the executor passed to it. If nothing is passed, a *FailSafeBatchExecutor* is created. A special feature of *BeforeAfterExecutor* is the ability to perform one action before executing the task list and another action after executing it. The constructor looks like this:

`public BeforeAfterExecutor(Action beforeExecute, Action afterExecute, IExecutor executor = null)`

What is it used for? The fiber and the executor have the same owner. When creating an executor, two actions are passed to it. The first one adds key/value pairs to the thread context, and the second one removes them, thereby acting as a cleanup step. The pairs added to the thread context are attached by the logging system to the messages, so we can see some metadata of the object that produced the message.
Example:

```
Action beforeAction = () =>
{
    log4net.ThreadContext.Properties["Meta1"] = "value";
};
Action afterAction = () => log4net.ThreadContext.Properties.Clear();

// we create an executor
var e = new BeforeAfterExecutor(beforeAction, afterAction);

// we create a PoolFiber
var fiber = new PoolFiber(e);
```

Now, if something is logged from a task that runs in *fiber*, log4net will add the *Meta1* tag with the value *value*.

**ExtendedPoolFiber and ExtendedFailSafeExecutor**

There is another thing that was not in the original version of *retlang*, and that we developed later. This was preceded by the following story: there is *PoolFiber* (the one that runs on top of the .NET thread pool). In a task that this fiber executes, we needed to execute an HTTP request synchronously.

We did it in a simple way like this:

1. before executing the request, we create a *sync event*;
2. the task that executes the request is sent to another fiber and, upon completion, puts the *sync event* into the signaled state;
3. after that, we start to wait for the *sync event*.

It was not the best solution in terms of scalability, and it began to fail unexpectedly. It turned out that the task that we put in another fiber in step two could land in the queue of the very thread that started to wait for the *sync event*. Thus, we get a deadlock. Not always. But often enough to worry about it.

The solution was implemented in *ExtendedPoolFiber* and *ExtendedFailSafeExecutor*. We came up with the idea of putting the entire fiber on pause. In this state, it can accumulate new tasks in the queue, but does not execute them. In order to pause the fiber, the *Pause* method is called. As soon as it is called, the fiber (namely, the fiber's executor) waits until the current task is completed and freezes. All other tasks will wait for the first of two events:

1. a call of the *Resume* method
2. a timeout (specified when calling the *Pause* method)
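To make the mechanics concrete, here is a toy Python sketch (not Photon's actual C# code) of a pausable sequential fiber: a single worker drains a FIFO queue, pausing freezes it between tasks, and resuming can run one extra task ahead of everything that queued up while paused. The name `PausableFiber` and all details are made up for illustration:

```python
import threading
from queue import Queue

class PausableFiber:
    """Toy sequential task queue: strict FIFO, one task at a time, pausable."""

    def __init__(self):
        self._tasks = Queue()
        self._resumed = threading.Event()
        self._resumed.set()  # start in the running state
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            task = self._tasks.get()  # block until a task arrives
            self._resumed.wait()      # freeze here while paused
            task()                    # tasks run strictly one after another

    def enqueue(self, task):
        self._tasks.put(task)         # safe to call from any thread

    def pause(self):
        self._resumed.clear()

    def resume(self, first_task=None):
        # optionally run a task ahead of everything already queued,
        # analogous to passing a task to Resume() in ExtendedPoolFiber
        if first_task is not None:
            first_task()
        self._resumed.set()

results, done = [], threading.Event()
fiber = PausableFiber()
fiber.pause()
fiber.enqueue(lambda: results.append("queued task"))
fiber.enqueue(done.set)
fiber.resume(first_task=lambda: results.append("state loaded"))
done.wait(timeout=2)
print(results)  # ['state loaded', 'queued task']
```

Because the worker blocks on the event while paused, the task passed to `resume` is guaranteed to run before anything that accumulated in the queue, which is exactly the room-state-loading trick described below.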
In the *Resume* method, you can also set a task that will be executed before all the queued tasks. We use this trick when the plugin needs to load the room state using an HTTP request. In order for players to see the updated state of the room immediately, the room's fiber is paused. When calling the *Resume* method, we pass it a task that applies the loaded state, so all other tasks are already working with the updated state.

By the way, the need to put the fiber on pause completely killed the ability to use *ThreadFiber* for the task queue of game rooms.

**IFiberAction**

*IFiberAction* is an experiment to reduce the load on the GC. We can't control the process of creating actions in .NET. Therefore, it was decided to replace the standard actions with instances of a class that implements the *IFiberAction* interface. It is assumed that instances of such classes are taken from an object pool and returned there immediately after completion. This reduces the load on the GC.

The *IFiberAction* interface looks like this:

```
public interface IFiberAction
{
    void Execute();
    void Return();
}
```

The *Execute* method contains exactly what needs to be executed. The *Return* method is called after *Execute* when it is time to return the object to the pool.

Example:

```
public class PeerHandleRequestAction : IFiberAction
{
    public static readonly ObjectPool Pool = ...; // pool initialization omitted

    public OperationRequest Request { get; set; }
    public PhotonPeer Peer { get; set; }

    public void Execute()
    {
        this.Peer.HandleRequest(this.Request);
    }

    public void Return()
    {
        this.Peer = null;
        this.Request = null;
        Pool.Return(this);
    }
}

// now we use it the following way
var action = PeerHandleRequestAction.Pool.Get();
action.Peer = peer;
action.Request = request;
peer.Fiber.Enqueue(action);
```

#### Conclusion

In conclusion, I will briefly summarize: to ensure thread safety in *Photon*, we use task queues, which in our case are represented by fibers.
The main type of fiber that we use is *PoolFiber* and the classes that extend it. *PoolFiber* implements a task queue on top of the standard .NET thread pool. Due to the small performance footprint of *PoolFiber*, everyone who needs a fiber can have their own. If you need to pause the task queue, use *ExtendedPoolFiber*.

The executors, which implement the *IExecutor* interface, directly perform the tasks in fibers. *DefaultExecutor* is good for everyone, but in case of an exception it loses the entire remainder of the batch of tasks that was passed to it for execution. *FailSafeExecutor* **seems like a reasonable choice in this regard**. If you need to perform some action before and after the executor executes a batch of tasks, *BeforeAfterExecutor* can be useful.
https://habr.com/ru/post/559314/
#include <CbcEventHandler.hpp>

Up front: we're not talking about unanticipated events here. We're talking about anticipated events, in the sense that the code is going to make a call to event() and is prepared to obey the return value that it receives. The return value associated with an event can be changed at any time.

Definition at line 77 of file CbcEventHandler.hpp.

Types:
- Data type for event/action pairs. Definition at line 115 of file CbcEventHandler.hpp.
- Events known to cbc. Definition at line 84 of file CbcEventHandler.hpp.
- Action codes returned by the event handler. Specific values are chosen to match ClpEventHandler return codes. Definition at line 100 of file CbcEventHandler.hpp.

Member functions:
- Default constructor.
- Copy constructor.
- Destructor.
- Return the action to be taken for an event: return the action that should be taken in response to the event passed as the parameter. The default implementation simply reads a return code from a map.
- Assignment.
- Clone (virtual) constructor.
- Set model. Definition at line 162 of file CbcEventHandler.hpp.
- Get model. Definition at line 167 of file CbcEventHandler.hpp.
- Set the default action. Definition at line 172 of file CbcEventHandler.hpp. References dfltAction_.
- Set the action code associated with an event. Definition at line 177 of file CbcEventHandler.hpp.

Member data:
- Pointer to the associated CbcModel. Definition at line 195 of file CbcEventHandler.hpp. Referenced by getModel() and setModel().
- Default action. Definition at line 199 of file CbcEventHandler.hpp. Referenced by setDfltAction().
- Pointer to a map that holds non-default event/action pairs. Definition at line 203 of file CbcEventHandler.hpp. Referenced by setAction().
http://www.coin-or.org/Doxygen/CoinAll/class_cbc_event_handler.html
Mac::Spotlight::MDQuery - Make a query into OS X Spotlight

```
use Mac::Spotlight::MDQuery ':constants';
use Mac::Spotlight::MDItem ':constants';

$mdq = new Mac::Spotlight::MDQuery('kMDItemTitle == "*Battlestar*"c');
$mdq->setScope(kMDQueryScopeComputer);
$mdq->execute();
$mdq->stop();
@results = $mdq->getResults();
foreach $r (@results) {
    print $r->get(kMDItemTitle), "\n";
    print $r->get(kMDItemKind), "\n";
}
```

See the MDItem POD for MDItem's methods and a complete list of all the available Spotlight attributes.

new

Create a new MDQuery that will run the given query string when it executes. The query string must be supplied to the constructor and cannot be changed once the object is created. For the full format of query strings see the URL. For the full list of attributes that can be queried see the POD for MDItem. Do note that unlike the mdfind command, you must provide at least one attribute to be queried. (If you don't provide one to mdfind it picks a few for you.) new() will return undef if the query string is malformed or if the underlying Core Foundation object cannot be allocated.

setScope

setScope() takes a list of zero or more MDQuery constants which define the scope of the query when it is executed. The constants are imported into your current namespace when you use the ':constants' tag. Currently there are three defined constants:

- Limit the query to the current user's home directory
- Limit the query to all locally mounted volumes
- Try to include currently mounted remote volumes in the query

You can do $mdq->setScope() which will effectively stop the query from doing anything when it is executed.

scope

Return the list of scopes set for this query.

execute

Do it! This runs the query and holds the results until they are retrieved with getResults(). If the query fails to start for any reason execute() will return undef. All MDQuery queries are executed synchronously so execute() will not return until the Spotlight query is complete.
Once execute() is called on an MDQuery object, you cannot call execute() on that object again.

stop

Even though execute() currently runs synchronously, that may not always be the case in the future. Spotlight has the ability to return an initial set of results and then continue to update those results in the background as it finds more matches. Running execute() synchronously tells Spotlight not to return anything until it finds everything. It is still a good idea to call stop() before you call getResults(). If you don't, then if something changes in Mac::Spotlight in the future, you may find that Spotlight is trying to update your search results at the same time as you are trying to access them.

getResults

Returns an array of zero or more MDItem objects. Each object represents one filesystem object that matched your query. See the POD for MDItem.

EXPORT

None by default. If you use the ":constants" tag when you use Mac::Spotlight::MDQuery, you will pull the kMDQuery* constants into your current namespace. If you choose not to, you can still access the constants via their fully qualified namespace.
http://search.cpan.org/~miyagawa/Mac-Spotlight-0.06/MDQuery/lib/Mac/Spotlight/MDQuery.pm
This is my code!

```python
def compute_bill(food):
    total = 0
    for x in food:
        price = prices[x]
        st = stock[x]
        if stock[x] > 0:
            total = total + price
            st = stock[x] - 1
    return total

compute_bill(shopping_list)
```

The message I got is 'Oops, try again. calling compute_bill with a list containing 1 pear, 2 oranges and 8 bananas resulted in 38.0 instead of the correct 30.0.'

I think 38.0 is the correct answer. Please tell me what I should do and what the number 30.0 means! I'm not a native speaker. Maybe I just can't understand the instructions.
https://discuss.codecademy.com/t/12-stocking-out-a-day-supermarket/45340
From navigating to a new place to picking out new music, algorithms have laid the foundation for large parts of modern life. Similarly, artificial intelligence is booming because it automates and backs so many products and applications. Recently, I addressed some analytical applications for TensorFlow. In this article, I'm going to lay out a higher-level view of Google's TensorFlow deep learning framework, with the ultimate goal of helping you to understand and build deep learning algorithms from scratch.

An Introduction to Deep Learning

Backpropagation is a popular algorithm that has had a huge impact in the field of deep learning. It allows ANNs to learn by themselves based on the errors they generate while learning. To further enhance the scope of an ANN, architectures like Convolutional Neural Networks, Recurrent Neural Networks, and Generative Networks have come into the picture. Before we delve into them, let's first understand the basic components of a neural network.

Neurons and Artificial Neural Networks

An artificial neural network is a representational framework that extracts features from the data it's given. The basic computational unit of an ANN is the neuron. Neurons are connected using artificial layers through which the information passes. As the information flows through these layers, the neural network identifies patterns in the data. This type of processing makes ANNs useful for several applications, such as prediction and classification.

Now let's take a look at the basic structure of an ANN. It consists of three layers: the input layer; the output layer, which is always fixed or constant; and the hidden layer. Inputs initially pass through an input layer. This layer always accepts a constant set of dimensions. For instance, if we wanted to train a classifier that differentiates between dogs and cats, the inputs (in this case, images) should be of the same size.
The input then passes through the hidden layers, where the network updates the weights and recognizes the patterns. In the final step, we classify the data at the output layer.

Weights and Biases

Every neuron inside a neural network is associated with two parameters: weight and bias. The weight is a number that controls the signal between any two neurons. If the output is desirable, meaning that it is close to the one we expected the network to produce, then the weights are ideal. If the same network generates an erroneous output that is far from the actual one, then the network alters the weights to improve the subsequent results.

Bias, the other parameter, is the algorithm's tendency to consistently learn the wrong thing by not taking into account all the information in the data. For the model to be accurate, bias needs to be low. If there are inconsistencies in the data set, like missing values, fewer data tuples, or erroneous input data, the bias will be high and the predicted values could be wrong.

Working of a Neural Network

Before we get started with TensorFlow, let's examine how a neural network produces an output from weights, biases, and inputs by taking a look at the first neural network, called Perceptron, which dates back to 1958. The Perceptron network is a simple binary classifier. Understanding how this works will allow us to comprehend the workings of a modern neuron.

The Perceptron network is a supervised machine learning technique that uses a binary classifier function by mapping a vector of binary variables to a single binary output. It works as follows:

- Multiply the inputs (x1, x2, x3) of the network by their corresponding weights (w1, w2, w3).
- Add the multiplied weights and inputs together. This is called the weighted sum, denoted by x1*w1 + x2*w2 + x3*w3.
- Apply the activation function. Determine whether the weighted sum is greater than a threshold (say, 0.5); if yes, assign 1 as the output, otherwise assign 0.
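The three steps above can be written directly in plain Python (the inputs, weights, and threshold below are made-up illustrative values):

```python
def perceptron(inputs, weights, threshold=0.5):
    # steps 1 and 2: multiply each input by its weight and sum the products
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    # step 3: step activation - fire (1) only if the sum clears the threshold
    return 1 if weighted_sum > threshold else 0

print(perceptron([1, 0, 1], [0.4, 0.8, 0.3]))  # 0.4 + 0.3 = 0.7 > 0.5, so 1
print(perceptron([1, 0, 0], [0.4, 0.8, 0.3]))  # 0.4 <= 0.5, so 0
```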
This is a simple step function. Of course, Perceptron is a simple neural network that doesn't wholly consider all the concepts necessary for an end-to-end neural network. Therefore, let's go over all the phases that a neural network has to go through to build a sophisticated ANN.

Input

A neural network has to be defined with the number of input dimensions, output features, and hidden units. All these metrics fall in a common basket called hyperparameters. Hyperparameters are numeric values that determine and define the neural network structure. Weights and biases are set randomly for all neurons in the hidden layers.

Feed Forward

The data is sent into the input and hidden layers, where the weights get updated for every iteration. This creates a function that maps the input to the output data. Mathematically, it is defined as y = f(x), where y is the output, x is the input, and f is the activation function. For every forward pass (when the data travels from the input to the output layer), the loss is calculated (actual value minus predicted value). The loss is then sent back (backpropagation) and the network is retrained using a loss function.

Output Error

The loss is gradually reduced using gradient descent and the loss function. The gradient descent can be calculated with respect to any weight and bias.

Backpropagation

We backpropagate the error through each and every layer using the backpropagation algorithm.

Output

By minimizing the loss, the network re-updates the weights for every iteration (one forward pass plus one backward pass) and increases its accuracy.

As we haven't yet talked about what an activation function is, I'll expand on that in the next section.

Activation Functions

An activation function is a core component of any neural network. It learns a non-linear, complex functional mapping between the input and the response variables or output. Its main purpose is to convert an input signal of a node in an ANN to an output signal.
That output signal is the input to the subsequent layer in the stack. There are several types of activation functions available that could be used for different use cases. You can find a list comprising the most popular activation functions along with their respective mathematical formulae here.

Now that we understand what a feed-forward pass looks like, let's also explore the backward propagation of errors.

Loss Function and Backpropagation

During training of a neural network, there are too many unknowns to be deciphered. As a result, calculating the ideal weights for all the nodes in a neural network is difficult. Therefore, we use an optimization function through which we can navigate the space of possible weights to make good predictions with a trained neural network.

We use a gradient descent optimization algorithm, wherein the weights are updated using the backpropagation of error. The term "gradient" in gradient descent refers to an error gradient: the model with a given set of weights is used to make predictions, and the error for those predictions is calculated. The gradient descent optimization algorithm is used to calculate the partial derivatives of the loss function (errors) with respect to any weight w and bias b. In practice, this means that the error vectors are calculated starting from the final layer and then moving toward the input layer while updating the weights and biases, i.e., backpropagation. This is based on differentiation of the respective error terms along each layer.

To make our lives easier, however, these loss functions and backpropagation algorithms are readily available in neural network frameworks such as TensorFlow and PyTorch. Moreover, a hyperparameter called the learning rate controls the rate of adjustment of the weights of a network with respect to the gradient descent. The lower the learning rate, the slower we travel down the slope (toward the optimum, the so-called ideal case) while minimizing the loss.
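To see the update rule in action, here is a minimal one-parameter example in plain Python (the data point, learning rate, and iteration count are illustrative choices, not anything prescribed by TensorFlow):

```python
# fit y = w * x to a single data point (x = 2, y = 4) with squared-error loss
x, y_true = 2.0, 4.0
w = 0.0              # initial weight
learning_rate = 0.1

for _ in range(50):
    y_pred = w * x
    # derivative of loss = (y_pred - y_true)**2 with respect to w
    grad = 2 * (y_pred - y_true) * x
    w -= learning_rate * grad  # step against the gradient

print(round(w, 4))  # approaches the ideal weight w = 2.0
```

Each iteration moves the weight against its error gradient, scaled by the learning rate; a smaller learning rate would make the same descent take more iterations.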
Mechanics of TensorFlow 2.0

TensorFlow is a powerful neural network framework that can be used to deploy high-level machine learning models into production. It was open-sourced by Google in 2015. Since then, its popularity has increased, making it a common choice for building deep learning models. On October 1, a new, stable version was released, called TensorFlow 2.0, with a few major changes:

- Eager Execution by Default: Instead of creating tf.session(), we can directly execute the code as usual Python code. In TensorFlow 1.x, we had to create a TensorFlow graph before computing any operation. In TensorFlow 2.0, however, we can build neural networks on the fly.
- Keras Included: Keras is a high-level neural network API built on top of TensorFlow. It is now integrated into TensorFlow 2.0 and we can directly import Keras as tf.keras, and thereby define our neural network.
- TF Data Sets: A lot of new data sets have been added to work and play with in a new module called tf.data.
- 1.0 Support: All the existing TensorFlow 1.x code can be executed using TensorFlow 2.0; we need not modify any of our previous code.
- Major documentation and API cleanup changes have also been introduced.

The TensorFlow library was built around computational graphs and a runtime for executing such graphs. Now, let's perform a simple operation in TensorFlow.

```
a = 13
b = 24
prod = a * b
sum = a + b
result = prod / sum
print(result)
```

Here, we declared two variables a and b. We calculated the product of those two variables using the multiplication operator in Python (*) and stored the result in a variable called prod. Next, we calculated the sum of a and b and stored it in a variable named sum. Lastly, we declared the result variable that divides the product by the sum, and then we printed it. This explanation is just a Pythonic way of understanding the operation. In TensorFlow, each operation is considered as a computational graph.
This is a more abstract way of describing a computer program and its computations. It helps in understanding the primitive operations and the order in which they are executed. In this case, we first multiply a and b, and only when this expression is evaluated, we take their sum. Later, we take prod and sum, and divide them to output the result.

TensorFlow Basics

To get started with TensorFlow, we should be aware of a few essentials related to computational graphs. Let's discuss them in brief:

- Variables and Placeholders: TensorFlow uses the usual variables, which can be updated at any point in time, except that these need to be initialized before the graph is executed. Placeholders, on the other hand, are used to feed data into the graph from outside. Unlike variables, they don't need to be initialized. Consider a regression equation, y = mx + c, where x and y are placeholders, and m and c are variables.
- Constants and Operations: Constants are the numbers that cannot be updated. Operations represent nodes in the graph that perform computations on data.
- Graph: The backbone that connects all the variables, placeholders, constants, and operators.

Installing TensorFlow 2.0

Prior to installing TensorFlow 2.0, it's essential that you have Python on your machine. Let's look at its installation procedure.

Python for Windows

You can download it here. Click on "Latest Python 3 release - Python x.x.x". Select the option that suits your system (32-bit: Windows x86 executable installer, or 64-bit: Windows x86-64 executable installer). After downloading the installer, follow the instructions that are displayed on the setup wizard. Make sure to add Python to your PATH using environment variables.

Python for OSX

You can download it here. Click on "Latest Python 3 release - Python x.x.x". Select "macOS 64-bit installer," and run the file. Python on OSX can also be installed using Homebrew (package manager).
To do so, type the following commands:

```
/usr/bin/ruby -e "$(curl -fsSL)"
export PATH="/usr/local/bin:/usr/local/sbin:$PATH"
brew update
brew install python
```

Python for Debian/Ubuntu

Invoke the following commands:

```
$ sudo apt update
$ sudo apt install python3-dev python3-pip
```

This installs the latest version of Python and pip on your system.

Python for Fedora

Invoke the following commands:

```
$ sudo dnf update
$ sudo dnf install python3 python3-pip
```

This installs the latest version of Python and pip on your system.

After you've got Python, it's time to install TensorFlow in your workspace. To fetch the latest version, pip3 needs to be updated. To do so, type the command:

```
$ pip3 install --upgrade pip
```

Now, install TensorFlow 2.0:

```
$ pip3 install --user --upgrade tensorflow
```

This automatically installs the latest version of TensorFlow onto your system. The same command is also applicable to update an older version of TensorFlow. The argument tensorflow in the above command could be any of these:

- tensorflow — Latest stable release (2.x) for CPU-only.
- tensorflow-gpu — Latest stable release with GPU support (Ubuntu and Windows).
- tf-nightly — Preview build (unstable). Ubuntu and Windows include GPU support.
- tensorflow==1.15 — The final version of TensorFlow 1.x.

To verify your install, execute the code:

```
$ python3 -c "import tensorflow as tf; print('tensorflow version', tf.__version__); print(tf.add(1, 2));"
```

The output should be printed as follows:

```
tensorflow version 2.0.0
tf.Tensor(3, shape=(), dtype=int32)
```

Setting up the Environment: Jupyter

Now that you have TensorFlow on your local machine, Jupyter notebooks are a handy tool for setting up the coding space. Execute the following command to install Jupyter on your system:

```
$ pip3 install jupyter
```

Working on Tensor Data

Now that everything is set up, let's explore the basic fundamentals of TensorFlow. Tensors have previously been used largely in math and physics.
In math, a tensor is an algebraic object that obeys certain transformation rules. It defines a mapping between objects and is similar to a matrix, although a tensor has no specific limit to its possible number of indices. In physics, a tensor has the same definition as in math, and is used to formulate and solve problems in areas like fluid mechanics and elasticity. Although tensors were not deeply used in computer science, after the machine learning and deep learning boom, they have become heavily involved in solving data crunching problems.

Scalars

The simplest tensor is a scalar, which is a single number and is denoted as a rank-0 tensor or a 0th order tensor. A scalar has magnitude but no direction.

Vectors

A vector is an array of numbers and is denoted as a rank-1 tensor or a 1st order tensor. Vectors can be represented as either column vectors or row vectors. A vector has both magnitude and direction. Each value in the vector gives the coordinate along a different axis, thus establishing direction. It can be depicted as an arrow; the length of the arrow represents the magnitude, and the orientation represents the direction.

Matrices

A matrix is a 2-D array of numbers where each element is identified by a pair of indices, row and column. A matrix is denoted as a rank-2 tensor or a 2nd order tensor. In simple terms, a matrix is a table of numbers.

Tensors

A tensor is a multi-dimensional array with any number of indices. Imagine a 3-D array of numbers, where the data is arranged as a cube: that's a tensor. An nD array of numbers is a tensor as well. Tensors are usually used to represent complex data. When the data has many dimensions (>=3), a tensor is helpful in organizing it neatly. After initializing, a tensor of any number of dimensions can be processed to generate the desired outcomes.

Mathematics With Tensors

TensorFlow represents tensors with ease using simple functionalities defined by the framework.
Further, the mathematical operations that are usually carried out with numbers are implemented using the functions defined by TensorFlow.

Firstly, let's import TensorFlow into our workspace. To do so, invoke the following command:

```
import tensorflow as tf
```

This enables us to use the variable tf thereafter.

Now, let's take a quick overview of the basic operations and math, and you can simultaneously execute the code in the Jupyter playground for a better understanding of the concepts.

tf.Tensor

The primary object in TensorFlow that you play with is tf.Tensor. This is a tensor object that is associated with a value. It has two properties bound to it: data type and shape. The data type defines the type and size of data that will be consumed by a tensor. Possible types include float32, int32, string, etc. Shape defines the number of dimensions.

tf.Variable()

The variable constructor requires an argument, which could be a tensor of any shape and type. After creating the instance, this variable is added to the TensorFlow graph and can be modified using any of the assign methods. It is declared as follows:

```
tf.Variable("Hello World!", tf.string)
```

Output

```
<tf.Variable 'Variable:0' shape=() dtype=string, numpy=b'Hello World!'>
```

tf.constant()

tf.constant() can take the following arguments:

```
tf.constant(
    value,
    dtype=None,
    shape=None,
    name='Const'
)
```

The tensor is populated with a value, dtype, and, optionally, a shape. This value remains constant and cannot be modified further.

The following code snippet explains the creation of a constant tensor:

```
tf.constant([1, 2, 3, 4], shape=(2, 2))
```

Output

```
<tf.Tensor: id=180, shape=(2, 2), dtype=int32, numpy=
array([[1, 2],
       [3, 4]], dtype=int32)>
```

A few basic operations to start with will give you a glimpse at how TensorFlow works.
Declaring a Scalar

Rank-0 tensors can be declared as follows.

Using tf.Variable

float_var = tf.Variable(19.99, tf.float32, name="float")
int_var = tf.Variable(11, tf.int64)
string_var = tf.Variable("Cookie", tf.string)
print("{0}, {1}, {2}".format(float_var, int_var, string_var))

Output

<tf.Variable 'float:0' shape=() dtype=float32, numpy=19.99>, <tf.Variable 'Variable:0' shape=() dtype=int32, numpy=11>, <tf.Variable 'Variable:0' shape=() dtype=string, numpy=b'Cookie'>

The name parameter assigns an optional name to the tensor. The shape is empty because the values printed here are scalars.

Using tf.constant

float_cons = tf.constant(19.99)
int_cons = tf.constant(11, dtype=tf.int32)
string_cons = tf.constant("Cookie", name="string")
print("{0}, {1}, {2}".format(float_cons, int_cons, string_cons))

Output

19.989999771118164, 11, b'Cookie'

The b prefix before the word Cookie indicates that it is a bytes object.

Declaring a Vector

Rank-1 tensors can be declared as follows.

Using tf.Variable

float_var = tf.Variable([19.99], tf.float32, name="float")
int_var = tf.Variable([11, 19], tf.int64)
string_var = tf.Variable(["Cookie", "Monster"], tf.string)
print("{0}, {1}, {2}".format(float_var, int_var, string_var))

Output

<tf.Variable 'float:0' shape=(1,) dtype=float32, numpy=array([19.99], dtype=float32)>, <tf.Variable 'Variable:0' shape=(2,) dtype=int32, numpy=array([11, 19], dtype=int32)>, <tf.Variable 'Variable:0' shape=(2,) dtype=string, numpy=array([b'Cookie', b'Monster'], dtype=object)>

array indicates that the output is a list of values.
Using tf.constant

float_cons = tf.constant([19.99])
int_cons = tf.constant([11, 19], dtype=tf.int32)
string_cons = tf.constant(["Cookie", "Monster"], name="string")
print("{0}, {1}, {2}".format(float_cons, int_cons, string_cons))

Output

[19.99], [11 19], [b'Cookie' b'Monster']

Declaring a Matrix

Rank-2 tensors can be declared as follows.

Using tf.Variable

float_var = tf.Variable([[19.99], [11.11]], tf.float32, name="float")
int_var = tf.Variable([[11, 19]], tf.int64)
string_var = tf.Variable([["Cookie", "Monster"], ["Ice", "Cream"]], tf.string)
print("{0}, {1}, {2}".format(float_var, int_var, string_var))

Output

<tf.Variable 'float:0' shape=(2, 1) dtype=float32, numpy=
array([[19.99], [11.11]], dtype=float32)>, <tf.Variable 'Variable:0' shape=(1, 2) dtype=int32, numpy=array([[11, 19]], dtype=int32)>, <tf.Variable 'Variable:0' shape=(2, 2) dtype=string, numpy=
array([[b'Cookie', b'Monster'], [b'Ice', b'Cream']], dtype=object)>

The shape parameter in the output reflects the respective shapes of the declared tensors: the first is a 2x1 matrix (2 rows and 1 column), the second a 1x2 matrix, and the third a 2x2 matrix.

Using tf.constant

float_cons = tf.constant([[19.99], [11.11]])
int_cons = tf.constant([[11, 19]], dtype=tf.int32)
string_cons = tf.constant([["Cookie", "Monster"], ["Ice", "Cream"]], name="string")
print("{0}, {1}, {2}".format(float_cons, int_cons, string_cons))

Output

[[19.99] [11.11]], [[11 19]], [[b'Cookie' b'Monster'] [b'Ice' b'Cream']]

Basic Operations

Now that we've thoroughly explored the initializations, let's perform some basic operations using TensorFlow.

tf.zeros() / tf.ones() / tf.fill()

tf.zeros() takes a shape as an argument and returns a tensor filled with zeros. tf.ones() takes a shape as an argument and returns a tensor filled with ones. tf.fill() initializes a tensor with any specified value, not just 0 or 1.
zero_tensor = tf.zeros(2, dtype=tf.float32)
print(zero_tensor)
print(zero_tensor.numpy())
one_tensor = tf.ones((2, 2), dtype=tf.int32)
print(one_tensor)
print(one_tensor.numpy())
fill_tensor = tf.fill((2, 2), value=3.)
print(fill_tensor)
print(fill_tensor.numpy())

Output

tf.Tensor([0. 0.], shape=(2,), dtype=float32)
[0. 0.]
tf.Tensor(
[[1 1]
 [1 1]], shape=(2, 2), dtype=int32)
[[1 1]
 [1 1]]
tf.Tensor(
[[3. 3.]
 [3. 3.]], shape=(2, 2), dtype=float32)
[[3. 3.]
 [3. 3.]]

zero_tensor, one_tensor, and fill_tensor are references that point to the tensors created. To extract the stored values, numpy() is used; it returns numpy.ndarray objects.

Slicing Tensors

To access a value from a vector, invoke the following code:

float_vector = tf.constant([2, 2.3])
float_vector.numpy()[0]

Output

2.0

[0] returns the value at index 0.

To access a value from a matrix, invoke the following code:

string_matrix = tf.constant([["Hello", "World"]])
string_matrix.numpy()[0, 1]

Output

b'World'

[0, 1] returns the value at row 0, column 1.

To slice a matrix, invoke the following code:

string_matrix = tf.constant([["Hello", "World", "!"], ["Tensorflow", "is", "here"]])
print(string_matrix.numpy()[1, 2])
print(string_matrix.numpy()[:1])
print(string_matrix.numpy()[:, 1])
print(string_matrix.numpy()[1, :])

Output

b'here'
[[b'Hello' b'World' b'!']]
[b'World' b'is']
[b'Tensorflow' b'is' b'here']

[1, 2] extracts the element at row 1, column 2. [:1] extracts all rows before index 1 (i.e., row 0). [:, 1] extracts column 1. [1, :] extracts row 1.

Tensor Shape

To access the shape of a tensor, invoke the following code:

string_matrix = tf.constant([["Hello", "World", "!"], ["Tensorflow", "is", "here"]])
string_matrix.shape

Output

TensorShape([2, 3])

There are two rows and three columns in the given tensor.

Math Operations

Let's look at a few math operations that can be implemented using TensorFlow.
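A quick aside before the math operations: because .numpy() returns a plain ndarray, the indexing results above are ordinary NumPy behavior, so the same slices can be reproduced without TensorFlow installed (illustration only):

```python
import numpy as np

# The same 2x3 matrix used in the slicing examples above.
string_matrix = np.array([["Hello", "World", "!"],
                          ["Tensorflow", "is", "here"]])

print(string_matrix[1, 2])   # element at row 1, column 2
print(string_matrix[:1])     # all rows before index 1, i.e. row 0
print(string_matrix[:, 1])   # column 1
print(string_matrix[1, :])   # row 1
print(string_matrix.shape)   # (rows, columns)
```

The printed values match the tutorial's outputs (minus the b prefixes, since this array holds Unicode strings rather than bytes).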
Element-Wise Math

Here's a code snippet that demonstrates the add, subtract, multiply, and divide functions:

x = tf.constant([2, 2, 2])
y = tf.constant([3, 3, 3])
print((x+y).numpy())
print(tf.add(x, y).numpy())
print((x-y).numpy())
print(tf.subtract(x, y).numpy())
print((x*y).numpy())
print(tf.multiply(x, y).numpy())
print((x/y).numpy())
print(tf.divide(x, y).numpy())

Output

[5 5 5]
[5 5 5]
[-1 -1 -1]
[-1 -1 -1]
[6 6 6]
[6 6 6]
[0.66666667 0.66666667 0.66666667]
[0.66666667 0.66666667 0.66666667]

Both the operators and the TensorFlow functions give identical outputs, and all operations are applied element-wise.

Tensor Reshape

tf.reshape() returns a tensor whose values are rearranged according to the shape given as an argument. Take a look at this code snippet:

x = tf.constant([[1, 2, 3]])
print(x.shape)
x = tf.reshape(x, [3, 1])
print(x.shape)
print(x.numpy())
x = tf.reshape(x, [-1])
print(x.shape)
print(x.numpy())

Output

(1, 3)
(3, 1)
[[1]
 [2]
 [3]]
(3,)
[1 2 3]

Initially, the shape of the tensor was (1, 3); it was then reshaped to (3, 1). When the tensor was reshaped with [-1], the size was computed so that the total number of elements remained constant: the tensor was flattened to a 1-D array.

Matrix Multiplication

Let's now see how TensorFlow does matrix multiplication using the following code snippet:

x = tf.constant([[3., 3.]])
y = tf.constant([[1., 2., 3.], [4., 5., 6.]])
tf.matmul(x, y).numpy()

Output

array([[15., 21., 27.]], dtype=float32)

x's shape was (1, 2) and y's shape was (2, 3); when multiplied, the resultant shape was (1, 3).

TensorBoard

TensorBoard is TensorFlow's visualization toolkit. By simply integrating TensorBoard into your neural networks, you can track all the metrics recorded during the training process and thereby estimate the performance of the network. Using TensorBoard, you can also view histograms of weights, biases, and other tensors as they change over time.
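Returning briefly to the math operations above: the matrix-multiplication shape rule ((1, 2) times (2, 3) gives (1, 3)) and the [-1] flattening trick can be double-checked with NumPy alone (illustration only; tf.matmul and tf.reshape behave the same way on these inputs):

```python
import numpy as np

x = np.array([[3., 3.]])                    # shape (1, 2)
y = np.array([[1., 2., 3.], [4., 5., 6.]])  # shape (2, 3)

# (1, 2) @ (2, 3) -> (1, 3); each entry is a dot product of
# x's row with one of y's columns: 3*1+3*4, 3*2+3*5, 3*3+3*6.
product = x @ y
print(product)        # [[15. 21. 27.]]

# reshape(-1) flattens while keeping the total element count.
flat = product.reshape(-1)
print(flat.shape)     # (3,)
```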
This not only helps you visualize data but also helps you tweak your neural network for higher performance and accuracy. Many people out there are passionate about deep learning and artificial intelligence but aren't sure of the prerequisites and fundamentals for getting started. This tutorial can help you carve your own path and serve as a guide while you build your own deep learning or machine learning models from scratch.
A Python setup that finally works
Vincent Thorne · Posted 29 Jun 2022

My occasional use of Python led to a rocky situation on my computer: multiple installed versions of Python, some managed by Anaconda, unruly export PATHs, and multiple virtual environments. It (kind of) worked when I needed it to, but I felt a complete lack of control. In short, my computer was the embodiment of xkcd's famous comic.

That was until my colleague Matteo took the time the other day to teach me his own protocol for installing and maintaining a healthy Python ecosystem. This post will walk the reader through the steps that helped me achieve Python sanity on macOS.

The Steps

These commands are to be run in the terminal. They assume that you have Homebrew installed (which I recommend; some reasons why): install instructions are available here.

Controversial Step 0: if installed, remove Anaconda completely and clean your .zshrc of all export PATHs created by Anaconda or conda. More details on Anaconda's help page. Not taking a stance for or against Anaconda/conda here, just advising to clean up old layers of Python before laying the sturdy foundations!

- Install Python from Homebrew: brew install python
- Make sure that the export PATH in .zshrc is set to the one prescribed by Homebrew and that there are no other export paths: export PATH="/usr/local/opt/python/libexec/bin:$PATH"
- Restart the terminal session
- Navigate to the directory of the project: cd path/to/project
- Install virtualenv with pip: pip install virtualenv
- Create a new virtual environment: virtualenv [environment-name]
- Activate (source) the new environment: source [environment-name]/bin/activate
- Install required packages with pip: pip install [package]
- If necessary later on, upgrade packages: pip install [package] --upgrade
- Once done managing the environment, deactivate it: deactivate

Spyder setup

Accustomed as I am to RStudio and other statistical IDEs, Spyder is a close enough cousin to ease my transition into Python.
A few further steps are necessary to link it with the custom environment created above.

- Install spyder-kernels in the virtual environment: pip install spyder-kernels
- Depending on your version of Spyder, it might ask you to install a specific version. Try the displayed version first: pip install spyder-kernels==2.3.0, for example
- In Spyder preferences:
  - Go to the Python Interpreter tab and select "Use the following Python interpreter".
  - Click on the file icon.
  - In the Finder prompt, navigate to your virtual environment, open the bin folder, and select the python file.
  - Click OK and restart Spyder.
- Check that it works by importing a package installed in your virtual environment (import [package]) from the IPython console in Spyder.

Important: if you create a new virtual environment, you will have to make sure that it has spyder-kernels installed (step 1) and re-link Spyder to that new environment (step 2).

Updating and sharing your environment

- Export a list of packages with their versions to a text file: pip freeze > requirements.txt
- Update the packages listed in requirements.txt: pip install --upgrade -r requirements.txt
- Send requirements.txt to someone, and they just need to run pip install -r requirements.txt to install all the packages

If nothing works

Take a deep breath and ask your knowledgeable friends and colleagues to take a look at your computer and help you figure things out. There is light at the end of the Python tunnel!
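One extra sanity check, my addition rather than part of the steps above: from inside Python you can confirm whether a virtual environment is active by comparing sys.prefix with sys.base_prefix. This works for venv and recent virtualenv releases; very old virtualenv versions set sys.real_prefix instead.

```python
import sys

# Inside an active venv/virtualenv, sys.prefix points at the environment
# directory, while sys.base_prefix still points at the system Python.
in_venv = sys.prefix != sys.base_prefix
print("virtual environment active:", in_venv)
print("interpreter:", sys.executable)
```

Handy for debugging the Spyder linkage too: run it in Spyder's IPython console and check that the printed interpreter path lives under your environment's bin folder.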
Thanks for the overwhelming amount of feedback and discussion. After work yesterday, I spent some of my time reading through the comments to my post (instead of reading the 9/11 Commission report). It looks like the comments have grown some since I pulled this together last night. Here is my basic breakdown of what I am seeing in the response to my post. I list them in no particular order; most are improvement requests while some are data points. I have to say my favorite is the comparison of IE to Courtney Love. I love a good analogy.

- Thanks again for all the comments, I also get the general sense that people want more "content". We are just getting started, so look for more of that coming up.

Scott, it's nice to see you're actually reading this stuff, but what would make more people more content is the following:
– tabbed browsing: when?
– fully compliant (*FULLY* compliant) with HTML 4.01/CSS 2.1: when?
– making this blog properly validate on w3, hopefully in HTML 4.01 Strict, so you yourselves can see how much trouble web developers go through trying to fix IE's quirks: when?

Oh, and one thing you forgot to list that has been mentioned in a few places is an MS-hosted bug tracking system.

That's a pretty accurate summary as far as I am concerned. As for the Courtney Love comment, I think the important thing to take away from it is that lots of developers are eager to get on with the "next cool tech", and it's Internet Explorer that is perceived as slowing us all down. A detailed roadmap would be nice, but I think everybody is interested in one question most of all: is there any chance of a new version of Internet Explorer before Longhorn? Is it completely out of the question, is it likely, or are you guys still making your minds up? Oh yeah, a bugzilla would be nice, even if it's just a standard place to list workarounds.

"People want people to download Mozilla Firefox" – THIS made me laugh!! 😉 (I think the tone called for humour but I'm not sure!)
According to the IEBlog (weblog of the people who make IE): "IEv6x is the Courtney Love browser in a world of Kirsten Dunst browsers." Basically this was one of many un-/useful comments the blog has received since it started. I agree though, Courtney…

"– fully compliant (*FULLY* compliant) to HTML 4.01/CSS 2.1: when?"

You do understand that: 1) it's not possible to be fully compliant with both those standards simultaneously (they contradict each other), and 2) those standards do not fully specify how a web browser must render HTML and CSS (the specifications have gaps)?

OK, so HTML 4.01 and CSS 2.1 contradict each other, but what about XHTML 1.1 and CSS 2.1?

Looks like you understand users/possible users. 4999 employees left… 😉 Please, a date for IE7? 2007, maybe? (Longhorn shipping year)

I'll be content with standard CSS2 support, without having to pull my hair out finding hacks for stuff; if nothing else, the standard box model would rock. Tabbed browsing is a cool feature, but I think standards compliance is far more important. Oh yeah, and security fixes, but that is such a nebulous requirement that it might take forever. I switched to Firefox due to my fear of Download.Ject, but I'd move back to IE if it supported standards and was a little more secure.

Joining in the WHATWG / Web Forms 2 effort would be cool. If Microsoft really wants to show that it's a 'kinder, gentler company' that can play nice with others, joining the group that every other browser maker is using to extend the standards would be nice.

Sorry if you have already seen this one, but I wanted to make sure it was seen on the IE Blog: this has been a terrible bug (or maybe a feature) since IE 4, so please fix it. It has been in there way too long. If it is a feature, please give me a way to turn it off; it has plagued me since IE 4.

HTML 4.01 and CSS contradict each other?
I'd like to see specific, detailed examples of how and where they contradict each other … considering that their basic raisons d'être are different — HTML is structural, CSS is presentational. NOTE: presentation suggestions in the standards don't count — because they're suggestions.

Oh, and a side note or thought: a quick way to turn off SMIL on a site would be nice. With the advent of pop-up blocking, SMIL overlaying and expanding of ads is beginning to become just as annoying as pop-ups. I can already see it becoming the new pop-up annoyance.

A mute switch would be the greatest feature ever added to a browser. A volume slider would be even better.

It has been said before, but one can't say it enough: set up some sort of bug tracking system! That way you'll always have a todo list while you don't even have to bother maintaining the damn thing… 🙂 Besides, then no one will complain again about you guys not listening to the web development community!

This blog seems to approach the issue with a good sense of humor, and that can only be a good thing. I look forward to any innovations that come from this interaction.

A mute switch for a browser is one of the greatest ideas I have heard this year. I personally hate those flash-based websites which I need to surf through while on the phone (my pc is my phone).

Please make independent controls for each window. If combined with tabbed browsing, independent controls for each tab are neat.

I want IE to stop downloading things like .cab files when I am on a site; I really, really hate that.

Please, please, don't fix any of the bugs in the CSS parser before you fix the CSS rendering bugs.
I love Firefox, not only because of the tabs, themes, security (well, a *sense* of security anyway ;p) and standards compliance, but also because of the great extensions and the way they can tweak and add small details or add totally new browser features, like a chat client, mouse gestures, or an RSS reader. A similar system for IE would be great; the only thing IE seems able to do at the moment is add spyware and toolbars. Anyway, the best of luck with your new IE version!

A "Never trust this company" check box: there is an "always" but not the other way around. Oh! And make an option to actually delete your temp files when closing! TweakUI says it'll do it but it never works! Firefox does this beautifully.

The pop-up blocker by Analog-X is a nice tool that would integrate well into IE and could be used to prevent some tricky pop-ups.

Hey JonR… XP SP2 has two of these features ("never trust" and a pop-up blocker).

If you can get an improved IE out before Longhorn, that would be awesome. That's what I want.

Nice to see you taking the comments with an open mind and a sense of humor. I'd put my 2 cents in about IE, but it's already been said a number of times. It would be nice to hear the devs' wishlists of what they'd like to see added/changed in IE if they had the freedom to do whatever they wanted. -b

The easiest way to show that you care about people asking for "standards" might be to make _this_ page validate.

I have two suggestions for the IE dev team. 1. Every member of the dev team should be *forced* to use Opera or Firefox (their choice) for a month. They can (and should) go back to IE afterwards, but during this one-month period, no cheating. 2. During this time, every member of the dev team should be assigned a project: reimplement a reasonably complex layout using valid CSS2 and valid HTML 4.01 Strict or XHTML 1.1. This page, microsoft.com, it doesn't matter. Get the code valid and displaying properly in Firefox/Opera. Then look at it in IE6.
Basically, at this point you've got to eat the *other guy's* dogfood for a while in order to really grok what needs to be fixed.

I vehemently HATE tabbed browsing. Please, for the love of all things good, if you do add tabbed browsing to IE, allow us to take it out. Thanks. Sushant

Wow, an MS developer asking users what they think and what they want. And it looks like they are thinking about it. Unbelievable :O I think the most important things (tabbed browsing, transparent PNGs, popup blocking, etc.) are mentioned in the list above! One fine thing in Firefox is that I can integrate many additional extensions and, of course, themes.

I would like to doubly and triply and quadruply emphasize what Steve said when he said "Please, please, don't fix any of the bugs in the CSS parser before you fix the CSS rendering bugs." Wise words indeed. Web developers are depending on the bugs in the CSS parsing to work around the bugs & quirks in the CSS rendering. I wish you the best of luck in dragging the IE6 codebase into the 21st century.

There are another 108 (and growing) features you could add to that list here.

I think the place to begin would be porting the "Firesomething" extension to IE – "IESomething": you start it and the title bar shows a random name like "Word", "Windows Explorer", "Network Neighbourhood", "Active Desktop", "Notepad" and so on. Amused users are happy users. Without this essential feature, IE's market share will collapse within three weeks.

A programmable API (like Firefox's) so that 3rd-party developers can create extensions and skin packages (though I admit I'm not a fan of skins, they seem popular).

I've got to ask what I hope is a moderately constructive question. What reasons do you have for continuing development of IE over joining the Mozilla project? What can you possibly gain from releasing your own browser when you could take advantage of all the work that has been ongoing over the past couple of years? Apart from lock-in.
I am convinced that you could get *some* of your ideas accepted on merit after discussion with other Mozilla members, and you know, maybe some people with experience outside of IE not accepting all of your ideas for the browser could actually be a good thing? Considering how little (i.e. no) money you make directly off IE, I am surprised Microsoft is still considering putting in more effort. No offense, and I am not suggesting that Microsoft disband the team, far from it, but you know: benefit-effort-cost and all that 🙂

I suddenly have hope for this blog. However, I must disagree with this huge number of requests for tabbed browsing. The people here are advanced surfers, but most computer users of moderate skill that I know barely open new windows, and when they do, they certainly don't manage enough of them to warrant tabbed browsing. I think Explorer should focus on being a simple and sleek browser for the beginner and intermediate user, and allow Firefox (or IE interfaces such as MyIE2) and the like to tend to the advanced audience. Oh yeah, and I second (fourteenth?) the suggestion about a bug tracker. That would be nice.

No need to request NO TABBED BROWSING. Every browser with tabbed browsing allows both single-window and multiple-tab navigation. People making this complaint have probably never used a tabbed browser.

This blog of yours is great. You guys put some real personality across, and I greatly appreciate it. Too many corporate blogs don't have much personality. I think even Google's blog doesn't have as much personality as yours does. Keep it up!

I believe this blog will affect the outcome of Internet Explorer's outlook completely… Oh, and did I remember to say: 100%, no faults, standards compliant. No MS "special" CSS tricks. Just CSS in its entirety, not CSS in the way you want it to be.

Any chance of syndicating this feed with just the legit articles and not the comments?
I'd like to hear what the IE team has to say, but I refuse to wade through over 300 comments of garbage from the Microsoft bashers. If I want to know about how wonderful Mozilla or Firefox is, I'll subscribe to a feed about them. These guys are ruining this blog right from the start.

Mouse gestures! I use them in Firefox all the time now when I'm at home, and miss them in IE when I'm at work.

"If I want to know about how wonderful Mozilla or Firefox is, I'll subscribe to a feed about them. These guys are ruining this blog right from the start."

That's your opinion and I posted mine. It was intended as a request to see if those of us that didn't come here for that could be spared the idiocy of the posts. I'm not an RSS expert, but I would think a feed could be constructed that would spare us the syndication of the comments. Most of what I have been browsing through is NOT constructive. If that is not possible, I'll simply unsubscribe from the feed. It's a shame though; I was looking forward to hearing what the IE team has to say, not all of this other crap.

I applaud the effort you guys are making to improve IE, especially with so many adversely negative comments. However, I must say that I am no longer an IE user. Let me specifically state why, and what it would take for me to return, to give you a fair chance to attract me back. It was mainly two things that made me switch: 1) web browsing history and 2) ad blocking. For the first issue, Internet Options –> Delete Files does not delete all stored files that IE creates. Particularly bad are the many index.dat files that are always held open by a process.
This is where to find a few of them (not all):

Documents and Settings\<UserName>\Cookies
Documents and Settings\<UserName>\Local Settings\Temp\Cookies
Documents and Settings\<UserName>\Local Settings\Temporary Internet Files\Content.IE5
Documents and Settings\<UserName>\Local Settings\Temp\Temporary Internet Files\Content.IE5
Documents and Settings\<UserName>\Local Settings\History\History.IE5

I do not want my typed URLs stored in the registry (Software\Microsoft\Internet Explorer\TypedURLs). Please do not store my recently opened HTML (stored on disk) documents in the Recent Documents directory (the path that ::SHGetSpecialFolderPath(NULL, szPath, CSIDL_RECENT, FALSE) returns in szPath). This is an issue with Windows, but something that perhaps the IE team could work on with the Windows team. Often temp files are not cleared out and I must manually take care of them myself. Last year I wrote a program to automatically handle much of the above tasks (and other tasks more specific to Windows). It works, but I got tired of needing to run it to clear my history 10-15 times a day…

To some (probably many) these are not important issues, but they are crucial to me. The other issue is filtering HTML prior to displaying it. There is an extension for Firefox called Adblock. I can filter stuff like ad.doubleclick.net, /ads/, *valueclick*, *googlesyndication*, *banners.*, *fastclick*, etc. out prior to loading. I have incredibly reduced the amount of advertisements from the webpages that I view, often to zero. I don't think I could go back to a browser that did not have a similar feature or extension.

I will gladly give IE another chance when 1) I can delete _all_ of the history that IE leaves behind, and 2) I can filter incoming HTML for specific strings in URLs. Best of luck.

One thing that I find really refreshing about MSDN blogs is the genuine innocence of Microsoft employees – I almost feel like welcoming them to the real world.
I guess what makes it weird is to read Steve Ballmer or various VPs explain features of MS software by saying, "We asked our customers what they wanted and this is it," but then the actual MS developers are totally surprised at the feedback. It's just kind of amazing that a lot of these complaints have been circulating for years. Organizations have sprung up specifically to influence Microsoft to support web standards and help create a better IE. It's very clear that large numbers of users and developers are dissatisfied with many of IE's problems.

Of course, all that has changed. The new Microsoft loves us, listens to us, reads us bed-time stories, which is awesome and cool. Very impressive! But one question remains: if MS employees are saying, "Well, *now* we're listening," is that a tacit admission of what has been suspected all along, that in the past at least, MS has ignored its customers? And since this is a very recent effort by Microsoft, is it reasonable to conclude that there's still a great deal of institutional inertia that does not favor customer feedback? I guess time will tell…

I know you IE people have been taking a whole lot of shit and I would hate to be in your position, but you've got to face up to the facts. Everyone predicted that, as soon as MS won the browser wars, innovation and improvements would cease, and sadly that's exactly what happened. People have every reason to be pissed off. Let us not have any of that nonsense again.

> Most of what I have been browsing through is NOT
> constructive.

Jerry, yes, there are many comments that are not really constructive, but the majority are about standards compliance and that is (IMHO) very constructive. –Thomas

1. Copying features (tabbed browsing etc.) is not going to cut it (we don't need a clone of Firefox…), but there is some (a lot of?) catching up to do (standards, security)
2. Think and deliver the next generation
3. Support Win2000, WinNT, Win98… (we paid $s too)
4. Need it now
and not in 2005 with Longhorn.

+ fine-grained control over cookie management (if I set my IE cookie handling to HIGH, Hotmail does not work! You can surely fix that)

Scott, you forgot some BIG ones on that list!
1. Make all changes available for Windows 2000 also.
2. Make all changes available for Windows 2000 also.
3. Make all changes available for Windows 2000 also.
4. Make all changes available for Windows 2000 also.
5. Make all changes available for Windows 2000 also.

Comments continue to be added to the discussion of your previous post. You may want to read through the new ones, as some are constructive comments with worthwhile information. If you guys get serious about fixing the problems in IE, you can still win back a lot of the people you're now losing. Obviously, the fanboy crowd is a lost cause, but you never really had them to begin with. Their numbers are dwarfed by those who will use whichever product best fits their needs. Right now, I think it's tough to contend that's the case for just about anyone.

Add application/xhtml+xml to the registry. I know you can do it, and it would stop people (standards geeks, admittedly) moaning about XHTML 1.0 being parsed as tag soup ([HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\Accepted Documents], people). Disclaimer: registry hacking is bad for your health! Oh yeah, if you could make attribute selectors in CSS2 work properly, please.

Here's one thing that doesn't seem to have been mentioned. Make loading ActiveX objects transparent. Or something. What I'm really saying is: save me from having to write the following:

if (window.ActiveXObject) {
    var oXMLHTTP = new ActiveXObject("Microsoft.XMLHTTP");
} else {
    var oXMLHTTP = new XMLHttpRequest();
}

Just let me write the latter, which is more cross-browser compatible.
….microsoft marketing wins again… Making the IE team use Opera or Firefox and then have to go back to IE is a brilliant idea! Be sure to make them pay some bills and buy some products online. Initially I moved to Firefox and Thunderbird to avoid all the security holes and to help wean myself off windows-centric applications so the move to Linux would be easier (and it was). When I first tried Firefox, I wondered what the big deal was about, okay so it’s smaller and faster, it’s just rendering web pages isn’t it? but the more you use it, and the more you try out the various extensions (seen the Web Developer extension?), the more impressive it becomes, and the crappier IE looks. Pretty sad that it’s a v6 vs pre-v1 product and the pre-v1 wins hands down. Except for the security patches, MS should give up on IE (everyone else has) and suck the brains of the Firefox developers, and then start on IE.NET using a similar design approach: a small fast standards based browser which is extensible and customisable. You’ve forget one point: "Upgrading an entire OS to use a simply pached Browser is a bad idea." In addition to what’s listed (FAQ? <g> ): 1) Have an HTML safe mode For better or worse (I’d say worse): – you are using HTML as "generic rich text" – readmes and .chm help system – CD-ROM Autorun "front doors" – active desktop and "view as web page" HTML has these basic risks: – active content by design (scripts, ActiveX etc.) – active content through code defects (holes) – links with misleading pseudo-URL text on top – other, e.g. IFrame, zone leaks, cookies, remote pull-downs Forcing plain text in OE isn’t enough – ppl disable it as too "tight". Instead, you want a system-wide HTML mode that: – suppresses all active content – does not pull down remote material – repeats links explicitly after the overlying text – only the explicit link is clickable And duh, don’t process scripts in cookies, no matter what zone. 
2) Expect paradoxical zone settings

For example, expect folks to set the "My Computer" zone as tight as Restricted – so take any reverse zone leakage (e.g. reaching the local HD via LAN shares) seriously.

3) Show zone and allow on-the-fly lockdown

Take Windows Explorer’s "view" drop-down as an example of how this UI might work; let the user drop into a higher (or lower?) zone on the fly, perhaps with an option to "always process this URL in this zone" (add URL to zone). You may do (3) differently, i.e. choosing from user-defined templates that are unrelated to zones, e.g. the built-in High, Medium and Low plus user-defined.

4) Be careful what you make scriptable

I know the temptation; what you let the user do interactively, sysadmins want to automate – and the next thing, every web site and malware is automating the same thing. So what starts as meaningful user control degenerates to mere "Simon Says" inconvenience for web sites and malware.

5) Wean web developers off the right to program PCs

Web sites, unsolicited mail "messages" and "documents" should never have been granted even the slightest ability to program visitors’ PCs. After that blunder, it’s crazy to have expected things not to have gone wrong. As MS’s own advice says, "if the bad guy can run code…" So we need these "content providers" to give up the rights they are currently enjoying. Start by setting a safer standard we can live with (aspire to zero client-side programming) and offer that as a "safer site" logo thing. Web sites that comply can display the "safer site" logo that links to a blurb on why un-logo’d sites should be treated with caution, etc.

On tabbed browsing: Yes, it’s a must-have; MDI was a good idea, and still is. But you need both "open in a new window" and "open in a new tab", i.e. not force one or the other. Dial-up users hate waiting for the same page to load, so after a Google, they flick open links in new windows.
But what if you run three different Google searches at once, or visit a site that does the usual spoonful of content and "next…", or an MS /kb that has "more…" links? A combo of new windows and tabs lets you group related pages together on an ad-hoc basis, i.e. you might kick a search’s links as tabs in the same window except for a monster page that starts in a window of its own with its links as tabs in that window. The most pathetic thing was MS Office losing MDI to stay dumbed-down to IE’s limitations. Never again, please. Smaller duhfault IE web cache, PLEASE!! WTF is the point of a 500M web cache? * Any connection fast enough to populate that within a few days doesn’t need caching for speed. * Any connection slow enough to need caching will take days to populate even a 50M cache. As it is, every user account has its own bloated cache that collects thousands of small files that take weeks to purge themselves off the system. Hullo, defrag. Not IE territory, but also crucial: Need to preset settings in the template from which new accounts are created – else ability to spawn new accounts becomes a support nightmare. Tabbed browsing is something I sorely miss in IE. It makes life so much easier. Ever have your whole CS, Ultra-Edit (tabbed coding :P), five Windows Explorer-windows, iTunes (less RAM-usage than WMP!) and maybe some Messenger-windows in your taskbar? I have. Every day. And then, tabbed browsing saves lives if you have about 15 sites opened. The Web Developer Toolbar really is my favourite extension, especially editing CSS on the fly is fun and handy. How about an easier way to whitelist activex controls? There’s an "administrator approved only" option in security settings, but for the life of me I can’t figure out how to make a control "approved". Additionally, the whitelist should be configurable by regular (non-admin) users, with the ability to override it in group policy. 
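The earlier cache-size rant ("any connection slow enough to need caching will take days to populate even a 50M cache") is easy to sanity-check with back-of-envelope arithmetic. A sketch, using the commenter's illustrative numbers rather than any measurement:

```javascript
// How long a fully saturated link needs to fill a browser cache of a
// given size. cacheMB is in megabytes, linkKbps in kilobits per second.
function hoursToFill(cacheMB, linkKbps) {
  const bits = cacheMB * 1024 * 1024 * 8;    // cache size in bits
  const seconds = bits / (linkKbps * 1000);  // at the given line rate
  return seconds / 3600;
}

// A 56k dial-up line needs roughly 21 hours of continuous, saturated
// downloading to fill a 500 MB cache, i.e. days of normal browsing.
const hours = hoursToFill(500, 56);
```

The arithmetic supports the complaint: on dial-up, the default 500 MB cache can never fill in any realistic session, so most of that disk allocation is waste.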
Can I add another feature not mentioned here: the same level of web standards behaviour as Mozilla and Opera and Safari? I guess it has not been mentioned yet 🙂 And if a rule in the CSS specs is a bit ambiguous, it would be nice if a future release/update of IE would behave exactly as Mozilla, Opera and Safari on that spec. Before every webcoder goes bald from pulling their hair out. You as a company have won the browser war, no need to do things differently anymore. And if you want to add another ton of features on top of it, be my guest. Some of them might drip over to other browsers.

More…
* Make IE 100% compliant to W3C Standards
* Tabs, pleeeeeeeease, tabs!
* Remove support for VBScript (it is very annoying)
* Support Mozilla-like javascript features
* Something like Mozilla’s XUL
* Tabs tabs tabs tabs. Newbies would have a much easier time dealing with multiple browser windows with tabs. Win XP’s taskbar grouping is similar, but can be confusing. Make the behavior easily configurable.
* Make VBScript support disabled by default;
* Lock the durn thing down! Let users enable features only as needed and give sufficient warnings when they enable something that is a security risk
* Performance Updates. There is an endless stream of Security Updates describing how IE is potentially going to hand out free copies of my credit cards and allow anyone to take control of my life, er, computer. Instead of making us all wait til 2007 for Longhorn’s IE, start issuing Performance Updates to IE 6.
* Standards support.
* Standards support, especially for CSS 2.
* Standards support.

Why on earth is a rag-tag band of rebels able to make browsers that make IE look old and slow? Word is getting out among the general populace too – e.g. many non-tech acquaintances of mine are switching to Mozilla. And the browser is a gateway drug, you know.
Next thing, they’ll convince their bosses to use OpenOffice… Lastly, it was pointed out earlier that your blog does not validate with the doctype specified – HTML 4.0 transitional – a standard that is years old. Can developers be expected to be credulous about where you stand with regard to web standards when your primary communication venue doesn’t even validate to a minimum standard? This blog is great, but the code it produces is as much part of the message as the postings’ content. On a positive note, it’s great to have this venue for communication! TABS Please TAAAAAAAAAAAAAAAAAABS!!!!!! " It’s a shame though, I was looking forward to hear what the IE team has to say, not all of this other crap. " I think we both agree actually… the IE team -needs- the input from Firefox users to know why they switched and what features they liked (nothing wrong with ‘borrowing’ some features ;p ) the crap posts like "omg IE suxx0r — F1r3F0X=13373r!!1" are another thing though. this -is- a blog and your posts go directly to a real living person working for Microsoft who happens to be in the new IE team. Why would you spam this guy’s blog? what the hell did -he- do to you? there haven’t that a lot of those kind of posts yet, though. well that’s the way I look at it anyway =) right. let’s retry that sentence: there haven’t BEEN a lot of those kind of posts yet, though. o_X (Nearly) all of the Above and: Just use the VS.NET UI for IE! It is really nice to customize and it allows tabbed Browsing. 
But when the UI is in "tabbed mode", all popups are opened at maximum size; if you switch to "mdi mode", you can not enjoy the tabs… It would be nice to have multiple, flexibly-positionable sidebars (like tool windows), the ability to place toolbars everywhere (bottom, left, top, right) and (for the n00bs among us): skinning, but with the ability to disable it (because I hate skins…)

Just a small suggestion… I am not sure about the people with normal eyesight, but I have to wear thick glasses to see stuff, and for me fixed fonts set by web-designers that are better described as web-disasters are a royal pain in the ass to cope with. The improvement is as easy as that – please move the "ignore font sizes" setting somewhere to the front end, to the menus I suppose – the way it is done in other browsers…. The font sizing setup available in the "View" menu doesn’t resize fonts with a fixed size set at the moment, and the inefficient approach used in the "ignore font sizes" option ignores all font sizes completely instead of ignoring fixed ones, and only fixed ones, rendering normal sites unreadable or not comfortable to read… Digging to the depth of the dialog boxes to switch this option every 5 minutes is a PITA 🙁

Before you develop a new version of IE (I think it’s impossible to preserve both backward compatibility and compliance with standards), people will land on Mars. I really don’t understand why you are spending time developing something new when you could easily implement the Gecko engine, or buy the Opera engine. I think IE is dead; most people know this. You cannot make a browser that correctly shows pages coded for IE6; you have to make a totally new browser. And you know, the development of a new browser engine takes 5 years at minimum.

Built-in SVG support would be cool – I’d be happy if at least SVG Tiny were implemented – soon it will be a common feature of new cell phones, so why not do it in "normal" PC browsers?

Oh please, for the love of GOD, put in a Distrust this Website (zone) option for certificates.
If you say no, you are prompted a bazillion times to install the damn thing; then most people get tired of clicking No and they click Yes. This will prevent at least 80% of spyware/adware from being installed on systems.

Welcome to the fishbowl, guys. Let the games begin. 😉
— Ben Goodger
Lead Engineer, Mozilla Firefox

So you have "inventarised" a nice short list. But can you make a post now in which you address which issues you’ll take up and which issues you won’t? It is not very clear, nor any improvement in "the openness of the IE team", if you just ‘mirror’ what people say and then continue working without really addressing some issues and taking a stance. Jan.

Overwhelming Response. So Microsoft is slowly getting the picture. It seems that now they have a blog for Internet Explorer. They are starting to acknowledge that "Yes", people do think it’s a bad idea to integrate things into the OS. "Yes", please release a version 7. There were a lot more, but why am I pointing out these points? Well, mainly because Microsoft has said that from their "so-called" research, people in general want the opposite, or that it is in the best interests of consumers. An integrated browser, no standalone browser, i.e. no IE7 for previous and current releases of Windows….

Good. You have got the idea. Now all you have to do is implement all that stuff as well as set up a road map for including support for standards yet to be recommended when they are recommended, such as XHTML 2.0 and CSS3 (and not half a decade+ later), and I’ll be happy. And also cut IE out of Windows so other pre-Longhorn Windows users can have the yet-to-arrive good IE. Or make another product like "Windows Explorer" (well….maybe something else, as that is taken) like you did with "Windows Messenger" and "MSN Messenger" to avoid the lawyers :P. Anyway, good to hear you finally have the idea. Go get ’em, Microsoft.

"No need to request NO TABBED BROWSING.
" —Actually, I want to stress it again, so tabbed browsing is not made into a standard. I hate, I say again, I HATE tabbed browsing. I want to see multiple pages simultaneously without having to tab through. If IE even installs with tabs, and even if it has an option to turn it off, I will stop using IE and write my own. No Tabs. No Tabs. No Tabs. PLEASE PLEASE PLEASE.

"Every browser with tabbed browsing allows single window/tabbed navigating and multiple tab navigating. People making this complaint have probably never used a tabbed browser."

—-I have used Firefox, but I don’t like the UI or the options UI. If I can’t find an option within 10 seconds, I will not use the product. An intuitive UI is very important. ALSO, how about an option of disabling New Window() or Dialog Box() (something to that effect). Thanks guys.

Oh, I forgot to mention. Sometimes I get asked to install Macromedia AGAIN AND AGAIN AND AGAIN. How about a DONT ASK ME FOREVER option in the dialog boxes?

Another thing: I could have at most 62 open IE windows before it crashed. My machine is 2.7GHz, 1 GB RAM, Win XP Pro. Can you make it work for up to 150 on the same specs?

What about blog support for IE? Sometimes I put blogs onto my Favourites list and then have to go back and manage them every week when the list becomes too big. I would like to see a type of favorite where I can bookmark a blog for up to a week. Then after that, the blog will be removed from the list. Also, how about adding custom notes to pages? When IE sees a page that has a note attached to it, it will display that note. That way I can keep track of the many web site notes I make. Save me from writing them down using Notepad. Thanks.

Hi there. Of all the suggestions above, I couldn’t find the following (maybe I just don’t know how to do it correctly) – can you make IE remember the View->Text Size setting? That’s 3 clicks every time I start the browser that I’d be able to save. Cheers, James.
A lot of the stuff above is very important, and I think someone said something similar in the middle there, but I’m going to rephrase it. Microsoft Internet Explorer is the default browser for Windows. Mrs. Average doesn’t want tabbed browsing and a squillion skins to keep her eyes glazed… What she wants is a browser that she can handle to search for knitting patterns and collectable porcelain dolls. If Professor Power User wants to use his browser with auto-organising tabs, an RSS feed reader, and an office assistant, why can’t he just download the relevant extension and have it install? I know this is kind of in the ‘exploitable’ realm, but surely the R&D team can come up with some intuitive way around this? The default browser should be simple, fast, easy, and secure. Leave the fancy tools to those who know what they’re doing. The other issues of standards support and security are both also extreeemely essential, as I’m sure you’ve already understood. The only thing stopping me going deeply into this are the few hundred posts up there that say the same thing. Now, I’m no avid Microsoft fan, but I just had to put in my two cents. I know my mother would be confused to the point of having question marks sprouting from her ears if she had to re-learn her browsing habits. That in mind, something fresh is always a good thing. 😉

Now the Internet Explorer team is resurrected, I wonder what will happen with the product. The latest version — IE6 — hasn’t been updated for a while and is only patched for security reasons. I think Microsoft came to realise…

> Sometimes I get asked to install Macromedia AGAIN AND AGAIN AND AGAIN. How about a DONT ASK ME FOREVER option in the dialog boxes.

If you are using Microsoft Windows XP, install Service Pack 2 when it comes out. Instead of a popup window asking you to install a plugin, a toolbar appears at the top. That’s a lot easier to ignore.

This blog is great, and I want to see more from the MS developers.
It irritates me when people criticise IE in (most) other techy blogs. I’ve even seen some blogs which encourage people to design for Mozilla, Opera etc. and disregard IE. What a ridiculous attitude! These people have lost the meaning of the web. The internet used to be a techy playground for geeks, but it is so much more than that now. The w3c, in my opinion, has had its day. We’ve been waiting for CSS3 for ages.. and it appears the w3c has just become a glorified online discussion board for self-proclaimed ‘internet experts.’ Mozilla and their ilk are guilty of rendering bugs and also proprietary CSS (see). The hypocrisy is dreadful. Microsoft released innovations like htc behaviours years ago – a model that uses the CSS DOM to propagate javascript behaviour through a website. That was, and still is, way ahead of its time. Also the powerful filter classes (IE4+, 1997) should have become the de facto standard. But no, Mozilla, Opera etc. have to have their own way. "MozOpacity" — what a joke! I say the w3c should be enforcing the standards that MS have made de facto over their many years of hard work and research into internet technology.

[Snickers at all of the so-called arguments against tabbed browsing] [rolls eyes]

Perhaps an IE "Developer Edition" is a good idea (that wasn’t an invite to charge for it – keep it free, guys), featuring a Javascript console and other advanced features for developers.

A feature I’d like to see is the ability to synchronize Favorites with third-party browsers via an option in the Internet Options panel. I do a lot of opening and closing of Mozilla, Opera and IE. I use Mozilla as my primary browser, so having my favorites available on all three would be an immensely convenient feature. More important is the ability to synchronize with Gecko rather than Opera. And since the Gecko bookmarks are stored in a plain HTML file, it wouldn’t be an incredibly difficult feature to add.
The ability to manage and access Favorites from either one would be fantastic. Of course I’m not suggesting changing the way it already handles Favorites, but a feature in addition or as an alternative. Also, if such a feature is added, it would be nice if it were more than just importing; I stress the ability to synchronize: both access and management would be ideal.

> I’ve even seen some blogs which encourage people to design for Mozilla, Opera etc and disregard IE. What a ridiculous attitude!

I agree, but I haven’t seen that attitude. What I have seen are people encouraging people to design for browsers that conform to the various specifications, and merely do the minimum necessary to make sure that the website works for people who use broken browsers such as Internet Explorer. Are you sure you aren’t confusing the two viewpoints?

> We’ve been waiting for CSS3 for ages.

What are you talking about? CSS 3 is a group of specifications. Many of those specifications have already been finished and are available in their final form as recommendations. Others are at candidate recommendation stage, which means they are ready for implementing. Mozilla and Opera developers are already implementing these specifications. If you are impatient at not being able to use CSS 3, you need to look in Microsoft’s direction. The specifications are there for them to implement, and they haven’t finished implementing the six-year-old CSS 2 specification, let alone any of the CSS 3 specifications!

> Mozilla and their ilk are guilty of rendering bugs

"Mozilla and their ilk" implement the vast majority of the CSS 2 specification, contain a few bugs, and have people working on solving them. Internet Explorer misses out entire sections of the CSS 2 specification, contains numerous, page-destroying bugs, and has not had any work done on the rendering engine for years. There is a *massive* difference between the two situations.
What we are seeing here is Microsoft beginning to work on Internet Explorer again, and I am pleased about that. But it doesn’t change the fact that Internet Explorer was effectively abandoned and is nowhere near as capable as its competitors.

> and also proprietary CSS (see).

Every proprietary CSS property that I am aware of that is implemented by a non-Internet Explorer user-agent is prefixed with a vendor prefix (such as -moz- or -khtml-) to ensure there are no conflicts with future specifications. Internet Explorer, on the other hand, doesn’t bother to prefix its proprietary properties. This means that future specifications either have to obey Microsoft’s rules when adding properties with those names, or avoid those names completely, lest they break things. I am not concerned with properties that are properly shielded from interfering with future specifications. I would not complain if Microsoft added more proprietary properties as long as they do not interfere with future specifications.

Furthermore, the examples you give of Mozilla’s and KHTML’s opacity properties *are part of CSS 3*. The only thing that makes them "proprietary" are the names – their initial implementations have been shielded from the public as, when they were first introduced, the CSS 3 specification that deals with the opacity property was not yet at a stable stage. Once the specification and implementations are stable, you will see them simply renamed as ‘opacity’ and not ‘-khtml-opacity’ etc.

Please note that you are criticising the Gecko and KHTML developers for implementing a W3C specification that has two Microsoft employees listed as authors, when Internet Explorer has used proprietary syntax instead! And you simultaneously criticise the W3C for not being quick enough about CSS 3! CSS 3 is here, it is being implemented, and you are seeing people criticise Microsoft for not getting around to finishing their CSS 2 implementation. That is not hypocrisy, that is frustration.

I am a happy Internet Explorer user.
The browser is intuitive, simple, and integrates nicely into my Windows OS. It would be great if I could customize my browser with small applications, like having stock tickers integrated, or a blog dialog where I can just insert text and hit a button to publish it. Take the toolbar thing further 🙂 In the future, I would like to do more work within the browser instead of using separate applications. Example: integrated PIM services. Also, I want to see good identity management. I don’t get how it works right now.

What about the similarly old and barely updated Outlook Express? It seems that everyone is talking about Internet Explorer, but what about Outlook Express? When is it going to be updated with new and much-needed features? Or has Outlook Express been forgotten? For me, a spam filter should have been added long ago. Most users use Outlook Express. It is the most popular e-mail client. Shouldn’t it have a spam filter? Especially since Microsoft is so interested in our security and privacy these days. Why should we buy Outlook 2003 to get spam filtering? Mozilla includes spam filters, and for free. Mozilla is not only a browser; it includes an e-mail application, and it is time to look at the other features it has that Outlook Express might lack. Is adding a spam filter to Outlook Express so difficult? Everybody needs it. When I heard that MS was preparing a security update to XP, the upcoming SP2, I was astonished to find out that although you were claiming that the security features MS were adding to Outlook Express, like attachment blocking, were fantastic, no spam filter was thought of, which should have been the first thing to be added. What about other updates that Outlook Express needs? And why should the simplest updates only be available via a service pack, like XP SP2?

Just curious: are any of the (above outlined) requested features news to you people? Anything at all? If so, please explain how you were able to avoid learning about that.
Disable third-party browser extensions (aka BHOs) and ActiveX by DEFAULT, but give users the option to enable them manually, not automatically. BHOs are one of the worst killers of a PC. Make a built-in popup blocker.

I think this is a great way to help make IE much better than it already is. I do disagree with some of the features people want, most specifically tabbed browsing and their distaste towards OS integration. OS integration is a nice feature because it standardizes operation of the operating system with that of working on the internet. Much like the domain structure of Windows 2000, I find it helps make things have a much better feel – no matter which environment you are working in (local or network-based). Tabbed browsing isn’t exactly the greatest feature, though I do use Opera and it utilizes tabbed browsing. I haven’t found it really ‘helps’ me browse the web more than IE; after all, with taskbar grouping in Windows XP, it doesn’t really have a problem with a gazillion windows open anymore, as was a problem in previous versions of Windows. Let’s hope we can all work together to make the most-used browser into the best one. -Mike, Future MCSA

I think it is very important that IE stays simple. Minimum menus and buttons. Why do you think Mozilla Firefox is far more popular than Mozilla’s flagship all-in-one web/mail/editor/news/messenger/etc.? The best software is simple software that doesn’t need any explaining; it just works. A web browser only needs to display web pages properly, nothing else. And can I suggest you also start an OE blog? Nothing personal, but that has more holes than IE.

To the people who are advocating against tabs: in Mozilla and Firefox, you don’t need to use tabs. As a matter of fact, unless you ACTIVELY do something to get them, you’ll never see one. You only get tabs when you middle-click, shift-click, or use the Ctrl+T command. Otherwise, you can use Mozilla just like IE. So basically, your arguments are a moot point.
If IE does enable tabs similar to Mozilla, you’d never know unless someone told you. You obviously never really tried a browser with tabs.

Did anybody mention full support of W3C standards? Web development is hell because of IE bugs. Please please please please please please please please please please please please please please please please please please please please please please please please please please please please please please please please please please please please please please please please please please etcetera.

Aim for full HTML, XHTML & CSS2.1 compliance. You won’t get there, but put the effort in and you’ll get most of the way. And if there are discrepancies or gaps in the standards, have a chat with the Mozilla org, Opera, Apple and the W3C and see if an agreement can be made for a way forward and inclusion in future standards. It’s easy if you try… Users across the world will, if not straight away, benefit from improved web sites that work across platforms and browsers. It’s important, and I for one am not interested in any propriatary (sp?) extensions in IE.

Thanks for starting to do some work on IE. As you have now seen for yourself, the web IS important, so that’s why it is nice you found the IE development folder somewhere on your desktop. Who had it, by the way? The CSS support in IE5 Mac was one big step in the right direction. So I know you can do excellent work if you want. Make open standards work in the new IE and stick to them. CSS, PNG, XHTML. At least fix rendering bugs in the upcoming service pack for XP. It doesn’t matter if some stuff you make up is better (like the CSS box model); keep in line. My customers are tired of paying me to develop fallbacks, hacks & tricks for things that are supposed to be simple.
After work, write 100 times on the blackboard:
"I will never defy the W3C"
"I will never defy the W3C"
"I will never defy the W3C"
"I will never defy the W3C"
…
Until Explorer supports standards as well as Mozilla and friends :-D:-D

One important issue about everyone’s favorite issue, "full compliance with standards": it is frequently at odds with another important priority, BACKWARDS COMPATIBILITY. How many websites out there in the "wild" have code blocks that look something like:

if (window.navigator.appName.toLowerCase().indexOf("microsoft") > -1) {
    doItOneWay();
} else {
    doItAnotherWay();
}

Didn’t have to go far… this is code from THIS PAGE. So if MS browsers have always worked a certain way (deemed "wrong" by all the posters here), and then they change the way IE interprets common code, what percentage of websites will it break? 25%? 75%? 99%?!?! What’s the solution? I dunno. Maybe return a new unique phrase for window.navigator.appName that doesn’t include the word "microsoft"? (not likely). I would very much like to hear the MS IE team’s response to this issue along with the larger issue of standards compliance.

Brad Corbin, do you have reason to believe that IE would suddenly stop working the old way? Yes, browser sniffing is stupid, but there is no inherent conflict here. Also, non-compliance is deemed by the *W3C*, who define the lingua franca (as soon as IE plays ball) of the web, so that HTML is HTML. Without this, we’d have a web built entirely from Word documents, PDF files, and Flash, and no one could build a website without spending large amounts of cash on proprietary development tools. Venting will not go on forever. : )

A lot of press has been given to Mozilla/Firefox’s tabbed browsing. Blah! It’s better than IE’s open-a-new-window-and-clutter-up-my-taskbar style of navigation, to be sure. But I think the best-kept secret of browser UIs has gone largely unknown. I certainly haven’t seen any mention of it here.
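Brad's sniffing example and the reply to it boil down to the usual contrast between user-agent sniffing and feature detection. A minimal sketch of both patterns; the function names are illustrative, not from the page being quoted:

```javascript
// User-agent sniffing, as in the page source Brad quotes: fragile,
// because it keys behavior off the browser's reported name.
function isMicrosoftBrowser(nav) {
  return nav.appName.toLowerCase().indexOf("microsoft") > -1;
}

// Feature detection: test for the capability itself. The document.forms
// collection is DOM Level 0 and works in every major browser, so code
// can simply try it and fall back to named-property access.
function getForm(doc, name) {
  return (doc.forms && doc.forms[name]) || doc[name];
}
```

With feature detection, a browser that merely claims to be "Microsoft" in its appName cannot be routed down a code path it does not actually support.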
If you want the perfect IE7 user interface, something that can compete with Mozilla out of the gate and won’t take until 2008 to develop, then look at iRider (). This little shell extension for IE has the most innovative browser user interface I’ve ever seen, and I’ve seen a LOT! Among its most useful features is a sidebar-based navigation system that is like a cross between tabbed browsing and the history navigation in IE. It makes Mozilla’s basic tabbed browsing look stone-aged! After a year of working with iRider, I find I absolutely can not live without its navigation system (though I modify the settings so that new tabs are only created when I right-click). In fact, out of all the bazillion sidebars various browsers have included, iRider is the first and only browser to ever offer anything in a sidebar that was worth giving up the horizontal space for. If Microsoft could license the iRider interface, or even buy the company outright, you’d have the perfect UI today. Then all you’d have to do for IE to make it extra perfect is concentrate development on standards support, rendering, and security. If iRider offered their UI shell for Mozilla instead of IE, you’d pretty much have the perfect browser already! As much as I love Mozilla and Firefox’s rendering and standards support, I myself find that the UI of iRider, even with its IE core, is simply more important to me. I’m a professional web application developer, so that is a painful statement to make; but if I, as a developer, won’t switch because of advanced UI functionality, I can probably feel safe in saying that the average user won’t either. Of course there is more to iRider than its navigation system, so check it out!

Actually, Brad raises an interesting point.
The actual source code in question on this page is:

if (window.navigator.appName.toLowerCase().indexOf("microsoft") > -1) {
    theform = document.Form1;
} else {
    theform = document.forms["Form1"];
}

The thing is, at least Microsoft Internet Explorer 5.0, 5.5 and 6.0 (Windows 2000) and Microsoft Internet Explorer 5.2.3 (Mac OS X) understand the document.forms["Form1"] syntax. I can’t easily test on other user-agents Microsoft have produced, but I suspect they are similar. Which leads me to the conclusion that this switch doesn’t achieve anything useful. However, it does have a downside – if a browser pretends to be a Microsoft user-agent but doesn’t support the syntax supplied to Microsoft user-agents, it will break. Of course, I could be wrong, and the code is there to support Internet Explorer 4.0 users or some other Microsoft user-agent. Hopefully somebody will chime in to correct me if that is the case.

Actually, what I would like in IE is already in MyIE/Maxthon. MyIE/Maxthon is really feature-packed and supports plugins and skins.

I still remember what web life was like *before* IE.. thank you, thank you! I don’t think anyone even cared about standards before IE – perhaps everyone has forgotten the horror that was Netscape? IE at least *tried*. Granted, it hasn’t been 100% bug-free compliance-wise, but hopefully that will change in 7. A few things I’d like to see (probably mentioned 100 times already..)

* Fix the current problems in the CSS/HTML implementations before adding new, half-finished standards.
* JPEG 2000 support. This is probably my #1 feature! The poor standard has been rotting on the back shelf; it will never be accepted until it’s native in IE.
* I noticed a nice hack many websites are doing to get around the new popup-blocking features. They are adding "onclick" code to the ENTIRE site, so that whenever I click a link to a subpage, on the site itself to select, etc., it loads a popup – bypassing the popup blocker because I initiated the action.
Sheesh. * A feature to turn off certain evil javascript features, like pages that block right-clicks. I want my context menu to open other pages in new windows, darnit! 🙂 * Change the IE plugin model so that it doesn’t accept any-old active-x or registry key to enable a plugin. I realize this would be a HUGE change, but it would help with security. Not only can bad websites invoke buggy active-x controls, but weirdo applications being installed on the machine can quietly slip in nasty hooks into IE that steal information. If IE had some encrypted registry area, it could notice when new hooks are installed by their lack of an encrypted value, and bring up a dialog when IE opens. If the user accepts the mysterious plugin, IE encrypts some info into a different area saying the hook was accepted. Nothing is worse than spyware, or other applications trying to install dumb toolbars or buttons into IE without my permission. ICQ/Office/etc. * Pages with Flash tend to freeze IE occasionally in XP SP2 RC2. * Make the encoding detection consistent across all implementations of IE. For example, if I have the Japanese text addon installed, and I go to a Japanese site that was lazy and didn’t put the language type correctly into their site, I still see gibberish. Even if I don’t understand Japanese, the foreign symbols look a heck of a lot better than Japanese-in-ASCII 😉 * People seem to want tabbed browsing. I’m not a huge fan of it myself, but I can see why people want it. Have you guys taken a look at what Microsoft Research is doing with taskbar "tasks"? (Go look at the channel 9 video). If you implement tabbed browsing, it would be cool to have them auto-group based on the site, as opposed to the order they are opened. Thanks guys, IE is so successful that it’s understandable that people vent when there’s a bug – it pretty much affects the entire world now! What you guys do can influence every corner of the globe, lol!
There are two technologies in IE that I have really enjoyed: Behaviours and DataBinding. Here are my requests: * Enable DataBinding against XSD schema instead of XDR! * Fix attribute-based databinding * Fix behaviours on databound objects/tables (very seriously broken!) One good reason for skipping IE (at the moment) is the lack of IDN (International Domain Names) support… IE vNext and Outlook vNext _should_ support IDN… as Firefox and others do. I believe that most people that use IE, use it because they a) Don’t know enough about computers to use anything else b) Need to access a site that only works with IE Who uses it because it actually has an innovative feature that others lack? No, it is used because we are stuck with it. But not for long… If you want users to respect and maybe even prefer IE, stop trying to lock users in by breaking W3C standards, get rid of ActiveX entirely, and add cool new features that are ahead of the competition. If there is one, I don’t see it. If you can’t think of one, get out of the browser business and use the Gecko engine! Also, give users *total* control of their browsing experience. Users want adblock, bugmenot, and other consumer-oriented features, in addition to pop-up blockers. When I demo these extensions in Firefox to my friends, they ask, "Where did you learn about this? Install it for me!". Casey The only thing I want is better standards support. It would be so much easier to run my weblog. 1. Take ActiveX out behind the barn and kill it with an axe. 2. Allow the user to select the option of IE never ever ever stealing the application focus, and separately not flashing in the status bar. 3. A "fit to window width/fit to paper width" option for viewing/printing overwide web pages. Fix the gzip-related bugs. These bugs prevent us from enabling gzip compression on our servers and are wasting the world millions of dollars in additional bandwidth charges. If I could fix one single thing… It would have to be the address bar.
It’s so simple, yet aggravating. This includes URL selection, the page stealing focus while I am typing, inconsistencies with favicon.ico… ARG! Surely everyone has something to say about this piece of functionality! IE is relegated on my box to those sites that are too confused to handle anything else. I look forward to widespread XP SP2 availability forcing sites to fix a lot of their breakage. Decent CSS and consistent scripting would stop people having to code two versions of a site, and SP2 does not do anything in that direction. So, first off, a version of IE that is secure even in the hands of incompetents. It should be impossible for trojans and other things to subvert the browser, yet today it is trivial. Then I want enough standards compliance that people can do web sites that work for all browsers. And I want this across all systems. Even if I run SP2 betas, I have family in the house who are stuck on Win2K, and relatives still on Win98. They need a secure system too. Then finally: innovate. It’s ok to innovate, but let’s do it in a way that makes sense. For example, what happened to all that secure wallet stuff a few years back? How about bringing that back as a defence against phishing, with built-in support for smart card or smart USB authorisation. That way, banks can move beyond the current login+secret authentication that makes phishing at all possible. It could even make Windows+IE a more secure system than the alternatives. Thanks – I started to use Netscape when they introduced tabbed browsing last year. It’s the greatest. But because Microsoft products are the base of my computer usage – I end up with two browsers open. But please bring on the tabbed browsing. Also, thank you for taking the initiative to get input from users of your products. – kudos also – if there was a better way to manage my bookmarks (of which I’ve added this), that would be great.
Let me expand on my previous comment: Obviously, the W3C is the "authority" on how things SHOULD work/behave/render/display/etc. (Please note also that there are some standards that are not completely fleshed out, or are left as "recommendations") The problem is when a feature that displays one way in IE6 suddenly displays a different way in (the hypothetical "standard compliant") IE7. According to, IE6 is 71% of the browsers used out there. I can only assume that a reasonable share of this 71% is people who have simply installed the latest operating system and patches, and just happen to now have IE6 as their installed browser. (Rather than people who deliberately choose to go and get IE6). Let’s assume MS releases IE7, which changes the way that feature XYZ is displayed (pick one: table padding, absolute positioning, your frustration of the day). Some high percentage of people run their Windows updates, and nearly overnight, IE7.0 is installed in (let’s say) 30% of the systems out there. So what happens to all the websites that use logic like this: If (This is a MS Browser) { Funky proprietary IE tweaks that make the page look right in IE6 } else { "Standard Compliant" code that makes it look right in Mozilla } How will this display in IE7? Depends on how the test is written, I suppose. As I’ve written it, it will do the "Funky Tweaks" that looked right in IE6, but now won’t display properly, when we really want it to do the else block. A few possibilities: 1) The pages that use this type of browser-specific code are few and unimportant. Not likely. I see this kind of thing all the time. 2) The practical impact of these changes on existing web pages won’t be big enough to worry about. Maybe. If the changes you want don’t have a big impact, however, why is it such a big deal that IE be standards-compliant? 3) IE7 can be written to be both standards-compliant and backwards-compatible. In some cases, probably.
If IE doesn’t support a particular CSS feature, it can be added, but what about cases where different browsers simply interpret things differently? Is table padding counted as part of the table width? Yes, or No, there is no "sometimes". 4) The changes to websites can be made quickly and easily. Maybe, but IE6 will still be around for a while, even with the release of new versions. IE5 still makes up 8% of the install base. Legacy browsers are a huge issue in web development of all kinds, and even if IE changes overnight, we still have legacy webSITES to deal with. My ultimate point is that even if everyone (at MS) agreed that moving toward 100% standards compliance was goal #1, actually making it happen without breaking a lot of existing web pages could be quite a bit more complicated than that. Brad, the scripting situation you are referring to is commonly referred to as browser detection vs object/feature detection. Lots of experienced scripters advocate using object/feature detection to check for support rather than browser detection because it is more robust, as you mention. But, again, as you say, lots of people don’t bother, so it’s necessary to cater to this type of script. Have you checked out the discussion about doctype switching? Internet Explorer has already found a way around this "legacy websites" issue in the manner of your third option, and there are other ways of achieving the same thing, as I have suggested elsewhere on this weblog. I want better printing. Most of the longer technical articles I print out are not readable because text is clipped off the right side. See for an example. Print it out – it’s unreadable because of the clipped text. This is the number one thing that annoys me about IE 6. I used to love IE and was completely against alternatives like Netscape/Mozilla etc. However, after using them, I certainly won’t go back to IE, unless some issues are fixed. Apart from security and other issues, what’s with the performance?!
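Brad’s upgrade scenario above can be reduced to a few lines: because the test keys on the vendor name rather than on capabilities, any future Microsoft browser inherits the IE6 branch whether it needs it or not. A sketch, with illustrative appName strings and label strings standing in for the two real code paths:

```javascript
// Browser sniffing as in the page's conditional: the vendor name is
// the same for IE 5, 6 and any hypothetical standards-compliant IE 7,
// so a fixed rendering engine would still be served the old tweaks.
// The return values are just labels for the two branches.
function pickCodePath(appName) {
  if (appName.toLowerCase().indexOf("microsoft") > -1) {
    return "IE6 tweaks";     // every Microsoft browser, past and future
  }
  return "standards code";   // everything else
}
```

This is why the feature-detection style discussed in these comments ages better: the sniff cannot distinguish the browser that needed the workaround from the one that no longer does.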
Whenever I start IE, it takes a whopping 60MB of RAM and that increases as the browsing goes on. Netscape on the other hand takes about a third of that?!?!? And please don’t try to avoid the real issues by blaming spyware, or something similar. The problem is in IE itself. Let’s break it down, shall we? # Tabbed browsing : WHO???? # Better Standards support (improved CSS, Transparent PNG support, XHTML, etc) : YEAH RIGHT!!!! # IEv6x is the Courtney Love browser in a world of Kirsten Dunst browsers : v6’x’ WHAT’S THE X FOR???? # Better pop-up blocking : YEAH RIGHT!!!! # People want people to download Mozilla Firefox : FACT. # Release an IE7/down-level release : THE PAIN!!!! # Fix the security problems : IF WISHES WERE HORSES!!!! # Better performance : JOKING RIGHT???? # Faster update turnaround : BIG DOWNLOADS : WHY TROUBLE???? # Integrating browsing into any OS is a bad idea : NOW YOU REALISE!!!! # Developer tools are goodness for web devs : HMMM… # A Windows Service Pack is not the same size as a Mozilla Firefox or Opera install : BUT IT DOES HAVE 10000 TIMES THE BUGS # Did I mention standards support already? : DID YOU NOW? # I shouldn’t take this personally, people have been waiting for a while to vent on somebody : SORRY. # People want to understand the roadmap for IE : SPARE THE TROUBLE. Sorry if it hurt you. Tabbed browsing isn’t that important. There are a lot of cool frontends to IE (NetCaptor, AvantBrowser, etc.) that implement this feature. I wouldn’t give it too much attention if I were you. IE should first be a good, solid net-related control for browsing HTML and surfing on networks. About the standards: they suck. They have gaps, they contradict each other, and sometimes they are plain stupid (I much prefer IE’s implementation of the box model to Firefox’s, and don’t even get me started on floating DIVs). BUT you have to have consistency across web pages.
So even if those standards suck and are stupid, you have to implement them (well in fact, they’re not even standards, they’re recommendations…). On the other hand, I’d like to see MS get a lot more involved with the W3C in order to clean up their mess. If you look at the browser validation pages for CSS1 on their website (dating back to 98 at least), you’ll notice that they don’t even test everything, so you basically have IE and Firefox both validating for CSS1 compliance, but when you design something only with CSS1 features, you can easily end up with huuuuuge differences. This is just completely outrageous, knowing how old CSS1 is. So I’d like a more standards-compliant IE, but I’d also like to see a W3C that is a bit less incompetent. Security problems: obviously a top priority. Please look here: This is such an amazing CSS2 effect and it doesn’t work with IE 🙁 FWIW, here are the four things I love about Firefox the most:- 1. pop-up blocker 2. tabbed browsing 3. the web search bar (the text box just to the right of the location bar) 4. the ease of extensibility of the search bar (starting from zero knowledge I wrote a plugin to search the Yahoo! Movies site in about 20 minutes). Start quote ————– Actually, Brad raises an interesting point. The actual source code in question on this page is: if (window.navigator.appName.toLowerCase().indexOf("microsoft") > -1) { theform = document.Form1; } else { theform = document.forms["Form1"]; } ————— End quote This is why PPK of quirksmode.org fame recommends testing for features (e.g. if (document.Form1) { stuff; } else { stuff; }) rather than browser sniffing. This particular example is retarded because you should really just use document.getElementById(), which all the modern browsers support. Anyway, my first wish is for better CSS support (selectors first, then new properties like min-width). Second is better XML/XHTML support (application/xhtml+xml, anyone? see).
Third is more standards-compliant DOM support; it’s already good, and I like some of the MS extensions (innerHTML is great), but I’d be delighted to eliminate JS code branching. Browser features like tabbed browsing? Let MyIE and CrazyBrowser handle that stuff. OTOH, it may be the only thing that can get Joe User to download a new version of IE. Why would he care about rendering engine updates? Oh, I forgot PNG32. Man, that definitely comes right after CSS and before better XHTML and DOM support. Yes. Communicate IE’s roadmap to the community… It feels like the clock has stopped! IE6 and that’s it 🙁 I’m sure you guys have plans… give us a taste of things to come… First I want to say that I greatly appreciate the time and effort that Microsoft and its employees are taking to better Internet Explorer. Now to my wish list. 1. Better CSS1 and especially CSS2 support. This is a _huge_ deal, since if CSS support were there – we would be able to start delivering our HTML content as just that, HTML content without having to put look/feel elements (layout, style etc) into our HTML documents. 2. XHTML, reasoning is similar to wish list #1. 3. Transparent PNG support as well as SVG support. See wish list #1 for reasoning. 4. Block popups (or whatever is allowing SpyWare/MalWare to be installed on my computer without me authorizing it). The fact of the matter is that people are starting to turn away from using Internet Explorer because of the lax standards support that it implements, and secondly – people are getting fed up with their browser being hijacked by SpyWare/MalWare. As for the rest, I don’t have any problems with the performance of IE launching or running, or with any of the other problems others have stated. IE consumes few resources compared to alternate browsers. Tabbed browsing would be nice to have some day – but I would greatly like to have the standards (CSS, XHTML, PNG, SVG) implemented and security taken care of FIRST before any new "features" were added.
I hope my comments help your development team. Many Thanks, Micheal Not better standards support, the BEST and COMPLETE standards support. Please! I think IE’s a pretty good browser. I have to work with CSS support from version four browsers, and compared to that, IE, as well as any modern browser, is incredible. There have been two features I’ve always pined for, though. They’re big features that would probably be a lot of work, but here goes: * Some kind of compiled Javascript, not unlike Java bytecode. Oftentimes I’ve been working with DHTML user interfaces and have been limited by the speed of the interpreter and HTML renderer. I don’t know what sort of speed boost that would give, but it’s out there now. * Also, some means with CSS to sync heights. One of the reasons I still use tables for layouts is I have trouble making one container the same size as another. Maybe something along these lines: <style> #x { height: #someElementsID } </style> Anyways, this blog is pretty cool. It’s nice to see some real human response from a company as big as Microsoft. dnl2ba, > This is why PPK of quirksmode.org fame recommends testing for features (e.g. if (document.Form1) { stuff; } else { stuff; }) rather than browser sniffing. Yes. I disagree with PPK on a hell of a lot of things, but this is one case where he is absolutely right. > This particular example is retarded because you should really just use document.getElementById(), which all the modern browsers support. I’m pretty sure that Internet Explorer 4.0 (among others) doesn’t support that, whereas I don’t know of a single user-agent that doesn’t support document.forms[] but does support document.formname – my point was that there was *zero* downside to using a single line of code instead of their conditional. Joe, > Also, some means with CSS to sync heights. One of the reasons I still use tables for layouts is I have trouble making one container the same size as another.
CSS 2 has had this feature for over SIX YEARS! Nobody uses it because Microsoft never bothered implementing it in Internet Explorer. This is precisely why so many people are clamouring for Microsoft to address CSS support. In addition to features mentioned before, stylable form elements (such as combobox and radiobutton) would be very nice as well 😉 — I want IE to stop downloading things like .cab files when I am on a site, I really, really hate that. — How about a checkbox on that spyware installer window stating "No I honestly never ever want to install any crap from GAIN/Gator corporations" instead of "Always trust content from GAIN/Gator", which the average idiot always selects 🙁 The cab file isn’t wrong, the most popular applications using it just are. What about a status update on what’s going on after all these responses? Just so that people know that all that’s written here is actually read. I like to save web pages locally. IE will fail the whole save if one item in the page is unsavable. This process should be user-overridable, without coding. Further, very often I want to remove just a piece of the page. It would be nice to have an annotated XML record attached to the clipboard with the content. Additionally it would be nice to have a link view of the page, and a scriptable interface so that I can give instructions for batch page downloads along the link tree. Most importantly, move IE out of the COM object model and into the .NET object model. COM interfaces are kludgy from the managed world. Make it possible to take over client space in the browser without ActiveX. In fact, completely move all COM features of IE into the .NET/managed world. Moving IE’s object model to the managed world should also make it easier to access from non-.NET language environments, i.e. Java, for non-Windows IE users. Just to further emphasize what’s already been said. Give us W3C standards!
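The height-syncing request a few comments up was, in practice, usually met with script rather than CSS in IE. A hedged sketch of that common workaround — the element handles are placeholders; real page code would fetch the two columns with document.getElementById and re-run the sync on window resize:

```javascript
// Equal-height columns via script, the usual fallback when the CSS 2
// table-display values are unavailable: measure both boxes and give
// each one the larger of the two heights. Works on anything exposing
// an offsetHeight number and a style object, e.g. two column DIVs.
function syncHeights(a, b) {
  var h = Math.max(a.offsetHeight, b.offsetHeight);
  a.style.height = h + "px";
  b.style.height = h + "px";
  return h;   // the shared height, handy for further layout math
}
```

It is a workaround, not a substitute for the CSS feature: it has to be re-applied whenever content or window size changes, which is exactly the fragility the commenters want the browser to absorb.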
One simple request: when I tell IE to download something to a certain drive (i.e., not the drive IE was installed on), I want it to ACTUALLY download the file to the drive I specify. I do not want it to download the file to a temporary folder on the installation drive first and then move it over to the drive I specified. Simple things like that are really annoying. Filetypes: no cheating with filetypes (transformation based upon name, for instance). No "clever" guessing of filetypes (transformation based upon contents of the document). XHTML compliance (with correct filetypes). ALWAYS obey the Content-Type header, and no cheating after that. Let people fix their server crap configuration. Make cheats *an option* that is NOT set by default. I mean if you see "  " instead of " " you should not automatically fix it unless the browser has been manually set to (and even so, it’s stupid, just let people fix their garbage HTML and forget this behaviour). I have to agree that integrated SVG support would be a killer feature, enough for me to switch back (provided the rest is reasonably on par with other browsers of course). I switched to IE when v3.02 came out, because of the better support for CSS (as many did when IE4 was released). SVG would have a similar impact. Even SVG Tiny would be fine, provided it is version 1.2, with (at least) MicroDOM support. Heck, even Macromedia is doing SVG these days, you’re the last kids in the game 😉 Stupid question: making IE a cool, robust and secure browser is wonderful but… isn’t it going to lead MS into court regarding an abuse of leading position? Aren’t other browser-developing companies going to whine that MS doesn’t give them a chance? Since OE is distributed along with IE, what about giving OE a facelift too? 1. Fix issue with OE (sometimes) downloading the same message twice 2. Empty Junk Folder option 3.
Empty Trash/Junk even if there is only 1 message in the folder. Quote: "Stupid question: making IE a cool, robust and secure browser is wonderful but… isn’t it going to lead MS into court regarding an abuse of leading position? Aren’t other browser-developing companies going to whine that MS doesn’t give them a chance?" Opera will whine probably, just like Real did about WMP. But as with Real, people are not using Real because WMP is unfair competition, but because their product is a spyware-bloated, badly working piece of crap. WMP is just a tool so anyone can play music and video on their computer even if they don’t know how to download: everyone I know uses WinAmp. If Real would fix their product, they would get a following. The exact same applies to Opera: Mozilla has demonstrated with their main suite and with Firefox that it’s possible to fight the ‘monolith that is Microsoft’ with a simply superior product. Hell, even I am using Firefox for my main browsing duties. That is exactly what Microsoft should aim for with IE: like WMP the browser should be standards-compliant, sufficient for daily browsing, and not ‘good’ enough for the serious users. As long as they don’t use their position to *freely* distribute *superior* products, no one *can* whine. I hope Microsoft will also fix their attitude about making their own websites only function in IE. I mean, there’s a tutorial on the internet that shows how to make Microsoft.com XHTML/CSS based, standards-compliant, and less than half the download size. Yet Microsoft refuses to fix it and leaves it performing badly in alternative browsers… And if I give my full opinion about WindowsUpdate not containing a <body> tag so no alternative browser can render it this comment will be deleted, so I will not go there 😉 Fixing the attitude also means fixing those websites. You are in the damn W3C yourselves, so at least your 3 main portals (microsoft.com, msn.com and windowsupdate.com) should comply with it.
Outlook Web Access is a good 4th…. Bah, why should MS "fix" their attitude about websites only working in IE? Should apple "fix" their software to make it run on PCs? Besides, MS has used HTC behaviours on a lot of their sites, particularly MSDN, and Mozilla et al do not support this technology. The HTC model is light-years ahead of the w3c spec for CSS/javascript. (for more info: ). The investment bank I work for actively prevents anyone from using a non-IE browser on our systems. Because we can guarantee that people use IE, we can develop a consistent model for look and feel of our web-based systems. I wish I could say the same for the internet at large. Very rarely do we see IE’s much-criticised bugs affecting any real-world app development. Mozilla, Opera and their ilk piss me off because if you look into them more deeply you will find they are riddled with bugs (in fact, just take a look at bugzilla!), and have some severe rendering issues (see ). The free version of Opera is plagued with banner ads, and goodness knows about the rendering engines in the more minor candidates, I shudder to think about having to support their quirks. Other browsers should be cloning the de-facto standards set by IE. That’s what happened when Intel created the x86 architecture and now there is healthy competition. It takes a real-live company (not a glorified discusssion board, like w3c) to implement innovative products. The industry should not have to wait for a the w3c to drip-feed us new internet standards. MS are way ahead on powerful presentation and logic standards (filter classes, css/htc behaviour model, enhanced javascript dom, to name but a few). F**k the upstarts – embrace the power of the market leader and make more constructive use of your time! I’m not a developer nor a programmer, I’m just your average internet junkie and I stopped using IE6 months ago in favor of Firefox. It’s impossible to imagine going back now. 
My wife is even less internet-savvy than me but after using Firefox a few times she won’t use IE. Firefox is faster, it has more features, I like the ability to change the look of it, the standards thing… etc. I absolutely HATE advertising and I love that with Firefox I can right-click and block ads from specific servers. Tabbed browsing keeps the taskbar uncluttered. In the months before I switched, IE was hijacked a few times and a bunch of crap was installed on my machine which took a great deal of effort to delete. I just don’t have those worries with Mozilla… yet, anyway. It seems like I have a LOT more control over my browsing experience with Firefox than with IE and so I guess my suggestion to the fine folks at Microsoft would be to make IE as customizable as possible. I like Microsoft products and I’m not a Microsoft basher by any stretch but when it comes to IE… fuggetaboutit. Google have developed a brilliant add-in for IE (only), called the Google Toolbar. Firstly, it blocks popups and secondly it integrates perfectly with the search engine. When you type search terms into the toolbar and press enter, the search will be run and the search terms will become buttons on the Google Toolbar. You just have to click on them to find where they occur within the page. Plus, the toolbar has a form auto-filler and displays the Google page-rank of the current page. As for tabbed browsing, I have used this in Mozilla and Opera, comparing it to running several IE instances, listed on a vertically-docked start-bar. I prefer IE’s "tabbed browsing" experience. For me it works just as well, and you can potentially arrange several browsers on one screen. Also, the browser is well supported by the OS, it loads in a jiffy. I don’t find that the taskbar gets cluttered. If any of the alternative browsers get popular you will find that hackers and scammers will start targeting them just like IE. I don’t buy it when people say IE is intrinsically less secure.
It’s simply more heavily targeted because it’s what everyone is using. Plus there are many beginners using IE as it comes packaged with Windows. These people are more likely to fall for scams and click on banners etc. As for web standards… well don’t get me started on that – there is so much ignorance displayed in these comments I could go on all day! There is a holy war, a jihad, against Microsoft and I’m sad to see otherwise intelligent people get swept up in it in their droves. Desperately missing from the list: 1. A download manager built into IE as in Firefox, having the options to be displayed as its own window, a tab, or a sidebar. 2. Fix the favicons, separate them from the cache and permanently attach them to the shortcut file itself. (Their disappearance and having to use FavOrg is very annoying). I don’t agree on tabbed browsing. If IE is an integrated browser, then this extra function should be left to others. Maybe IE could have a plug-in system to add this function? I think IE 6.5 should be geared towards developers, with ALL the standards support. And improve language selection. I don’t know how you go about it, but currently it fails to detect some sites. Some English sites render as Chinese while some Chinese sites render as English… Not good enough, huh? Fix IE’s cache! This is part of standards compliance. In lots of cases the IE browser cache does not work correctly. For example combining gzip with etags does not work. Caching gzipped js and css files does not work consistently, and document caching is anyone’s guess. These bugs have existed for years. Caches are integral to the browsing experience. IE feels slower than Firefox largely because its caching code is screwed up. Interesting… isn’t this kind of feedback from users what Netscape tried to do before releasing Mozilla? And if I remember correctly it worked pretty well, listening to the users’ feedback.
I’m a hardcore Linux user myself, however, the last thing I want to see are users stuck with spyware-infested Windows boxes, so I wish you guys well in your efforts to improve IE. The things I’d like to see are: A. Not having the browser integrated into Windows, if people want to totally remove IE – then let them. People should be able to do whatever they want. B. CSS support, up to CSS3 C. PNG support – oh, this alongside CSS support would be sooo sweet. D. Pop-up blocking similar to Firefox’s E. Tabs! I had to use Konqueror to get this form to work in Opera. Wow, transparent PNG support. Sounds like a huge undertaking. Don’t overdo yourselves. > Bah, why should MS "fix" their attitude about websites only working in IE? Should apple "fix" their software to make it run on PCs? Bad analogy – HTML was designed to be cross-platform right from the start, you have to go out of your way to only make things work for a particular browser. > The investment bank I work for actively prevents anyone from using a non-IE browser on our systems. Because we can guarantee that people use IE, we can develop a consistent model for look and feel of our web-based systems. This is completely untrue. You cannot ensure a consistent look and feel even if you restricted a website to Internet Explorer only. Even if you could, over here in the UK, such an action would be in breach of the DDA and therefore illegal. If you are in the UK, please talk to your solicitors about this. You may wish to even if you aren’t operating in the UK, as other countries have similar laws (e.g. Australia, where an IBM client, SOCOG, got sued for tens of thousands of dollars). Quite frankly, it is worrying that somebody working on a *bank’s* website doesn’t understand one of the basic principles of the WWW; namely that consistency is not only impossible but undesirable. > Very rarely do we see IE’s much-criticised bugs affecting any real-world app development.
Working around Internet Explorer’s bugs sucks up a good portion of my time at work, and others have posted similar opinions. I take it you haven’t run into the bugs where the entire text of a page disappears? Or where sections of the page just get cut off halfway through? Internet Explorer has many, non-trivial bugs. > see. > Other browsers should be cloning the de-facto standards set by IE. Which Internet Explorer would that be? Internet Explorer 5.0? Internet Explorer 5.2? Internet Explorer 5.5? Internet Explorer 6.0 ("Quirks mode")? Internet Explorer 6.0 ("Standards mode")? Internet Explorer 6.0 ("Quirks mode", post-XP service pack 2)? Internet Explorer 6.0 ("Standards mode", post-XP service pack 2)? Internet Explorer 6.5 or whatever comes next? Where are these "de-facto standards" written down? Which ones, in particular, should we be following, and why should we follow them rather than the public specifications *that Microsoft helped create*? If we blindly follow everything Microsoft did, it would essentially be giving Microsoft control of the web. Why on earth would anybody outside of Microsoft be in favour of that? > It takes a real-live company (not a glorified discusssion board, like w3c) to implement innovative products. So how exactly are Microsoft innovative? Please give specific examples that were not already implemented elsewhere beforehand, that are present in Internet Explorer 6.0. > The industry should not have to wait for a the w3c to drip-feed us new internet standards. We’re still waiting for Microsoft to implement specifications that are between five and eight years old – it’s not the W3C that is slow. Moloch, The problem with Opera and these forms can be worked around with a user stylesheet: Most of the comments I have seen posted can be translated loosely to "I want IE to do what (my current browser) does". 
Now my question is, if IE is just going to evolve to copy all the other browsers, so they’re going to be the same, why should I bother to switch to something that’s the same as what I’ve already got? Give me a browser that doesn’t have anything I detest (poor security, rendering, etc, you’ve heard the list already) BUT you’re going to have to give me something extra… something that I can’t get in another browser. Something with substance. An ORIGINAL idea. Can you deliver this? Since most ideas have been said already, one thing I’d like is a way to prevent IE from stealing focus. If I have an IE window minimised, or sent to the background, it immediately jumps to the front if there is any activity on it. This is EXTREMELY annoying if I’m trying to read or type something in. It even does it when I’m in other apps. There’s a reason taskbar buttons flash on activity: to get attention. The window itself does not need to grab attention by being overly intrusive. YAY – somewhere I can beg for new IE features! OK – first, the biography to put the post in context – 20+ years of professional software development experience. First browser=mosaic, second = lynx. I try every browser – have firefox installed, and use it sometimes, but always go back to IE + google + AI roboform. Yes – I’ve tried the firefox extensions, but they don’t measure up – for instance the "google" copy bar doesn’t have my favorite button – the "blog this" button. I have no affiliation with MS, try every new technology on linux and MS and MenuEt and Syllable as soon as they come out, and have no particular reason to favour one over the other. Anyway, my first observation… a lot of people combine stupidity with extreme bigotry, not a good combination. Sure open source is a good thing, but let’s separate the concerns… the question of "what browser is most useful" and "which browser is open source" have nothing even vaguely to do with each other. Please try not to confuse them.
Even more funny is the question of "standards". This is where people really demonstrate their lack of IQ. Where is the standard here – a) a piece of paper ratified by a board somewhere or b) a set of functionality in use 95% of the time? Grow up… An ad-hoc standard in use is worth a billion pages of carefully ratified, planned, agreed standards lying in a drawer somewhere. Anyway. Saying that, neither IE nor firefox is perfect – but given this is an IE blog, here’s my IE wish list, IN ORDER OF PREFERENCE:
==== regular updates ====
I think what’s disappointed people most about IE is the fact that it appears to have stopped. We don’t count security updates, they’re like bug fixes – but we haven’t seen anything NEW in IE for years. As a developer, we’ve all gone over to agile methods, and as a user, we have come to expect our products, even our operating systems, to give us great new features every few months. For years I swapped between the latest version of netscape and IE – and then IE won – so convincingly that EVERYONE changed over. But that was a long time ago, and people need to feel like their product is keeping up with the times!
==== jpeg 2000 ====
PLEASE, PLEASE, PLEASE don’t go PNG. We don’t need another piece-of-s*** file format. JPEG2000 was produced by the very very bright among us, and is extraordinarily good. However, it’s languishing in need of popular acceptance, and if you give people PNG, they’ll go down that route. PLEASE support JPEG2000 and give us a file format that can take us into the future, not keep us in the past!
==== intelligent caching ====
I think the biggest performance problem in IE is that the cache can be a lot more intelligent. Every now and then, it can check "after the fact". Same thing with other pages. You have your news pages, but the majority of pages people go to are quite static.
The browser could start remembering pages that haven’t changed in a few days – and again, present the page out of the cache IMMEDIATELY – only checking after the fact, and blinking to the new page (with a quick "I’m sorry" tooltip) later. Another suggestion would be pre-caching – guessing at clicks, following through the first search etc – so that the page is in the cache when the user decides they want to go there. Another suggestion would be to see pages that are accessed frequently (news pages) and watch them every few hours, hoping to pre-cache them. Basically, I think there are thousands of little tweaks that can be done in terms of pre-loading, not-loading, guessing at things – aiming at a much higher cache-hit ratio than we are currently getting!
==== update me button ====
How many times have you done something like posted a comment on a blog? Or you are watching something on ebay? or you have a news page that you like going to. There are many reasons why, when you are looking at a page, you want to be aware when that page changes. There are lots of ways to do it, and lots of programs, plugins that do it, but all I want is a single button "Update me" that I can click – and when my system detects a change to the page a little notification box pops up. I can choose view, later, and/or stop watching this page now. Would be useful.
==== Rendering performance ====
FASTER.
==== better find in page ====
Anyone noticed that ie’s find in page just SUCKS? It’s easier to find something across the whole internet than it is to find it in the page that came back!
==== Integrated find with OS ====
really, if I type a find into my google toolbar, and there’s a match on my local disk or in an email, I want it to come up. Yes, I use Lookout, but I don’t like having two different entry points.
If I want to find something, I want to find it – I don’t want to have to make distinctions between my system, my lan, or the internet – I just want to find everything to do with BLAH.
==== built in svg ====
come on all… let’s try to stamp out flash! It CAN BE DONE!
==== firefox extensions ====
Got some good people out there doing some good extensions for firefox. Wouldn’t be hard for IE to use those extensions.
==== page faults / GDI ====
I have a gig of memory, but IExplore does a hell-of-a-lot of page swapping, and uses an obscene amount of GDI objects. FireFox seems to be worse on memory, better on GDI. Ideally both of those go down! Especially now that we’re reading RSS as formatted blogs, we have web page newsreaders, we have pages allowing us to submit news items, blogs etc, it’s time to lose the line between NNTP/RSS/HTTP. The browser should be able to present threaded hierarchical information, and allow for editing responses to such.
==== remember this password ====
ANY dialog that has an option should have an equivalent counter option. The remember this password box has a "Yes", a "NO", and a "never show me this again". HMM – what if (as is the case) I want to say "never show me this again – the answer will always be YES". Every time the dialog pops up, I get annoyed.
==== anti-feature: tabbed browser ====
I think a lot of people would find that if they turned off the very evil "re-use browser windows" option – in advanced, they’d want this a lot less. MDI is a very bad thing. Applications shouldn’t do their own window management. If you want things like tabs, you should be approaching the people designing your window manager (i.e. Microsoft or StarDock) – NOT the people designing your apps. It would be very easy for instance for a developer to make a desktopX object to give tabs to every application that had multiple instances loaded – which is a far far better approach than building that functionality into one specific item.
PLEASE DON’T ENCOURAGE PEOPLE PUTTING A WINDOW SYSTEM FEATURE INTO A SPECIFIC APP. IT’S VERY VERY WRONG. Imagine I produced a program with my own NTFS driver built into it? Just as stupid. I must state that if Internet Explorer is to ever get tabbed browsing that it is imperative that the implementation be as complete as that of Visual Studio 2005 (Whidbey). This is the absolute epitome of the TDI metaphor with multiple tab groups, the ability to make docked tabbed groups and also the ability to tear tabs off into free-floating windows. That last feature would be a killer above all current implementations of tabbed browsing and is something I’ve seen mentioned on sites like Slashdot as being a problem in FireFox. It is very important in a multiple monitor world to be able to take a page loaded in a tab and tear it off to drag it to another monitor, and then be able to load more tabs into that page and so on. The possibilities would be limitless. Of course TDI should also be optional for those that don’t want to use it. I would also like to see the ability to block all plugin/activeX content from executing on a page unsolicited, by plugin/control and by page. For example I can state that I never want Flash animations to run automatically and instead I would get a placeholder where the animation would be. I could then click on that placeholder to get a menu that would allow me to execute that specific piece of content, execute all content of that type on this page or to execute all content on this page. This would improve security because nothing would execute without the user knowing. It would improve performance because bulky content would not be loaded up-front. It would also improve usability in environments like Remote Desktop because animations or other content would not overwhelm the existing bandwidth. The same options for images would be excellent. i love ie 6; security is about my only worry. i surf a lot and have tried all the major browsers.
none compare to ie for speed and fault free use imo. i use netcaptor 7.52 on top of ie 6 for its tabbed enhancements. avant browser does the same thing but has had problems with memory leaks. netcaptor does act buggy at times. i am a very long time mac user that moved to windows with win98 second edition. now firmly a windows user. ie could stand to learn a lot from netcaptor 7.52. see how they allow control of the tabs…closing with right click or center click. tabs are there from last closing state of browser when the browser is reopened. favorites menu expands horizontally so you dont have to scroll vertically through a huge list if your list is that large. look at flashget 1.6 as a sample of the best download manager. senses selections of multiple urls and pops up the dl manager when you do so….dls in streams and really maximizes a connection’s max speed. allows dls of more than one file at a time for those on super fast lines… id say make ie with all the features people are discussing, but leave them to be turned on by the savvy user. ie, as it is, is fine for nearly everyone. most dont want tabs as a single window is more than enough to handle. keep up the great work ms! don’t just play catch up – INNOVATE. Please fix the "bug" where you can’t position an element on top of a selectbox. Then I could remove a whole lot of needless complex workaround code from my web apps.
*** XML performance
There are several situations where the XML parser stops parsing. For example when doing an XSL or XSLT transformation.
*** Edit masks
I mean you can put edit masks in with JavaScript, but native support – say, a regular-expression mask where the fixed characters are filled in automatically and there’s no choice on them – now that would be nice. I want a toggle button on the tool bar that turns the proxy on/off. Drives me nuts every time I log on or off of VPN. i would like to have the flexibility of placing the address bar wherever i wish to. I prefer to have it at the bottom. I use opera and it provides me exactly what i want.
Also, there needs to be a better way of storing bookmarks. Opera’s bookmark is decent, but still can be improved. Also there should be the flexibility of exporting/importing bookmarks to/from XML/html. I want an integrated rss reader too with IE. Mozilla’s sage seems to be nice, but a lot is left to be desired. Also, the back button is incredibly slow. Try Opera’s back button and see for yourself how fast it is.
1. Full adjustability of rendering properties for the SELECT tag (dimensions, borders, z-index position, etc.)
2. Disable the Cross Domain Security (CDS) feature when the server to which the security rules apply allows it, e.g. via an HTTP header field. CDS obstructs the use of behaviour HTC files as a library on one domain for many web servers.
3. Adjustable blocking for JavaScript page redirecting and window resizing
4. Fix the bug in RegExp where /.+/ doesn’t match a string that contains a new-line (\n or \r\n)
I think the most fundamental thing that needs fixing is the printing. Even just having a fit-to-width option would make a world of difference as so many web-sites put menus down the left hand side and the article text ends up being clipped. n-upping pages (i.e. 2 pages per sheet, 4 pages per sheet etc) would be nice too. Of course I suppose that a Java 2 JVM would be asking too much… Hi. I would say complete XHTML 1.1 & CSS2 support is most important as has already been mentioned. Keeping the developers happy makes the web a better place. Poor support for current technologies limits creativity. As for user features the changes are far more trivial, requiring only tabbed browsing and a pop up blocker. This should keep the media happy as well. Once again, I would place strongest emphasis on the interpretation of XHTML and CSS2. Specifically:
* child selectors like td:first-child, td:last-child
* position:fixed
* input:hover
* fix the box model
Merely fixing such CSS issues would make a big difference in the way IE is perceived by developers and their recommendations to users.
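An aside on the RegExp complaint a few items up: `.` refusing to match newline characters is actually behaviour required by the ECMAScript specification, common to every JavaScript engine rather than an IE-specific bug. A minimal sketch of the usual workaround (the sample text here is invented for illustration):

```javascript
// "." never matches a newline in JavaScript regular expressions --
// this is ECMAScript-specified behaviour, not an IE quirk.
var text = "line one\nline two";

// /.+/ stops at the first newline, so this captures only "line one":
var dotMatch = text.match(/.+/)[0];

// [\s\S] (whitespace or non-whitespace) matches any character at all,
// including newlines, so this captures the whole string:
var allMatch = text.match(/[\s\S]+/)[0];
```

`[\s\S]` was, and remains, the portable idiom for "any character including newline"; no "dotall" flag existed in JavaScript at the time of writing.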
I determine what browsers my clients use and so far it isn’t IE. Improving CSS support will lower reliance on JavaScript to compensate for these missing features. Why is it that standards support is seen as a trivial and optional request? It is precisely this that determines the kind of content that appears on the web. Thank you for finally establishing a channel of communication with the public and I wish you all the best. Narada Sage. > fix the box model They’ve already fixed the box model – if Internet Explorer is in "standards mode". update IE on the Mac at least once more. Look at Avant Browser as a model for features.
– Block and click-to-play Flash (see Firefox extension)
– easily-switchable background colours
– proper text resizing
– remember tabs and their locations on browser re-start
I’ve run Windows since Win95, and IE since 3.0, without any anti-virus software. I’m diligent about using WindowsUpdate, not opening untrusted attachments, not accepting spyware, keeping security settings high, and I never had a problem, never got a virus. In fact I was a big supporter of IE, writing ASP web apps which took advantage of its XML/DOM support, behaviors, DHTML, and even ActiveX in trusted environments. Finally about two months ago I got hit. I forget what it was, but it hijacked my home page, my default search engine, added bookmarks, and redirected URLs. I’m not exactly sure how I got it, I wasn’t really concerned, but I was able to identify down to about a 1/2 hour when I was infected, and looked at the sites I had visited in that time. I still don’t know exactly what happened, I didn’t notice anything unusual, but I didn’t really care, IE failed me. I downloaded Firefox the next day and really have found it to be terrific. Firefox became my default browser two days later. #1 issue: security Speculation is rife at the moment about a possible Internet Explorer 7 update ahead of the planned Longhorn operating system update.
There seems to be some substance to the speculation, especially the recent request on the recently launched IE… Jim, as I said, the investment bank I work for actively prevents anyone from using a non-IE browser on our systems. Because we can guarantee that people use IE, we can develop a consistent model for look and feel of our web-based systems. This applies to our intranet systems, with approx 70,000 users. If you enter the site with another browser you are presented with a message and blocked from viewing content. For developers like me it’s great! No more minority browsers to deal with. As for people wanting to use "themes" or "gestures" for their browsers, you don’t get much of that round here because this is the -real world- and people don’t have the time or the inclination to geekily toy with their browsers. The users of our systems are also not concerned whether we have strict w3c xhtml compliance – they are interested in the CONTENT of our pages and tend not to care (or notice) if we happen to have put in an errant uppercase tag or whatever! Our intranet site offers dynamic data via ActiveX controls and none of this would work in Mozilla or Opera or whatever other browser, so that’s that. As for the legal implications of this clampdown, for god’s sake – do you really think we should be sued for not allowing our traders the ability to "skin" their browser or use "tabs." Come off it! Geeks can toy with their alternative browsers but Internet Explorer serves a crucial purpose in the real world. I know this post will receive the tired old comments about security holes and rendering bugs. As I have said before, IE is a victim of attacks because it is what almost everyone is using. It’s simply not as "worthwhile" attacking any other browser at present. As for rendering bugs, there’s enough in Mozilla to devote a whole website to (bugzilla).
I have found some myself simply in making my homepage – see As for Opera, there’s no way I will tolerate banner ads within the browser itself, and there’s no way I will pay for a browser when IE does everything I want, perfectly integrates with my OS and with Google via the Google Toolbar. I would ask for support of two items: 1. SVG with support for events. (Work with W3C to add an event for drag and drop handling.) Currently, short of some overly bloated Java code or obscured flash, there is no good way of implementing a next generation (in my mind) web page that is cross platform and has a level of interactivity and integration similar to current desktop applications. SVG with events (and D&D) will give us the baseline tools for a new generation of interactivity and integration of websites and applications. 2. Support for federated identity. This will provide a seamless and simplified user experience where they are not interrupted with continuous login prompts between integrated 3rd party sites/apps/services. Passport was a good first attempt. This is better: I think by the time IE7 is out, there will be newer standards it will have to support – like CSS3, for example. No question that it will also have to have support for XHTML1.1, CSS2.1 according to W3C specifications. At the same time there are often ambiguities with W3C definitions, especially when it comes to front-end/on-screen rendering (which is what I care most about for my job). By that time many current web developers (like myself) will grow to expect their code (XHTML, CSS) to render how Mozilla renders it now and, most likely, in the future. This may not be because they do everything right – but because they do it best now, and by being a pioneer, of sorts, they set de facto rendering standards and developer expectations. So when IE7 is out I better not see a dotted border be dashed, or how weird little Opera does it now! 🙂
Jim – how many website authors implement this stuff properly?
That’s right _none_ (rounded off). But when it comes down to it, you have to be pragmatic. I personally (and I think most rational people) would choose b). Your time is valuable – your integrity of browsing experience… well… ain’t! Chris, > Because we can guarantee that people use IE, we can develop a consistent model for look and feel of our web-based systems. Look and feel is not something you can enforce. Honestly. If you were using PDF or some other format, then I wouldn’t disagree with you. But. > people don’t have the time or the inclination to geekily toy with their browsers. I wouldn’t call making something readable so they can do their jobs "geekily toying with their browsers". If anything, I’d say that a futile effort to get look and feel to be consistent is "geekily toying with browsers". > Our intranet site offers dynamic data via ActiveX controls and none of this would work in Mozilla or Opera or whatever other browser, so that’s that. I never suggested that your site would work in other browsers. Where did you get that idea? > do you really think we should be sued for not allowing our traders the ability to "skin" their browser or use "tabs." Once more, I didn’t even come close to saying that. I’m talking about varying look and feel. Things like IBM’s aural browser (based upon Internet Explorer, FYI), bumping up the font size, all manner of things that allow people to access websites where the "intended" design would otherwise prevent them from doing so. > As I have said before, IE is a victim of attacks because it is what almost everyone is using. And as I have said before, market share is not the reason for insecurity; Apache is the leading web server by far, and yet IIS leads the way in security holes. Nobody has managed to address this point yet. > As for rendering bugs, there’s enough in Mozilla to devote a whole website to (bugzilla). That’s ridiculous.
Most applications that are worked on by a large team have a bug tracker, and Bugzilla is not solely devoted to rendering bugs; they are almost certainly in a minority. > I have found some myself simply in making my homepage – see Yes, did you read what I wrote about that? I’ll paste it in here: You haven’t answered my question either: what is innovative about Internet Explorer 6.0? Darren, > how many website authors implement this stuff properly? That’s right _none_ (rounded off). What is your basis for saying that? Last-Modified and ETag are computed automatically by Apache, and it also pays attention to If-Modified-Since and If-None-Match when they are supplied by clients, providing a 304 response when appropriate. That leaves fine-tuning, sure, but the basic mechanism is in use by the *majority* of websites. >. But b) wouldn’t be ten times faster, and it’s more than likely that it will get it wrong *often*. The current mechanisms do *exactly* what you want, and are not inherently unreliable. Why is there a need to introduce something new and unreliable that works differently to all the other browsers? Chris, "But b) wouldn’t be ten times faster, and it’s more than likely that it will get it wrong *often*. The current mechanisms do *exactly* what you want, and are not inherently unreliable. " What’s the single biggest bottleneck? That’s right – connection to the server. I should be able to render the entire page out of the cache faster than I can do a DNS lookup on the site, let alone make a connection to it. Asking the last modified date of everything on the page takes a significant amount of time. I would render the page, guessing, THEN ask for the last modified date – in parallel. It’s actually quite easy to guess at whether something has changed, and even when you are wrong, your bitmaps might quickly flash away or something, but most of the time you’ll have a page you can use AT LEAST 10x quicker apparently.
"Why is there a need to introduce something new and unreliable that works differently to all the other browsers?" The best program is the one that produces the best user experience – which is almost certainly not the MOST CORRECT user experience. For instance – if I was making a 3d visualisation program, I wouldn’t be having a beautiful fully rendered 3d model spinning around – because it’s far more important for the user to be able to quickly move and change it. I’d render the objects at far less precision, cut colours, ignore shading/shadows etc until I got the speed and feel I was looking for. The same thing here – what I want in a user experience with a browser is bang, a page, click a button, bang a new page. I’m quite happy to live with the fact that the google title image I’m looking at may be yesterday’s – because what I want to do is type a search in as quickly as possible. Same with practically any site – even if I go to a news site, I’m far more happy to occasionally go "bummer, this is an old page", and hit refresh than I am to wait looking at three empty tables while it tries to ask the server what the image size of this particular bitmap is. "The current mechanisms do *exactly* what you want" Obviously they don’t! – I want FASTER. MUCH, MUCH faster. "that works differently to all the other browsers?" I couldn’t care less if browsers work similarly or differently. I want a browser that suits me, and works the way I want it to. Why do I care if other browsers don’t? That’s like saying I shouldn’t use Photoshop because it works differently to Microsoft Paint? > Asking the last modified date of everything on the page takes a significant amount of time. *Please* learn a little about HTTP before asking for new features. Web browsers *don’t* need to ask for the last modified date. HTTP caching already deals with this. That is what I am trying to get across to you. There’s no need for a new, unreliable mechanism.
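To make the mechanism under discussion concrete (a sketch only – the URL, date and validator value here are invented, and real headers vary by server): a browser revalidates a cached copy with a single conditional request, and the server answers 304 with no body if nothing has changed:

```http
GET /images/title.png HTTP/1.1
Host: example.com
If-Modified-Since: Tue, 01 Jun 2004 09:00:00 GMT
If-None-Match: "abc123"

HTTP/1.1 304 Not Modified
ETag: "abc123"
Cache-Control: max-age=86400
```

A 304 carries no body, so revalidation costs at most one round trip; and while the `max-age` lifetime is unexpired the browser may serve the copy straight from cache with no network traffic at all – the "render immediately out of the cache" case, but under the author’s control rather than guesswork.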
> most of the time you’ll have a page you can use AT LEAST 10x quicker apparently. No. You are still operating under the assumption that a private cache has to validate its cached copy each time. This is incorrect. The *best* your new mechanism can do is be equally as fast as the current mechanism which is reliable and works in every browser. Of course, that’s the *best* it can do, it will often be very much slower, rendering and re-rendering the content when it guesses wrong. We web authors, instead of being able to tune our caching properly, would then have to put up with our websites being slower in Internet Explorer than they were previously and have to worry about tripping up on its guesswork. There would be nothing we could do to improve the situation for our visitors. Right now, things work fine, we can mark when things are valid and when they become stale, we have flexible tools for doing so, and it’s a proven technology that *works*. Definitely wanting the alpha transparency in PNG-24, that is my number 1 complaint with MSIE, the second one is the CSS Rendering bugs, I design my sites with straight XHTML and then style it using the Web Developer plugin for Firefox which lets me edit the CSS file on the fly and see how it changes the document right in front of my eyes. If I then test the same page in Opera and MSIE it never seems to work in MSIE, I have come to the point where I have a warning on my sites informing the user that they are not getting the full experience of the site due to the standards non-compliance present in their browser and provide a link to download Firefox. BTW see if you can get this blog valid, afaik the <link> tag is "empty" so there shouldn’t be a </link>, though in terms of XHTML it should be <link /> I am going to take a look to see if it’s a problem with the .Text blog software though.
Chris: "Jim, as I said, the investment bank I work for actively prevents anyone from using a non-IE browser on our systems.
Because we can guarantee that people use IE, we can develop a consistent model for look and feel of our web-based systems." Your whole assumption is wrong. You’re not guaranteeing everyone uses IE, you’re guaranteeing that people *NOT* using IE will stop using your bank and bring their hard-earned money to a bank that does support open standards and cares a bit about their Solaris/Mac/Firefox/Mozilla/Konqueror/Safari/whatever based browsing machine. In other words: you are proactively screwing up your company’s profit by negligence and stupidity. I am a hardcore Windows-user and use IE about as much as I do Firefox. The only reason I still use IE is because. Jim – I think we are talking about very different things. Let’s assume no local caching at all. When I type a url into the browser it does the following things:
1> parse the url, look for a method
2> prepare an HTTP GET header
3> look up the ip address in dns
4> make a connection to the IP address
5> send through the http header
6> download the content
7> do any content specific layout
8> render the content
Now, suppose you had internet explorer set to "Never" in the "Check for new versions". In theory, the steps would be
1> find url in cache
2> do any content specific layout
3> render the content
now, you SHOULD be able to do ALL OF THAT quicker than you can even get to 3> above – because any IP connection is quite expensive in the scheme of things. Trouble is, you’d never get any updates. The real trick is in the heuristics of finding an in-between solution. How do we get the highest LOCAL cache hit rate we possibly can, so we never have to go anywhere near making a DNS request, let alone a full HTTP connection! All I’m saying is we can get a lot more intelligent. For instance – even when title bitmaps change, they tend to remain the same SIZE.
A very smart caching browser could pre-layout the page based on the size of the previous bitmaps, HOPING the new ones will be the same size… in some cases it would get it wrong, but in most cases the apparent user experience would be astronomically quicker. There are certain things that you just can’t do – at some point you have to get the data over the link for instance… but the more trickery, fakery, whatever we put into the browser to give the user a snappier feel, the happier everyone is.
That’s a complete contradiction, Niels. If HTML truly is an open standard then Microsoft should feel free to extend it, as long as they don’t pollute other people’s namespaces (which they don’t, as far as I know). In the case of MS’s ActiveX, it serves our purposes, so we’ll use it. We know that all our users will be able to use it because our corporate intranet is locked down to Internet Explorer only. When we get any complaints from our 70,000 users about this lock-down (none so far!), perhaps it will be re-considered. But for the time being, our users here in the real world just aren’t that fussed about "gestures" and "tabbed browsing" ..ah well.. "Look and feel is not something you can enforce …." Jim, the look and feel of a site is precisely what HTML and CSS is designed to dictate.
"market share is not the reason for insecurity; Apache is the leading web server by far, and yet IIS leads the way in security holes. Nobody has managed to address this point yet" You are diverting the argument to the server market, which is a completely different issue. I would agree with you that some of the best servers are unix based. However, this thread, and this site is discussing the CLIENT market, and specifically, browsers. "I find it amusing that you moan about Mozilla’s shortcomings whilst simulataneously messing about with floats to try and make up for the fact that Internet Explorer doesn’t support display: table-cell." Since when did I say I wanted to use the "display: table-cell" property? I don’t. Floats work very well on my site, especially from an accessibility point of view. On a handheld device, like my Smartphone, the navigation is displayed first, and the content is displayed underneath. But being the bleeding heart accessibility guru you are, I’m sure you knew that already. "what is innovative about Internet Explorer 6.0" In answer to your question I have begun documenting what’s special about IE6 on my site at In my opinion the HTC behaviour model alone is reason enough to make me want to develop for IE rather than the minority browsers. HTC behaviours are in a league of their own. With this technology, IE allows javascript behaviour to be propogated through a whole site without having to put any event handlers inline with the HTML. I’m sure developers in this forum could understand the significance of this. HTC can even be used to call SOAP webservices. Plus, IE has a much enhanced JavaScript DOM allowing, for example, HTML elements to be iterated much more neatly than in Mozilla, Opera etc. Since 1997, IE has the awesome filter classes in CSS that allow powerful client-side effects such as gradated transparency and photoshop-style colour filters. 
IE6 can automatically resize images to fit them within a page, it allows display-time editing (using the contentEditable attribute), coloured scrollbars, media bar, transparent IFrames, vertical text layout, and css zoom property. If you need to know more about innovation in IE6 don’t hesitate to ask. Guys, we know you love your pet alternative browsers, but rather than spreading ignorance about IE6, how about helping Microsoft to develop IE7 into the browser that people will want to use? No wonder it takes MS a while to get it to final release when there’s such a euphoric condemnation and scrutiny from all the self-proclaimed web gurus swept up in the jihad that’s become the "web-standards" battle. None of you will be happy, however good IE7 becomes, because there is always something that some minority browser does, that IE doesn’t. And there are so many personal agendas, for which the w3c has become a festering ground. CSS3 will probably never be finished. It’s a case of too many cooks spoiling the broth. Have you noticed how Microsoft’s developers never bitch about the bugs and flaws in other browsers (which do exist, believe it or not!). It seems open-source developers spend most of their time trying to discredit the market leader in order that they might get a slice of the pie. It’s complete hypocrisy. Darren, > I think we are talking about very different things. Let’s assume no local caching at all. What? Why? Your initial suggestion was entitled "intelligent caching". We are talking about caching mechanisms. Without any local caching, your idea wouldn’t even get off the ground because it _is_ local caching. > In theory, the steps would be >1> find url in cache Hang on a sec, I point out that the current caching mechanism is better than your suggestion, so you say "well it’s not better if it’s turned off", and then proceed to explain how your idea is better, relying on the cache being turned on? If you would like to compare like with like (i.e.
normal caching behaviour against your proposed behaviour), then go ahead. You’ll find that the current mechanism works very well. But if you are going to create a completely artificial scenario where everything possible is done to slow down the current mechanism, of *course* your method will win – *any* method would. > How do we get the highest LOCAL cache hit rate we possibly can, so we never have to go anywhere near making a DNS request, let alone a full HTTP connection! How do we get the highest cache hit rate? Simple. We listen to what the website tells us. The web server and web developers are in the best possible position to know when something is going to be in date or out of date. No amount of guesswork or heuristics on the part of the browser will outperform them. > but the more trickery, fakery, whatever we put into the browser to give the user a snappier feel, the happier everyone is. Firstly, as I have said, it doesn’t give the user a snappier feel. In fact, you have agreed that in the cases that it gets it wrong, a rerendering will be necessary, which is *slow* in the eyes of the user. But more importantly, you are removing the opportunity for web authors to tune their caching headers to provide the fastest, most reliable experience for their visitors. That isn’t going to make everyone happier! > the look and feel of a site is precisely what HTML and CSS is designed to dictate. HTML wasn’t designed with look and feel in mind at all. CSS wasn’t designed to dictate anything, it was designed to _suggest_ presentation. A fundamental concept in CSS is how author suggestions only make up one third of the presentation layer (that’s where the ‘C’ in CSS comes from). > Yes, of course parts of the actual "look" will vary when the browser is resized Well that is what I was trying to say – the look and feel varies no matter what you try and do about it. I’m not sure why you quote the word "look", I’m not using some outlandish definition of the word. 
> This should, and does, extend to accessibility – the designer should be able to use the same standards to enable blind users to view the website in a consistent fashion too. Since when did I suggest otherwise? When you seemed to be saying that you were trying to enforce a particular look and feel. What is a reasonable presentation to one person is an unreasonable presentation to others. > IE was used as the basis for IBM’s aural reader, as you said, so what, exactly is your issue? That you cannot enforce look and feel, even if you restrict yourself to a single browser. > Since when did I say I wanted to use the "display: table-cell" property? I don’t. You arrange your website into columns. Floats were only designed to shift stuff to the side within normal flow, they weren’t designed for creating columns. They have numerous disadvantages when they are hacked into doing so. display: table-cell, on the other hand, was designed for just this purpose, and is tailor-made for your type of use pattern. Why on earth would you not want to use it? > Floats work very well on my site, especially from an accessibility point of view. On a handheld device, like my Smartphone, the navigation is displayed first, and the content is displayed underneath. You are talking as if display: table-cell doesn’t do this. It does. Even if it didn’t, that’s what handheld stylesheets are for. > In answer to your question I have begun documenting what’s special about IE6 on my site No, you said "innovations" to begin with. That means something that is brand new. There are plenty of things that make Internet Explorer "special"; that doesn’t mean that it is innovative. I loaded up your page, and the very first thing I saw was a rant about how Mozilla was doing the wrong thing. At the end of the entry, you admit that you were writing invalid CSS – i.e. 
Mozilla was correctly implementing CSS error handling, ignoring non-zero lengths without units, and Internet Explorer was incorrectly assuming they were pixels. Specification: Then I read further. As far as I can tell, you think innerHTML and HTCs are innovative. I think we are talking at cross-purposes here, I don’t count something as innovative unless it lets you do something new. That’s a high bar – I don’t consider things like tabbed browsing to be innovative; MDI interfaces have been around for decades. Xerox was innovative. > HTC behaviours are in a league of their own. With this technology, IE allows javascript behaviour to be propogated through a whole site without having to put any event handlers inline with the HTML. You mean like Mozilla’s XBL? > HTC can even be used to call SOAP webservices. Yep, Mozilla can do this too. > Plus, IE has a much enhanced JavaScript DOM allowing, for example, HTML elements to be iterated much more neatly than in Mozilla, Opera etc. You see that’s where we disagree. I don’t consider "neatness" to be something innovative. > Since 1997, IE has the awesome filter classes in CSS that allow powerful client-side effects such as gradated transparency and photoshop-style colour filters. You even state yourself this isn’t new stuff! > IE6 can automatically resize images to fit them within a page Mozilla does this. > it allows display-time editing Mozilla and Safari do this. > coloured scrollbars Konqueror does this. > media bar This has been removed from Internet Explorer with XP service pack 2. Must be really useful, eh? 🙂 > transparent IFrames I’m not sure what you mean by this. You certainly don’t mean "transparent iframes"; that’s simply a case of visibility: hidden. > vertical text layout, and css zoom property. These I am not sure about. 
It seems to me that you have mixed up "innovative" with "does something in a nicer way than the competition", or "has a feature the competition doesn’t", and furthermore are a little ignorant of the competition’s capabilities. I’m not saying Internet Explorer is deficient in every possible way. I’m saying that I can’t think of anything it has done recently that is *innovative*. > None of you will be happy, however good IE7 becomes, because there is always something that some minority browser does, that IE doesn’t. I don’t think I’ve criticised Internet Explorer for not having an obscure feature. I haven’t even asked for XHTML or CSS 3 support. What I have done is ask for support for years-old public specifications that Internet Explorer’s competitors all implement well. HTML 4.01. HTTP 1.1. CSS 1. CSS 2. PNG 1. > CSS3 will probably never be finished. There are a number of CSS 3 specifications that have already reached Recommendation status, as I have pointed out to you before. > Have you noticed how Microsoft’s developers never bitch about the bugs and flaws in other browsers (which do exist, believe it or not!). I’ve noticed that people in general tend to bitch about Internet Explorer far more. Did you ever consider that there’s a perfectly legitimate reason for that? What I would like to see is for MS to quit integrating stuff into IE and to make things "addons" like for instance the RSS reader. I don’t need that right now so I don’t want it in my browser. Make Active X and Java and Javascript easier to turn on/off. These things are easily done in Mozilla. For Java and Javascript, I can just check or uncheck a box on a bar (called a prefbar) above my browser window (the bar was an addon). If I want to add support for Active X, I can always go to a page to install it. If I want a calendar with my browser, I just go to the calendar addon page and get/install it. If I want an RSS reader, I can go to a page to add that. 
I haven’t heard anything about improving Outlook Express but there are many features that Mozilla has that OE can’t compare to. Right then Jim, let’s get anal about this, if that’s what you want. "display: table-cell" — According to w3schools.com "The element will be displayed as a table cell (like <td> and <th>)." Therefore I’d assume it ought not to wrap below other such cells on the same row. And, if it does, I’m sure you would agree that this is a completely counter-intuitive w3c spec. Regarding the filter classes (first seen in IE4, 1997), these have been expanded in IE6. But in any case, if you can lump all versions of IE together to support your argument, I ought to be able to use previous Internet Explorer innovations to support my argument. [regarding the innovation of HTC] "You mean like Mozilla’s XBL" Ahh yes — but which came first? Remember we’re talking about _innovation_ here. HTC was first submitted by Microsoft as a spec in 1998. Y’know the javascript innerHTML property? That was a Microsoft innovation that became so popular that Mozilla implemented it, despite the fact it’s not in the w3c spec. "There are a number of CSS 3 specifications that have already reached Recommendation status" So, are you suggesting we have a new release of IE for each new little chunk of CSS3 that drips out of the w3c? Ridiculous! IE is developed by a real-world software house, not by open-source hobbyists, it therefore goes through a well-controlled but less-frequent release cycle. If you think that one organisation (w3c) should control innovation on the web then you are being completely hypocritical when attacking Microsoft for its domination. "I loaded up your page, and the very first thing I saw was a rant about how Mozilla was doing the wrong thing. At the end of the entry, you admit that you were writing invalid CSS" Remember that "invalid CSS" is a dichotomy here, and, moreover, we’re talking about the javascript DOM, NOT vanilla CSS.
Despite being the most w3c-compliant browser, Opera will quite happily accept my old code, which (logically) assumes pixel measurements by default. Your debate is around the strictness of CSS parsing – and I’d be happy to indulge you there in another post. "You see that’s where we disagree. I don’t consider "neatness" to be something innovative." But you like strict adherence to anal parsing rules.. go figure. "I’m not sure what you mean by this. You certainly don’t mean "transparent iframes"; that’s simply a case of visibility: hidden" Again, perhaps you’re slightly missing the point. A transparent iframe is one where elements within the iframe window are displayed individually above elements _behind_ the iframe, avoiding a rectangular cut-out imposed by the iframe above the page. "I’ve noticed that people in general tend to bitch about Internet Explorer far more. Did you ever consider that there’s a perfectly legitimate reason for that?" Yes, as I’ve made clear, the minority browser luvvies need to discredit IE in order to claw themselves some market share. In fact today I saw a ridiculous post from a firefox evangelist. He claims that problems with being denied webpages by an overloaded web-server are slightly alleviated by using firefox as your browser. So now firefox can magically avoid server-outages? Perhaps using firefox also cures hair-loss and increases fertility? A small suggestion: Download files directly to a destination. Currently I’m verrry annoyed when I select to download a file to my D: drive because C: is almost full and IE downloads a file to c:…temporary internet files and afterward starts COPYING this file (why not move it?) It is important to treat filetypes straightforwardly. *NEVER* guess filetypes from filename extensions nor their content. *ALWAYS* obey Content-Type header, and no cheating after that. — Quote: "That’s a complete contradiction, Niels.
If HTML truly is an open standard then Microsoft should feel free to extend it, as long as they don’t pollute other people’s namespaces (which they don’t, as far as I know)." No that’s not a contradiction. HTML is an open standard, in that anyone can make suggestions to the appointed independent entity, tentatively named World Wide Web Consortium to indicate that they have something to say about WWW standards. The W3C will then judge whether the suggestion makes it into the standards, and anyone implementing those *OPEN* standards can then render the page correctly. Microsoft is one of the *cofounders* of the W3C. Ignoring them is akin to a dictator who publicly founds a justice system with independent judges and has 500 people shot without trial the next day. The w3c may have been co-founded by MS with good intent. However, it is now acting AGAINST Microsoft by ‘drip-feeding’ parts of the CSS3 spec through to the market. Microsoft cannot deploy a new version of IE to support each revision of a specification like this. Its users would not accept having to upgrade their browsers every time some trivial new CSS trick came out. And then we have w3c reinventing the wheel, developing features into CSS3 that Microsoft have made de-facto standards for years, like vertical text display and element alpha support. The w3c CSS3 spec is a hotch-potch collection of disparate "modules": one such module is called "CSS2.1," which is a fix for the mistakes made by the w3c in CSS2. Maybe in some ways it’s a good thing that MS IE doesn’t support all of CSS2. In any case, I don’t think it should be forced to support CSS3 either. I’d go so far as to suggest that some people in the w3c know that they now have a rare strangle-hold on Microsoft. They use the good reputation of CSS as a standard (as developed in cooperation with MS) abusing its coherence to foist a mixed bag of new technologies onto the market, some of which a 3-year-old IE6 already supports with MS’s own standards.
And then MS gets bashed for not supporting these "new" w3c ideas! I would question the independence of the w3c, in its "luvvie" open-source community. I don’t know what you think luvvie means, Chris, but I don’t think it means what you seem to think it means. If any of you guys are using Firefox you need to read about the following security hole: It’s the most critical and severe hole I have ever seen in a browser. Not only that but Mozilla have known about it for 5 years and been unable or unwilling to do anything about it. Even the latest Mozilla (0.9.3 at time of writing) is subject to this flaw. I’m certainly glad I’m using IE6 If you’re using Mozilla Firefox you should read the following disturbing security advisory This is the most significant security hole I have ever seen in a browser. Because the browser is written in XUL language and also accepts and runs XUL, a website can completely re-write the user interface of the browser! This hole has existed for 5 years. And what’s the recommended fix for this huge and intrinsic problem? "Do not follow links from untrusted sites" … good luck, you guys.. I’m glad I’m using IE6.. – IEv6x is the Courtney Love browser in a world of Kirsten Dunst browsers I wouldn’t say this.. Courtney Love still only has 7 holes :p this bug exists in practically any browser that can display fake browser-like XUL buttons and such. "do not follow untrusted links" is really the only advice you can give – think of all of the people who use IE who get POPUPS of fake internet explorer images? SP2 fixes the popup problem, but not a lot of people have downloaded the update, and even so… IE is still way behind with tons of other things. there is a VerifyURL extension that allows you to right-click and see the true URL of any website (even the ones with fake buttons or within frames). this does more than solve the problem, it gives you a new tool to check other seemingly fake sites.
taking away a great feature because it can be abused is no good. that’d be like assuming your entire userbase is dumb and won’t know that they’re using a fake browser window. "this bug exists in practically any browser that can display fake browser-like XUL buttons and such." Err yes, as you have just confirmed, this bug exists in Mozilla and not IE. IE does NOT support XUL (thank goodness). "there is a VerifyURL extension…" So, you have to install extensions to your browser to make sure you’re not gonna be spoofed? Haha! Perhaps you also have to install extensions to make sure your themes system doesn’t make your browser unreliable? (as it is at the moment see: ) "taking away a great feature because it can be abused is no good. that’d be like assuming your entire userbase is dumb and won’t know that they’re using a fake browser window." Do you really think it’s a great feature to allow a website to re-write your browser user interface? When the Mozilla Organisation keeps a serious bug like this confidential (it did for 5 years), I’d expect you to be ill-informed about such holes. To that effect, you could be right in saying the entire userbase of Firefox is "dumb"! [regarding SP2] "not a lot of people have downloaded the update" Changes to IE (including SP2, when it’s released to the public properly) will be auto-downloaded via Windows Update. The current SP2 fixes have been released to developers at the moment. Mozilla relies on people visiting its site to download the latest version of Firefox, although a Microsoft-style update feature is soon to be incorporated into the browser. What happens to all those people using versions 0.9.3 and before who simply haven’t got round to re-downloading and re-installing their browser? R DA SCUM OF DEE EARTH! OUR BRAINZ OWN Y0! GO MODERATE _this_ DOWN, IF YOU DARE! MUAHMUAHMUAH! …that was my crazy brother over there, I am so sorry please forgive me, he is out of his mind you must know. i beg your pardon it will not happen again.
sorry… IE7 needs to be available for all Windows OS – not just Longhorn.
https://blogs.msdn.microsoft.com/ie/2004/07/23/overwhelming-response/
Package Details: perl-data-visitor 0.30-1

Dependencies (6)
- perl-class-load>=0.06
- perl-moose>=0.89
- perl-namespace-clean>=0.19
- perl-task-weaken
- perl-tie-toobject>=0.01
- perl-test-requires (make)

Required by (8)
Sources (1)

Latest Comments

SIGTERM commented on 2013-07-28 14:40
@ryvasy, Why did you flag this package out-of-date? It's 0.30, just like this one: Am I overlooking something?

ryvasy commented on 2013-04-17 08:03
$srcdir variable is no longer available outside of the build() and package() functions
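ryvasy's comment refers to an Arch Linux packaging rule: in a PKGBUILD, variables such as $srcdir and $pkgdir are only guaranteed to be defined inside the packaging functions (build(), check(), package()), not at global scope. A hypothetical fragment sketching the correct usage (directory name and build commands are illustrative, not taken from this package's actual PKGBUILD):

```
build() {
  # $srcdir is valid here, inside a packaging function
  cd "$srcdir/Data-Visitor-$pkgver"
  perl Makefile.PL INSTALLDIRS=vendor
  make
}

package() {
  # ...and here; $pkgdir is likewise function-scoped
  cd "$srcdir/Data-Visitor-$pkgver"
  make DESTDIR="$pkgdir" install
}
```

Referencing $srcdir at the top level of the file is what the comment says is no longer supported.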
https://aur.archlinux.org/packages/perl-data-visitor/
One of the most significant new features introduced in Synergy/DE 11.1.1d is a unit testing framework for traditional Synergy. The framework is based on the MSTest framework that we have supported in Synergy .NET for some time and is delivered as a component of our Synergy DBL Integration for Visual Studio product.

Introduction

The idea is to exercise each routine in as many different scenarios as you can think of, passing various combinations of parameters, both correct and incorrect, and perhaps by altering the execution environment. The goal is to prove that the routine responds correctly in all cases when used correctly, and also to prove that it fails appropriately in all cases when it should fail. Unit tests are generally small and fast to run individually, but numerous. It is not uncommon to have hundreds or even thousands of unit tests, and, ideally, those tests execute quickly enough to make it feasible to run them frequently. In some environments, unit tests are configured to run each time the developer builds a new version. In other environments, tests might run as part of a “continuous integration / continuous deployment” (CI/CD) pipeline that runs each time developers attempt to check code into a central repository, and the failure of a unit test might cause the check-in of the code to fail.

Adding Unit Tests to a Solution

To make adding unit tests easy, we have provided a new Visual Studio project template named “Unit Test (Traditional).” It adds a traditional Synergy ELB project to your solution, and that project contains some sample unit test code.

Adding Test Classes and Test Methods

Unit test projects contain unit test classes, and a unit test class is simply a class that is decorated with a {TestClass} attribute. Here is an example:

A unit test project can include any number of unit test classes. Generally, a test class groups together a related set of tests. In the case of the code above, notice the name of the class is CustomerApiTests.
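The screenshot that accompanied this example in the original post is not reproduced here. A minimal sketch of such a test class in traditional Synergy might look like the following (the namespace is invented for illustration; only the {TestClass} attribute and the CustomerApiTests class name come from the text, and exact syntax should be checked against the Synergy documentation):

```
import Synergex.TestFramework

namespace MyApp.Tests

    ;; The {TestClass} attribute marks this class as a unit test class
    {TestClass}
    public class CustomerApiTests

        ;; Test methods will be added here

    endclass

endnamespace
```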
Presumably, tests that exercise a Customer API are to be added later. Each test class contains one or more test methods, which are, once again, decorated with an attribute; in this case, the {TestMethod} attribute. Here is an example of a test class containing several test methods:

The code above declares three test methods, each of which tests a particular piece of functionality of the API. But this is just a simple example. In an actual test class, there may be several test methods to test each of the API functions in different ways. The number of test methods per function is typically determined by how complicated the function is, how many parameters it has, how many different things it can do based on those parameters, etc.

Coding Test Methods

Each test method typically contains a small amount of code, sufficient to test a particular scenario. The operating environment must be the same each time a particular test runs; we’ll talk more about that later. Here is an example of a test method that contains some simple code.

When writing unit tests, many developers follow what is known as the “AAA Pattern,” which stands for “Arrange, Act and Assert.” First, you arrange (or prepare) whatever is needed to execute the test, then you “Act” by executing the test, and finally, you “Assert” the result of the test. The name of this final phase is interesting because, in MSTest, Assert is the name of the actual class used to notify the framework of the results of a test. In the example above, the arrange phase involves creating a new instance of the CustomerAPI to use for the test. The Act phase then uses that object to make a call to the API function to be tested. And in the Assert phase, the code checks that the expected customer record was returned.

The Assert Class

The Assert class has many different static methods used to signal success or failure based on various scenarios.
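The example images are likewise missing above. Following the AAA pattern just described, a single test method might be sketched like this (CustomerAPI, GetCustomer, and the asserted values are hypothetical stand-ins for whatever code is under test, and declaration syntax may vary by Synergy version):

```
{TestMethod}
public method GetCustomer_ReturnsExpectedRecord, void
proc
    ;; Arrange: create an instance of the (hypothetical) API under test
    data api, @CustomerAPI, new CustomerAPI()

    ;; Act: call the function being tested
    data customer, @Customer, api.GetCustomer(1001)

    ;; Assert: verify the expected customer record was returned
    Assert.IsNotNull(customer)
    Assert.AreEqual(1001, customer.CustomerNumber)
endmethod
```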
The method you choose to use depends on the nature of the tests that you need to execute to determine the success or failure of the test. Some examples of Assert methods are:

- Assert.AreEqual(expectedObj, actualObj)
- Assert.AreNotEqual(notExpectedObj, actualObj)
- Assert.AreSame(expectedObj, actualObj)
- Assert.AreNotSame(notExpectedObj, actualObj)
- Assert.Fail()
- Assert.Inconclusive()
- Assert.IsTrue(condition)
- Assert.IsFalse(condition)
- Assert.IsNull(object)
- Assert.IsNotNull(object)

Most of these methods also have several overloads with different combinations of parameters. For example, most also have a variant that allows an error message to be recorded in the case of a failing test, and more. In traditional Synergy, the Assert class is located in the Synergex.TestFramework namespace, along with the various attributes mentioned earlier. By the way, I should mention that if any test method fails with an error, or throws an exception, then the test framework catches those errors and exceptions and reports them as test failures in the Test Explorer window.

Building Tests

When you build a solution that includes a unit test project, that project builds just like any other traditional Synergy ELB. It is likely that the code being tested by your unit tests exists in one or more other projects in your solution, so to give your unit tests access to that code, you simply add references to the other projects. Also, if you are using a Synergy repository project, and your unit tests require repository data definitions, then also add a reference to the repository project in the usual way. And like other Synergy projects in Visual Studio, if your development environment and code rely on the values of environment variables that are defined using the “Common Properties” method, you may need to opt into those common properties via the project properties dialog in the usual way.

The Test Explorer Window

Visual Studio includes a piece of UI called the Test Explorer.
If you don’t see it already displayed, you can display it by using the menu commands Test > Test Explorer. When the Test Explorer window is first displayed, it often starts out empty, but notice the instructions that are displayed in the window, which say, “Build your solution to discover available tests.” Once you build a project that contains unit test classes and methods, the Test Explorer window will discover and display your tests: By default, your tests are displayed in a hierarchy based on your project name, namespace, and class, and then the individual methods are displayed below each class. Notice the two green buttons to the left side of the Test Explorer toolbar. These are the Run All Tests and Run Test buttons. In my sample scenario, only one of the tests contains any code; the other two test methods are configured to throw an exception that says “Not implemented.” Here’s what happens when the Run All button is clicked: As you can see, Test Explorer is presenting a summary of what happened when it attempted to run each of the three methods. One test passed, and two failed. Notice the color-coded icons: green is good, red is bad, and notice that all of the parent folders are also red. For a parent folder to show a green icon, all tests below must pass. Also, notice how the Group Summary pane is displaying a summary of the pass/fail information. This is because we have a folder node selected in the tree. If we select an individual test, then the pane displays the details of the result of that specific test: In addition to executing all tests or a single test, you can also select a number of tests, and use the right-click menu to execute just that set of tests: In the above screenshot, you can see that the third test was not executed, which is indicated by the light color of its icon’s background.

Debugging Tests and Tested Code

In addition to merely running tests, it is also possible to use the Test Explorer to launch debugging sessions for specific tests.
To do so, right-click on the test and select Debug. A debugger session will start, and the debugger will break in the test method. You can debug the test method, and as long as you have the source code available, you can also step through into the underlying code being tested.

Organizing Tests

As the number of tests in your environment grows, it is likely that the sorting and filtering methods provided by the Test Explorer may not be enough to allow you to work efficiently. For this reason, it is also possible for you to apply custom categorizations to your tests in code. This is done by decorating your test methods with an additional attribute named {TestCategory}. Here is an example:

And having done so, there are options in the Test Explorer toolbar to modify the display hierarchy to include “Traits,” which are the custom attributes that you have applied to your test methods. Here is an example of that:

Environment Initialization and Cleanup

As you start to really get into writing unit tests, you will quickly realize that a big part of the road to success lies with your ability to always run your tests in a consistent and known state. For example, if your tests are reading and writing data files, those data files need to be in a known good state. To help with that, the framework provides some special functionality that can be used to establish and reset the environment, if necessary. In MSTest, there are six such mechanisms; in traditional Synergy, we currently support four of them. These are:

- ClassInitialize – runs once, BEFORE any test in the class runs
- TestInitialize – runs BEFORE EVERY TEST in the class
- TestCleanup – runs AFTER EVERY TEST in the class
- ClassCleanup – runs once, AFTER all tests have completed

You can optionally add methods to your test class to provide each of the functions that you require.
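The example image is not reproduced above. A hedged sketch of the four supported hooks might look like this (the method names are arbitrary; it is the attributes that matter, and exact signatures should be verified against the Synergy documentation):

```
{ClassInitialize}
public method ClassSetup, void
proc
    ;; Runs once, before any test in this class executes
endmethod

{TestInitialize}
public method TestSetup, void
proc
    ;; Runs before every test in this class
endmethod

{TestCleanup}
public method TestTeardown, void
proc
    ;; Runs after every test in this class
endmethod

{ClassCleanup}
public method ClassTeardown, void
proc
    ;; Runs once, after all tests have completed
endmethod
```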
To do so, you simply add public void methods, decorated with an appropriate attribute, like this:

For example, if you are testing an API, and that API is completely stateless, then instead of creating an instance of the API in each method, then using it and discarding it, you could use a ClassInitialize method to create an instance of the API, code all of your test methods to use that same instance, then delete the instance in the ClassCleanup method. If your API is not stateless and you do require a new instance for each test, you could simplify the code in each test by defining the code to instantiate the API in a TestInitialize method and discard it in a TestCleanup method. The runtime overhead is the same, but you only have to code the API creation one time instead of potentially hundreds of times. In MSTest, there is a third level of Initialization and Cleanup that happens at the “Assembly” (ELB in this case) level. This is implemented the exact same way, via the attributes {AssemblyInitialize} and {AssemblyCleanup}. We do plan to add that support; it just didn’t make it into the initial release.

Summing Up

This post is not intended to be a complete guide to the subject of unit testing in Traditional Synergy, and indeed there are other features that have not been discussed here. Instead, the intention was to provide enough information to pique your interest, and hopefully, that goal was achieved. I encourage you to refer to the Synergy documentation for additional information, and we are looking forward to receiving your feedback on this important new technology for traditional Synergy.
https://www.synergex.com/blog/tag/visual-studio/
Morpion solitaire/Unicon

This example goes beyond the task goal of playing a random game of Morpion solitaire. The code is divided into two sections (a) to support the basic task and (b) extensions. The program was designed as a first cut/test bed more for understanding and exploring the problem space rather than for high efficiency. The program is structured to allow its behaviour to be configured (see options and M_ vars). Some of the features and extensions include:

- Playing single games or running multigame simulations to find and capture the best games
- Producing re-loadable game records and the ability to replay a saved game from a given position forward
- The ability to reproduce purely random games from the random number seed
- Limiting the initial random moves to 1 of 4 cases (inside corners (1 type) x 8, outside corners (2 types) x 8, outside valley (1 type x 4)) not 1 of 28. In the case of this program these are all in the North East quadrant.
- Plays both 5T (by default) and 5D variants
- Pluggable modules for functions including applying player strategy, position evaluation (fitness/heuristics) to facilitate quick experimentation, changing game read/write functions, etc., etc.

Using random play in many summary runs, the highest (5T) score achieved was 92. While it was fairly easy to push random games into the low 80's running simulations of as little as a few hundred games, going beyond the highest score without adding some smarts will require some very long runs and is unlikely to result in any significant progress. Observing multigame runs shows that play advances fairly quickly before progress grinds to a virtual halt. Many runs peak in the first few hundred or thousand games. The ability to replay recorded games provides a way to study the behaviour of the problem. Selecting successful random games, truncating them and running multigame simulations using a successful base shows that the seeds of success are sown early.
It was possible to better the results of good random game scores (mid 80s), pushing the new scores up into the mid 90s. Unfortunately, when applied to Chris Rosin's 177 move record game, this technique did not produce any new records :(. Surprisingly, however, it was possible to produce games in the high 160's using random play with a truncated (approx. 60%) version of this game! The observation that a game turns bad or good fairly early suggests that finding an intelligent forward-looking position fitness evaluator is going to be a challenge, to say the least. It's also possible to capture bad games. This might provide some kind of comparison against good games when considering fitness evaluators. Or not.

Internally, the grid is automatically expanded as needed whenever a cell is played on an outer edge. When this happens only the affected side is expanded; as a side effect the grid numbering will seem unusual. The game log shows all row/col coordinates relative to the 1,1 origin located at the intersection of the lines containing the top and left most edges of the cross. A fixed grid might be more efficient, and with the ability to detect an off-page condition, then save and replay the game later, it could be easier to deal with. The issue of grid origin comes up all the time when working with the code and debugging output.

The program produces a crude ASCII art grid; however, and more importantly, it produces a game log that can be replayed in the free Pentasol player. This can be used to capture graphical grids as well as providing independent validation of the games. Most of the Record Games have downloadable games in this notation, for example Chris Rosin's 178 move grid. For more see: Morpion -h|-help

[edit] Basic Solution

The code needed to solve the task requirements is found in this section; just delete the $define EXTENDED line below. The basic code will play a single random game, show an ASCII art grid, and produce a log in Pentasol compatible notation.
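The grid the basic code starts from is the standard 36-point Greek cross built by SetupM5Grid from six anchor pairs, each drawing one 4-cell horizontal and one 4-cell vertical segment. A small Python sketch of the same construction (the set-based layout is mine, not the program's):

```python
# The six (a, b) anchors below are the same pairs SetupM5Grid iterates over;
# for each, a 4-cell horizontal run starts at (row a, col b) and a 4-cell
# vertical run at (row b, col a).  Overlapping corner cells collapse in the set.
def cross_cells():
    anchors = [(1, 4), (4, 1), (4, 7), (7, 1), (7, 7), (10, 4)]
    cells = set()
    for a, b in anchors:
        for i in range(4):
            cells.add((a, b + i))   # horizontal segment in row a
            cells.add((b + i, a))   # vertical segment in column a
    return cells

print(len(cross_cells()))  # 36 -- the standard Morpion starting cross
```

The twelve 4-cell runs contain 48 cells; the twelve corner cells each appear in two runs, which is how the count comes out to 36.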
On any given run you're most likely to see a game with a score in the 20's or 60's.

[edit] Main and Core Game Play Procedures

link printf,strings,options

$define MORPVER "1.7g"        # version
$define EXTENDED 1            # delete line for basic

procedure main(A)             # Morphion
$ifdef EXTENDED
   MorpionConf(A)             # conf extended version
   if \M_ReplayFile then ReplayMorpion()
   else if \M_Limit === 1 then ShowGame(SingleMorpion())
   else MultiMorphion(\M_Limit)
$else
   printf("--- Morpion Solitaire 5 (v%s) (single random game)---\n\n",MORPVER)
   M_Strategy := RandomPlayer
   M_Mvalid := ValidMove5T    # can be changed to 5D
   M_WriteGame := WriteMoveLogPS
   M_Output := &output
   ShowGame(SingleMorpion())
$endif
end

$define XEMPTY "."            # symbols used in Grid
$define XINIT "*"
$define XUSED "+"

$define DHOR "-"              # Directions for moves
$define DVER "|"
$define DRD "\\"
$define DLD "/"
$define DALL "-|/\\"

global M_Strategy,M_Mvalid    # Pluggable procedures
global M_Output               # output files
global M_WriteGame            # for logger

record morpiongame(grid,score,       # the grid & score
                   log,history,      # move log and replayable history log
                   roff,coff,center, # origin expansion offsets and center
                   pool,             # pool of avail moves
                   move,             # selected move
                   count,            # game number (multi-game)
                   rseed)            # &random at start of random play
record morpionmove(direction,move,line,roff,coff)  # move & line data
record morpioncell(symbol,direction,row,col)       # a grid cell
record MorpionGameRecord(ref,row,col,darow,dacol)  # record of game

procedure SingleMorpion(MG,N)        #: Play a game silently
   /MG := SetupM5Grid()              # start game with new grid?
   while MorphionMove(M_Strategy(MG)) do   # keep moving
      if MG.score >= \N then break         # unless truncated
   return MG
end

procedure MorphionMove(MG)           #: make a move
   (M := MG.move).roff := MG.roff    # copy offsets
   M.coff := MG.coff
   put(MG.history,M)                 # save history
   MG.score +:= 1                    # and score
   \M_LogDetails(MG,M)               # for analysis
   every x := !M.line do {           # draw the line
      g := MG.grid[x[1],x[2]]
      g.direction ||:= M.direction
      /g.symbol := XUSED             # remove / for all XUSED
      }
   return
end

procedure ScanGrid(MG)               #: Scan all grid lines
   G := ExpandGrid(MG).grid          # expand the grid if needed
   MG.pool := []                     # candidate moves
   every c := 1 & r := 1 to *G do                # horizontal
      ScanGridLine(G,r,c,0,+1,DHOR,MG.pool)
   every r := 1 & c := 1 to *G[1] do             # vertical
      ScanGridLine(G,r,c,+1,0,DVER,MG.pool)
   every ( c := 1 & r := 1 to *G-4 ) | ( c := 2 to *G[r := 1] ) do    # down & right
      ScanGridLine(G,r,c,+1,+1,DRD,MG.pool)
   every ( r := 2 to *G-4 & c := *G[r] ) | ( c := 5 to *G[r := 1] ) do  # down & left
      ScanGridLine(G,r,c,+1,-1,DLD,MG.pool)
   if MG.score = 0 & M_Strategy ~=== Replayer then {   # move 1 special case
      every put(pool1 := [], MG.pool[2|19|20|26])      # cor. o(2), i(1), val o(1)
      MG.pool := pool1
      }
   if *MG.pool > 0 then return MG
end

procedure ScanGridLine(G,r0,c0,ri,ci,dir,pool)   #: scan 1 grid line (5T/D)
local L5,M,r,c,x
   L5 := []
   r := r0 - ri & c := c0 -ci                 # one step back
   while put(L5,G[r +:= ri,c +:= ci]) do {
      while *L5 > 5 do pop(L5)                # too long ?
      if *L5 < 5 then next                    # too short ?
      if M_Mvalid(L5,dir) then {              # just right, but valid?
         put(pool,M := morpionmove(dir,,[]))  # add to pool of valid moves
         every x := L5[i := 1 to *L5] do {
            put(M.line,[x.row,x.col])
            if /x.symbol then M.move := i
            }
         }
      }
   return pool
end

procedure ValidMove5T(L,dir)         #: Succeed if L has valid 5T move
local i,j
   if *L ~= 5 then fail                                # wrong count
   every (i := 0) +:= (/(!L).symbol,1)
   if i ~= 1 then fail                                 # more than 1 avail space
   every (j := 0) +:= ( find(dir,(!L).direction), 1)
   if j > 1 then fail                                  # no overlap, =1 implies at an end
   return                                              # that's it!
end

procedure ValidMove5D(L,dir)         #: Succeed if L has valid 5D move  #@@
local i,j
   if *L ~= 5 then fail                                # wrong count
   every (i := 0) +:= (/(!L).symbol,1)
   if i ~= 1 then fail                                 # more than 1 avail space
   every (j := 0) +:= ( find(dir,(!L).direction), 1)
   if j > 0 then fail                                  # no overlap, =1 implies at an end
   return                                              # that's it!
end

procedure SetupM5Grid()              #: construct 5T/D grid & cross
local G,r,c,s
   every !(G := list(10)) := list(10)                            # Grid
   every G[r := 1 to 10, c := 1 to 10] := morpioncell(,"",r,c)   # Empties
   every s := ![[1,4],[4,1],[4,7],[7,1],[7,7],[10,4]] do {       # Cross
      every r := s[1] & c := s[2] + (0 to 3) do
         G[r,c] := morpioncell(XINIT,"",r,c)
      every r := s[2] + (0 to 3) & c := s[1] do
         G[r,c] := morpioncell(XINIT,"",r,c)
      }
   return morpiongame(G,0,[],[],0,0,1 + (*G-1)/2.)               # Create game
end

procedure ExpandGrid(MG)             #: expand any touching sides
local r,c,rn,cn,C
   rn := *(G := MG.grid)             # capture ...
   cn := *G[1]                       # ... entry dimensions
   if \(!G)[1].symbol then {         # left edge
      MG.coff +:= 1
      every (cn | (!!G).col) +:= 1
      every push(G[r := 1 to rn],morpioncell(,"",r,1))
      }
   if \(!G)[cn].symbol then {        # right edge
      cn +:= 1
      every put(G[r := 1 to rn],morpioncell(,"",r,cn))
      }
   if \(!G[1]).symbol then {         # top edge
      MG.roff +:= 1
      every (rn | (!!G).row) +:= 1
      push(G,C := list(cn))
      every C[c := 1 to cn] := morpioncell(,"",1,c)
      }
   if \(!G[rn]).symbol then {        # bottom edge
      rn +:= 1
      put(G,C := list(cn))
      every C[c := 1 to cn] := morpioncell(,"",rn,c)
      }
   return MG
end

procedure ShowGame(MG)               #: show games
   if M_Output === &output then
      every (\(PrintGrid|WriteMoveLog|M_PrintDetails))(MG)
   else                              # header first to output, game saved
      every (\(WriteMoveLog|PrintGrid|M_PrintDetails))(MG)
end

procedure PrintGrid(MG)              #: print the current Grid
   G := MG.grid
   every (ruler := " ") ||:= (1 to *G[1]) % 10
   fprintf(M_Output,"\nMorphion Solitare Grid (move=%i):\n%s\n",MG.score,ruler)
   every r := 1 to *G do {
      fprintf(M_Output,"%s ",right(r%100,2))
      every c := 1 to *(G[r]) do
         fprintf(M_Output,"%s",\G[r,c].symbol | XEMPTY)
      fprintf(M_Output,"\n")
      }
   fprintf(M_Output,"%s\n",ruler)
   return MG
end

procedure RandomPlayer(MG)           #: Simulate random player
   if &random := \M_Rseed then M_Rseed := &null   # set seed if given only once
   /MG.rseed := &random                           # seed for this game
   if MG.move := ?ScanGrid(MG).pool then return MG
end

[edit] Game Logging

procedure WriteMoveLog(MG)           #: write move log wrapper
   if \M_GameSave then {
      savegame := sprintf("Games/%s/%i-L%i",M_GameType,*MG.history,M_Limit)
      if M_Limit ~= 1 then savegame ||:= sprintf("(%i)",MG.count)
      if \M_ReplayFile then {
         fn := map(M_ReplayFile,"/\\","__")
         fn ? (="Games/", savegame ||:= "-RF" || tab(find(".txt")))
         savegame ||:= "-RN" || (0 < \M_ReplayAfter)
         }
      savegame ||:= sprintf("_%s-%s-F%s.txt",deletec(&date,'/'),deletec(&clock,':'),M_GameFmt)
      M_GameSave := savegame
      fprintf(M_Output,WriteMoveLogHeader(MG))     # write header, game is saved
      f := open(savegame,"w") | stop("Unable to open ",savegame," for writing")
      fprintf(f,M_WriteGame(MG),M_Config)          # call desired writer for output/save
      close(f)
      }
   else
      fprintf(M_Output,M_WriteGame(MG))
end

procedure WriteMoveLogHeader(MG)     #: write common header comments
   return sprintf("#\n# Game Record for Morphion %s game of %i moves\n_
      # Date: %s\n# Saved: %s\n# &random: %i\n_
      # ReplayFile: %s (%s moves)\n#\n",
      \M_GameType|"5T",MG.score,&date,\M_GameSave|"* none *",
      MG.rseed,\M_ReplayFile|"* none *",\M_ReplayAfter|"* all *")
end

procedure WriteMoveLogPS(MG)         #: write pentasol style move log
   l := WriteMoveLogHeader(MG)
   l ||:= sprintf("# Pentasol compatible format\n#\n#\n_
      # Morpion Solitaire game\n#\n_
      # XXXX\n# X X\n# X X\n_
      # XXXR XXXX\n# X X\n# X X\n_
      # XXXX XXXX\n# X X\n# X X\n# XXXX\n#\n_
      # R = reference point\n_
      # List of moves starts with reference point (col,row)\n_
      # Lines are\n# (col,row) <direction> <+/-centerdist>\n_
      # distance to center is left side or top edge\n#\n")
   l ||:= sprintf("(%i,%i)\n",4+MG.coff,4+MG.roff)
   every l ||:= FormatMoveLogPS(MG,!MG.history)
   return l || "#"
end

procedure FormatMoveLogPS(MG,m)      #: format a PS move
   d := if m.direction == "/" then m.move-3 else 3-m.move
   return sprintf("(%i,%i) %s %s%d\n",
      m.line[m.move,2]-m.coff+MG.coff,m.line[m.move,1]-m.roff+MG.roff,
      m.direction,(d < 0,"-")|(d = 0,"")|"+",abs(d))
end

[edit] Extended Framework

None of the code below is needed to satisfy the basic task. It supports the extended framework, which includes command line options, reading and replaying games from a saved file, running mass simulations to glean the best games, and some code for detailed analysis if I ever get around to experimenting with other than random strategy.

[edit] Interface, Parameters, Globals

$ifdef EXTENDED

# --- Interface, Parameters, Additional Globals ---

global M_Eval                        # Pluggable procedure
global M_SrchWid,M_SrchDep           # For strategy modules
global M_LogDetails,M_PrintDetails   # Misc.
global M_CommandLine,M_Config,M_GameType,M_GameSave
global M_Limit,M_StatUpd,M_BestL,M_WorstL    # Multi-game simulation options
global M_ReplayFile,M_ReplayAfter,M_Rseed    # For game replay
global M_ReadGame,M_GameFmt          # Game formats to use
global M_ChartW,M_ChartG             # histogram

# --- Beginning of Non-core (Extended) code ---

procedure MorpionConf(A)             # Configure the Solver
   M_CommandLine := copy(A)          # preserve
   os := "-Q! -L+ -limit+ -V: -variant: -seed+ "
   os ||:= "-R! -replay! -RF: -RN+ -save! -RW: -RR: "
   os ||:= "-histwidth+ -histgroup+ -HW+ -HG+ "
   os ||:= "-UN+ -SW+ -SD+ -A: -E+ -B+ -W+"
   os ||:= "-details! "
   opt := options(A,os,Usage)        # -<anything else> gets help
   M_Limit := ( 0 <= integer(\opt["limit"|"L"])) | 1
   M_Rseed := \opt["seed"]
   M_Mvalid := case opt["V"|"variant"] of {
      "5D"    : (M_GameType := "5D", ValidMove5D)
      default : (M_GameType := "5T", ValidMove5T)    # also 5T
      }
   M_ReadGame := case map(\opt["RR"]) | &null of {
      default : (M_GameFmt := "ps", ReadMoveLogPS)
      "0"     : (M_GameFmt := "0", ReadMoveLogOrig)  # deprecated
      }
   M_WriteGame := case map(\opt["RW"]) | &null of {
      default : (M_GameFmt := "ps", WriteMoveLogPS)
      "0"     : (M_GameFmt := "0", WriteMoveLogOrig) # deprecated
      }
   M_Strategy := case opt["A"] of {
      "A1"    : (def_un := 50, PlayerA1)
      default : (def_un := 500, RandomPlayer)
      }
   M_Eval := case opt["E"] of {
      "1"     : Score1              # test
      default : &null               # also "0"
      }
   M_ChartW := (40 <= \opt["histwidth"|"HW"]) | 80
   M_ChartG := \opt["histgroup"|"HG"] | 5
   M_LogDetails := if \opt["details"] then LogDetails else 1
   M_PrintDetails := if \opt["details"] then PrintDetails else 1
   M_StatUpd := (0 < \opt["UN"]) | def_un
   M_BestL := (0 < \opt["B"]) | 5
   M_WorstL := (0 < \opt["W"]) | 0
   M_SrchWid := (0 < \opt["SW"]) | 5
   M_SrchDep := (0 < \opt["SD"]) | 5
   if \opt["R"|"replay"] then {
      M_ReplayFile := \opt["RF"] | "Games/5T/177-5T-rosin.txt"
      M_ReplayAfter := (0 < \opt["RN"]) | &null
      }
   else M_ReplayFile := &null
   if \(M_GameSave := opt["save"]) then {
      fn := sprintf("Runs/%s-L%i-",M_GameType,M_Limit)
      fn ||:= sprintf("RF%s-",map(\M_ReplayFile,"/\\","__"))
      fn ||:= sprintf("RN%i-",\M_ReplayAfter)
      fn ||:= sprintf("%s-%s.txt",deletec(&date,'/'),deletec(&clock,':'))
      M_Output := open(fn,"w")
      }
   /M_Output := &output
   c := sprintf("# --- Morpion Solitaire 5 (v%s) ---\n#\n",MORPVER)
   c ||:= "# Command line options :"
   every c ||:= " " || !A
   c ||:= "\n# Summary of Morpion Configuration:\n"
   c ||:= sprintf("# Variant (5T/D) move validation = %i\n",M_Mvalid)
   c ||:= sprintf("# Games to play = %s\n",( 0 < M_Limit) | "* unlimited *")
   c ||:= "# Multi-game options:\n"
   c ||:= sprintf("# - Status Updates = %i\n",M_StatUpd)
   c ||:= sprintf("# - Keep best = %i\n",M_BestL)
   c ||:= sprintf("# - Keep worst = %i\n",M_WorstL)
   c ||:= sprintf("# - Histogram width = %i\n",M_ChartW)
   c ||:= sprintf("# - Histogram grouping = %i\n",M_ChartG)
   c ||:= sprintf("# Games will be saved = %s\n", (\M_GameSave, "Yes") | "* No *")
   c ||:= sprintf("# - Format for game file (write) = %i\n",\M_WriteGame)
   c ||:= "# Replaying\n"
   c ||:= sprintf("# - Format for game file (read) = %i\n",\M_ReadGame)
   c ||:= sprintf("# - Game file to be replayed = %s\n", \M_ReplayFile | "* None *")
   c ||:= sprintf("# - Moves to replay = %i\n", 0 ~= \M_ReplayAfter)
   c ||:= sprintf("# Player Strategy = %i\n",M_Strategy)
   c ||:= sprintf("# - Seed for &random = %i\n",\M_Rseed)
   c ||:= sprintf("# - Position Fitness Evaluator = %s\n", image(\M_Eval) | "* None *")
   c ||:= sprintf("# - Search Width (strategy dependant) = %i\n",M_SrchWid)
   c ||:= sprintf("# - Search Depth (strategy dependant) = %i\n",M_SrchDep)
   c ||:= sprintf("# Log Details for analysis = %s\n", if M_LogDetails === 1 then "No" else "Yes")
   c ||:= "#\n"
   M_Config := c
   if \opt["Q"] then stop(M_Config,"-Q Stops run after processing options")
   else fprintf(M_Output,M_Config)
   /M_Eval := Scorefail
end

procedure Usage()
   fprintf(&errout,"_
      Morphion [options] Plays the 5T/D variant of Morphion Solitaire\n_
      Arguments : ")
   every fprintf(&errout," %s",!M_CommandLine)
   fprintf(&errout,"_
      Morphion [options] Plays the 5T/D variant of Morphion Solitaire\n_
      Where options are:\n_
      \t-A\tchoose player strategy approach (default=random, future)\n_
      \t-E\tSpecifiy an position fitness evaluation function (default none, future)\n_
      \t-SW\tSearch width (strategy dependent, future)\n_
      \t-SD\tSearch depth (strategy dependent, future)\n_
      \t-V|-variant\t5D or 5T (default)\n_
      \t-R|-replay\treplay\n_
      \t-RF\tfile containing game record to be replayed\n_
      \t-RN\tnumber of moves to replay (0=all)\n_
      \t-RR\tgame recording format to read (ps=pentasol(default), 0=original)\n_
      \t-RW\tgame recording format to write ps=pentasol(default), 0=original)\n_
      \t-save\tsave the best (-B) and worst (-W) games\n_
      \t-seed\tstart the seed of the random number at this value\n_
      \t-L|-limit\tgames to play (if 0 or less then play until any of 'XxQq' is pressed\n_
      \t\tnote for larger n this benefits from larger BLKSIZE, STRSIZE environment variables\n_
      \t-HW|-histwidth\twidth (cols) of histogram of scores\n_
      \t-HG|-histgroup\tsize of groups (buckets) for histogram\n_
      \t-UN\tGive status update notifications every n simulations\n_
      \t-B\tKeep best n games of unique length (default 5)\n_
      \t-W\tKeep worst n games of unique length (default 3)\n_
      \t-details\tLog game details for analysis\n_
      \t-Q\t(debugging) terminates after options processing\n_
      \t-?|-h|-help\tthis help text")
   stop()
end

[edit] Multigame Simulation and Monitoring Support

procedure MultiMorphion(N,MG)        #: Simulate N games using MG
   etime := -&time
   scores := table(n := 0)
   every bestL|worstL := []
   if N <= 0 then N := "unlimited"
   repeat {
      if n >= numeric(N) then break else n +:= 1
      mg := SingleMorpion(deepcopy(\MG)|&null)   # play out game
      scores[mg.score] +:= 1                     # count score
      mg.count := n                              # game number
      if short := ( /short | short.score >= mg.score, mg) then {
         push(worstL,short)                      # keep worst
         if *worstL > M_WorstL then pull(worstL)
         }
      if ( /long | long.score <= mg.score) then {
         bestcnt := if (\long).score = mg.score then bestcnt + 1 else 1
         long := mg
         put(bestL,long)                         # keep best
         if *bestL > M_BestL then get(bestL)
         fprintf(M_Output,"Longest game %i after %i simulations &random=%i.\n",
                 long.score,n,long.rseed)
         fprintf(&errout,"\r%i of %s simulations, long=%i(%i) (%s %s)",
                 n,N,long.score,bestcnt,&date,&clock)
         }
      if (n % M_StatUpd) = 0 then                # say we're alive & working
         fprintf(&errout,"\r%i of %s simulations, long=%i(%i) (%s %s)",
                 n,N,long.score,bestcnt,&date,&clock)
      if kbhit() & getch() == !"QqXx" then       # exit if any q/x
         break fprintf(&errout,"\nExiting after %i simulations.\n",n)
      }
   etime +:= &time
   avg := 0.0
   short := key(scores) \ 1          # 1 key only
   every i := key(scores) do {       # summarize stats
      short >:= i
      avg +:= i * scores[i]
      }
   fprintf(M_Output,"\nResults from Sample of %i games of Morpion 5T/D:\n",n)
   fprintf(M_Output,"Shortest game was %i moves.\n",short)
   fprintf(M_Output,"Average game was %i moves.\n",avg /:= n)
   fprintf(M_Output,"Longest game was %i moves.\n",long.score)
   fprintf(M_Output,"Average time/game is %i ms.\n",etime/real(n))
   fprintf(M_Output,"&random is now %i.\n",&random)
   GraphScores(scores)               # graph results
   fprintf(M_Output,"\nLongest (%i) Game(s):\n",M_BestL)
   every ShowGame(!reverse(bestL))   # show longest game(s) and log
   fprintf(M_Output,"\nShortest (%i) Game(s):\n",M_WorstL)
   every ShowGame(!worstL)           # show shortest game(s) and log
   MemUsage()                        # diagnostic
end

procedure GraphScores(S)             #: graph results
   chart := []
   every s := key(S) do {            # by score
      n := s/M_ChartG+1              # chunks of ...
      until chart[n] do put(chart,0) # grow chart to need
      chart[n] +:= S[s]
      }
   s := (1 < max!chart/M_ChartW | 1) # scale
   fprintf(M_Output,"\nSummary of Results every '*' = %d games\n",s)
   every n := 1 to *chart do
      fprintf(M_Output,"%3d | (%6d) %s\n",(n-1)*M_ChartG,chart[n],repl("*",chart[n]/s))
end

procedure MemUsage()                 #: monitor usage
   fprintf(M_Output,"\nTotal run time = %i ms\n",&time)
   fprintf(M_Output,"&allocated (Total,static,string,block) : ")
   every fprintf(M_Output," %i",&allocated) ; fprintf(M_Output,"\n")
   fprintf(M_Output,"&collections (Total,static,string,block) : ")
   every fprintf(M_Output," %i",&collections) ; fprintf(M_Output,"\n")
   fprintf(M_Output,"&regions ( - , - ,string,block) : ")
   every fprintf(M_Output," %s","-"|&regions) ; fprintf(M_Output,"\n")
   fprintf(M_Output,"&storage ( - , - ,string,block) : ")
   every fprintf(M_Output," %s","-"|&storage) ; fprintf(M_Output,"\n\n")
   fprintf(M_Output,"Icon/Unicon version %s\n",&version)
end

[edit] Game Replayer

procedure ReplayMorpion()            #: Handle recorded games
   Replayer(M := ReadMoveLog(M_ReplayFile))   # read game and save data
   M_Strategy := Replayer
   if /M_ReplayAfter | (M_ReplayAfter > *M) then {
      fprintf(M_Output,"Single game replay\n")
      ShowGame(SingleMorpion())
      }
   else {                            # truncation replay
      MG := SingleMorpion(,M_ReplayAfter)    # play shortened game
      M_Strategy := RandomPlayer
      if M_Limit === 1 then ShowGame(SingleMorpion(MG))   # single game
      else MultiMorphion(M_Limit,MG)         # simulate many games from here
      }
   return
end

procedure Replayer(MG)               #: feed replayed moves from list/game
static ML,radj,cadj
   if type(MG[1]) == "MorpionGameRecord" then return ML := MG   # setup list
   if not ScanGrid(MG) then fail     # out of moves ?
   x := get(ML) | fail               # get next move
   if x.ref = 0 then x := get(ML) | fail     # skip move 0 if any
   xr := x.row + MG.roff             # adjust move for grid expansion
   xc := x.col + MG.coff
   dr := \x.darow + MG.roff          # adjust end for grid expansion
   dc := \x.dacol + MG.coff
   pool := []
   every m := !MG.pool do {          # find possible moves here
      mr := m.line[m.move,1]
      mc := m.line[m.move,2]
      if xr=mr & xc=mc then
         if \dr & \dc then {         # info to disambiguate?
            every p := (m.line)[1|5] do     # try endpoints
               if p[1] = dr & p[2] = dc then put(pool,m)  # save matching move
            }
         else put(pool,m)            # save matching move(s)
      }
   if *pool = 1 then                 # unique move?
      return ( MG.move := pool[1], MG)      # set unique move and return MG
   else {                            # we have a problem
      ShowGame(MG)
      fprintf(M_Output,"Problem encountered replaying game at move #%i, %i choices.\n",MG.score,*pool)
      every m := !pool do
         fprintf(M_Output," %s\n",FormatMoveLogPS(MG,m))
      &dump := 0
      stop()
      }
end

[edit] Game Reader

procedure ReadMoveLog(MG)            #: read move log wrapper
   fprintf(M_Output,"Reading recorded game from: %s\n",M_ReplayFile)
   f := open(M_ReplayFile ,"r") | stop("Unable to open file ",M_ReplayFile ," for read.")
   R := []
   while b := trim(read(f)) do {
      if b ? ="$end" then break      # allow pre-mature end of file
      b ?:= tab(find("#")|0)         # strip comments
      b := deletec(b,' \t')          # strip whitespace
      if *b > 0 then put(R,b)        # save move for reader
      }
   close(f)
   return M_ReadGame(R)              # call reader, return move list
end

procedure ReadMoveLogPS(R)           #: read pentasol style move log
static off
initial {                            # precalc center offsets
   off := table()
   off[-2] := -4
   off[-1] := -3
   off[0]  := 2
   off[1]  := -1
   off[2]  := 4
   }
   M := []
   n := 0                            # move number
   get(R) ? ( ="(", coff := integer(tab(many(&digits))) - 4, =",",
              roff := integer(tab(many(&digits))) - 4, =")", pos(0) ) |   # Reference Cell (c,r)
      stop(&output,"Syntax error in reference line.")
   while b := get(R) do {            # Line (c,r) d o
      b ? ( ="(", c := integer(tab(many(&digits))), =",",
            r := integer(tab(many(&digits))), =")",
            d := tab(any(DALL)),
            o := integer(=("-2"|"-1"|0|"+1"|"+2")) ) |
         stop(&output,"Syntax error in line above.")
      x := MorpionGameRecord()       # new move / line
      x.ref := n +:= 1
      x.darow := x.row := r - roff
      x.dacol := x.col := c - coff
      case d of {                    # adjust based on direction
         DHOR : x.dacol +:= off[o]
         DVER : x.darow +:= off[o]
         DRD  : ( x.darow +:= off[o], x.dacol +:= off[o])
         DLD  : ( x.darow -:= off[o], x.dacol +:= off[o])
         }
      put(M,x)
      }
   return M
end

[edit] Detailed Move Logging (for analysis)

procedure PrintDetails(MG)           #: print the log
   fprintf(M_Output,"Detailed Move Log\n")
   every fprintf(M_Output,"%i : %s\n",i := 1 to *MG.log,MG.log[i])
end

procedure LogFormatMove(M,roff,coff) #: format a log entry
   /M.roff := \roff | 0
   /M.coff := \coff | 0
   log := sprintf("\"%s\" [%i,%i] : ",M.direction,
                  M.line[M.move,1]-M.roff,M.line[M.move,2]-M.coff)
   every x := !M.line do log ||:= sprintf("[%i,%i] ",x[1]-M.roff,x[2]-M.coff)
   return log
end

procedure LogDetails(MG,M)           #: Record details
   log := LogFormatMove(M)
   log ||:= sprintf(" - of %i choices.",*MG.pool)   # append # choices
   log ||:= sprintf(" Metric=%i",M_Eval(MG))        # append score (opt)
   put(MG.log,log)                   # log the move
end

[edit] Strategy Support

# No useful examples at this time

procedure Scorefail(MG);end          #: dummy M_Eval always fails

$endif

printf.icn provides formatting
strings.icn provides deletec
options.icn provides options processing

Other RosettaCode pages used: Deepcopy

[edit] Sample Output

[edit] Help

Morphion [options] Plays the 5T/D variant of Morphion Solitaire
Arguments :  -help
Morphion [options] Plays the 5T/D variant of Morphion Solitaire
Where options are:
	-A	choose player strategy approach (default=random, future)
	-E	Specifiy an position fitness evaluation function (default none, future)
	-SW	Search width (strategy dependent, future)
	-SD	Search depth (strategy dependent, future)
	-V|-variant	5D or 5T (default)
	-R|-replay	replay
	-RF	file containing game record to be replayed
	-RN	number of moves to replay (0=all)
	-RR	game recording format to read (ps=pentasol(default), 0=original)
	-RW	game recording format to write ps=pentasol(default), 0=original)
	-save	save the best (-B) and worst (-W) games
	-seed	start the seed of the random number at this value
	-L|-limit	games to play (if 0 or less then play until any of 'XxQq' is pressed
		note for larger n this benefits from larger BLKSIZE, STRSIZE environment variables
	-HW|-histwidth	width (cols) of histogram of scores
	-HG|-histgroup	size of groups (buckets) for histogram
	-UN	Give status update notifications every n simulations
	-B	Keep best n games of unique length (default 5)
	-W	Keep worst n games of unique length (default 3)
	-details	Log game details for analysis
	-Q	(debugging) terminates after options processing
	-?|-h|-help	this help text

[edit] Random Game Record

The following image was obtained by replaying the game in Pentasol and saving the picture. The original was generated by a multigame simulation and represents the best game produced by this program from the starting position.
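The header comments written by WriteMoveLogPS describe each move line as "(col,row) <direction> <+/-centerdist>". This is a speculative Python sketch of a parser for that shape — the regex and names are mine, not part of the Unicon program or the Pentasol player:

```python
import re

# One Pentasol move line: "(col,row)", a direction character out of
# "-", "|", "/", "\", and a signed distance of the played cell from
# the center of its 5-cell line.
MOVE = re.compile(r"\((\d+),(\d+)\)\s+([-|/\\])\s+([+-]?\d)")

def parse_move(line):
    m = MOVE.fullmatch(line.strip())
    if not m:
        raise ValueError("not a Pentasol move line: " + line)
    col, row, direction, offset = m.groups()
    return int(col), int(row), direction, int(offset)

print(parse_move("(12,8) | -2"))  # (12, 8, '|', -2)
```

A reader along these lines (plus the first "(col,row)" reference line) is essentially what ReadMoveLogPS does with Icon string scanning.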
This the game record used to produce the above image: # # Game Record for Morphion 5T game of 92 moves # Date: 2012/02/18 # Saved: Games/5T/92-L1_20120218-163208-Fps.txt # &random: 1617565851 # ReplayFile: * none * (* all * moves) # # Pentasol compatible format # # # Morpion Solitaire game # # XXXX # X X # X X # XXXR XXXX # X X # X X # XXXX XXXX # X X # X X # XXXX # # R = reference point # List of moves starts with reference point (col,row) # Lines are # (col,row) <direction> <+/-centerdist> # distance to center is left side or top edge # (9,7) (12,8) | -2 (16,7) - -2 (9,8) | -2 (13,11) / 0 (9,9) | +2 (15,11) | -2 (13,13) - -2 (6,11) | -2 (16,10) - -2 (12,14) | -2 (8,4) - +2 (5,7) - +2 (13,6) \ 0 (10,12) \ 0 (14,8) \ 0 (14,12) / 0 (10,10) - -2 (11,9) / 0 (8,6) / 0 (10,8) \ +1 (8,11) \ 0 (11,7) / -1 (13,9) \ 0 (11,11) \ 0 (14,11) - -1 (14,9) | +1 (13,12) | -1 (12,9) - +1 (16,9) / -2 (17,6) / -2 (11,12) - +1 (16,6) / -2 (16,8) | 0 (11,10) | +1 (10,9) \ +1 (11,8) / 0 (13,8) - -2 (15,6) / -2 (13,5) | +2 (10,7) \ +2 (14,6) \ 0 (10,6) | +2 (16,11) \ -2 (11,6) - +2 (11,5) | +2 (8,8) / +2 (8,14) / +2 (8,9) | -1 (7,9) - +2 (4,6) \ +2 (5,12) / +2 (13,4) / -2 (10,5) - +1 (10,11) \ 0 (7,8) / +2 (7,6) | +2 (7,11) - +1 (10,14) | -2 (9,14) / +2 (7,3) \ +2 (17,8) - -2 (14,5) \ +1 (11,14) - -1 (8,12) \ 0 (5,8) - +2 (8,13) | -1 (14,4) | +2 (8,5) / -1 (7,14) / +2 (8,3) \ +2 (6,6) - +2 (5,5) \ +2 (8,2) | +2 (6,4) / 0 (9,3) \ +1 (7,5) / 0 (5,3) \ +2 (6,3) - +1 (6,5) - +1 (6,2) | +2 (7,4) \ +1 (7,2) | +2 (5,4) \ +2 (5,6) | 0 (9,2) / -2 (10,2) - -2 (4,5) \ +2 (10,3) | +1 (3,6) / +2 (4,4) - +2 (2,6) - +2 (3,5) / +1 # [edit] Multigame Run The multigame simulation of completely random play includes a histogram showing game scores clustering in the 20's and 60's. This result is similar to results obtained by Jean-Jacques Sibil (you will have to scroll down that page to find the reference) who has run nearly a billion such simulations achieving a high score of 102 as of 2010. 
The following is a summary file:

# --- Morpion Solitaire 5 (v1.7e) ---
#
# Command line options :
# Summary of Morpion Configuration:
# Variant (5T/D) move validation = procedure ValidMove5T
# Games to play = * unlimited *
# Multi-game options:
# - Status Updates = 500
# - Keep best = 5
# - Keep worst = 0
# - Histogram width = 80
# - Histogram grouping = 5
# Games will be saved = Yes
# - Format for game file (write) = procedure WriteMoveLogPS
# Replaying
# - Format for game file (read) = procedure ReadMoveLogPS
# - Game file to be replayed = * None *
# Player Strategy = procedure RandomPlayer
# - Position Fitness Evaluator = * None *
# - Search Width (strategy dependant) = 5
# - Search Depth (strategy dependant) = 5
# Log Details for analysis = No
#

Longest game 77 after 1 simulations &random=20122297.
Longest game 77 after 85 simulations &random=274082001.
Longest game 78 after 118 simulations &random=559240181.
Longest game 78 after 123 simulations &random=1682993637.
Longest game 81 after 292 simulations &random=826134037.
Longest game 84 after 1181 simulations &random=1936506737.
Longest game 86 after 4584 simulations &random=1266457499.
Longest game 86 after 44424 simulations &random=1725594333.
Longest game 86 after 47918 simulations &random=1686351259.
Longest game 86 after 50600 simulations &random=665807725.
Longest game 87 after 60841 simulations &random=152917603.
Longest game 87 after 74778 simulations &random=1037682795.
Longest game 88 after 173368 simulations &random=72059739.
Longest game 88 after 241134 simulations &random=2095899781.

Results from Sample of 242921 games of Morpion 5T/D:
Shortest game was 20 moves.
Average game was 53.65064774144681 moves.
Longest game was 88 moves.
Average time/game is 133.1678282239905 ms.
&random is now 452165683.
Summary of Results every '*' = 940 games
  0 | (     0)
  5 | (     0)
 10 | (     0)
 15 | (     0)
 20 | ( 38637) *****************************************
 25 | ( 13790) **************
 30 | (  6657) *******
 35 | (  2604) **
 40 | (  1306) *
 45 | (  1088) *
 50 | (  3448) ***
 55 | ( 22481) ***********************
 60 | ( 75207) ********************************************************************************
 65 | ( 61501) *****************************************************************
 70 | ( 13902) **************
 75 | (  1922) **
 80 | (   349)
 85 | (    29)

Longest (5) Game(s):

#
# Game Record for Morphion 5T game of 88 moves
# Date: 2012/02/18
# Saved: Games/5T/88-L0(241134)_20120218-083009-Fps.txt
# &random: 2095899781
# 88 moves
# Date: 2012/02/18
# Saved: Games/5T/88-L0(173368)_20120218-083009-Fps.txt
# &random: 72059739
# 87 moves
# Date: 2012/02/18
# Saved: Games/5T/87-L0(74778)_20120218-083009-Fps.txt
# &random: 1037682795
# ReplayFile: * none * (* all * moves)
#

Morphion Solitare Grid (move=87):
   123456789012345
 1 ...............
 2 ....+.+........
 3 .....+.........
 4 ....+++++......
 5 ...++++++......
 6 .+++++****+....
 7 ..++++*++*+....
 8 .+++++*++*++...
 9 ..+****++****..
10 ..+*++++++++*+.
11 ...*++++++++*..
12 ..+****++****+.
13 ...+++*++*++++.
14 ..++++*++*++...
15 .....+****+.+..
16 ......+++++....
17 ........+......
18 ...............
   123456789012345

#
# Game Record for Morphion 5T game of 87 moves
# Date: 2012/02/18
# Saved: Games/5T/87-L0(60841)_20120218-083009-Fps.txt
# &random: 152917603
# ReplayFile: * none * (* all * moves)
#

Morphion Solitare Grid (move=87):
   123456789012345678
 1 ..................
 2 .........+.+..+...
 3 .....+...++++++...
 4 ......****+++++...
 5 .....+*++*+++++...
 6 ....++*++*+++++++.
 7 ..+****++****+.+..
 8 .++*++++++++*+....
 9 .++*++++++++*+....
10 ..+****++****+....
11 .+++++*++*++++....
12 ....++*++*++......
13 ....++****........
14 ....+++...........
15 ..................
   123456789012345678

#
# Game Record for Morphion 5T game of 86 moves
# Date: 2012/02/18
# Saved: Games/5T/86-L0(50600)_20120218-083009-Fps.txt
# &random: 665807725
# ReplayFile: * none * (* all * moves)
#

Morphion Solitare Grid (move=86):
   1234567890123456
 1 ................
 2 .....++..+++....
 3 .....+****+.....
 4 ....+.*++*+.....
 5 ...+.+*++*+..+..
 6 ..+****++****...
 7 ..+*++++++++*...
 8 ..+*++++++++*...
 9 ..+****++****++.
10 .+++++*++*+++++.
11 ....++*++*+++++.
12 .....+****+++++.
13 .....++++++++++.
14 ..........+++++.
15 ..........+.....
16 ................
   1234567890123456

Shortest (0) Game(s):

Total run time = 32349320 ms
&allocated (Total,static,string,block) :  1152124804 0 144396168 1007728636
&collections (Total,static,string,block) :  4623 0 0 4623
&regions ( - , - ,string,block) :  - 0 41859440 41859440
&storage ( - , - ,string,block) :  - 0 37784 13290772

[edit] Other Interesting Results

One of the things I did to get a feel for the game, and perhaps get some insight into strategies, was to replay published record games. These universally look smoother and more organized than even the best random games. Somewhere along the way it occurred to me to truncate these record games and play forward randomly to see what happened. I probably expected them to go off the rails fairly quickly. And while that happens, I was surprised how close these could get to the original record. This led to my earlier observation that the seeds of success are set very early in Morpion. It may also be due to the number of possible moves falling off more quickly than I might expect.

- Using Bruneau's 170 game truncated at 112 moves, the program produced the following:

Longest game 168 after 175790 simulations &random=1821730183.
Longest game 169 after 279675 simulations &random=873864083.
Longest game 170 after 1073380 simulations &random=2014543635.
Longest game 170 after 1086106 simulations &random=1319746023.

Results from Sample of 1091265 games of Morpion 5T/D:
Shortest game was 122 moves.
Average game was 127.0640165312733 moves. Longest game was 170 moves. Average time/game is 77.48127814967033 ms. &random is now 609048351. Summary of Results every '*' = 6761 games ... 115 | ( 0) 120 | (324190) *********************************************** 125 | (540953) ******************************************************************************** 130 | (193255) **************************** 135 | ( 17723) ** 140 | ( 13447) * 145 | ( 1577) 150 | ( 83) 155 | ( 0) 160 | ( 28) 165 | ( 7) 170 | ( 2) - The two games of 170 are both different and not just transpositions. However the difference produced is in a single move. - Using Rosin's 177 move (A) grid, the program produced the following, - from move 129: Longest game 145 after 1 simulations. Longest game 153 after 2 simulations. Longest game 154 after 20 simulations. Longest game 155 after 40 simulations. Longest game 164 after 50 simulations. Longest game 168 after 2203 simulations. Results from Sample of 78393 games of Morpion 5T: Shortest game was 143 moves. Average game was 147.2826145191535 moves. Longest game was 168 moves. Average time/game is 115.6193155001084 ms. Summary of Results every '*' = 973 games ... 120 | ( 0) 140 | ( 77901) ******************************************************************************** 160 | ( 492) - from move 112: Longest game 140 after 1 simulations. Longest game 146 after 10 simulations. Longest game 148 after 441 simulations. Longest game 151 after 2029 simulations. Longest game 153 after 7167 simulations. Longest game 157 after 34601 simulations. Longest game 168 after 41977 simulations. Results from Sample of 524157 games of Morpion 5T: Shortest game was 126 moves. Average game was 136.6568528131838 moves. Longest game was 168 moves. Average time/game is 138.334270075569 ms. Summary of Results every '*' = 5643 games ... 
100 | ( 0) 120 | (451482) ******************************************************************************** 140 | ( 72673) ************ 160 | ( 2) - Unfortunately that earlier version of the program was not logging the random number seed used for these games nor was it recording games in a notation that I have a converter for (at this time). The above were run under "Unicon Version 12.0. July 13, 2011" on Windows 7/x64.
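For reference, the "Summary of Results" blocks above are just bucketed histograms of game lengths. A minimal sketch that reproduces the format (written in Python here rather than the Unicon used for the runs, with a simplified star scale) might look like:

```python
def summarize(game_lengths, bucket=5, width=80):
    """Bucket game lengths and build a scaled '*' histogram
    in the style of the 'Summary of Results' blocks above."""
    counts = {}
    for n in game_lengths:
        b = (n // bucket) * bucket
        counts[b] = counts.get(b, 0) + 1
    # Scale so the tallest bar is at most `width` stars.
    per_star = max(1, max(counts.values()) // width)
    lines = [f"Summary of Results every '*' = {per_star} games"]
    for b in range(min(counts), max(counts) + bucket, bucket):
        c = counts.get(b, 0)
        lines.append(f"{b:3d} | ({c:6d}) {'*' * (c // per_star)}")
    return lines

print("\n".join(summarize([22, 23, 23, 61, 62, 63, 64, 66])))
```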
http://rosettacode.org/wiki/Morpion_solitaire/Unicon
Flash components overview

This is part of Flash tutorial series.

Contents

- 1 Introduction
- 2 AS 3 built-in component overview
- 3 Various button components
- 4 Number and text input
- 5 Other
- 6 List-based components
- 7 A little ComboBox with ActionScript only
- 8 Working with external other components
- 9 Links

1 Introduction

Components are prebuilt interface elements (widgets) that will speed up programming of interactive Flash pages.

Learning goals

This is a high-level overview:
- Learn where to find components
- Learn about the purpose of various Flash 9 (CS3) components

Prerequisites
- Flash CS3 desktop tutorial
- Flash drawing tutorial
- Flash button tutorial (not absolutely necessary)

Moving on
- This article is just a conceptual overview. There are some specific component tutorials you may be interested in:
  - Flash component button tutorial
  - Flash Video component tutorial
  - Flash datagrid component tutorial
- There is a component demo you can look at (source file is flash-cs3-components-overview.fla ... in progress).
- Grab the various *.fla files from here:

The executive summary

Flash has a few built-in components (called widgets or gadgets in other contexts) that will allow you to build an interactive environment more quickly than by coding everything yourself. However, making good use of most of these components still requires basic knowledge of ActionScript. In this article we will try to show a few design patterns that you can copy and adapt.

- Open the component library (Window->Components or CTRL-F7)
- Drag a component to the stage
- Fill in some parameters
- Add some ActionScript code that will handle user action events.

ActionScript (AS2) vs. ActionScript (AS3)
- In CS3, a component library is available for both versions
- The AS3 one is smaller, as you can see in the screenshot above.

We shall focus on Flash 9 and ActionScript 3 here. However, the principle of using AS2 parameters is the same.
In this article we are going to look at User Interface components only; see the Flash video component tutorial for the Video elements.

To open the component library
- Window->Components or CTRL-F7
- I suggest docking it against your library.

Warning
- AS2 components are different from AS3 components (!!)
- It really is important to plan ahead, i.e. you must decide whether you work with AS2 or AS3 before you start using any component!
- As long as you don't use components and don't insert any ActionScript code, you can easily switch between various Flash and ActionScript versions. Once you start using AS or AS-based components, you can't!

2 AS 3 built-in component overview

I am (slowly) making a demo (not yet fully completed, but somewhat instructive).
- Get the source from

2.1 Use of components with the Flash Desktop

Using components
- Open the component library (Window->Components or CTRL-F7)
- Drag a component to the stage. This will also add the component to the library.
- Once a component is in your library, just drag it from there.

Component assets
- Adding a component to the stage or to the library will also copy the necessary Component Assets into a folder. Do not delete this folder!
- If you wish, you can then change the component skins (i.e. the graphical representation) by editing these elements (but make sure that you know what you are doing).

Parameters

Each component has a series of parameters that you can modify in order to change the component's appearance and behavior. The most important parameters can be simply changed through the Parameters panel (menu Window->Properties>Parameters).

Alternatively, you also can use the Components Inspector. The Components Inspector gives extra information, e.g. the name of the component used (on top, just below the tab).

Tip: If the Parameters panel or Components Inspector won't display the parameters, click on an empty spot on the workspace and select the object again. (You likely selected more than one object.)
Other parameters can only be changed through ActionScript coding.

Sizing components

With the Free Transform tool (or similar) you can resize component instances:
- Just click on it on the stage and do it.
- Alternatively, set the size through the Parameters or Properties panel.

In any case, you do not need to fiddle with component internals.

Live preview

If your component doesn't really show, (re)enable live preview:
- Select Control > Enable Live Preview (tick the box)

The rest of this article briefly presents each UI component. See also the Flash Video component tutorial, which is in a separate article.

Working with the Actions Frame panel

Tip: If your screen is large enough, it's a good idea to drag this panel out to the Desktop (do not let it "touch" Flash). Then pin it down with the pin at the bottom. This way you can move around in your frames and layers and still edit code. (More is needed here; maybe I will write an ActionScript panel tutorial.)

2.2 Understanding what is going on

A beginner usually has trouble understanding why a script doesn't show any effect. Errors are most likely in the data, e.g. in the names you give to your component instances or fields. One way to track a few things down is to print out messages. Below is a really ugly piece of code that attempts to print out information (even if it doesn't exist). Just insert it in the scripts layer after/before your other code. This will write messages to the Output pane that will pop up.

this.addEventListener(MouseEvent.CLICK, onMouseClickEvent);

// Then make the callback
function onMouseClickEvent(e:Event) {
    try {
        // trace(e);
        trace(">>" + e.type + " event from " + e.target.name + " called on " + this.name);
        trace("------");
    } catch (e:Error) {
        // trace("Error occurred!");
    }
}

Remove it once you are done, or better, just put comments around the code like this:

/*
.... above code here ....
*/

I have to complete this function with some more useful infos at some point ....
3 Various button components

Below we summarize the functionality of the various button components. (More details will be added at some point; in the meanwhile, please consult the official documentation.) You can see a simple use of these in the demo.

3.1 Button

Adobe documentation (see above)

3.2 CheckBox

“A CheckBox is a square box that can be selected or deselected. When it is selected, a check mark appears in the box. You can add a text label to a CheckBox and place it to the left, right, above, or below the CheckBox” Adobe documentation

- Using the CheckBox (Adobe Using ActionScript 3.0 Components)
- Checkbox class (ActionScript 3.0 Language and Components Reference)

3.3 RadioButton

“The RadioButton component lets you force a user to make a single choice within a set of choices. This component must be used in a group of at least two RadioButton instances. Only one member of the group can be selected at any given time. Selecting one radio button in a group deselects the currently selected radio button in the group. You set the groupName parameter to indicate which group a radio button belongs to.” Adobe documentation

- RadioButton (Adobe Using ActionScript 3.0 Components)

4 Number and text input

4.1 TextArea

“The TextArea component is a wrapper for the native ActionScript TextField object. You can use the TextArea component to display text and also to edit and receive text input if the editable property is true. The component can display or receive multiple lines of text and wraps long lines of text if the wordWrap property is set to true. The restrict property allows you to restrict the characters that a user can enter and maxChars allows you to specify the maximum number of characters that a user can enter. If the text exceeds the horizontal or vertical boundaries of the text area, horizontal and vertical scroll bars automatically appear unless their associated properties, horizontalScrollPolicy and verticalScrollPolicy, are set to off.” Adobe documentation

4.2 TextInput

“The TextInput component is a single-line text component that is a wrapper for the native ActionScript TextField object. If you need a multiline text field, use the TextArea component. For example, you could use a TextInput component as a password field in a form. You could also set up a listener that checks whether the field has enough characters when a user tabs out of the field. That listener could display an error message indicating that the proper number of characters must be entered.” Adobe documentation

4.3 Numeric Stepper

“The NumericStepper component allows a user to step through an ordered set of numbers. The component consists of a number in a text box displayed beside small up and down arrow buttons. When a user presses the buttons, the number is raised or lowered incrementally according to the unit specified in the stepSize parameter until the user releases the buttons or until the maximum or minimum value is reached. The text in the NumericStepper component's text box is also editable.” Adobe documentation

- Using the NumericStepper (Using ActionScript 3.0 Components)
- NumericStepper class (ActionScript 3.0 Language and Components Reference)

4.4 Slider

“The Slider component lets a user select a value by sliding a graphical thumb between the end points of a track that corresponds to a range of values. You can use a slider to allow a user to choose a value such as a number or a percentage, for example. You can also use ActionScript to cause the slider's value to influence the behavior of a second object. For example, you could associate the slider with a picture and shrink it or enlarge it based on the relative position, or value, of the slider's thumb.” Adobe documentation

- Slider class (ActionScript 3.0 Language and Components Reference)

5 Other

5.1 ColorPicker

“The ColorPicker component allows a user to select a color from a swatch list. The default mode of the ColorPicker shows a single color in a square button. When a user clicks the button, the list of available colors appears in a swatch panel along with a text field that displays the hexadecimal value of the current color selection.” Adobe documentation

- Using the ColorPicker (Using ActionScript 3.0 Components)
- ColorPicker class (ActionScript 3.0 Language and Components Reference)

5.2 ComboBox

A ComboBox component allows a user to make a single selection from a drop-down list. Adobe documentation

5.3 Label

“The Label component displays a single line of text, typically to identify some other element or activity on a web page. You can specify that a label be formatted with HTML to take advantage of its text formatting tags.” Adobe documentation

5.4 ProgressBar

“The ProgressBar component displays the progress of loading content, which is reassuring to a user when the content is large and can delay the execution of the application. The ProgressBar is useful for displaying the progress of loading images and pieces of an application. The loading process can be determinate or indeterminate. A determinate progress bar is a linear representation of a task's progress over time and is used when the amount of content to load is known. An indeterminate progress bar is used when the amount of content to load is unknown. You can also add a Label component to display the progress of loading as a percentage.” Adobe documentation

5.5 ScrollPane

Adobe documentation

5.6 UILoader

“The UILoader component is a container that can display SWF, JPEG, progressive JPEG, PNG, and GIF files. You can use a UILoader whenever you need to retrieve content from a remote location and pull it into a Flash application. For example, you could use a UILoader to add a company logo (JPEG file) to a form. You could also use the UILoader component in an application that displays photos. Use the load() method to load content, the percentLoaded property to determine how much content has loaded, and the complete event to determine when loading is finished.” Adobe documentation

5.7 UIScrollBar

“The UIScrollBar component allows you to add a scroll bar to a text field. You can add a scroll bar to a text field while authoring, or at run time with ActionScript. To use the UIScrollBar component, create a text field on the Stage and drag the UIScrollBar component from the Components panel to any quadrant of the text field's bounding box.” Adobe documentation

6 List-based components

Lists

These components are based on lists:
- A list is a row
- Rows can have columns
- A cell is either an element of a simple row or the intersection of a row with a column

Items
- Contents of cells are items
- Items are objects with various properties (depending on the component)

The dataProvider parameter
- For ComboBox, List, and TileList, click on the dataProvider parameter to enter data.

6.1 DataGrid

“The DataGrid component lets you display data in a grid of rows and columns, drawing the data from an array or an external XML file that you can parse into an array for the DataProvider. The DataGrid component includes vertical and horizontal scrolling, event support (including support for editable cells), and sorting capabilities.” Adobe documentation

See the Flash datagrid component tutorial in this Wiki.

6.2 List

“The List component is a scrollable single- or multiple-selection list box. A list can also display graphics, including other components. You add the items displayed in the list by using the Values dialog box that appears when you click in the labels or data parameter fields. You can also use the List.addItem() and List.addItemAt() methods to add items to the list.” Adobe documentation

6.3 TileList

“The TileList component consists of a list that is made up of rows and columns that are supplied with data by a data provider. An item refers to a unit of data that is stored in a cell in the TileList. An item, which originates in the data provider, typically has a label property and a source property. The label property identifies the content to display in a cell and the source provides a value for it.” Adobe documentation

See Displaying images with the TileList component (Adobe tutorial) by Bob Berry

7 A little ComboBox with ActionScript only

After a day or two of learning ActionScript I was already fed up with working with the drag and drop method. The problem is that I can't remember any instance names, label names, etc., and writing little pieces of ActionScript is very time consuming that way. If you have the same problems, you should write as much as you can directly in ActionScript. For instance, instead of dragging a ComboBox to the desktop, filling the dataProvider field in the Parameters panel, etc., you just write the whole code that creates, positions and fills this ComboBox. The code below (more or less) has been used to make the CS3 components overview.

import fl.controls.ComboBox;
import fl.data.DataProvider;
import flash.net.navigateToURL;

var items_CB:Array = new Array(
    {label:"HOME", data: "home_frame"},
    {label:"User Input - buttons", data: "buttons_frame"},
    {label:"Color Picker", data: "color_picker_frame"},
    {label:"Data Grid", data: "data_grid_frame"},
    {label:"Lists", data: "lists_frame"},
    {label:"User Input - more", data: "user_input_frame"},
    {label:"Scrolling", data: "scroll_frame"}
);

var menu_CB:ComboBox = new ComboBox();
// Width of items
menu_CB.dropdownWidth = 200;
// Width of the menu button
menu_CB.width = 150;
// Position (x, y)
menu_CB.move(380, 7);
// Number of rows to display without scrollbar
menu_CB.rowCount = 7;
// Label of the menu button
menu_CB.prompt = "Flash CS3 UI Components";
// Insert the items defined above
menu_CB.dataProvider = new DataProvider(items_CB);
// Register the event handler
menu_CB.addEventListener(Event.CHANGE, changeHandler);
addChild(menu_CB);

function changeHandler(event:Event):void {
    var destination = ComboBox(event.target).selectedItem.data;
    gotoAndStop(destination);
    menu_CB.selectedIndex = -1;
}

Tip: Positioning the box is not that hard. Just turn on the Rulers (Right-click->Rulers).

8 Working with external other components

Firstly, don't install anything from a website you don't trust.

8.1 MXP files

An MXP file is a component file format that is used in Adobe Exchange. To install these, simply double-click on one and it will launch the extensions manager program, which in turn will then install either swc component(s) or a *.fla library. This also may happen if you download it.

Adobe Exchange

Adobe Exchange is a website that offers components under all sorts of licensing schemes. Unfortunately, you have to click on each title to figure out what version of Flash it supports, if it's crippleware or really free, etc. Don't download without reading the description first ...
- You can search, browse by categories, and list according to various criteria
- At the same time you may filter by license type

Link:

8.2 SWC components

Components can be distributed as *.swc files. A *.swc component is a compiled movie clip that you can't edit. You still may edit its properties. They publish faster than *.fla components.

Installing other components in your system

You may find or (mostly) buy other ActionScript components on the Internet or through Adobe's website. You can just open a component file, but then it will not permanently be in your system. To make a component permanently available:
- Quit Flash
- Copy the component to the components directory

Under Windows XP:
- C:\Program Files\Adobe\Flash CS3\language\Configuration\Components
- e.g. C:\Program Files\Adobe\Adobe Flash CS3\en\Configuration\Components

Under Windows Vista:
- C:\Programs\Adobe\Adobe Flash CS3\language\Configuration\Components
- e.g. C:\Programs\Adobe\Adobe Flash CS3\en\Configuration\Components

Under Macintosh:
- Macintosh HD:Applications:Adobe Flash CS3:Configuration:Components

Alternatively you also can install these in a user directory:
- Win XP: C:\Documents and Settings\username\Local Settings\Application Data\Adobe\Adobe Flash CS4\en\Configuration\Components
- Windows Vista: C:\Users\username\Local Settings\Application Data\Adobe\Adobe Flash CS4\en\Configuration\Components
- Mac OS X: Macintosh HD:Users:<username>:Library:Application Support:Adobe Flash CS4:Configuration:Components

If you can't see these folders: In the Windows Explorer, select Tools->Folder Options; View tab. Then select the Show hidden files and folders radio button.

8.3 FLA components

(Creating ActionScript 3.0 components in Flash – Part 1: Introducing components)

Installed *.fla libraries can be found through the menu Window->Common libraries. You then can dock it next to your library, for example.

8.4 Managing external components in CS3

Open Help->Manage components. You then can, for example:
- enable/disable
- delete

The same tool also gives access to the Adobe exchange, help files, etc.

9 Links

9.1 Reference

For Designers
- Flash CS3 Documentation - Select Using ActionScript 3 Components or here, then click on the top left menu icon. (Yes, Adobe can't manage URLs for pages and menus)

For programmers
- UIComponent (Adobe AS3 reference manual). For programmers.
- External component files

9.2 Tutorials

- Getting started with Flash CS3 user interface components, Bob Berry, Adobe
- Creating ActionScript 3.0 components in Flash – Part 1: Introducing components by Jeff Kamerer, Adobe (Sept. 2007).

9.3 Component libraries

- Meta-Index at HotScripts.com (but not sorted by Flash version ....)
- Adobe Exchange beta (various licences, also commercial). Not everything is a component.
https://edutechwiki.unige.ch/en/Flash_components_tutorial
Description

Your friend is a soccer fan and you were watching some World Cup matches with him. You liked this game, but the rules are very complicated for you, so you decided just to try to guess whether the given attack will end with a goal or not.

In the beginning, the ball is in the attacking team's goalkeeper's hands. On the attacking team, there's a very talented goalscorer, who is waiting for his chance at the other end of the field. His teammates want to give him the ball so he can score. They can move the ball by passing it one to another along a straight line, but the defender can steal the pass if he is closer than d to the ball at any point throughout the pass. Now you want to know if the attacking team can score or not.

Formally, you are given the coordinates of all attacking players in an array attackingPlayers (where the player at index 0 is the goalkeeper and the player at the final index is the goalscorer), the coordinates of all defending players in an array defendingPlayers, and an integer d (representing how far each defending player can reach in order to intercept a pass). You need to find out whether it is possible to score a goal by passing the ball to the best scorer without any passes being intercepted.

Example

For attackingPlayers = [[0, 0], [1, 2], [3, 1]], defendingPlayers = [[2, 1]] and d = 1, the output should be canScore(attackingPlayers, defendingPlayers, d) = false.

Attacking player 0 can pass to attacking player 1 without the pass being intercepted, but neither attacking player 0 nor attacking player 1 can pass to attacking player 2 (the goal scorer), so the goal cannot be completed.

For attackingPlayers = [[0, 0], [1, 2], [3, 3], [3, 1]], defendingPlayers = [[2, 1]] and d = 1, the output should be canScore(attackingPlayers, defendingPlayers, d) = true.

The goal can be scored if the ball is passed from attacking players 0 to 1 to 2 to 3.

For attackingPlayers = [[1, 2], [5, 3], [4, -2], [8, 0], [8, 6]], defendingPlayers = [[4, 4], [1, -1], [9, 2]] and d = 2, the output should be canScore(attackingPlayers, defendingPlayers, d) = true.

The goal can be scored if the ball is passed from attacking players 0 to 3 to 2 to 4.

Input/Output

- [execution time limit] 3 seconds (java)

- [input] array.array.integer attackingPlayers

  An array of coordinates of all players from the attacking team. The first one is the goalkeeper's coordinates and the last one is the best goalscorer's coordinates.

  Guaranteed constraints:
  2 <= attackingPlayers.length <= 100
  attackingPlayers[i].length = 2
  -10^4 <= attackingPlayers[i][j] <= 10^4

- [input] array.array.integer defendingPlayers

  An array of coordinates of all players from the defending team.

  Guaranteed constraints:
  1 <= defendingPlayers.length <= 100
  defendingPlayers[i].length = 2
  -10^4 <= defendingPlayers[i][j] <= 10^4

- [input] integer d

  The distance that each defending player can reach in intercepting a pass.

  Guaranteed constraints:
  0 <= d <= 10^4

- [output] boolean

  True if attacking team can score a goal, False otherwise.

Solution below . . .

/*
 * We're going to create a graph with each attacking player as a vertex.
 *
 * For each pair of attacking players, we check each defensive player to
 * see if the defensive player is less than distance d from the line
 * segment between the attacking players. If no defensive player is close
 * enough to intercept a pass, we add an edge to the graph between the two
 * attacking players.
 *
 * Finally, we do a depth-first search from vertex 0 (the goalkeeper) to
 * find whether a path exists to the best scorer.
 */
public class Main {
    static boolean[][] graph;
    static boolean[] marked;

    static boolean canScore(int[][] attackingPlayers, int[][] defendingPlayers, int d) {
        int apLen = attackingPlayers.length;
        int dpLen = defendingPlayers.length;
        graph = new boolean[apLen][apLen];

        // check each pair of players
        for (int i = 0; i < apLen - 1; i++) {
            PT a = new PT(attackingPlayers[i][0], attackingPlayers[i][1]);
            for (int j = i + 1; j < apLen; j++) {
                PT b = new PT(attackingPlayers[j][0], attackingPlayers[j][1]);
                boolean ok = true;
                // is a defender within distance d?
                for (int k = 0; k < dpLen; k++) {
                    PT c = new PT(defendingPlayers[k][0], defendingPlayers[k][1]);
                    if (Geometry.DistancePointSegment(a, b, c) < d) {
                        ok = false;
                        break;
                    }
                }
                // if defenders are too far, add the edge
                if (ok) {
                    graph[i][j] = true;
                    graph[j][i] = true;
                }
            }
        }

        marked = new boolean[apLen];
        dfs(0);
        return marked[apLen - 1];
    }

    static void dfs(int v) {
        marked[v] = true;
        for (int w = 0; w < graph[v].length; w++) {
            if (graph[v][w] && !marked[w]) {
                dfs(w);
            }
        }
    }

    public static void main(String[] args) {
        // Test cases

        // false
        // int[][] attackingPlayers = {{0, 0}, {1, 2}, {3, 1}};
        // int[][] defendingPlayers = {{2, 1}};
        // int d = 1;

        // true
        int[][] attackingPlayers = {{0, 0}, {1, 2}, {3, 3}, {3, 1}};
        int[][] defendingPlayers = {{2, 1}};
        int d = 1;

        System.out.println(canScore(attackingPlayers, defendingPlayers, d));
    }
}

class Geometry {
    // Computational geometry functions
    static final double EPS = 1e-12;

    static double dot(PT p, PT q) { return p.x * q.x + p.y * q.y; }

    static double dist2(PT p, PT q) { return dot(p.subtract(q), p.subtract(q)); }

    static PT ProjectPointSegment(PT a, PT b, PT c) {
        double r = dot(b.subtract(a), b.subtract(a));
        if (Math.abs(r) < EPS) return a;
        r = dot(c.subtract(a), b.subtract(a)) / r;
        if (r < 0) return a;
        if (r > 1) return b;
        return a.add(b.subtract(a).multiply(r));
    }

    static double DistancePointSegment(PT a, PT b, PT c) {
        return Math.sqrt(dist2(c, ProjectPointSegment(a, b, c)));
    }
}

class PT {
    double x, y;

    PT() {}
    PT(double x, double y) { this.x = x; this.y = y; }
    PT(PT p) { this.x = p.x; this.y = p.y; }

    PT add(final PT p) { return new PT(x + p.x, y + p.y); }
    PT subtract(final PT p) { return new PT(x - p.x, y - p.y); }
    PT multiply(double c) { return new PT(x * c, y * c); }
    PT divide(double c) { return new PT(x / c, y / c); }

    @Override
    public String toString() { return "(" + x + "," + y + ")"; }
}
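As a quick cross-check of the interception test used above, here is a small Python version of the same clamped point-to-segment distance (a hypothetical helper, not part of the submitted solution) applied to the first example:

```python
import math

def dist_point_segment(a, b, c):
    """Distance from point c to segment ab (same math as ProjectPointSegment)."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    abx, aby = bx - ax, by - ay
    denom = abx * abx + aby * aby
    if denom == 0:
        return math.hypot(cx - ax, cy - ay)  # degenerate segment: a == b
    # Clamp the projection parameter to [0, 1] so we stay on the segment.
    r = max(0.0, min(1.0, ((cx - ax) * abx + (cy - ay) * aby) / denom))
    px, py = ax + r * abx, ay + r * aby
    return math.hypot(cx - px, cy - py)

d = 1  # first example: lone defender at (2, 1)
print(dist_point_segment((0, 0), (1, 2), (2, 1)) < d)  # False -> pass 0 to 1 is safe
print(dist_point_segment((0, 0), (3, 1), (2, 1)) < d)  # True  -> pass 0 to 2 is stolen
print(dist_point_segment((1, 2), (3, 1), (2, 1)) < d)  # True  -> pass 1 to 2 is stolen
```

Since the only safe edge out of player 0 leads to player 1, and player 1 also cannot reach player 2, the depth-first search never marks the goalscorer and the answer is false, matching the expected output.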
http://eppsnet.com/2018/07/competitive-programming-codesignal-canscore-a-world-cup-challenge/
The goal for our tow truck was to have a 4-axis crane and a movable vehicle that we could remotely control with an Android smart phone. The parts we used for this project were:

- 2 axis camera mount
- motor and servo shield
- Arduino Mega
- Bluetooth serial module
- Arduino car chassis
- Meccano parts
- wire and duct tape
- USB phone charger

Hardware Setup

The tow truck project used a camera mount for up/down/left/right crane motion and an Arduino car chassis for mobility. The controls were done using Bluetooth. Meccano was used to build a box for the main structure. Wire was used to secure everything together. We laid a folded piece of paper under the Arduino Mega to ensure that none of the Arduino solder connections shorted on the metal Meccano base.

The motor and servo shield that we used did not expose any of the extra Arduino pins, so we needed to use the Mega board. We then wired the Bluetooth module to the exposed pins on the end of the Mega.

Arduino Code

The Arduino code will vary a little based on the motor/servo shield that is used. Our shield was an older version 1 (V1) board that used direct pin connections (no I2C or SDA/SCL connections). Also, because Tx/Rx (Tx0/Rx0) were not available once our motor/servo shield was installed, we used Tx1/Rx1, and so our Bluetooth connection was on Serial1 and not Serial.
For the Bluetooth communications we used the following command letters:

- R = drive right
- L = drive left
- f = drive forwards
- b = drive backwards
- s = stop driving
- r = move crane right
- l = move crane left
- u = move crane up
- d = move crane down

Our Arduino code is below:

#include <Servo.h>
#include <AFMotor.h>  // Adafruit Motor Shield V1 library, needed for AF_DCMotor

Servo servo1;
Servo servo2;
char thecmd;
int xpos = 90;
int ypos = 90;
AF_DCMotor motor1(1);
AF_DCMotor motor2(2);

void setup() {
  pinMode( 19, INPUT_PULLUP );
  Serial1.begin(9600);
  Serial1.println("Crane Controls");
  Serial1.println("r = right, l = left, u = up, d = down");
  Serial1.println("Driving Controls");
  Serial1.println("R = right, L = left, f = forwards, b = backwards, s = stop");
  servo1.attach(9);   // attaches the servo on pin 9 to the servo object
  servo2.attach(10);  // attaches the servo on pin 10 to the servo object
  servo1.write(xpos);
  servo2.write(ypos);
  motor1.setSpeed(255);
  motor2.setSpeed(255);
}

void loop() {
  if (Serial1.available() > 0) {
    // read the incoming byte:
    thecmd = Serial1.read();
    Serial1.println(thecmd);
    if (thecmd == 'l') { move_crane(servo1, xpos, 5); }
    if (thecmd == 'r') { move_crane(servo1, xpos, -5); }
    if (thecmd == 'd') { move_crane(servo2, ypos, 5); }
    if (thecmd == 'u') { move_crane(servo2, ypos, -5); }
    if (thecmd == 'f') {
      motor1.run(FORWARD);
      motor2.run(FORWARD);
    }
    if (thecmd == 'b') {
      motor1.run(BACKWARD);
      motor2.run(BACKWARD);
    }
    if (thecmd == 'L') {
      motor1.run(BACKWARD);
      motor2.run(FORWARD);
    }
    if (thecmd == 'R') {
      motor1.run(FORWARD);
      motor2.run(BACKWARD);
    }
    if (thecmd == 's') {
      motor1.run(RELEASE);
      motor2.run(RELEASE);
    }
  }
}

// The servo and its position variable are passed by reference so the same
// routine can move either axis (the original version always updated ypos).
void move_crane(Servo &theservo, int &thepos, int direction) {
  int minpos = 50;
  int maxpos = 220;
  if (direction < 0) {
    if (thepos > minpos) {
      thepos = thepos + direction;
      theservo.write(thepos);
    }
  } else {
    if (thepos < maxpos) {
      thepos = thepos + direction;
      theservo.write(thepos);
    }
  }
}

Android Program

To communicate with an Android smart phone we used MIT's App Inventor. This is a free Web-based Android development tool.
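The same one-letter protocol can also be driven from a computer instead of the phone. Below is a small hypothetical Python sketch: in practice the port would be a pyserial connection to the Bluetooth module at 9600 baud, but here an in-memory buffer stands in for it so the helper works with any object that has a write() method:

```python
import io

# One-letter protocol from the sketch above (drive vs. crane commands).
DRIVE = {"forward": b"f", "backward": b"b", "left": b"L", "right": b"R", "stop": b"s"}
CRANE = {"up": b"u", "down": b"d", "left": b"l", "right": b"r"}

def send(port, command):
    """Write a single command byte to any port-like object with write()."""
    if len(command) != 1:
        raise ValueError("protocol uses single-letter commands")
    port.write(command)

# Demo with an in-memory buffer standing in for the serial port.
fake_port = io.BytesIO()
send(fake_port, DRIVE["forward"])
send(fake_port, CRANE["up"])
send(fake_port, DRIVE["stop"])
print(fake_port.getvalue())  # b'fus'
```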
There are many ways to lay out a control screen; we used a 10×3 table and then populated it with buttons. Our layout is shown below:

The button logic will pass the required letter command to the Bluetooth component:

Our final running App looked like:
https://funprojects.blog/tag/robotics/
Configure a DNS Server to Use Forwarders

Updated: May 9, 2008
Applies To: Windows Server 2008

A forwarder is a Domain Name System (DNS) server on a network that you use to forward DNS queries for external DNS names to DNS servers outside that network. You can also configure your server to forward queries according to specific domain names using conditional forwarders. For more information about configuring a server to use a conditional forwarder, see Assign a Conditional Forwarder for a Domain Name.

You can use this procedure to designate a forwarder for a DNS server:

1. Expand DNS, and then click the applicable DNS server.
2. On the Action menu, click Properties.
3. On the Forwarders tab, click Edit.
4. Type the IP address or fully qualified domain name (FQDN) of a forwarder, and then click OK.

- You can use the Up and Down buttons to change the order in which forwarders are queried.
- By default, the DNS server waits three seconds for a response from one forwarder IP address before it tries another forwarder IP address. In Number of seconds before forward queries time out, you can change the number of seconds that the DNS server waits. If the overall recursion timeout (by default, 8 seconds) is exceeded before all forwarders are exhausted, the DNS server fails the query. If the overall recursion timeout has not been exceeded and the server exhausts all forwarders, it attempts standard recursion.
- Avoid using a primary server as a forwarder, especially if the forwarder is to be used to resolve external (Internet) queries. A primary server should be highly available and not be given the extra work of acting as a forwarder. Also, servers that host zones should not be allowed to communicate directly with the Internet, to avoid exposing your internal namespace to external attackers.
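The fallback order described above (per-forwarder timeout, an overall recursion deadline, then standard recursion only once every forwarder is exhausted) can be sketched in Python. This is only an illustration of the documented behavior, not how the Windows Server DNS service is implemented:

```python
import time

def resolve(query, forwarders, per_forwarder_timeout=3.0,
            recursion_timeout=8.0, recurse=None):
    """Try each forwarder in order; fall back to standard recursion only
    if every forwarder is exhausted before the overall deadline."""
    deadline = time.monotonic() + recursion_timeout
    for forwarder in forwarders:
        if time.monotonic() >= deadline:
            return None  # overall recursion timeout exceeded: fail the query
        answer = forwarder(query, timeout=per_forwarder_timeout)
        if answer is not None:
            return answer
    # All forwarders exhausted within the timeout: attempt standard recursion.
    return recurse(query) if recurse is not None else None

# Stub forwarders: the first never answers, the second does.
dead = lambda q, timeout: None
good = lambda q, timeout: "192.0.2.10"
print(resolve("example.com", [dead, good]))  # 192.0.2.10
```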
http://technet.microsoft.com/en-us/library/cc816830(v=ws.10).aspx
def end_other(x, y):
    return x[::-1][0:len(y)].lower() == y[::-1].lower()

endswith alternative
Page 1 of 1 — 2 Replies, 2147 Views, Last Post: 30 December 2012 - 09:45 AM

#1 endswith alternative — Posted 17 December 2012 - 03:30 PM
Description: The function will return true/false depending on whether the first argument ends with the second. Python already has endswith(), but I thought this was interesting anyway.

#2 Re: endswith alternative — Posted 30 December 2012 - 09:41 AM
It's not precisely an "endswith" alternative: the signature of endswith lets you specify a position to search within the string (start and/or end), and you can also provide a tuple of suffixes which the string could end with.

#3 Re: endswith alternative — Posted 30 December 2012 - 09:45 AM
I am not seriously suggesting this as an alternative to endswith; it is just, I thought, an interesting piece of Python code.
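For comparison, here is the forum function next to a direct spelling of the same check using the built-in; both do a case-insensitive suffix test:

```python
def end_other(x, y):
    # Forum version: reverse both strings and compare the reversed prefix.
    return x[::-1][0:len(y)].lower() == y[::-1].lower()

def end_other_builtin(x, y):
    # Equivalent spelling using str.endswith, lowercasing both sides.
    return x.lower().endswith(y.lower())
```

The two agree for plain suffix checks; as the reply notes, str.endswith additionally accepts start/end positions and a tuple of suffixes, which the one-liner does not.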
https://www.dreamincode.net/forums/topic/365255-endswith-alternative/
Feature flags are powerful mechanisms devs can use to release software safely. They enable development teams to add or remove a feature from a software system on the fly, without the need for any code changes or deployments. It is an important skill for developers to be able to differentiate a deployment from a release. Code deployment is a technical task, whereas releasing features to customers is more of a business activity. With advanced use of feature flags, releasing a feature to a subset of customers significantly reduces the blast radius if anything goes wrong with the new feature.

Why use feature flags?

Feature flags allow us to build safely without disrupting the proper app behavior in production. They're one of the necessary structures when building a huge project that's constantly in development. The aim is to have a stable and functioning production app, even with so many moving parts. They are especially useful for specific use cases such as A/B testing, actively adding new features, or displaying certain features based on user roles.

For A/B testing, feature flags can allow us to easily test different user-facing interfaces and how users react to them. For instance, this experimentation can help test the conversion rate of different landing pages and how it affects the bottom line. Plus, once we're satisfied with the results, flags can be removed without any serious code change.

Also, when adding multiple new features to the development environment, there's the possibility of one feature being ready before the others. To deploy to production, we use flags to hide the uncompleted features from the users. Feature flags also allow us to display different features to users with different roles and permissions. This is useful in cases where all the users are using the same application. Let's get going!
Prerequisites

Before we dive into the code, you should be prepared with the following:
- Node.js and npm working on your local machine, preferably the latest LTS
- Working knowledge of React and JavaScript

Some prior knowledge of feature flags or remote config will be helpful, but is not required for you to follow along. Time to jump into the code!

Building a sample Hacker News clone

To create a basic Hacker News front page with React, we will first create a new React app with Create React App. We will run the following command:

npx create-react-app hn-react

This command creates a basic React application for us in a couple of minutes. When the npx script finishes execution it will look something like the below:

After that, we can go into the newly created hn-react folder with cd hn-react. To run the development server, execute the following:

yarn start

This command runs the development server and opens the default browser, which will show something like below:

Hurray! Our React app skeleton is running. Next, we will change the React app to display stories from Hacker News.

Adding Hacker News stories to our example app

To change the boilerplate React app to show stories from Hacker News, we will change src/App.js to look like the following:

import React, { useState, useEffect } from 'react';
import './App.css';

function App() {
  const [stories, setStories] = useState([]);
  const [message, setMessage] = useState('loading...');

  useEffect(() => {
    async function fetchNewsStories() {
      try {
        const data = await (await fetch('…')).json();
        setStories(data.hits);
        const message = data.hits.length ? '' : 'No stories found';
        setMessage(message);
      } catch (err) {
        console.log(`err: ${err.message}`, err);
        setMessage('could not fetch stories');
      }
    }
    fetchNewsStories();
  }, []);

  return (
    <div className="App">
      <header className="App-header">
        <h2>Latest HN Stories</h2>
        {message}
        <div className="stories">
          {Array.isArray(stories) && stories.map(story =>
            story.url && <h3><a href={story.url}>{story.title}</a> - by {story.author}</h3>
          )}
        </div>
      </header>
    </div>
  );
}

export default App;

The main changes we made in the App.js file call the Hacker News API provided by Algolia in the useEffect hook, then render the stories as fetched from the API later in the component. We make use of the useState hook to set two variables: stories and message. Both of these are set in the fetchNewsStories async function that calls the API mentioned above. In case of any error while fetching the stories, the stories array is set to empty by default, and the message is set to "could not fetch stories," which is first set to "loading." If stories are fetched successfully, then the message is set to an empty string.

A basic loop is used with the stories variable with a map to cycle through the stories. For each story that has a URL, its title, a link, and the author are printed as an H3 element.

Similarly, we will also change the styling in src/App.css to be the same as below:

.App-header {
  min-height: 100vh;
  display: flex;
  flex-direction: column;
  align-items: center;
  justify-content: center;
  font-size: calc(10px + 2vmin);
  color: black;
}

h3 {
  padding-left: 0.5em;
}

.App-link {
  color: #61dafb;
}

We have removed the background color and made the text black for .App-header. We have also removed any styles associated with the logo animation, because the logo has been removed. To make the text more readable we have added a 0.5em padding to the H3. If we run the app again with yarn start, it will look something like the below. Congrats!
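As a quick sanity check outside React, the same fetch-and-filter logic can be sketched in plain Python. The Algolia endpoint below is an assumption (the article's fetch URL is truncated), and the field names mirror the hits used above:

```python
import json
from urllib.request import urlopen

# Assumed endpoint; the original article's fetch URL is truncated.
HN_SEARCH_URL = "https://hn.algolia.com/api/v1/search?tags=front_page"

def extract_stories(payload):
    """Keep only hits that have a URL, as the React loop above does."""
    return [(hit["title"], hit["url"], hit["author"])
            for hit in payload.get("hits", []) if hit.get("url")]

def fetch_front_page(url=HN_SEARCH_URL):
    # Network call; kept out of the pure parsing logic above.
    with urlopen(url) as resp:
        return extract_stories(json.load(resp))
```

Separating the parsing from the fetch makes the filter trivially testable with a canned payload.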
Your basic React app that calls the unofficial Hacker News API is functioning. These code changes can be found as a pull request for your convenience. Next up, we will set up a feature flag on Flagsmith to show or hide the points.

Setting up a feature flag on Flagsmith

Flagsmith is an amazing feature flag service that also has an open source version we can host ourselves. For this tutorial, we will be using Flagsmith Cloud. To get started, sign in using GitHub at app.flagsmith.com. You will be asked to authorize Flagsmith with your GitHub as follows:

At the bottom of the screen, you can click the Authorize Flagsmith button. It might ask for your GitHub password, and after that you will be redirected to the Flagsmith UI. You can create a new project by clicking the + button beneath the Flagsmith logo on the left. We can name the project HN-react and click the purple Create Project button:

Consequently, after creating the project, Flagsmith will automatically create the Development and Production environments. After that, we will create our first feature flag. Click the Create Your First Feature button available at the end of the page:

We will add the ID as show_story_points, make sure Enabled by default is on, and click Create Feature:

Subsequently, the feature flag will be available for our use like so:

As the next step, we will add the Flagsmith JavaScript SDK and use it to get the feature flag we just created running within our React app.

Install and use the feature flag JavaScript SDK

We have already created the feature flag on Flagsmith's UI, and now we will use it in our sample Hacker News clone app. To do this, we will add the Flagsmith JavaScript SDK from npm by running:

yarn add flagsmith

It will take a bit of time to add the Flagsmith client to the package.json file. At the time of writing, it is version 1.6.4.
Once we have the Flagsmith client installed, we will again change src/App.js to incorporate the client and enable the feature flag to show or hide the points for each Hacker News story. To begin with, we will add the following line at line two of the src/App.js file:

import flagsmith from 'flagsmith';

Then, we will add the following at line eight to initialize the showStoryPoints variable:

const [showStoryPoints, setShowStoryPoints] = useState(false);

After that, we will add the code below in the useEffect function, below the fetchNewsStories call at line 22, as follows:

flagsmith.init({
  environmentID: "DRLDV3g6nJGkh4KZfaSS5c",
  cacheFlags: true,
  enableAnalytics: true,
  onChange: (oldFlags, params) => {
    setShowStoryPoints(flagsmith.hasFeature('show_story_points'));
  }
});

In this code block, flags are cached in local storage, and we are enabling analytics and checking if the feature is available on change. You must get the environment ID from the section of the feature flag page as seen below:

The next step is to add the following code where you see the looping through stories on line 40:

{Array.isArray(stories) && stories.map(story =>
  story.url && <h3><a href={story.url}>{story.title}</a> - by {story.author} {showStoryPoints ? '- points ' + story.points : ''}</h3>
)}

In the above loop, we check if the showStoryPoints variable is true, which is set per the state of our feature flag. If it is true, we show the points for the story; otherwise we show an empty string. After this change, if you run the app again with yarn start, it will show the following:

Now, go to the Flagsmith interface and turn off the feature flag like so:

Subsequently, if you refresh the page, it will show the following:

Hurray! You have successfully implemented your first feature flag and changed the behavior of the application without any code changes. The code for this section is available as a pull request for your reference. The final product with the story's points can be viewed on Netlify.
Feature flag libraries for React

Another method is to handle feature flags using libraries directly within the codebase. Some of the most used React libraries for this are react-feature-flags and flagged. They don't require any visual platform or keys like Flagsmith; instead, everything is implemented within the codebase, and the list of flags is hardcoded directly in the codebase or in an .env file.

Conclusion

In this tutorial, we learned how to use a basic feature flag within a React application using Flagsmith. Feature flags make releasing any major feature simple and safe. Every change is risky, and every deployment is a change to a running system. With feature flags, we can minimize the risk of change when it is needed. Feature flags also give non-technical team members (like a product owner) the ability to enable or disable a feature without requiring any code changes or deployment. The most effective use of feature flags can be a rollout to only a subset of customers, like the employees of your organization. With these practices in place, releasing even something as crucial as a change to the payment gateway can be managed with much lower risk than releasing a feature to all the customers at once. I hope you can practice the "deployment is not a release" philosophy well with feature flags.

"How to implement feature flags in React"
I am trying to implement feature flags per customer in a React app. Any guidance to do it elegantly in React?

Hey Vijay, for that use case a multivariate flag might be an option.
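The rollout-to-a-subset idea above is often made deterministic by hashing the user and flag name into a bucket, so each user gets a stable answer. A minimal Python sketch of that technique (the names and hashing scheme are illustrative, not how Flagsmith implements it):

```python
import hashlib

def flag_enabled_for(user_id, flag_name, rollout_percent):
    """Deterministic percentage rollout: the same user always gets the
    same answer for a given flag, so their experience is stable."""
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100  # stable bucket in 0..99
    return bucket < rollout_percent
```

Because the bucket depends on both the user and the flag, ramping one flag from 10% to 50% keeps the original 10% enabled, and different flags slice the user base independently.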
https://blog.logrocket.com/how-to-implement-feature-flags-react/
In this document
- Defining Styles
- Applying Styles and Themes to the UI
- Using Platform Styles and Themes

See also
- Style and Theme Resources
- R.style for Android styles and themes
- R.attr for all style attributes

A style is a collection of properties that specify the look and format for a View or window. Styles in Android share a similar philosophy to cascading stylesheets in web design: they allow you to separate the design from the content. For example, by using a style, you can take this layout XML:

<TextView android:

And turn it into this:

<TextView style="@style/CodeFont" android:

All of the attributes related to style have been removed from the layout XML and put into a style definition called CodeFont, which is then applied with the style attribute. You'll see the definition for this style in the following section.

A theme is a style applied to an entire Activity or application, rather than an individual View.

Defining Styles

To create a set of styles, save an XML file in the res/values/ directory of your project. The name of the XML file is arbitrary, but it must use the .xml extension and be saved in the res/values/ folder. The root node of the XML file must be <resources>. For each style you want to create, add a <style> element to the file with a name that uniquely identifies the style (this attribute is required). Then add an <item> element for each property of that style, with a name that declares the style property and a value to go with it (this attribute is required). The value for the <item> can be a keyword string, a hex color, a reference to another resource type, or other value depending on the style property.
Here's an example file with a single style:

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <style name="CodeFont" parent="@android:style/TextAppearance.Medium">
        <item name="android:layout_width">fill_parent</item>
        <item name="android:layout_height">wrap_content</item>
        <item name="android:textColor">#00FF00</item>
        <item name="android:typeface">monospace</item>
    </style>
</resources>

Each child of the <resources> element is converted into an application resource object at compile-time, which can be referenced by the value in the <style> element's name attribute. This example style can be referenced from an XML layout as @style/CodeFont (as demonstrated in the introduction above). The parent attribute in the <style> element is optional and specifies the resource ID of another style from which this style should inherit properties. You can then override the inherited style properties if you want to. Remember, a style that you want to use as an Activity or application theme is defined in XML exactly the same as a style for a View. A style such as the one defined above can be applied as a style for a single View or as a theme for an entire Activity or application. How to apply a style for a single View or as an application theme is discussed later.

Inheritance

The parent attribute in the <style> element lets you specify a style from which your style should inherit properties. You can use this to inherit properties from an existing style and then define only the properties that you want to change or add. You can inherit from styles that you've created yourself or from styles that are built into the platform. (See Using Platform Styles and Themes, below, for information about inheriting from styles defined by the Android platform.)
For example, you can inherit the Android platform's default text appearance and then modify it:

<style name="GreenText" parent="@android:style/TextAppearance">
    <item name="android:textColor">#00FF00</item>
</style>

If you want to inherit from styles that you've defined yourself, you do not have to use the parent attribute. Instead, just prefix the name of the style you want to inherit to the name of your new style, separated by a period. For example, to create a new style that inherits the CodeFont style defined above, but make the color red, you can author the new style like this:

<style name="CodeFont.Red">
    <item name="android:textColor">#FF0000</item>
</style>

Notice that there is no parent attribute in the <style> tag, but because the name attribute begins with the CodeFont style name (which is a style that you have created), this style inherits all style properties from that style. This style then overrides the android:textColor property to make the text red. You can reference this new style as @style/CodeFont.Red.

You can continue inheriting like this as many times as you'd like, by chaining names with periods. For example, you can extend CodeFont.Red to be bigger, with:

<style name="CodeFont.Red.Big">
    <item name="android:textSize">30sp</item>
</style>

This inherits from both CodeFont and CodeFont.Red styles, then adds the android:textSize property.

Note: This technique for inheritance by chaining together names only works for styles defined by your own resources. You can't inherit Android built-in styles this way. To reference a built-in style, such as TextAppearance, you must use the parent attribute.

Style Properties

Now that you understand how a style is defined, you need to learn what kind of style properties (defined by the <item> element) are available. You're probably familiar with some already, such as layout_width and textColor. Of course, there are many more style properties you can use.
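The dotted-name inheritance can be thought of as merging property maps down the chain, with the child overriding the parent. A minimal Python sketch of that resolution, using the CodeFont styles defined above (illustrative only; not how the Android resource compiler actually works):

```python
STYLES = {
    "CodeFont": {"android:textColor": "#00FF00", "android:typeface": "monospace"},
    "CodeFont.Red": {"android:textColor": "#FF0000"},
    "CodeFont.Red.Big": {"android:textSize": "30sp"},
}

def resolve(style_name, styles=STYLES):
    """Merge properties along the dotted-name chain, child overriding parent."""
    properties = {}
    parts = style_name.split(".")
    for i in range(len(parts)):
        ancestor = ".".join(parts[:i + 1])
        properties.update(styles.get(ancestor, {}))  # deeper names win
    return properties
```

Resolving "CodeFont.Red.Big" yields the monospace typeface from CodeFont, the red text color from CodeFont.Red, and the 30sp size from CodeFont.Red.Big.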
The best place to find properties that apply to a specific View is the corresponding class reference, which lists all of the supported XML attributes. For example, all of the attributes listed in the table of TextView XML attributes can be used in a style definition for a TextView element (or one of its subclasses). One of the attributes listed in the reference is android:inputType, so where you might normally place the android:inputType attribute in an <EditText> element, like this:

<EditText android:inputType="number" ... />

You can instead create a style for the EditText element that includes this property:

<style name="Numbers">
    <item name="android:inputType">number</item>
    ...
</style>

So your XML for the layout can now implement this style:

<EditText style="@style/Numbers" ... />

This simple example may look like more work, but when you add more style properties and factor in the ability to re-use the style in various places, the pay-off can be huge. For a reference of all available style properties, see the R.attr reference.

Keep in mind that all View objects don't accept all the same style attributes, so you should normally refer to the specific View class for supported style properties. However, if you apply a style to a View that does not support all of the style properties, the View will apply only those properties that are supported and simply ignore the others.

Some style properties, however, are not supported by any View element and can only be applied as a theme. These style properties apply to the entire window and not to any type of View. For example, style properties for a theme can hide the application title, hide the status bar, or change the window's background. These kinds of style properties do not belong to any View object. To discover these theme-only style properties, look at the R.attr reference for attributes that begin with window.
For instance, windowNoTitle and windowBackground are style properties that are effective only when the style is applied as a theme to an Activity or application. See the next section for information about applying a style as a theme.

Note: Don't forget to prefix the property names in each <item> element with the android: namespace. For example: <item name="android:inputType">.

Applying Styles and Themes to the UI

There are two ways to set a style:
- To an individual View, by adding the style attribute to a View element in the XML for your layout.
- Or, to an entire Activity or application, by adding the android:theme attribute to the <activity> or <application> element in the Android manifest.

To apply a style definition as a theme, you must apply the style to an Activity or application in the Android manifest. When you do so, every View within the Activity or application will apply each property that it supports. For example, if you apply the CodeFont style from the previous examples to an Activity, then all View elements that support the text style properties will apply them. Any View that does not support the properties will ignore them. If a View supports only some of the properties, then it will apply only those properties.

Apply a style to a View

Here's how to set a style for a View in the XML layout:

<TextView style="@style/CodeFont" android:

Now this TextView will be styled as defined by the style named CodeFont. (See the sample above, in Defining Styles.)

Note: The style attribute does not use the android: namespace prefix.

Apply a theme to an Activity or application

To set a theme for all the activities of your application, open the AndroidManifest.xml file and edit the <application> tag to include the android:theme attribute with the style name. For example:

<application android:

If you want a theme applied to just one Activity in your application, then add the android:theme attribute to the <activity> tag instead.
Just as Android provides other built-in resources, there are many pre-defined themes that you can use, to avoid writing them yourself. For example, you can use the Dialog theme and make your Activity appear like a dialog box:

<activity android:

Or if you want the background to be transparent, use the Translucent theme:

<activity android:

If you like a theme but want to tweak it, just add the theme as the parent of your custom theme. For example, you can modify the traditional light theme to use your own color like this:

<color name="custom_theme_color">#b0b0ff</color>
<style name="CustomTheme" parent="android:Theme.Light">
    <item name="android:windowBackground">@color/custom_theme_color</item>
    <item name="android:colorBackground">@color/custom_theme_color</item>
</style>

(Note that the color needs to be supplied as a separate resource here because the android:windowBackground attribute only supports a reference to another resource; unlike android:colorBackground, it cannot be given a color literal.)

Now use CustomTheme instead of Theme.Light inside the Android Manifest:

<activity android:

Select a theme based on platform version

Newer versions of Android have additional themes available to applications, and you might want to use these while running on those platforms while still being compatible with older versions. You can accomplish this through a custom theme that uses resource selection to switch between different parent themes, based on the platform version. For example, here is the declaration for a custom theme which is simply the standard platform's default light theme. It would go in an XML file under res/values (typically res/values/styles.xml):

<style name="LightThemeSelector" parent="android:Theme.Light">
    ...
</style>

To have this theme use the newer holographic theme when the application is running on Android 3.0 (API Level 11) or higher, you can place an alternative declaration for the theme in an XML file in res/values-v11, but make the parent theme the holographic theme:

<style name="LightThemeSelector" parent="android:Theme.Holo.Light">
    ...
</style>

Now use this theme like you would any other, and your application will automatically switch to the holographic theme if running on Android 3.0 or higher. A list of the standard attributes that you can use in themes can be found at R.styleable.Theme. For more information about providing alternative resources, such as themes and layouts, based on the platform version or other device configurations, see the Providing Resources document.

Using Platform Styles and Themes

For more information about the syntax for styles and themes in XML, see the Style Resource document. For a reference of available style attributes that you can use to define a style or theme (e.g., "windowBackground" or "textAppearance"), see R.attr or the respective View class for which you are creating a style.
http://developer.android.com/guide/topics/ui/themes.html
Search nearby locations using .NET Core and EF.

Project Set Up

We need to create our ASP.NET Core Web Application. We will need to download the following NuGets to use Entity Framework Core. We do this by right-clicking on the project and selecting "Manage NuGet Packages". Select "Browse" from the top nav bar, then search for and install:

- Microsoft.EntityFrameworkCore.SqlServer
- Microsoft.EntityFrameworkCore
- Microsoft.EntityFrameworkCore.SqlServer.NetTopologySuite

We need to create a data model to represent the data in the database. We will create a class called "Pub.cs" in a new folder "Data->DataModels". The class Pub.cs contains the properties that we want to persist to the database. We have included the [Key] annotation to show that we want Id to be the primary key when the database table is created. We need to include the using System.ComponentModel.DataAnnotations; at the top of the class for this too. The [Key] annotation is not required, as EF will work out that the Id is the primary key, but it is good practice to add it anyway. There is a property "Location" that is of type NetTopologySuite.Geometries.Point; we need to add the using NetTopologySuite.Geometries to allow this to work.

Database Context

The next step is to create the database context. We will create this at the root level of the Data folder. The PubContext.cs class will inherit the DbContext class and set up the Pubs DbSet. We will also add the OnModelCreating override method that will create the Pubs database table, and we want to give it the name "Pubs". To use the PubContext we have to register it. This is done in the Startup.cs class, in a similar way to how other services are registered with dependency injection. !IMPORTANT! We have to add opt => opt.UseNetTopologySuite() when registering the database connection. If we don't, the database will not be able to map the NetTopologySuite.Geometries.Point type to the Geography type. In the Startup.cs there is a ConfigureServices method.
In there we need to add services.AddDbContext to register the new database context. You will note that it is using a connection string called "Default". We will need to add this to the appsettings.json file; if you want to change which database to point to, just update this string. We will need to add a using Microsoft.EntityFrameworkCore; so that UseSqlServer is available, and there will also need to be a using Pubs.Data; so that the PubContext is available.

Initialize DB

We need to create a class called DbInitializer.cs to create and populate the database. We will use the EnsureCreated method to automatically create the database, and we will also seed it with some test data. We will create this in the Data folder. We need to create a static method Initialize and add the line context.Database.EnsureCreated(); to create the database. We check the Pubs table to see if it is empty, and if it is, we create some new records:

if (!context.Pubs.Any())
{
    var pubs = new Pub[]
    {
        new Pub { Name = … },
        new Pub { Name = … },
    };
    context.Pubs.AddRange(pubs);
    context.SaveChanges();
}

When we are creating the "Location" data we cannot just add a lat and long; we need to construct a NetTopologySuite.Geometries.Point. This is done by creating a geometry factory:

var geometryFactory = NtsGeometryServices.Instance.CreateGeometryFactory(srid: 4326);

SRID 4326 means that latitude and longitude geographic coordinates are being used, and when creating the location the geometryFactory knows to create the correct type.

new Pub { Name = …, Location = geometryFactory.CreatePoint(new Coordinate(-5.9348902, 54.5881783)) },

This creates the Point type from the latitude and longitude supplied. !IMPORTANT! The coordinates have to be specified longitude then latitude, not the other way round. That has caused me a lot of pain in the past.

We then need to run the Initialize method when the application is started. To do this we add the following code to the Program.cs Main method.
We have to add the following usings to allow access:

using Microsoft.Extensions.DependencyInjection;
using Pubs.Data;

If you run the application now, it will create the database and seed it with the locations. :)

The Front End

We now want to consume all that lovely data we have created in the database. To do that we need to create a model, view and controller.

- Right-click the Controllers folder in Solution Explorer and select Add > New Scaffolded Item.
- In the Add Scaffold dialog box, select MVC controller with views, using Entity Framework, and click Add.
- The Add MVC Controller with views, using Entity Framework dialog box appears. Enter the details like so.

This will create the controller with default methods such as Get, Get/1, etc. It also injects the database context for us. We can see that there is a constructor with the context injected for us to use. If we open Views -> Pubs -> Index.cshtml we can remove the "Location" column, since it is a complex type and will not render well.

Now that we have the basic list of Pubs being returned, we need to do the important work and find a nearby pub. We will create a new method in the PubsController.cs called "FindNearBy" that will take 3 parameters: lat, lon (long is a keyword) and distance. The lat and lon parameters will be the latitude and longitude of our current location. You can add JavaScript to find this for you automatically, but for this demonstration we will pass them in manually. The third parameter is the distance (in meters) we are prepared to walk to get to the pub.

Here we will create our current location using the geometryFactory. Please note that it is lon and lat, not the other way around. We then get the distance from our current location to the Pubs and filter out any that are too far away:

_context.Pubs.Where(x => x.Location.Distance(location) < distance).ToListAsync();

We then need to return the results to the page. We can reuse the Index page for this. Run the application and enter the URL:

/pubs/FindNearBy?lat=54.5900622&lon=-5.9389685&distance=500

Update the lat and lon to your current location and change the distance, and you should get a list of public houses within the search distance in meters.
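To sanity-check FindNearBy results without a database, the nearby filter can be approximated with a haversine great-circle distance. A minimal Python sketch (the field names are illustrative; EF's geography Distance uses a more precise earth model than this sphere):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6371000.0  # mean earth radius, metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def find_nearby(pubs, lat, lon, distance_m):
    # Mirrors: _context.Pubs.Where(x => x.Location.Distance(location) < distance)
    return [p for p in pubs if haversine_m(lat, lon, p["lat"], p["lon"]) < distance_m]
```

Using the article's seeded pub at (54.5881783, -5.9348902) and the query point from the example URL, the pub comes back for a 500 m radius but not for 100 m, which matches the expected FindNearBy behavior.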
https://richardbeggs.medium.com/search-nearby-locations-using-net-core-and-ef-d7024a63db12?source=post_page-----d7024a63db12--------------------------------
Find out from your lab instructor which folder on the file server contains the LogicWorks program. LogicWorks instruction manuals are available, on overnight reserve, in the library. To start LogicWorks, double click on the icon. After a short time two windows will appear on your screen, a circuit window and a timing window. The circuit window is the window that is used most frequently, so resize it to fill the screen by clicking on the zoom button in the top right corner. The following circuit window illustrates the shortcut icons that are located in the bottom left-hand corner of the window. All these functions can be accessed from the menus as well.

From the device menu, select the device you would like to place in the circuit area. The pointer will be replaced by the flickering circuit symbol of the device you selected. The orientation can be changed by clicking on the orientation icon in the lower left corner of the circuit window. An alternate way of specifying the orientation is by pressing the appropriate arrow key on the keyboard. Move the symbol to the desired location and click the mouse to place the device there. This may be repeated as desired. To return to the pointer, click anywhere on the menu bar or click on the pointer icon in the lower corner of the window.

To draw a wire, press and hold the mouse button with the pointer near another wire, a chip input, or a small square simulating a solder connection. The cross icon in the lower corner of the circuit window provides a different way of placing wires: click on the starting point and ending point of a desired wire and the wire is automatically drawn.

To delete something, select Zap from the Edit menu. The pointer will change to a small lightning bolt. Move the tip of the bolt to whatever device or wire is to be deleted and click the mouse button. To switch back to the pointer, click anywhere in the menu bar.

To move a device, place the pointer inside the device and press and hold the mouse button.
The selected device will now follow the mouse pointer, although the program will limit the movement based on the current circuit. The organization of your circuit is greatly improved by labeling switches, gates, etc. to reflect their purpose. To name a device or wire, select Name from the Edit menu. The pointer will change to a pencil. Click the pencil near the wire you want to label (don't click nearer than 5 pixels to a device or the naming won't work). The pencil will change to an insertion-point I-beam. As long as you hold down the mouse button the position of the I-beam can be moved. To type in the label, release the mouse button and enter the text followed by return. The label can be moved at a later time just like any other device. Devices can also be named; click inside the device to name it. Miscellaneous text can be placed at any point on the circuit as well. If, for example, an AND gate has a 1 input and the other input is not connected, then the output of the AND gate will be an unknown. Finally, a "C" indicates a conflict. This occurs when two outputs are connected together. The most common causes of circuit problems, and their solutions, are:

1) Wires that aren't joined: double-click on likely bad wires and make sure they are joined; place probes on likely places to isolate the fault.
2) A device is placed over top of another device: try deleting a device which doesn't seem to be working.
3) The set and clear inputs of flip-flops are not connected to 1: as in 1), double-click on the set and clear inputs to make sure they are connected to 1.
4) An unknown signal that resulted during construction of the circuit was not cleared: see if selecting clear unknowns helps.
5) Outputs are connected together: as in 1), double-click on wires to isolate the problem.

DOS LogicWorks is installed on the PCs in cl 135.2. It resides in the directory "\lgworks". To start LogicWorks, cd to its directory and type "lw".
For example:

C:\> cd lgworks
C:\lgworks> lw

After a short time three windows will appear on your screen, a Circuit Window, a Parts Window and a Timing Window. The Circuit Window is the window that is most frequently used, so you may want to resize it so that it is a bit more workable. This is done by selecting one of the window sizing controls in any of the four corners of the window, clicking the left mouse button, and dragging the mouse to resize the window. You can drag the window around the screen by moving the mouse pointer over the title bar of the window, pressing the left button, and then dragging it to reposition the window. You can also close the circuit window by selecting the close button in the upper left-hand side of the window and clicking on it with the left mouse button. By pressing the right mouse button when the mouse pointer is in the title bar area of the window, you will be given a choice of whether to close the window or to enlarge it so that it fills the whole screen. The Main Menu can be reached by pressing the right mouse button while the pointer is either inside a window or on the background. The following Circuit Window illustrates features of a LogicWorks window, as well as the short-cut icons that are located in the bottom left-hand corner of the window. The functions that these icons offer can also be accessed from the Main Menu (shown above). From the Parts Menu you can select the kind of device you would like to use, be it a gate (AND, OR, XOR, etc.), an IO device (a probe, a switch, an LED, etc.), or some other generic device (a flip-flop, a clock, etc.). To select the type of device you want to use, place the mouse cursor over the type selection button, just beneath the title bar in the Parts Menu, and press the left mouse button. A list of different types will appear. Move the mouse to select the appropriate type. Once a type has been selected, a list of possible devices of that type is displayed.
You can select the device you would like to place in the Circuit Window by double-clicking on that device with the left mouse button. The pointer will be replaced by a flickering circuit symbol of the device that you selected. The orientation can be changed by clicking on the orientation icon in the bottom left corner of the Circuit Window. An alternate way of specifying the orientation is by pressing the appropriate arrow key on the keyboard. To place the device, move the symbol to the desired location in the Circuit Window and press the left mouse button to place the device there. This may be repeated as desired. To return to the pointer, click anywhere on the menu bar, press the space bar, or click on the pointer icon in the bottom left-hand corner of the window. To draw a wire, press and hold down the left mouse button with the pointer at the start of another wire or chip input, indicated by a small square which represents a solder connection. The wiring tool icon in the bottom left-hand corner of the Circuit Window provides a different way of placing wires. Clicking on the desired starting and ending points will cause a wire to be automatically drawn between these points. If there is no end point to join your wire to, then double-click the left mouse button to draw a wire up to the point where the wiring tool is currently located. To delete something, select the zap tool icon from the bottom left-hand corner of the Circuit Window. Move the tip of the bolt to whatever device or wire you want to delete and press the left mouse button. To switch back to the pointer, press the space bar once or click anywhere in the menu bar. To move a device, place the pointer inside the device, then press and hold the left mouse button. This "grabs" the device, indicated by its change to a highlighted colour. The selected device will now follow the mouse anywhere in the current Circuit Window. The organization of your circuit will be greatly improved if you label switches, gates, etc.
to reflect their purpose. To name a device or wire, select "Name" from the Edit menu or select the naming tool icon in the bottom left-hand corner of your Circuit Window. The pointer will change to a pencil. Click the pencil near the wire you want to label (do not click nearer than 5 pixels to a device or the naming will not work). The pencil will then change into an insertion-point I-beam. As long as you hold down the mouse button the position of the I-beam can be moved. To type in the label, release the mouse button and enter the text. The label can be moved at a later time just like any other device. Devices can also be named; simply click inside the device to name it. Miscellaneous text can also be placed at any point on the circuit. The Timing Window is like an oscilloscope in that it displays the value of a named signal on the vertical axis versus time on the horizontal axis. In this way the behaviour of a circuit can be monitored over time. In order for the timing information of a particular signal to appear in the Timing Window, you must name the signal with the naming tool, from the Circuit Window. The order of the signals can be changed in the Timing Window by selecting the name of the signal with the mouse and moving it up or down. The speed of the Timing Window can be modified by using the icons in the bottom left-hand corner of the window. The three timing icons correspond to the icons in the Circuit Window: pause, slow, and fast simulation. Selecting an icon in either of the Circuit or Timing windows changes the selection in the other window as well. To print out the timing diagram, make sure that the Timing Window is selected as the current window, then choose the "Print Timing..." option from the File menu. If, for example, an AND gate has a 1 input and the other input is not connected, then the output of the AND gate will be an unknown. Finally, a "C" indicates a conflict. This occurs when two outputs are connected together.
These are useful for displaying and entering four-bit values. The low bit is at the bottom and the most significant bit is at the top. The pin on the bottom of the keyboard generates a short clock pulse every time one of the hex digits is selected. A tri-state buffer is an electronic switch. The input on the top of the buffer controls whether a voltage is passed through to the output or not. In the following diagram the input is let through to the output when the switch is low. When the switch is high, the input is not connected to the output and is therefore floating, which is why the probe shows a "Z". The most common causes of circuit problems, and their solutions, are:

1) Wires that are not joined: double-click on likely bad wires and make sure they are joined; place probes on likely wires to monitor and hence isolate faults.
2) A device is placed on top of another device: try deleting a device which does not seem to be working.
3) The set and clear inputs of flip-flops are not connected to 1: as in 1), double-click on the set and clear inputs to make sure they are connected to 1.
4) An unknown signal that resulted during construction of the circuit was not cleared: see if selecting the Options/Clear Unknowns option from the main menu will help.
5) Outputs are connected together: separate them.

Rayshade is a program for creating ray-traced images. It reads a description of a scene to be rendered and produces a colour image corresponding to that description. Rayshade is available on all Unix platforms in the CS department. The version that this manual describes is version 4.06. Additional information can be acquired from the sources mentioned later in this document.

rayshade -n infile.ray > outfile.rle

The previous command invokes rayshade with the no-shadow option. The scene description is contained in the file infile.ray and the output RLE file is placed in the file outfile.rle.
Rayshade also produces a great deal of diagnostic and statistical information to standard error. To view an RLE file, you can use the program xv. Xv will read a 24-bit RLE image and display it on the local workstation. Xv can also be used to convert the RLE image into another format, so as to conserve disk space, since RLE images tend to be quite large compared to GIF or JPEG images. The command-line options are:

Set the adaptive ray tree pruning colour. If all channel contributions fall below the given cutoff values, no further rays are spawned.
-c  Continue an interrupted rendering.
-E eye_separation  Set eye separation for stereo imaging.
-F report_freq  Set frequency, in lines, of status reports (default 10).
-h  Print a short usage message.
-j  Perform jittered sampling.
-l  Render image for left eye (requires -E option).
-n  Do not trace shadow rays.
-O output_file  Override image file name in input file, if any.
-P cpp_arguments  Specify the options that should be passed to the C preprocessor.
-q  Do not print warning messages.
-R xres yres  Set image resolution.
-r  Render image for right eye (requires -E option).
-S samples  Specify the number of jittered samples.
-s  Do not cache shadowing information.
-T red_thresh green_thresh blue_thresh  Specify the adaptive ray-depth cutoff threshold.
-V filename  Write verbose output to filename.
-v  Write verbose output to standard output.
-W minx maxx miny maxy  Render the specified window.

The following sections describe the keywords which may be included in the input file.

eyep x y z Specifies the eye's position in space. The default is (0, -8, 0).

lookp x y z Specifies the point at which the eye is looking. The default is (0, 0, 0).

up x y z Specifies the direction which should be considered "up" from the eye's position. Note that this vector need not be perpendicular to the vector between the look point and the eye's position. The default is (0, 0, 1).
fov horizontal_field_of_view [vertical_field_of_view] The horizontal_field_of_view specifies, in degrees, the angle between the center of the image and both the left-most and right-most columns of pixels. If present, the vertical_field_of_view specifies the angle between the center of the image and the center of the top-most or bottom-most row. If not present, the vertical field of view is calculated using the screen resolution and the assumption that pixels are square. Thus, the fov keyword actually specifies twice the field-of-view. The default horizontal field-of-view is 45 degrees, while the default vertical field-of-view is calculated as described above.

screen x_resolution y_resolution Specifies the horizontal and vertical resolution of the image to be rendered. This command may be overridden through use of the -R option. The default resolution is 512 by 512 pixels.

background red green blue Specifies the colour that should be assigned to rays which do not strike any object in the scene. The default is black (0, 0, 0).

outfile filename Specifies the name of the file to which the resulting image should be written. By default, the image is written to the standard output. This command may be overridden through the use of the -O option.

aperture aperture_radius The aperture_radius is the radius, in world units, of the aperture centered at the eye point. This controls, in conjunction with focaldist, the depth of field, and thus the amount of focus blur present in the final image. Rays are cast from various places on the aperture disk towards a point which is focal_distance units from the center of the aperture disk. This causes objects which are focal_distance units from the eye point to be in sharp focus. Note that an aperture_radius of zero causes a pinhole camera model to be used, and there will be no blurring (this is the default). Increasing the aperture radius leads to increased blurring.
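Taken together, the viewing keywords above might form the header of an input file. The following is only a sketch: the values are illustrative, and it assumes the eye position is set with the eyep keyword.

```
/* A minimal viewing setup (a sketch; values are illustrative).
   The eye-position keyword is assumed to be eyep. */
eyep       0 -8 2
lookp      0 0 0
up         0 0 1
fov        45
screen     512 512
background 0 0 0
outfile    scene.rle
```

Since rayshade input is run through the C preprocessor (see the -P option), C-style comments such as those above may be used in the file.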
When using a non-zero aperture_radius, it is best to use jittered sampling in order to reduce aliasing effects.

focaldist focal_distance Specifies the distance, in world units, from the eye point to the focal plane. Points which lie in this plane will always be in sharp focus. By keeping aperture_radius constant and changing focal_distance, it is possible to create a sequence of frames which simulate pulling focus. By default, focal_distance is equal to the distance from the eye point to the look point.

maxdepth maximum_depth Controls the maximum_depth of the ray tree. The default is 15, with eye rays considered to be of depth zero.

cutoff cutoff_threshold Specifies the adaptive ray-depth cutoff_threshold. When any ray's maximum contribution to the final colour of a pixel falls below this value, the ray and its children (specularly transmitted and reflected rays) are not spawned. This threshold may be overridden through the use of the -T option. The default value is 0.002.

sample num_samples [jitter | nojitter] Specifies num_samples^2 jittered samples (the default). See SAMPLING for details. When specified, this value may be overridden through the use of the -S option. The default value is 3, the maximum value is 5. If nojitter is specified, sample locations and times will not be jittered.

contrast red green blue Specifies the maximum contrast allowed between samples in a (sub)pixel before subdivision takes place. See SAMPLING for details. When specified in the input file, these values may be overridden through the use of the -C option. The defaults for the red, green and blue channels are 0.25, 0.2, and 0.4, respectively.

Any number of light sources may be defined, but rendering time will increase with each new light source. It should also be kept in mind that light sources will not actually appear in the image, even if they are defined as being in the frame.
By default, rayshade will create a directional light source of intensity 1.0 defined by the vector (1, -1, 1) if no other light source is specified. In the definitions below, brightness specifies the intensity of the light source. If a single floating-point number is given, the light source emits a "white" light of the indicated normalized intensity. If three floating-point numbers are given, they are interpreted as the normalized red, green and blue components of the light source's colour. Lights are defined as follows:

light intensity ambient Defines the amount of light present in the entire scene. Only one ambient light source is permitted; if more than one is specified, only the last instance is used. A surface's ambient colour is multiplied by the intensity of the ambient source to give the total ambient light reflected from its surface. The default intensity is 1, 1, 1.

light intensity directional x y z Defines a light source with the given intensity that is defined to be in the given direction from every point it illuminates. The direction (x, y, z) does not need to be normalized.

light intensity point x y z Creates a point source located at (x, y, z).

light intensity spot x y z tx ty tz a [in out] Places a spotlight at position (x, y, z), oriented so as to be pointing at (tx, ty, tz). Value in is the angle at which the light source begins to be attenuated. Value out is the angle at which the spotlight intensity is 0. By default, both are 180 degrees. The intensity of the light falls off as (cos(angle))^a.

light intensity extended x y z radius Creates an extended source centered at (x, y, z) with the indicated radius. The images produced using extended sources are usually superior to those produced using point sources, but ray-tracing time is increased substantially. Rather than tracing one shadow ray to a light source, multiple rays are traced to various points on the extended source. The extended source is approximated by sampling a square grid of light sources.
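A scene will often combine several of the light-source forms above. A sketch, with made-up intensities and positions:

```
/* Illustrative light definitions (all values are made up). */
light 0.3 ambient                          /* scene-wide ambient level       */
light 1.0 directional 1 -1 1               /* "sunlight" from a direction    */
light 0.8 0.8 0.6 point 5 -5 10            /* warm-coloured point source     */
light 1.0 spot 0 -5 5  0 0 0  3  10 25     /* spotlight aimed at the origin  */
light 0.7 extended 0 -10 10 0.5            /* soft shadows, slower to trace  */
```

Note that each additional source, and especially the extended source, increases rendering time as described above.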
See SAMPLING for more details on the sampling of extended light sources.

light intensity area x1 y1 z1 x2 y2 z2 usamp x3 y3 z3 vsamp Creates a quadrilateral area light source. The u axis is defined by the vector from (x1, y1, z1) to (x2, y2, z2). Along this axis, a total of usamp samples will be taken. The v axis of the light source is defined from (x1, y1, z1) to (x3, y3, z3). Along this axis, a total of vsamp samples will be taken.

Surfaces are defined once, and may be associated with any number of primitive objects. A surface definition is given by:

surface surf_name <Surface Definition>

Surf_name is the name associated with the surface. This name must be unique for each surface. The binding of a collection of surface properties to a given object is accomplished in a bottom-up manner. The surface that is "closest" in the modeling tree to the primitive being rendered is the one that is used to give the primitive its appearance. An object that has no surface bound to it is assigned a default surface that gives it the appearance of white plastic. The Surface Definition consists of a number of keywords and numbers (usually rgb values). Each of the values in a colour triple is normalized, with zero indicating no intensity and 1 indicating total intensity. If any of the surface components are not included in the surface definition, they default to zero. The only exception is the index of refraction, which is assigned the value 1. The following are the component keywords and their associated values:

ambient ar ag ab Ar, ag and ab are used to specify the rgb components of the surface's ambient colour. This colour is always applied to a ray striking the surface.

diffuse dr dg db Dr, dg and db specify the diffuse colour of the surface.
This colour, the brightness component of each light source whose light strikes the surface, and the dot product of the incident ray and the surface normal at the point of intersection determine the colour which is added to the colour of the incident ray.

specular sr sg sb Sr, sg and sb are used to specify the specular colour of the surface. The application of this colour is controlled by the coef parameter, a floating-point number which indicates the power to which the dot product of the surface's normal vector at the point of intersection and the vector to each light source should be raised. This number is then used to scale the specular colour of the surface, which is then added to the colour of the ray striking the surface. This model (Phong lighting) simulates specular reflections of light sources on the surface of the object. The larger coef is, the smaller the highlights will be.

specpow exponent Controls the size of the specular highlight. The larger the exponent, the smoother the apparent finish.

body br bg bb Specifies the body colour of the object. The body colour affects the colour of rays that are transmitted through the object.

extinct coef Specifies the extinction coefficient of the interior of the object.

transp transp Transp indicates the transparency of the object. If non-zero, a ray striking the surface will spawn a ray which is transmitted through the object. The resulting colour of this transmitted ray is scaled by transp and added to the colour of the incident ray. The direction of the transmitted ray is controlled by the index parameter, which indicates the index of refraction of the surface.

reflect refl Refl indicates the reflectivity of the object. If non-zero, a ray striking the surface will spawn a reflection ray. The colour assigned to that ray will be scaled by refl and added to the colour of the incident ray.

index n Specifies the index of refraction. The default value is equal to the index of refraction of the atmosphere surrounding the eye.
translu translu tr tg tb stpow Specifies the translucency, diffusely transmitted colour, and Phong exponent for transmitted specular highlights.

noshadow No shadows are cast on this surface.

Rayshade usually ensures that a primitive's normal is pointing towards the origin of the incident ray when performing shading calculations. Exceptions to this rule are transparent primitives, for which rayshade uses the dot product of the normal and the incident ray to determine if the ray is entering or exiting the surface, and superquadrics, whose normals are never modified due to the nature of the ray/superquadric intersection code. Thus, all non-transparent primitives except superquadrics will in effect be double-sided.

Primitives are specified by lines of the form:

primitive_type [surface] <primitive definition> [transformations] [texture mapping information]

Surface is the name of the surface to be associated with the primitive, if any. Texture mapping and transformations are discussed below. A list of available primitives follows. The surface parameter is optional, and is omitted in all the following descriptions.

blob thresh st r x y z [st r x y z ...] Defines a blob consisting of a threshold equal to thresh, and a group of one or more metaballs defined with a strength, st, radius, r, and position (x, y, z).

box x1 y1 z1 x2 y2 z2 Creates an axis-aligned box which has the specified opposite corners.

cone xbase ybase zbase xtop ytop ztop base_radius top_radius Creates a (truncated) cone which extends from (xbase, ybase, zbase) to (xtop, ytop, ztop). The bottom of the cone will have radius base_radius, while the top will have radius top_radius.

cylinder xb yb zb xt yt zt radius Creates a cylinder which extends from base (xb, yb, zb) to top (xt, yt, zt) and has the indicated radius.

heightfield file Creates a heightfield defined by the altitude data stored in the named file.
The height field is based on perturbations of the unit square in the z = 0 plane, and is rendered as a surface tessellated by right isosceles triangles. The binary data in file is stored as an initial integer giving the square root of the number of data points in the file, followed by the altitude (Z) values stored as floating-point numbers. Non-square height fields may be rendered by setting vertex heights to less than or equal to -1000; triangles which have any vertex less than or equal in altitude to this value are not rendered.

plane xn yn zn x y z Creates a plane which passes through the point (x, y, z) and has normal (xn, yn, zn).

poly x1 y1 z1 x2 y2 z2 x3 y3 z3 [x4 y4 z4 ...] Creates a polygon with the specified vertices. The vertices should be given in counterclockwise order as one faces the "top" of the polygon. The polygon may be non-convex, but non-planar polygons will not be rendered correctly. The number of vertices defining a polygon is limited only by available memory.

sphere radius x y z Creates a sphere with the indicated radius centered at (x, y, z).

torus rmajor rminor x1 y1 z1 x2 y2 z2 Creates a torus centered around (x1, y1, z1) with a minor radius rminor and a major radius rmajor. The up vector is (x2, y2, z2).

triangle x1 y1 z1 x2 y2 z2 x3 y3 z3 Creates a triangle with vertices (x1, y1, z1), (x2, y2, z2) and (x3, y3, z3). Vertices should be given in counterclockwise order as one is looking at the "top" face of the triangle.

triangle p1x p1y p1z n1x n1y n1z p2x p2y p2z n2x n2y n2z p3x p3y p3z n3x n3y n3z Defines a Phong-shaded triangle. Here, the first three floating-point numbers specify the first vertex, the second three specify the normal at that vertex, and so on. Again, vertices should be specified in counterclockwise order.
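The binary heightfield layout described above (an initial integer giving the grid's side length, followed by the Z values as floats) can be produced with a short script. This is only a sketch: the word sizes and native byte order are assumptions about how rayshade reads the file, and the file name is made up.

```python
import math
import struct

def write_heightfield(path, grid):
    """Write a square altitude grid in the layout described above:
    an initial integer (the square root of the number of data points),
    followed by the Z values as floating-point numbers.  Native byte
    order and 32-bit int/float sizes are assumptions."""
    n = len(grid)
    assert all(len(row) == n for row in grid), "grid must be square"
    with open(path, "wb") as f:
        f.write(struct.pack("i", n))           # side length of the grid
        for row in grid:
            f.write(struct.pack("%df" % n, *row))  # one row of altitudes

# A small 8x8 field with a single smooth bump in the middle.
n = 8
grid = [[math.exp(-((x - 3.5) ** 2 + (y - 3.5) ** 2) / 4.0)
         for x in range(n)] for y in range(n)]
write_heightfield("bump.hf", grid)
```

The resulting file would then be referenced in the input file as, e.g., heightfield bump.hf.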
Currently, all three vertex/normal pairs are stored for every triangle (as opposed to storing pointers to these vectors, which would reduce storage space in cases where more than one triangle shares a vertex).

There are two types of aggregate objects, but they are defined in roughly the same way. Each is defined by specifying a keyword that gives the type of aggregate, followed by a series of object instantiations and surface definitions, and terminated with the end keyword. The two types of aggregates are list and grid.

list ... end Each object in the list is tested for intersection with the ray. The closest intersection is returned and the others are ignored. Any number of primitive objects may appear in the list.

grid xvox yvox zvox ... end The region of space occupied by the grid is divided into a number of discrete box-shaped voxels. Each of the voxels contains a list of the objects that intersect the voxel. This limits the number of objects tested for intersection with the ray to only those most likely to be intersected. (xvox, yvox, zvox) defines the voxel space.

CSG objects are defined by surrounding the objects to be operated on, as well as any surface-binding commands, with the appropriate CSG keyword and end.

union <object> <object> [<object> ...] end Specifies an object defined as the union of the given objects.

difference <object> <object> [<object> ...] end Specifies an object defined as the difference of the given objects.

intersect <object> <object> [<object> ...] end Specifies an object defined as the intersection of the given objects.

Currently only two objects in a CSG list are supported, but support for more is planned in a future release.

name objname <instance> Associates objname with the given object. The specified object is not actually instantiated; it is only aliased with the given name.

object objname [<transformations> <textures>] Instantiates a copy of the object associated with objname.
Transformation and texture information can be applied to the object being instantiated. For convenience, one may also define surfaces inside an object-definition block. Surfaces defined in this manner are nevertheless globally available. In addition, object definitions may be nested. This facilitates the definition of objects through the use of recursive programs.

translate x y z Translates the object by (x, y, z).

rotate x y z theta Rotates the object counterclockwise about the vector (x, y, z) by theta degrees.

scale x y z Scales the object by (x, y, z).

transform x1 y1 z1 x2 y2 z2 x3 y3 z3 [xd yd zd] Transforms the object by the column-major matrix specified by the nine floating-point numbers. Thus, a point (x, y, z) on the surface of the object is mapped to (x*x1 + y*y1 + z*z1, x*x2 + y*y2 + z*z2, x*x3 + y*y3 + z*z3). If it is given, (xd, yd, zd) specifies a translation vector.

texture texture_type [arguments] [transformations] Texture_type is the name of the texture to apply. Arguments are any arguments that the specific texture type requires. If supplied, the indicated transformations will be applied to the texture. (More accurately, the inverse of the supplied transformation is applied to the point of intersection before it is passed to the texturing routines.) Versions of Perlin's Noise() and DNoise() functions are used to generate values for most of the interesting textures. Currently, there are eleven textures available:

blotch blend_factor surface This texture produces a mildly interesting blotchy looking surface. Blend_factor is used to control the interpolation between a point's default surface characteristics and the characteristics of the named surface. A value of 0 results in a roughly 50-50 mix of the two surfaces. Higher values result in greater instances of the "default" surface type.

bump scale Applies a random bump map to the surface being textured. The point of intersection is passed to DNoise().
The returned normalized vector is weighted by scale and added to the normal vector at the point of intersection.

checker surface Applies a (3D) checkerboard texture to the object being textured. Every point that falls within an "even" cube will be shaded using the characteristics of the named surface. Every point that falls within an "odd" cube will retain its usual surface characteristics. Be warned that strange effects due to roundoff error are possible when the planar surface of an object lies in a plane of constant integral value in texture space.

cloud scale H lambda octaves cthresh lthresh tscale This texture is a variant on Geoff Gardner's ellipsoid-texturing algorithm. It should be applied to unit spheres centered at the origin. These spheres may be transformed to form the appropriately shaped cloud or tree. The parameters are identical to those for the fbm texture (see below), plus the three threshold parameters to control the overall density of the cloud.

fbm offset scale H lambda octaves thresh [colourmap] This texture generates a sample of discretized fractional Brownian motion (fBm) and uses it to modify the diffuse and ambient components of an object's colour. If no colourmap is named, the sample is used to scale the object's diffuse colour. If a colourmap name is given, a 256-entry colourmap is read from the named file, and the object is coloured using the values in this colourmap (see below). Scale is used to scale the output of the fractional Brownian motion function. Offset allows one to control the minimum value of the fBm function. H is related to the Holder constant used in the fBm (a value of 0.5 works well). Lambda is used to control the lacunarity, or spacing between successive frequencies, in the fBm (a value of 2.0 will suffice). Octaves specifies the number of octaves of Noise() to use in simulating the fBm (5 to 7 works well), and thresh is used to specify a lower bound on the output of the fBm function.
Any value lower than thresh is set to zero.

fbmbump offset scale H lambda octaves This texture is similar to the fbm texture. Rather than modifying the colour of a surface, fbmbump acts as a bump map.

gloss glossiness Gives a reflective surface a glossy appearance. This texture perturbs an object's surface normal such that the normal "samples" a cone of unit height with radius 1 - glossiness. A value of 1 results in perfect mirror-like reflections, while a value of 0 results in fuzzy reflections.

marble [colourmap] This texture gives a surface a marble-like appearance. If the name of a colourmap file is given, the marble will be coloured using the RGB values in the colourmap. If no colourmap name is given, the diffuse and ambient components of the object's surface are simply scaled. One may transform the texture to control the density of the marble veins.

sky scale H lambda octaves cthresh lthresh Similar to the fbm texture. Rather than modifying the colour of a surface, this texture modulates its transparency. Cthresh is the value of the fbm function above which the surface is completely opaque. Below the value of lthresh, the surface is completely transparent.

stripe <surface> size bump Applies a raised stripe pattern to the surface. The surface properties used to colour the stripe are those of the given surface. The width of the stripe, as compared to the unit interval, is given by size. The magnitude of bump controls the extent to which the bump appears to be displaced from the rest of the surface. If negative, the stripe will appear to sink into the surface; if positive, it will appear to stand out of the surface.

wood This texture gives a wood-like appearance to a surface.

A colourmap is an ASCII file 256 lines in length, each line containing three space-separated integers ranging from 0 to 255. The first number on the nth line specifies the red component of the nth entry in the colourmap, the second number the green component, and the third the blue.
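A colourmap file in the format just described is easy to generate by program. A sketch in Python; the file name and the grey ramp are illustrative only:

```python
def write_colourmap(path):
    """Write a 256-line ASCII colourmap: each line holds three
    space-separated integers in 0..255 (red, green, blue).  This
    example builds a simple black-to-white ramp."""
    with open(path, "w") as f:
        for i in range(256):
            f.write("%d %d %d\n" % (i, i, i))

write_colourmap("grey.map")
```

Such a file could then be named in, for example, an fbm or marble texture definition.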
The values in the colourmap are normalized before being used in texturing functions. Textures which make use of colourmaps generally compute an index into the colourmap and use the corresponding entry to scale the ambient and diffuse components of a surface's colour.

It is important to note that more than one texture may be applied to an object at any time. In addition to being able to apply more than one texture directly (by supplying multiple "texturing information" lines for a single object), one may instantiate textured objects which, in turn, may be textured or contain instances of objects which are textured, and so on.

component <component>
The named component will be modified. Possible components are: ambient, diffuse, specular, specpow, reflect, transp, and bump.

range high low
Specify the range of values to which the values in the image should be mapped. A value of 1 will be mapped to high, 0 to low. Intermediate values will be linearly interpolated.

smooth
When given, pixel averaging will be performed in order to smooth the sampled image. If not specified, no averaging will occur.

textsurf <surface specification>
For use when modifying surface colours, this keyword specifies that the given surface should be used as the base to be modified when the alpha value in the image is non-zero. When alpha is zero, the object's unmodified default surface characteristics are retained.

tile un vn
Specify how the image should be tiled.

fog r g b r_thin g_thin b_thin
Add global exponential fog with the specified thinness and colour. Fog is simulated by blending the colour of the fog with the colour of each ray. The amount of fog colour blended into a ray colour is an exponential function of the distance from the ray origin to the point of intersection divided by the r_thin, g_thin, and b_thin values. If the distance divided by thinness is equal to 1, a ray's new colour will be half of the fog colour plus half its original colour.
fogdeck altitude offset scale chaoscale r g b r_thin g_thin b_thin
Add low-altitude fog, with transmissivity modulated by a chaotic function.

mist r g b r_thin g_thin b_thin zero scale
Add global low-altitude mist of the specified colour. The colour of a ray is modulated by a fog with density which varies linearly with the difference in altitude (Z coordinate) between the ray origin and the point of intersection. The three thin values specify the transmissivity (thinness) of the mist for each of the red, green and blue channels. The base altitude of the mist is given by zero, and the apparent height of the mist can be controlled by scale, which is used to scale the difference in altitude.

Adaptive subdivision works by sampling each pixel at its corners. The contrast between these four samples is computed, and if too large, the pixel is subdivided into four equivalent sub-pixels and the process is repeated. The threshold contrast may be controlled via the -C option or the contrast command. There are separate thresholds for the red, green, and blue channels. If the contrast in any of the three is greater than the appropriate threshold value, the pixel is subdivided. The pixel-subdivision process is repeated until either the samples' contrast is less than the threshold or the maximum pixel subdivision level, specified via the -P option or the adaptive command, is reached. When the subdivision process is complete, a weighted average of the samples is taken as the colour of the pixel.

Jittered sampling works by dividing each pixel into a number of square regions and tracing a ray through some point in each region. The exact location in each region is chosen randomly. The number of regions into which a pixel is subdivided is specified through the use of the -S option. The integer following this option specifies the square root of the number of regions.

Each extended light source is, in effect, approximated by a square grid of light sources.
The length of each side of the square is equal to the diameter of the extended source. Each array element, which is square in shape, is in turn sampled by randomly choosing a point within that element to which a ray is traced from the point of intersection. If the ray does not intersect any primitive object before it strikes a light source element, there is said to be no shadow cast by that portion of the light source. The fraction of the light emitted by an extended light source which reaches the point of intersection is the number of elements which are not blocked by intervening objects divided by the total number of elements. The fraction is used to scale the intensity (colour) of the light source, and this scaled intensity is then used in the various lighting calculations.

When jittered sampling is used, one shadow ray is traced to each extended source per shading calculation. The element to be sampled is determined by the region of the pixel through which the eye ray at the top of the ray tree passed. When adaptive super sampling is used, the -S option or the samples command controls how many shadow rays are traced to each extended light source per shading calculation. Specifically, each extended source is approximated by a square array consisting of samples * samples elements. However, the corners of the array are skipped to save rendering time and to more closely approximate the circular projection of an extended light source. Because the corners are skipped, samples must be at least 3 if adaptive super sampling is being used. Note that the meaning of the -S option (and the samples command) is different depending upon whether or not jittered sampling is being used.

While jittered sampling is generally slower than adaptive subdivision, it can be beneficial if the penumbrae cast by extended light sources take up a relatively large percentage of the entire image or if the image is especially prone to aliasing.
eyep -20 20 20
light 1 directional 1 1 1
surface red ambient 0.2 0 0 diffuse 0.8 0 0 specular 0.5 0.5 0.5 specpow 32 reflect 0.8
surface green ambient 0 0.2 0 diffuse 0 0.8 0
sphere red 8 0 0 -2
plane green 0 0 -10 0 0 1

Passing this input to rayshade will result in an image of a red reflective sphere sitting on a green ground plane. Note that in this case, default values for lookp, up, screen, fov, and background are assumed.

A more interesting example uses instantiation to place multiple copies of an object at various locations in world space:

eyep 10 10 10
fov 20
light 1 directional 0 1 1
surface red ambient 0.2 0 0 diffuse 0.8 0 0 specular 0.5 0.5 0.5 specpow 32 reflect 0.8
surface green ambient 0 0.2 0 diffuse 0 0.8 0
surface white ambient 0.1 0.1 0.1 diffuse 0.8 0.8 0.8 specular 0.6 0.6 0.6 specpow 30
name bl list
sphere red 0.5 0.5 0.5 0
sphere white 0.5 0.5 -0.5 texture marble scale 0.5 0.5 0.5
sphere red 0.5 -0.5 -0.5 0
sphere green 0.5 -0.5 0.5 0
end
object bl translate 1 1 0
object bl translate 1 -1 0
object bl translate -1 -1 0
object bl translate -1 1 0

Here, an object named bl is defined to consist of four spheres, two of which are red and reflective. The object is stored as a simple list of the four spheres. The World object consists of four instances of this object, translated to place them in a regular pattern about the origin. Note that since the marbled sphere was textured in "sphere space" each instance of that particular sphere has exactly the same marble texture applied to it. Of course, just as the object bl was instantiated as part of the World object, one may instantiate objects as part of any other object.
For example, a series of objects such as:

#define ENDCAPS

name wheel list
sphere tire_colour 1 0 0 0 scale 1 0.2 1
sphere hub_colour 0.2 0 0 0
end

name axle list
object wheel translate 0 2 0
object wheel translate 0 -2 0
cylinder axle_colour 0.1 0 -2 0 0 2 0
#ifdef ENDCAPS
disc axle_colour 0.1 0 -2 0 0 -4 0
disc axle_colour 0.1 0 2 0 0 4 0
#endif
end

name truck list
box truck_colour -5 -2 -2 5 2 2 /* Trailer */
box truck_colour 4 -2 -2 8 2 0 /* Cab */
object axle translate -4 0 -2
object axle translate 4 0 -2
end

could be used to define a very primitive truck-like object.

The first and most obvious way is to reduce the number of rays which are traced. This is most simply accomplished by reducing the resolution of the image to be rendered by either using the -R command line option or the screen keyword. By default, a pixel will be subdivided a maximum of one time, giving a maximum of nine rays per pixel total. Alternatively, the -C option or the cutoff and contrast commands may be used to decrease the number of instances in which pixels are subdivided. Using these options, one may indicate the maximum normalized contrast which is allowed before super sampling will occur. If the red, green or blue contrast between neighboring samples (taken at pixel corners) is greater than the maximum allowed, the pixel will be subdivided into four subpixels and the sampling process will recurse until the sub-pixel contrast is acceptable or the maximum subdivision level is reached.

The number of rays traced can also be lowered by making all surfaces non-reflecting and non-refracting or by setting maxdepth to a small number. If set to 0, no reflection or refraction rays will be traced. Lastly, using the -n option or the noshadow command will cause no shadow rays to be traced.

In addition, judicious use of the grid command can reduce rendering times substantially.
However, if an object consists of a relatively small number of simple objects, it will likely take less time to simply check for intersection with each element of the object than to trace a ray through a grid.

The C preprocessor can be used to make creating and managing input files much easier. For example, one can create "libraries" of useful colours, objects, and viewing parameters by using #define and #include.

Another program, raypaint (a GL-specific version, raypaintgl, also exists for SGI workstations), allows you to preview a rayshade file by progressively rendering it in an X window. Although this is slow, it does allow you to get a general idea of how the file looks. The X version of raypaint only displays the image in gray scale but the GL version can display its images in colour.

While transparent objects may be wholly contained in other transparent objects, rendering partially intersecting transparent objects with different indices of refraction is, for the most part, nonsensical.

Rayshade is capable of using large amounts of memory. In the environment in which it was developed (machines with at least 8 Megabytes of physical memory plus virtual memory), this has not been a problem, and scenes containing several billion primitives have been rendered. On smaller machines, however, memory size can be a limiting factor. The "Total memory allocated" statistic is the total space allocated by calls to malloc. It is not the memory high-water mark. After the input file is processed, memory is only allocated when refraction occurs (to push media onto a stack) and when ray tracing height fields (to dynamically allocate triangles).

The image produced will always be 24 bits deep. Explicit or implicit specification of vectors of length less than epsilon (1.E-6) results in undefined behavior. For complete information on the RLE file format, refer to the man page rle(5).
The assembler is used to convert the source code into object code, which can be used by the linker to create an executable file. The normal format for CS 300 is:

TASM /la /zi file.asm

If TASM is typed without any files specified, the following help screen is displayed:

Turbo Assembler Version 2.5 Copyright (c) 1988, 1991 Borland International
Syntax: TASM [options] source [,object] [,listing] [,xref]
/a,/s Alphabetic or Source-code segment ordering
/c Generate cross-reference in listing
/dSYM[=VAL] Define symbol SYM = 0, or = value VAL
/e,/r Emulated or Real floating-point instructions
/h,/? Display this help screen
/iPATH Search PATH for include files
/jCMD Jam in an assembler directive CMD (eg. /jIDEAL)
/kh# Hash table capacity # symbols
/l,/la Generate listing: l=normal listing, la=expanded listing
/ml,/mx,/mu Case sensitivity on symbols: ml=all, mx=globals, mu=none
/mv# Set maximum valid length for symbols
/m# Allow # multiple passes to resolve forward references
/n Suppress symbol tables in listing
/o,/op Generate overlay object code, Phar Lap-style 32-bit fixups
/p Check for code segment overrides in protected mode
/q Suppress OBJ records not needed for linking
/t Suppress messages if successful assembly
/w0,/w1,/w2 Set warning level: w0=none, w1=w2=warnings on
/w-xxx,/w+xxx Disable (-) or enable (+) warning xxx
/x Include false conditionals in listing
/z Display source line with error message
/zi,/zd Debug info: zi=full, zd=line numbers only

The linker is used to link one or more object files, created by Turbo Assembler or a compiler, to create an executable file. The normal format for CS 300 is:

TLINK /v file.obj [file2.obj] [...]
If TLINK is typed without any object files specified, the following help screen is displayed:

Turbo Link Version 4.0 Copyright (c) 1991 Borland International
Syntax: TLINK objfiles, exefile, mapfile, libfiles, deffile
@xxxx indicates use response file xxxx
Options:
/m = map file with publics
/x = no map file at all
/i = initialize all segments
/l = include source line numbers
/L = specify library search paths
/s = detailed map of segments
/n = no default libraries
/d = warn if duplicate symbols in libraries
/c = lower case significant in symbols
/3 = enable 32-bit processing
/v = include full symbolic debug information
/e = ignore Extended Dictionary
/t = create COM file (same as /Tc)
/o = overlay switch
/P[=NNNNN] = pack code segments
/A=NNNN = set NewExe segment alignment factor
/ye = expanded memory swapping
/yx = extended memory swapping
/C = case sensitive exports and imports
/Txx = specify output file type
/Tdx = DOS image (default)
/Twx = Windows image (third letter can be c=COM, e=EXE, d=DLL)

"FileMaker Pro is an electronic database manager ... [that] lets you rearrange the presentation of your information . . . . You can use the same information, such as names and addresses, in address lists, mailing labels, form letters, invoices and multi-page reports." [FileMaker Pro - Getting Started, Intro-9]

Create a new database by first clicking on the FileMaker icon and then clicking on the New icon in the startup screen shown as follows.

Filemaker Startup Screen

An existing database can be opened by clicking on Open in this dialogue box and then typing in the name of the file. Alternately, do not call FileMaker directly; simply click on the icon of the existing database. This will launch FileMaker and also open the database.

When you instruct FileMaker to create a new database you are automatically put in the Define Fields screen. The following illustration represents this screen. Enter the name for a field, click on the desired field type, then click on Create.
The field that you just entered will be highlighted but carry on: type in the next field name, select its type, and click on Create again. When you have finished defining fields in this manner, click on Done. This will put you in 'browse' mode.

Define Fields Screen

When you exit Design, the Browse screen appears. The first time you encounter this, the cursor will be located at the first field of the first record. Type in the contents of fields, using the [Tab] key to move between fields. Select New Record from the Edit menu to create the next record. Use [Enter] (not [Return]) to terminate the last entry. The following illustration represents the Browse screen with 'text' fields defined under the names 'first', 'second', and 'third'.

Screen

Fields can be organized in a variety of ways to produce different reports. Each organization is referred to as a layout. Click on Layout in the Select menu to design a report layout. The standard layout has: a header, which will appear at the top of each printed page; a body, which contains field contents; and a footer, which appears at the bottom of each printed page.

Click on New Layout from the Edit menu to begin defining a layout. Now enter a name for the layout and click on one of the available layout types (standard, columnar, single page, labels, envelope, or blank). Click OK to proceed to the dialogue box which allows you to select what fields and what field order you desire in your report. When the list of fields appears, click on the field you wish to appear first, then click on the Move button in the center of the screen. Continue selecting fields in this manner, and click OK when done.

Layout Field Dialog Box

After defining fields for your layout you are returned to the layout screen. You will not see any field contents in this screen, only field labels. Select Rulers from the Layout menu to have vertical and horizontal rulers appear on the screen.
This will make it easier to define the desired size of the fields. Begin by adjusting the appearance of the header area by clicking and holding above and to the left of the first field (shown in the following illustration).

Layout Screen

Drag the mouse below and to the right of the last field. Square boxes appear around the fields selected. Now go to the format menu and choose the desired font style, size, and so on. Click on a blank portion of the screen to de-select field(s).

To adjust individual fields:
- click on the field to make the square boxes outline the field, OR
- click and hold above and to the left of a field and then drag to enclose desired fields (for example, the label in the header record and the corresponding field in the body of the report)
- click and hold on the outlined field(s) then move it to where you want it placed (holding the [Shift] key down before you do this will constrain the motion to the direction you first begin when moving the mouse)
- to enlarge a field: click and drag the lower right square box
- to change a field label: click on the A icon in the tools palette on the left side of the screen, highlight the field name you want to change, then type in the desired field name

Return to Browse mode by clicking on Browse in the Select menu. Click on the layout name in the top left corner of the screen to call a pop-up menu that you can use to move between layouts. The following example illustrates two sample layouts.

The material for this manual has been compiled from man page information and from various other X Windows references, most notably "X Window System Programming" by Nabajyoti Barkakati. This manual is only intended as an introduction to programming in X Windows. Its purpose is to show how to produce programs that will make use of X Windows features and functionality and to help you become familiar with the coding and design philosophies behind GUI and event-driven programming.
A solution to this problem was to design a new window system with a standard set of library routines and implement that system on all graphics workstations. Robert Scheifler, Jim Gettys, and their team members took this approach when they developed the X Window System (or X, as it is more commonly called) at MIT in 1984. They assumed a few basic capabilities in the workstation: a bit-mapped graphics display, a keyboard, and a mouse. A dozen computer vendors announced a joint effort to support and standardize X in January 1987. Since then almost all workstation vendors have accepted X as the basis of the graphical user interface to their hardware.

The X server process, running on a workstation with a bit-mapped graphics display, manages regions of the screen known as windows where the output from X client applications appears. The X client, whether running locally or remotely, sends requests to the server using a communication channel. An X client on a system can display its output on any X server without regard to display hardware or the operating system. Application developers do not work directly with the X protocol. They use the Xlib routines or one of the many X toolkits (ex. Xt Intrinsics or OSF Motif) that provide higher-level objects, known as widgets, for building user interfaces.

X windows are arranged in a hierarchy. That is, the root window, or your desktop, is the top-level window. It is the parent and all windows within it are its children. The X window that opens when you run your application is called the main window of your application. It is a child of the root window. Inside your main application window you may have buttons or sliders or other sorts of objects. These too are windows and they are children of your main window. It is important to remember this as you write your X application because certain functions require that you specify parent and/or child windows.
For more information, it is suggested that you look in any of several X Windows texts such as the one mentioned above.

A typical X window application program contains the following basic components:
1. Make a connection to a display server.
2. An X application you write may run on any number of different computers which support different display resolutions, supported colours, fonts, etc. A general X client will usually first load default resources available from the display that it is connected to and then define its own fonts and colours.
3. The geometrical components of the window such as size, shape, and position in a screen are specified.
4. The top-level window is then created -- it is important to note that it is only a data structure and not yet displayed on the screen of the server. Now you may set or change window manager properties, graphics context, and any other window attributes.
5. Make the top-level window visible.
6. You may want to create child windows under the main window. Child windows can be created by following steps similar to 3, 4, and 5 above.
7. Select which of the events occurring in the window should be processed by your program.
8. Enter the main processing module of your application program, which is typically an event loop.

It is important to keep in mind that X programs are event-driven. This means that the main program loops indefinitely and reacts to events as they happen. These events, most commonly, are button presses or window movements. The application is written around the event processing, which is usually implemented as a large while statement containing one or more if-then-else or case statements.

On the DEC Stations available in CL 136, the program xdm prompts you for your username and password. It handles most of the startup details, which are:
1. Get a username and password.
2. Sets the DISPLAY environment variable to correspond to the name of your terminal.
3.
Reads information from the file .Xresources, if it exists in your home directory.
4. Executes the file .xsession from your home directory, if it exists, else it executes the system .xsession file.

If there is an error in the .xsession file then you will not be able to login. To fix this, you will have to login with a telnet session from a regular terminal and fix your .xsession file. You may also log in through xdm by pressing the F1 key after your password to load the system .xsession file.

The DISPLAY environment variable is set using the setenv command. This specifies the machine and the display number to send output to. For example:

setenv DISPLAY dec5kj:0

specifies that the output of any clients is to go to the machine dec5kj and to display 0. A computer could have more than one monitor attached so the second one would be display 1.

xprog WIDTHxHEIGHT[+/-]xoffset[+/-]yoffset

Depending on the application, WIDTH and HEIGHT are either in characters or pixels. The offset, if positive, specifies the distance from the left side or top of the screen. If it is negative, the offset is from the right side or bottom of the screen. For example:

xterm 80x30+100-500

will open an xterm client that is 80 columns by 30 lines. The window will be displayed at a location 100 pixels in from the left hand side of the screen and 500 pixels up from the bottom of the screen.

Another form of access control is to use the xhost command to specify which hosts have access to your X server. For example:

xhost +erato

gives anyone on erato access to your X server. On the other hand:

xhost -erato

takes that access away. To disable or enable access to the world, just use the + or - without any host name.

The resource names for different applications are usually explained in the man pages for that application.
A simple example is to set the cursor colour for your xterm sessions:

XTerm*cursorColor: gold

Default fonts, colours, displays, geometries, titles, and almost any other X option can be specified from the .Xresources file.

X is only a graphics protocol. The Window Manager can be any window manager the user wishes to install, whether it be twm, mwm (Motif), olwm (Open Look), or fvwm. The default window manager at the U of R is twm. Detailed documentation on twm can be found from the man pages. The file .twmrc can be used to configure twm colours, fonts, and behaviour as described in the man page. Documentation for mwm is also on-line in the form of man pages. The file .mwmrc can be used to configure it in a manner similar to the twm configuration file.

Most X programs are written in C. As a matter of fact the entire X Windows system itself is written in C so it is quite portable to different environments. Since most X applications are also written in C they are also portable. However, other languages, such as Pascal, FORTRAN, and Icon are also used to write X applications because of X's standard programming interface.

To write an X application in C you must include a number of .h files. These are:

#include <X11/Xlib.h> - Defines the basic structures that are needed by Xlib.
#include <X11/Xutil.h> - Used by the utility functions of Xlib.
#include <X11/Xresource.h> - This is required if you use the resource manager functions (those that begin with Xrm).

You must link in the Xlib library routines when you compile your program. This is no different than linking in any other C libraries. For example:

cc -o xprog xprog.c -lX11

NOTE: To conserve the limited disk space on the DECstations, a subset of the X libraries has been moved to a common directory on the central server (Mercury). You will have to link these libraries in using the -L option.
For example:

cc -o xprog xprog.c -lX11 -L/net/share/ULTRIX/depot/X11R5/lib

The rules for running your X client are the same as those for running any X client, such as xcalc or xv. You can manipulate the window as you could any other window and you can make use of any X facility such as remote servers, etc.

Following are a number of X Windows functions. To fully understand how the functions work, you should examine the sample program, xhello.c, in section 8.7.7. The descriptions below are brief and provide just enough information so that you can understand how they work. For more detailed information it is highly recommended that you read the man pages. The X functions and Xlib are heavily documented in the man pages and provide very useful cross-referencing. When using the man pages, be sure to use the proper capitalization of function or data-type names.

Display *p_disp
- Define a pointer to a structure of type Display which contains information about the X server.

XOpenDisplay(display_name)
- Specify a connection to the X server (the display name), as a string, in the standard hostname:#.# syntax (ex. dec5kj.cs.uregina.ca:0.0)
- Return a pointer of type Display.

To create a GC, you should follow this procedure:

Display *p_disp;
Window my_window;
XGCValues XGCv;
unsigned long mask;
GC my_GC;
/* set values in XGCv and set up mask */
...
my_GC = XCreateGC(p_disp, my_window, mask, &XGCv);

where XGCv is a structure of XGCValues type which allows you to provide attribute values of your choice. Mask indicates the valid fields in the XGCv structure to be affected. Right now it is enough to be able to create a GC with default settings:

my_GC = XCreateGC(p_disp, my_window, 0, 0);

It should be noted that the second argument in the above example is of type Window. It can also be of type Pixmap, which is a 2D array of memory. Both represent a raster of pixels. One is on-screen (a Window) and the other is off-screen (a Pixmap). Both are called drawables.
After a GC is created, you can modify its attributes by calling the XChangeGC() function. The attributes are the same as for XCreateGC():

XChangeGC(p_disp, my_GC, mask, &XGCv);

Manipulating the contents of the GC is an advanced subject that is beyond the scope of this lab. For more information, interested users can find full descriptions of XGCValues in any X reference book. The layout of the structure can be found in the file <X11/Xlib.h>. It contains attributes such as font, foreground and background colours, clipping masks, tile origins, dashed-line masks, etc.

Because the GC is a resource, it consumes system resources such as memory in the X server. It should be destroyed (freed up) when it is no longer needed:

XFreeGC(p_disp, my_GC);

typedef struct {
    long flags;
    int x, y;                   /* suggested position of window */
    int width, height;          /* suggested width and height */
    int min_width, min_height;  /* min. width and height */
    int max_width, max_height;  /* max. width and height */
    int width_inc, height_inc;  /* width & height increment */
    struct {
        int x;  /* numerator of aspect ratio */
        int y;  /* denominator of aspect ratio */
    } min_aspect, max_aspect;   /* min. & max. aspect ratio */
} XSizeHints;                   /* hints for X window manager */

XFontStruct *fontstr
- Define a pointer to the X font structure.

XLoadQueryFont(p_disp, font)
- Load a font by specifying the connection to the X server (p_disp) and set the font (font) by specifying a font name.
- Returns a pointer of type XFontStruct.
- NOTE: You can use 2 separate functions: XLoadFont and XQueryFont, but this function saves some typing effort.

Colormap c_map
- Define a structure of type Colormap.

XColor color_struct
- Define a structure that holds a red, green, and blue value for a particular colour. This can be copied to and from an entry in Colormap.
- color_struct.pixel contains the pixel value to use wherever you want your desired colour.

DefaultColormap(p_disp, screen)
- Some servers may have more than one screen.
The argument screen specifies the screen number. If the screen is replaced by the macro DefaultScreen(p_disp), the current X window (ie. screen 0) is specified.
- Returns a structure of type Colormap (the default colour map, that is) on the specified screen.

XParseColor(p_disp, c_map, color_value, &color_struct)
- It takes a string specification of a colour, ex: color_value = "pink", searches the colour map c_map, and returns the corresponding RGB values. If "pink" does not exist in c_map, it returns 0.

XAllocColor(p_disp, c_map, &color_struct)
- Allocates a (read-only) colour map entry in c_map corresponding to the closest RGB values supported by the hardware, and returns the pixel value (ie. colour map index) and the content in color_struct is replaced by the RGB values actually used. A 0 is returned if it failed.

WhitePixel(p_disp, screen) and BlackPixel(p_disp, screen)
- Assign the default white or black colours -- this is usually done if the desired colour cannot be used, for whatever reason.

XTextWidth(fontstr, text, strlen(text))
- Accept a pointer to an array of chars (a string) and return the length of that string in pixels -- this length, for example, can be used to set the width value in a structure of type XSizeHints.

DisplayWidth(p_disp, screen) and DisplayHeight(p_disp, screen)
- These two functions get the height and width of the screen -- it lets us get values of screen dimensions in pixels so that we could properly set the geometry of windows.

XGeometry(p_disp, screen, user_geometry, default_geometry, border_width, width_factor, height_factor, xadd, yadd, &(hint.x), &(hint.y), &(hint.width), &(hint.height));
- This is the main function that sets window geometry. Notice that it uses many of the hints that you specify with the XSizeHints structure. It also uses the parameters that you specified on the command line or via some resource (if you want to, that is).
- This function returns a bitmask that is used to set position and size.
XCreateSimpleWindow(p_disp, parent_window, x, y, width, height, border_width, border_color, background_color)
- This function creates a window on the current display (p_disp). The window is a child of parent_window, and has the specified location (x and y) and size (width and height). The last three parameters are the width of the border and the colours of the border and background.
- The function returns the identification of the window.

XSetStandardProperties(p_disp, window_id, window_name, icon_name, icon_pixmap, argv, argc, &hints)
- Sets a set of properties for the window manager.

XMapWindow(p_disp, window_id)
- Maps the window to make it visible.

    XNextEvent(p_disp, &theEvent);

where theEvent is an XEvent type. The structure theEvent provides the type of the event and other relevant information, which may vary from event to event. Because of the diversity of events in X, the information necessary to describe one class of events (say, mouse events) differs significantly from that needed by another (say, keyboard events). Thus a number of different data structures are used to describe the events. It is also desirable to have a common format to report all events. A C union of these data structures is used to hold the information. The header file <X11/Xlib.h> defines this union, named XEvent, as follows:

    typedef union _XEvent {
        int type;
        XAnyEvent xany;
        XKeyEvent xkey;
        XButtonEvent xbutton;
        XMotionEvent xmotion;
        XExposeEvent xexpose;
        XGraphicsExposeEvent xgraphicsexpose;
        ...
    } XEvent;  /* refer to Xlib.h or a book for a complete description */

There are a total of 33 different types of events, which can be broadly grouped into seven categories:

1) Mouse Events
2) Keyboard Events
3) Expose Events
4) Colormap Notification Events (notice, American spelling)
5) Interclient Communication Events
6) Window State Notification Events
7) Window Structure Control Events

    Display *p_disp;          /* connection to X server */
    Window w_id;              /* window ID */
    unsigned long event_mask; /* bit pattern to select events */
    ...
    XSelectInput(p_disp, w_id, event_mask);

where w_id identifies the window for which events are being selected. The event_mask parameter, which consists of a set of bit patterns, indicates the events being selected. You can construct the mask by the bitwise-OR of selected masks defined in the <X11/X.h> header file. For example, you may decide that you want to receive Expose and ButtonPress events for a window. In this case, the event_mask will be as follows:

    event_mask = ExposureMask | ButtonPressMask;

where the Expose event (ExposureMask) allows a previously obscured window, or part of a window, to become visible, and ButtonPress (ButtonPressMask) traps the event of pressing a mouse button in a window. Other events, and their corresponding masks, are:

ButtonRelease  ButtonReleaseMask - A mouse button is released with the pointer in a window.
EnterNotify    EnterWindowMask   - Mouse pointer enters a window.
LeaveNotify    LeaveWindowMask   - Mouse pointer leaves a window.
MotionNotify   ButtonMotionMask  - The mouse is moved after being stopped.
KeyPress       KeyPressMask      - A key is pressed (when window has focus).
KeyRelease     KeyReleaseMask    - A key is released (when window has focus).
FocusIn        FocusChangeMask   - Window receives input focus (all subsequent keyboard events will come to this window).
FocusOut       FocusChangeMask   - Window loses input focus.

There are many more events.
If you are curious as to what they are, please take a look at an X Windows book or check out the header files or man pages.

    typedef struct {
        int type;             /* the event type */
        unsigned long serial; /* number of last processed event */
        Bool send_event;      /* True = event is from SendEvent */
        Display *display;     /* identifies the reporting server */
        Window window;        /* window requesting the event */
    } XAnyEvent;

The fields appearing in the XAnyEvent structure happen to be the first five entries in every event's data structure. The type tells you the kind of event being reported. The X server keeps track of all the events, so serial is the sequence number of the last processed event. The send_event field lets you simulate an event by using the XSendEvent function. Display and Window identify the server and the window that receives the event, respectively.

ex) XNextEvent(theDisplay, &theEvent);
    if (theEvent.xany.window == theMain) ...

Most workstations have 3-button mice, but X supports up to 5-button mice. The buttons are usually numbered from left to right, but the XSetPointerMapping function can be used to change this ordering. The pointer refers to the graphical indication of the mouse's position on the display screen. The cursor is the actual graphical object that determines the appearance of the pointer on the screen. The X server keeps track of the pointer, but the programmer can position it anywhere using the XWarpPointer function to forcibly move the pointer to a specified location on the screen. Similarly, acceleration and threshold can be set by the programmer to alter how fast the pointer moves as you move the mouse beyond the threshold limit. The functions XGetPointerControl and XChangePointerControl are used to change these parameters. The program xset, for example, uses these functions to alter mouse parameters. For more information on these events, refer to the man pages.
    Display *theDisplay;       /* Identify X server */
    Window w,                  /* Window of interest */
           root, child;        /* for return values */
    int root_x, root_y,        /* position in root */
        win_x, win_y;          /* position in w's frame */
    unsigned int keys_buttons; /* info on mouse buttons */

    if (!XQueryPointer(theDisplay, w, &root, &child, &root_x, &root_y,
                       &win_x, &win_y, &keys_buttons))
    {
        /* pointer not on screen where window w is ... */
    }

Here, w is the window in whose coordinate frame you want to find the pointer position. XQueryPointer returns 0 if the pointer is not on the screen where window w is displayed. In this case, in the root variable, it returns the ID of the root window on the screen where the pointer appears. If successful, it provides the information in a set of variables. You have to declare these variables and pass their addresses as arguments to the function. In root and child, you get back the window ID of the root window and any visible subwindow of w that contains the pointer. If the pointer is in w but not in any of its child windows, child will be set to None. The coordinates of the pointer with respect to the root window are returned in root_x and root_y, while win_x and win_y are the coordinates with respect to w's frame. The keys_buttons variable is a bit mask that indicates which mouse buttons are pressed as well as which modifier keys are pressed at that moment. To decipher this information you need to perform a bitwise-AND of keys_buttons with the bitmask names defined in <X11/X.h>. For example, to see whether Button1 is pressed, you might write:

    if (keys_buttons & Button1Mask) {
        /* ... button 1 was pressed ... */
    }

Here is an example of using these events:

    XEvent theEvent;
    while (!AppDone) {
        XNextEvent(theDisplay, &theEvent);
        switch (theEvent.type) {
        case ButtonPress:
            /* handle each button differently ... */
            switch (theEvent.xbutton.button) {
            case Button1: .... break;
            case Button2: .... break;
            case Button3: .... break;
            }
            break;
        /* handle other events ...
         */
        }
    }

More detailed information on button events can be extracted from the XButtonEvent data structure. For more information, please refer to the man pages.

Like the mouse button events, the X server generates a KeyPress event when a key is pressed and a KeyRelease event when the key is released. All keys, including modifier keys, generate events. Some implementations of X do not properly handle the KeyRelease event, so it is not a good idea to use it. The keyboard has to rely on the concept of focus to determine which window receives its output. Typically, the window that contains the mouse pointer is the one that has focus. It is this window that will receive keyboard events. There are methods of giving focus to another window even though the mouse pointer is nowhere near it. X uses the Inter-Client Communication Conventions Manual (ICCCM) for determining focus. It is the de-facto standard for designating focus in X applications. The manner in which it is implemented in different window managers may vary, but because all the conventions are standardized, source code will be portable. Application programs set input focus by setting the input field of the XWMHints structure to True before calling XSetWMHints. For example:

    XWMHints xwmh;

    /* Tell the WM to put the window in its normal state and to
       transfer input focus to our window when the user does so. */
    xwmh.flags = (InputHint | StateHint);
    xwmh.input = True;
    xwmh.initial_state = NormalState;
    XSetWMHints(theDisplay, theMain, &xwmh);

You can access the fields of XKeyEvent through the xkey member of the XEvent union. For example, if the event is received in a variable named theEvent of type XEvent, refer to the keycode after a KeyPress event as theEvent.xkey.keycode. The two most important fields in this data structure are state and keycode. The state indicates the state (pressed or released) of the modifier keys. The keycode is an integer (in the range 8 - 255) that uniquely identifies the key.
You have to manually translate this code into an ASCII character before using it. The X server decides what keycode to generate for a specific physical key. Each key, including the modifiers, has a unique code. These codes may be different from computer to computer. The first step involved is translating the keycode to a symbolic name known as a keysym. All keys and combinations of keys have unique keysyms, which are constants defined in <X11/keysym.h>. For example, pressing "A" results in a keysym of XK_a but <shift>"A" results in XK_A. The second step is to convert the keysym to an ASCII text string that you can use for displaying. For most keys, this would be a string with a single character, but function keys, especially programmable ones, may generate a multicharacter string. The function XLookupString is used to perform these two actions. Here is an example:

    #include <X11/keysym.h>
    ...
    XEvent theEvent;
    char xlat[20];      /* room for string */
    int nchar = 20;
    int count;          /* chars returned */
    KeySym key;         /* keysym on return */
    XComposeStatus cs;  /* Compose key status */
    ...
    XNextEvent(p_disp, &theEvent);
    ...
    switch (theEvent.type) {
    case KeyPress:
        {
            count = XLookupString(&theEvent.xkey, xlat, nchar, &key, &cs);

            /* add null byte to make 'xlat' a valid C string */
            xlat[count] = '\0';

            /* print out translated string ... */
            printf("Keypress %s on window %x, subwindow %x\n",
                   xlat, theEvent.xkey.window, theEvent.xkey.subwindow);

            /* keysym may be used like this ... */
            if (key == XK_Q || key == XK_q) {
                /* user pressed q key ... EXIT! */
                ...
                exit(0);
            }
        }
        break;
    /* other events ...
     */
    }

- draw objects such as lines, rectangles, and circles
- draw and manipulate text
- display and manipulate images

The first three arguments to all drawing functions are the same:
- a pointer to the display: ex) Display *p_disp;
- the id of a drawable window (a Window or Pixmap): ex) Window my_window;
- a GC: ex) GC my_GC;

    Display *p_disp;    /* connection to X server */
    Window this_window; /* the drawable window */
    GC this_GC;         /* Graphics Context for drawing */
    int x, y;           /* point to be drawn */
    int nPoints = 10;   /* # of points for multiple draw */
    XPoint pt[10];      /* array of multiple (x,y) points */

The following function draws a single point. The pixel at the location is set to the foreground colour specified in the GC. Please look in the file <X11/Xlib.h> for a description of the XGCValues data structure.

    XDrawPoint(p_disp, this_window, this_GC, x, y);

The next function draws multiple points, all with a single X protocol request. All the points are drawn using the same GC. This is a convenient way to draw a field of pixels or a bitmap without having to send one X packet for each pixel (which is an incredible waste of bandwidth). It uses the structure XPoint, defined in <X11/Xlib.h>:

    typedef struct {
        short x, y;   /* x and y coordinates */
    } XPoint;

    XPoint *pt;
    int nPoints;

The array pt was declared of this type to store all our coordinates, and nPoints specifies the number of points. This array is used in the following function:

    XDrawPoints(p_disp, this_window, this_GC, pt, nPoints, CoordModeOrigin);

The last argument tells the server how to interpret the coordinates of the points in the array pt. It can be any of the following constants:

CoordModeOrigin - The coordinates are relative to the origin of the window or pixmap.
CoordModePrevious - Each point is given in coordinates relative to the previous point displayed. The very first point is assumed to be relative to the origin of the window.
    Display *p_disp;
    Window this_window;
    GC this_GC;

The following function draws a simple line from (x1, y1) to (x2, y2).

    XDrawLine(p_disp, this_window, this_GC, x1, y1, x2, y2);

This next function makes use of the following data structure, defined in <X11/Xlib.h>:

    typedef struct {
        short x1, y1;   /* start of segment */
        short x2, y2;   /* end of segment */
    } XSegment;

    XSegment lines[];   /* the start/end coords for the lines */
    int numsegs;        /* the number of line segments */

The purpose of this function is to draw multiple (and possibly disjoint) line segments.

    XDrawSegments(p_disp, this_window, this_GC, lines, numsegs);

The third function, XDrawLines(), has the same format and arguments as the XDrawPoints() function and is called in exactly the same way. In this function, however, the points are connected with a line.

    Display *p_disp;
    Window this_window;
    GC this_GC;
    int x, y;
    int width, height;
    XRectangle rects[]; /* array of rectangles */
    int nrects;         /* number of rectangles */

The following function is used for drawing a rectangle, where (x, y) are the coordinates of the upper left corner, and the width and height values specify the width and height of the rectangle.

    XDrawRectangle(p_disp, this_window, this_GC, x, y, width, height);

The following function is used to draw multiple rectangles. It makes use of the following structure, which is defined in <X11/Xlib.h>:

    typedef struct {
        short x, y;     /* upper left corners */
        unsigned short width, height;
    } XRectangle;

    XDrawRectangles(p_disp, this_window, this_GC, rects, nrects);

To fill rectangles you use the two functions XFillRectangle() and XFillRectangles(). They are called in exactly the same manner as their non-filling counterparts.

    Display *p_disp;
    Window this_window;
    GC this_GC;
    int x, y;
    int width, height;
    int start_angle, end_angle;
    XArc arcs[];    /* the arcs ... */
    int numarcs;    /* how many arcs in the array */

The unique parameters for drawing arcs are the angles, which are measured in units of 1/64th of a degree.
The first angle, start_angle, measures where the arc begins, starting from 0 degrees (the three o'clock line). The end_angle is the angular extent of the arc.

    XDrawArc(p_disp, this_window, this_GC, x, y, width, height,
             start_angle, end_angle);

Drawing multiple arcs is done using the following structure, defined in <X11/Xlib.h>:

    typedef struct {
        short x, y;
        unsigned short width, height;
        short angle1, angle2;
    } XArc;

    XDrawArcs(p_disp, this_window, this_GC, arcs, numarcs);

Drawing filled arcs is done using the XFillArc() and XFillArcs() functions, which are identical to their non-filling counterparts. The filling is modified by the arc_mode attribute in the XGCValues structure. If it is set to ArcPieSlice, the arc is filled as a wedge. If it is set to ArcChord, the region bounded by the arc and the chord joining its endpoints is filled.

    int shape;          /* Convex, NonConvex, or Complex */
    int mode;           /* CoordModeOrigin or CoordModePrevious */
    XPoint points[];    /* vertices of polygon */
    int nPoints;        /* how many vertices */

    XFillPolygon(p_disp, this_window, this_GC, points, nPoints, shape, mode);

    -adobe-helvetica-bold-r-normal--14-140-75-75-p-82-iso8859-1

    manufacturer:     adobe
    typeface:         helvetica
    weight:           bold
    style:            upright
    size:             normal
    point size:       14
    tenth point size: 140
    screen res.:      75dpi x 75dpi
    spacing:          proportional (m = monospaced)
    avg. width:       82 pixels
    encoding:         iso8859-1 (aka ISO Latin-1)

The Xlib functions that accept font names have been designed to handle incomplete font names. You can use asterisks as wildcard characters for parts of the name that can be arbitrary. For example, if you want Adobe's 14 point Helvetica bold font, all you have to specify is:

    *adobe-helvetica-bold-r*140*

The program xlsfonts can be used to display font names. For example, typing:

    xlsfonts "*helvetica*bold*140*"

will result in a list of fonts that match the given pattern. Similarly, you can use the xfd program to view a font in an X window.
    Display *p_disp;
    char fontname[] = "*helvetica-bold-r*140*";
    Font helvb14;    /* resource id */

    helvb14 = XLoadFont(p_disp, fontname);

XLoadFont() will return a 0 if it fails; otherwise it results in a font handle resource id. When successful, you can start using the 14 point Helvetica bold font by setting the font attribute of a GC. To do this, use the following function:

    XSetFont(p_disp, this_GC, helvb14);

Fonts are server-resident resources and should be released when they are no longer needed. The following function can be used to accomplish this task:

    XUnloadFont(p_disp, helvb14);

    Display *p_disp;
    Font helvb14;
    XFontStruct *info_helvb14;

    info_helvb14 = XQueryFont(p_disp, helvb14);

The structure XFontStruct is defined in <X11/Xlib.h>. It contains information about the font characteristics and attributes. The function XFreeFont(p_disp, info_helvb14) can be used in place of XUnloadFont() to unload the font and free the memory allocated by Xlib for the XFontStruct structure. Similarly, instead of calling XLoadFont() followed by XQueryFont(), you can call the XLoadQueryFont() function to load a font and get information with a single request to the server, in the following manner:

    info_helvb14 = XLoadQueryFont(p_disp, fontname);

Like XLoadFont(), XLoadQueryFont() returns 0 (NULL) if it fails; otherwise it returns a pointer to a filled-in XFontStruct.

    char string[];  /* string to be displayed */
    int nchars;     /* number of characters in string */

    XDrawString(p_disp, this_window, this_GC, x, y, string, nchars);
    XDrawImageString(p_disp, this_window, this_GC, x, y, string, nchars);

XDrawString() simply displays the string using the foreground pixel value (from the GC) for all the pixels corresponding to a 1 in the character bitmaps. XDrawImageString(), on the other hand, fills the pixels corresponding to 0 in each character's bitmap with the GC's background colour and those corresponding to 1 with the foreground colour.
The third function, XDrawText(), is used to draw a line of text in which multiple fonts are used. It uses the following data structure, defined in <X11/Xlib.h>:

    typedef struct {
        char *chars;  /* pointer to string to be drawn */
        int nchars;   /* number of chars in string */
        int delta;    /* distance from end of last string */
        Font font;    /* font to use (None means use GC's) */
    } XTextItem;

    XTextItem text_chunks[];  /* array of strings */
    int nChunks;              /* number of XTextItem structs */

    /* initialize "text_chunks" ... */
    ...
    XDrawText(p_disp, this_window, this_GC, x, y, text_chunks, nChunks);

    XFontStruct *p_font;
    char text[];
    int text_width;

    text_width = XTextWidth(p_font, text, strlen(text));

XTextWidth() simply returns the width of a string in pixels. This width can be used for horizontal placement. For vertical spacing, use the sum of the ascent and descent fields of the XFontStruct structure pointed to by p_font.

Say you wanted to draw a line -- you would require a couple of pieces of information, the two most important of which are the start and end coordinates. In this context, that of line drawing, you would enter some mode that performs line drawing. The system would then interpret your commands and actions as entering information for drawing a line or lines. There are a few different methods you could use for drawing lines:

- You can enter the start and end points through the keyboard and have the system draw the line.
- You can enter the line start and end coordinates using the mouse and then the system will draw the line.
- You can even enter the start coordinate and have the system display the line in "real-time" by using a rubber-band effect.

We would select the line-drawing technique we wish to use and then wait for particular events that would provide us with enough information to draw the line. Depending on the method we chose, this would be button presses, mouse movements, and possibly even key presses.
In the event handling loop of the main program, when a menu button called "line" is selected, the corresponding handling routine (which was a dummy part in the previous lab) is called. Conceptually, the system enters the "line" mode. In this mode a sequence of actions described above will be performed in order. This is typically handled by a sub-event loop in your "line" button handling routine. Before we design the sub-event loop for entering a line, a brief look at the concept of the state diagram will be very helpful. (You might want to refer to section 8.2.2 in your text book, "Introduction to Computer Graphics", for a further description of the state diagram.)

In your "line" button handling routine, say we wanted a system that would wait for a button to be pressed. The location of that button press would mark the start of our line segment (it doesn't matter if the user moves the mouse while still holding down the button). When the button is pressed a second time, this marks the end of our line segment and we draw the line. This can be represented using the following state diagram:

It is a good idea to create such state diagrams for all the events of an X application that you are creating. Since the event loop is usually nothing more than a finite state machine, it is pretty easy to create a state diagram. The above diagram shows us what happens when the mouse buttons are pressed and released. Simply by looking at the diagram we realize that we will be required to create a variable to count the button presses. The above state diagram can be translated into an event loop in a straightforward manner:

    lineState = 0;
    lineDone = FALSE;
    while (!lineDone) {
        XNextEvent(theDisplay, &theEvent);
        switch (theEvent.type) {
        case ButtonPress:
            /* Get the coordinates of the current
             * pointer location and save it. You
             * might use XQueryPointer().
             */
            lineState++;
            break;
        case ButtonRelease:
            if (lineState == 2)
                lineDone = TRUE;
            break;
        }
    }

Once you've entered two points, you are ready to draw the line, which is the subject of the next lab.

This source code is also available via ftp in the directory "pub/405/X/xhello.c". Compilation and usage instructions are given in the comments.

/***************************************************************************
  xhello.c

  This is a "simple" X Windows application that opens an X window and
  displays some text in it. A user-defined button is also provided.
  There are also a number of command line options that make use of
  X Windows features (type "xhello -h"). This program is intended as a
  sample that you could base other X programs on. Enjoy!

  Oh, by the way, this code is copyright by Nabajyoti Barkakati. It's
  from the text "X Window System Programming".

  Notes:

  1) To compile the program simply type:
         cc -o xhello xhello.c -lX11

  2) On the DECstations, the X Windows libraries are located in an
     alternate library (not the default) so that only 1 copy has to
     exist rather than 16. To link in the proper libraries, compile
     with the following:
         cc -o xhello xhello.c -lX11 -L/net/ULTRIX/X11R5/lib

  3) X programs tend to be really quite huge. It is a good idea to run
     the 'strip' command on them after you've compiled/debugged them so
     that they consume fewer resources. The strip command removes
     symbol table information and other misc. data from your program.
     Check out the man page for more info.
 ***************************************************************************/

#include <stdio.h>
#include <string.h>

#include <X11/Xlib.h>   /* X includes ... must be present */
#include <X11/Xutil.h>

/* macro stuff ... */
#define max(x,y) (((x) > (y)) ? (x) : (y))

/* constants ...
 */
#define DEFAULT_MAIN_TEXT "Hello, World"
#define DEFAULT_EXIT_TEXT "Exit"
#define DEFAULT_BGCOLOR   "white"
#define DEFAULT_FGCOLOR   "black"
#define DEFAULT_BDWIDTH   1
#define DEFAULT_FONT      "fixed"

/* Tie in command line parameters with the resources ... */
typedef struct XHELLO_PARAMS {
    char *name;
    char **p_value_string;
} XHELLO_PARAMS;

/***************************************************************************
  Set up a number of configuration options for the X window behaviour,
  like colour, geometry, etc.
 ***************************************************************************/
char *mbgcolor = DEFAULT_BGCOLOR, *mfgcolor = DEFAULT_FGCOLOR,
     *mfont = DEFAULT_FONT, *ebgcolor = DEFAULT_BGCOLOR,
     *efgcolor = DEFAULT_FGCOLOR, *efont = DEFAULT_FONT,
     *mgeom_rsrc = NULL, *mtext_rsrc = NULL, *etext_rsrc = NULL,
     *display_name = NULL, *mtext_cline = NULL, *etext_cline = NULL,
     *mgeom_cline = NULL, *mtext = DEFAULT_MAIN_TEXT,
     *extext = DEFAULT_EXIT_TEXT, *mgeom = NULL;

/***************************************************************************
  These are the resources you can specify in your .Xdefaults file or
  whatever resource file you use. Ex: xhello*background
  These options could also be set via the command line. Check it out.
  Currently, we are setting up the resources with the options given in
  the above variable definitions.
 ***************************************************************************/
XHELLO_PARAMS resources[] =
{
    "background"      , &mbgcolor,
    "foreground"      , &mfgcolor,
    "font"            , &mfont,
    "geometry"        , &mgeom_rsrc,
    "text"            , &mtext_rsrc,
    "exit.background" , &ebgcolor,
    "exit.foreground" , &efgcolor,
    "exit.font"       , &efont,
    "exit.exit"       , &etext_rsrc
};
int num_resources = sizeof(resources) / sizeof(XHELLO_PARAMS);

/***************************************************************************
  Here there be the command line parameters. Check them out. Some are
  kind of nifty.
 ***************************************************************************/
XHELLO_PARAMS options[] =
{
    "-display"  , &display_name,
    "-d"        , &display_name,
    "-geometry" , &mgeom_cline,
    "-g"        , &mgeom_cline,
    "-mtext"    , &mtext_cline,
    "-m"        , &mtext_cline,
    "-etext"    , &etext_cline,
    "-e"        , &etext_cline
};
int num_options = sizeof(options) / sizeof(XHELLO_PARAMS);

char *app_name = "xhello";
XFontStruct *mfontstruct, *efontstruct;
unsigned long mbgpix, mfgpix, ebgpix, efgpix;
unsigned int ewidth, eheight;
int ex, ey, extxt, eytxt;
XWMHints xwmh;
XSizeHints xsh;
Display *p_disp;
Window Main, Exit;
GC theGC, exitGC;
XEvent theEvent;
int Done = 0;
char default_geometry[80];

void usage(void);

/***************************************************************************
  main()
 ***************************************************************************/
int main(int argc, char **argv)
{
    int i, j;
    char *tmpstr;
    Colormap default_cmap;
    XColor color;
    int bitmask;
    XGCValues gcv;
    XSetWindowAttributes xswa;

    app_name = argv[0];

    /* Determine which command line arguments to use ... */
    for (i = 1; i < argc; i += 2) {
        for (j = 0; j < num_options; j++) {
            if (strcmp(options[j].name, argv[i]) == 0) {
                *options[j].p_value_string = argv[i+1];
                break;
            }
        }
        if (j >= num_options) usage();
    }

    /* Try to open the X display ... */
    if ((p_disp = XOpenDisplay(display_name)) == NULL) {
        fprintf(stderr, "%s: can't open display name %s\n",
                argv[0], XDisplayName(display_name));
        exit(1);
    }

    /* Determine which resources to use ... */
    for (i = 0; i < num_resources; i++) {
        if ((tmpstr = XGetDefault(p_disp, app_name,
                                  resources[i].name)) != NULL)
            *resources[i].p_value_string = tmpstr;
    }

    /*
     *----------------------------------------------------------------------
     * Now that we've got both the command line options _and_ the
     * resources, we can begin configuring everything ...
     *----------------------------------------------------------------------
     */

    /* Load the font for the text message ...
   */
  if ((mfontstruct = XLoadQueryFont(p_disp, mfont)) == NULL) {
    fprintf(stderr, "%s: display %s cannot load font %s\n",
            app_name, DisplayString(p_disp), mfont);
    exit(1);
  }

  /* Load the font for the exit message ... */
  if ((efontstruct = XLoadQueryFont(p_disp, efont)) == NULL) {
    fprintf(stderr, "%s: display %s cannot load font %s\n",
            app_name, DisplayString(p_disp), efont);
    exit(1);
  }

  default_cmap = DefaultColormap(p_disp, DefaultScreen(p_disp));
  if (XParseColor(p_disp, default_cmap, mbgcolor, &color) == 0 ||
      XAllocColor(p_disp, default_cmap, &color) == 0)
    mbgpix = WhitePixel(p_disp, DefaultScreen(p_disp));
  else
    mbgpix = color.pixel;
  if (XParseColor(p_disp, default_cmap, mfgcolor, &color) == 0 ||
      XAllocColor(p_disp, default_cmap, &color) == 0)
    mfgpix = BlackPixel(p_disp, DefaultScreen(p_disp));
  else
    mfgpix = color.pixel;
  if (XParseColor(p_disp, default_cmap, ebgcolor, &color) == 0 ||
      XAllocColor(p_disp, default_cmap, &color) == 0)
    ebgpix = WhitePixel(p_disp, DefaultScreen(p_disp));
  else
    ebgpix = color.pixel;
  if (XParseColor(p_disp, default_cmap, efgcolor, &color) == 0 ||
      XAllocColor(p_disp, default_cmap, &color) == 0)
    efgpix = WhitePixel(p_disp, DefaultScreen(p_disp));
  else
    efgpix = color.pixel;

  if (etext_cline != NULL)
    extext = etext_cline;
  else if (etext_rsrc != NULL)
    extext = etext_rsrc;
  if (mtext_cline != NULL)
    mtext = mtext_cline;
  else if (mtext_rsrc != NULL)
    mtext = mtext_rsrc;

  extxt = efontstruct->max_bounds.width / 2;
  eytxt = efontstruct->max_bounds.ascent + efontstruct->max_bounds.descent;
  ewidth = extxt + XTextWidth(efontstruct, extext, strlen(extext)) + 4;
  eheight = eytxt + 4;

  xsh.flags = (PPosition | PSize | PMinSize);
  xsh.height = mfontstruct->max_bounds.ascent +
               mfontstruct->max_bounds.descent + eheight + 10;
  xsh.min_height = xsh.height;
  xsh.width = XTextWidth(mfontstruct, mtext, strlen(mtext)) + 2;
  xsh.width = max(xsh.width, ewidth);
  xsh.min_width = xsh.width;
  xsh.x = (DisplayWidth(p_disp, DefaultScreen(p_disp)) - xsh.width) / 2;
  xsh.y = (DisplayHeight(p_disp, DefaultScreen(p_disp)) - xsh.height) / 2;

  sprintf(default_geometry, "%dx%d+%d+%d",
          xsh.width, xsh.height, xsh.x, xsh.y);
  mgeom = default_geometry;
  if (mgeom_cline != NULL)
    mgeom = mgeom_cline;
  else if (mgeom_rsrc != NULL)
    mgeom = mgeom_rsrc;

  bitmask = XGeometry(p_disp, DefaultScreen(p_disp), mgeom, default_geometry,
                      DEFAULT_BDWIDTH, mfontstruct->max_bounds.width,
                      mfontstruct->max_bounds.ascent +
                      mfontstruct->max_bounds.descent,
                      1, 1, &(xsh.x), &(xsh.y), &(xsh.width), &(xsh.height));
  if (bitmask & (XValue | YValue))
    xsh.flags |= USPosition;
  if (bitmask & (WidthValue | HeightValue))
    xsh.flags |= USSize;

  Main = XCreateSimpleWindow(p_disp, DefaultRootWindow(p_disp),
                             xsh.x, xsh.y, xsh.width, xsh.height,
                             DEFAULT_BDWIDTH, mfgpix, mbgpix);
  XSetStandardProperties(p_disp, Main, app_name, app_name, None,
                         argv, argc, &xsh);
  xwmh.flags = (InputHint | StateHint);
  xwmh.input = False;
  xwmh.initial_state = NormalState;
  XSetWMHints(p_disp, Main, &xwmh);

  gcv.font = mfontstruct->fid;
  gcv.foreground = mfgpix;
  gcv.background = mbgpix;
  theGC = XCreateGC(p_disp, Main,
                    (GCFont | GCForeground | GCBackground), &gcv);

  xswa.colormap = DefaultColormap(p_disp, DefaultScreen(p_disp));
  xswa.bit_gravity = CenterGravity;
  XChangeWindowAttributes(p_disp, Main, (CWColormap | CWBitGravity), &xswa);

  XSelectInput(p_disp, Main, ExposureMask);
  XMapWindow(p_disp, Main);

  ex = 1;
  ey = 1;
  Exit = XCreateSimpleWindow(p_disp, Main, ex, ey, ewidth, eheight,
                             DEFAULT_BDWIDTH, efgpix, ebgpix);
  XSelectInput(p_disp, Exit, ExposureMask | ButtonPressMask);
  XMapWindow(p_disp, Exit);

  gcv.font = efontstruct->fid;
  gcv.foreground = efgpix;
  gcv.background = ebgpix;
  exitGC = XCreateGC(p_disp, Exit,
                     (GCFont | GCForeground | GCBackground), &gcv);

  while (!Done) {
    XNextEvent(p_disp, &theEvent);
    if (theEvent.xany.window == Main) {
      if (theEvent.type == Expose && theEvent.xexpose.count == 0) {
        int x, y, itemp;
        unsigned int width, height, utemp;
        Window wtemp;

        if (XGetGeometry(p_disp, Main, &wtemp, &itemp, &itemp,
                         &width, &height, &utemp, &utemp) == 0)
          break;
        x = (width - XTextWidth(mfontstruct, mtext, strlen(mtext))) / 2;
        y = eheight + (height - eheight + mfontstruct->max_bounds.ascent -
                       mfontstruct->max_bounds.descent) / 2;
        XClearWindow(p_disp, Main);
        XDrawString(p_disp, Main, theGC, x, y, mtext, strlen(mtext));
      }
    }
    if (theEvent.xany.window == Exit) {
      switch (theEvent.type) {
      case Expose:
        if (theEvent.xexpose.count == 0) {
          XClearWindow(p_disp, Exit);
          XDrawString(p_disp, Exit, exitGC, extxt, eytxt,
                      extext, strlen(extext));
        }
        break;
      case ButtonPress:
        Done = 1;
      }
    }
  }

  XFreeGC(p_disp, theGC);
  XFreeGC(p_disp, exitGC);
  XDestroyWindow(p_disp, Main);
  XCloseDisplay(p_disp);
  exit(0);
}

void usage(void)
{
  fprintf(stderr,
          "usage: %s [-display host:display] [-geometry geom] [-mtext text], [-etext text]\n",
          app_name);
  exit(1);
}
http://www.cs.uregina.ca/Dept/manuals/Manuals/8Software/8Software.html
Schema migrations

So far, you have learned about the ways to fetch or persist data using the database query builder. We take a step further in this guide and explore schema migrations for creating/altering database tables.

Migrations Overview

Database schema migrations are one of the most confusing topics in software programming. Many times individuals don't even understand the need to use migrations vs. manually creating database tables. So, let's take a step backward and explore the possible options for creating/modifying tables inside a database.

Using a GUI Application

The simplest way to create database tables is to use a GUI application like Sequel Pro, TablePlus, and so on. These applications are great during the development phase. However, they have some shortcomings during the production workflow.

- You need to expose your database server to the internet so that the GUI application on your computer can connect to the production database.
- You cannot tie the database changes to your deployment workflow. Every deployment impacting the database will require manual intervention.
- There is no history of your tables. You do not know when and how a database modification was done.

Custom SQL Scripts

Another option is to create SQL scripts and run them during the deployment process. However, you will have to manually build a tracking system to ensure that you are not running the previously executed SQL scripts. For example:

- You write a SQL script to create a new users table.
- You run this script as part of the deployment workflow. However, you have to make sure that the next deployment ignores the previously executed SQL script.

Using Schema Migrations

Schema migrations address the above issues and offer a robust API for evolving and tracking database changes. Many tools are available for schema migrations, ranging from framework-agnostic tools like flywaydb to framework-specific tooling offered by Rails, Laravel, and so on.
Similarly, AdonisJS also has its own migrations system. You can create/modify a database by just writing JavaScript.

Creating Your First Migration

Let's begin by executing the following ace command to create a new migration file.

node ace make:migration users
# CREATE: database/migrations/1618893487230_users.ts

Open the newly created file inside the text editor and replace its content with the following code snippet.

import BaseSchema from '@ioc:Adonis/Lucid/Schema'

export default class Users extends BaseSchema {
  protected tableName = 'users'

  public async up () {
    this.schema.createTable(this.tableName, (table) => {
      table.increments('id').primary()
      table.string('email').unique().notNullable()
      table.string('password').notNullable()
      table.timestamps(true, true)
    })
  }

  public async down () {
    this.schema.dropTable(this.tableName)
  }
}

Finally, run the following ace command to execute the instructions for creating the users table.

node ace migration:run
# migrated database/migrations/1618893487230_users

Congratulations! You have just created and executed your first migration. Lucid will not execute the migration file if you re-run the same command, since it has already been executed.

node ace migration:run
# Already up to date

How it works?

- The make:migration command creates a new migration file prefixed with a timestamp. The timestamp is important because the migrations are executed in ascending order by name.
- Migration files are not only limited to creating a new table. You can also alter a table, define database triggers, and so on.
- The migration:run command executes all the pending migrations. Pending migrations are those that have never been executed using the migration:run command.
- A migration file is either in a pending state or in a completed state.
- Once a migration file has been successfully executed, it is tracked inside the adonis_schema database table to avoid running it multiple times.

Changing Existing Migrations

Occasionally you will make mistakes when writing a migration.
If you have already run the migration using the migration:run command, then you cannot just edit the file and re-run it, since the file has been tracked under the list of completed migrations. Instead, you can roll back the migration by running the migration:rollback command. Assuming the previously created migration file already exists, running the rollback command will drop the users table.

node ace migration:rollback
# reverted database/migrations/1618893487230_users

How rollback works?

- Every migration class has two methods, up and down. The down method is called during the rollback process.
- You (the developer) are responsible for writing correct instructions to undo the changes made by the up method. For example, if the up method creates a table, then the down method must drop it.
- After the rollback, Lucid considers the migration file as pending, and running migration:run will re-run it. So you can modify this file and then re-run it.

Avoiding Rollbacks

Performing a rollback during development is perfectly fine, since there is no fear of data loss. However, performing a rollback in production is not an option in the majority of cases. Consider this example:

- You create and run a migration to set up the users table.
- Over time, this table has received data since the app is running in production.
- Your product has evolved, and now you want to add a new column to the users table.

You cannot simply roll back, edit the existing migration, and re-run it, because the rollback will drop the users table. Instead, you create a new migration file to alter the existing users table by adding the required column. In other words, migrations should always move forward.

Alter example

Following is an example of creating a new migration to alter the existing table.

node ace make:migration add_last_login_column --table=users
# CREATE: database/migrations/1618894308981_add_last_login_columns.ts

Open the newly created file and alter the database table using the this.schema.table method.
import BaseSchema from '@ioc:Adonis/Lucid/Schema'

export default class Users extends BaseSchema {
  protected tableName = 'users'

  public async up () {
    this.schema.table(this.tableName, (table) => {
      table.dateTime('last_login_at')
    })
  }

  public async down () {
    this.schema.table(this.tableName, (table) => {
      table.dropColumn('last_login_at')
    })
  }
}

Re-run the migration:run command to run the newly created migration file.

node ace migration:run
# migrated database/migrations/1618894308981_add_last_login_columns

Migrations Config

The configuration for migrations is stored inside the config/database.ts file under the connection config object.

{
  mysql: {
    client: 'mysql',
    migrations: {
      naturalSort: true,
      disableTransactions: false,
      paths: ['./database/migrations'],
      tableName: 'adonis_schema',
      disableRollbacksInProduction: true,
    }
  }
}

naturalSort

Use natural sort to sort the migration files. Most editors use natural sort, and hence the migrations will run in the same order as you see them listed in your editor.

paths

An array of paths to look up for migrations. You can also define a path to an installed package. For example:

paths: [
  './database/migrations',
  '@somepackage/migrations-dir',
]

tableName

The name of the table for storing the migrations state. Defaults to adonis_schema.

disableRollbacksInProduction

Disable migration rollback in production. It is recommended that you never roll back migrations in production.

disableTransactions

Set the value to true to not wrap migration statements inside a transaction. By default, Lucid will run each migration file in its own transaction.
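The bookkeeping described above (run each pending file exactly once, and remember completed ones in a tracking table) is framework-agnostic. Below is a rough Python/SQLite sketch of the same idea; the schema_migrations table and function names are made up for illustration and are not part of AdonisJS or Lucid.

```python
import sqlite3

def run_pending_migrations(conn, migrations):
    """Run each (name, sql) migration once, recording completed names
    in a tracking table (playing the role of adonis_schema)."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_migrations (name TEXT PRIMARY KEY)"
    )
    done = {row[0] for row in conn.execute("SELECT name FROM schema_migrations")}
    ran = []
    # Timestamp-prefixed names keep lexicographic order equal to creation order
    for name, sql in sorted(migrations):
        if name in done:
            continue  # already tracked, so skip it ("Already up to date")
        conn.execute(sql)
        conn.execute("INSERT INTO schema_migrations (name) VALUES (?)", (name,))
        ran.append(name)
    conn.commit()
    return ran

conn = sqlite3.connect(":memory:")
migrations = [
    ("1618893487230_users",
     "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)"),
    ("1618894308981_add_last_login",
     "ALTER TABLE users ADD COLUMN last_login_at TEXT"),
]
print(run_pending_migrations(conn, migrations))  # both migrations run
print(run_pending_migrations(conn, migrations))  # nothing pending now
```

The second call returns an empty list, which is exactly the "Already up to date" behavior the guide describes.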
https://docs-adonisjs-com.pages.dev/guides/database/migrations
dart-genderize-api

Genderize API wrapper for Dart. This package will help you determine the gender of a person from their name.

Basic usage

import 'package:genderize/genderize.dart';

void main() async {
  /// Initializing an instance of [Genderize]
  var gen = Genderize();

  /// Getting a [Gender] of [name] "Manuel".
  var gender = await gen.getGender('Manuel');
  print('You are ${gender.gender}'); // output: "You are male"

  gen.close();
}

How to start

1. Initialize an instance of the "Genderize" class:

   var gen = Genderize();

2. Fetch any data you need from the instance (more information is provided in the documentation).

3. Close the Genderize client to free up resources:

   gen.close();

Documentation

More info

- API is fetched from
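For readers who want the same lookup outside Dart, here is a small Python sketch of the request/response handling. It performs no network call; the endpoint URL and the JSON field names below follow the shape the public genderize.io API is generally documented to return, but treat them as assumptions rather than guarantees.

```python
import json
from urllib.parse import urlencode

API_URL = "https://api.genderize.io"  # assumed public endpoint

def build_request_url(name: str) -> str:
    """Build the query URL for a single-name lookup."""
    return f"{API_URL}?{urlencode({'name': name})}"

def parse_response(body: str) -> str:
    """Extract the predicted gender from a genderize-style JSON body."""
    data = json.loads(body)
    return data["gender"]

# A canned response, shaped like the API's typical output:
sample = '{"name": "Manuel", "gender": "male", "probability": 0.98, "count": 12345}'
print(build_request_url("Manuel"))
print(f"You are {parse_response(sample)}")
```

This mirrors the Dart snippet above: build the request for "Manuel", then read the gender field out of the response.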
https://pub.dev/documentation/genderize/latest/
Deadlocks
Operating System Concepts, chapter 7
CS 355 Operating Systems
Dr. Matthew Wright

Deadlock Problem

• Deadlock: A set of processes, each holding a resource, and each waiting to acquire a resource held by another process in the set.
• Example:
  • Suppose a system has two disk drives.
  • Processes P1 and P2 each hold one disk drive and are waiting for the other drive.
• Multithreaded processes are good candidates for deadlock because multiple threads can be competing for shared resources.

Deadlock Characterization

Deadlock requires that four conditions hold simultaneously:
• Mutual exclusion: only one process can use a resource at a time
• Hold and wait: a process holding a resource is waiting to acquire a resource held by other processes
• No preemption: a resource can only be released voluntarily by the process holding it, after it has completed its task
• Circular wait: there exists a set {P0, P1, ..., Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.

Resource-Allocation Graph

• Round nodes indicate processes.
• Rectangular nodes indicate resources, which might have multiple instances.
• An arrow from a process to a resource indicates that the process has requested the resource.
• An arrow from a resource to a process indicates that the resource has been allocated to the process.
• If the graph contains no cycles, then there is no deadlock.
• If each resource has only one instance, then a cycle indicates deadlock.

[Figure: two resource-allocation graphs over P1, P2, P3 and R1-R4, one with no deadlock and one deadlocked]

Resource-Allocation Graph

• Note: A cycle indicates the possibility of deadlock. It does not guarantee that a deadlock exists.
• Example: The following resource-allocation graph contains a cycle, but not a deadlock:

[Figure: a resource-allocation graph over P1-P4, R1, R2 whose cycle does not produce deadlock]

Java Deadlock Example

The following program might or might not cause deadlock when it is run.
class A implements Runnable {
    private Lock one, two;

    public A(Lock one, Lock two) {
        this.one = one;
        this.two = two;
    }

    public void run() {
        try {
            one.lock();
            // do something
            two.lock();
            // do something else
        } finally {
            one.unlock();
            two.unlock();
        }
    }
}

class B implements Runnable {
    private Lock one, two;

    public B(Lock one, Lock two) {
        this.one = one;
        this.two = two;
    }

    public void run() {
        try {
            two.lock();
            // do something
            one.lock();
            // do something else
        } finally {
            two.unlock();
            one.unlock();
        }
    }
}

public class DeadlockExample {
    public static void main(String arg[]) {
        Lock lockX = new ReentrantLock();
        Lock lockY = new ReentrantLock();

        Thread threadA = new Thread(new A(lockX, lockY));
        Thread threadB = new Thread(new B(lockX, lockY));

        threadA.start();
        threadB.start();
    }
}

Handling Deadlocks

Three strategies for handling deadlocks:
• Ensure that the system will never enter a deadlocked state.
• Allow the system to enter a deadlocked state, detect it, and recover.
• Ignore the problem and pretend that deadlocks never occur.
Strategy 3 is employed by most operating systems, including UNIX, Windows, and the JVM.

Deadlock Prevention

Deadlock prevention: ensure that at least one of the four necessary conditions for deadlock cannot hold
• Mutual exclusion: Can we eliminate mutual exclusion?
  • Sharable resources (e.g. read-only files) cannot be involved in deadlock.
  • Since some resources are intrinsically nonsharable, we generally cannot remove the mutual exclusion condition.
• Hold and wait: How could we guarantee that a process never holds a resource while it waits for another?
  • We could require that a process holding any resource may not request another resource (e.g. a process must request all resources when it is created).
  • We could require that a process releases all resources before making a request for another resource.
  • These protocols are not efficient.

Deadlock Prevention

• No preemption: We could preempt resources from processes.
  • If a process requests resources that are not available, we could preempt any resources that it currently holds.
  • If a process requests resources held by another waiting process, we could preempt them from the other process.
  • Preemption is difficult if the state of a resource cannot easily be saved and restored.
• Circular wait: Can we eliminate circular waits?
  • We could require that processes request resources in a particular order (e.g. tape drives, then disk drives, and finally printers).
  • An ordering could be implemented in Java by using System.identityHashCode().
  • Often, requiring resource requests in a particular order is not convenient.

Deadlock Avoidance

• The OS requires additional information from each process about the resources it will need, and the OS makes processes wait if they make a request that would produce deadlock.
• Simple strategy:
  • Require each process to declare in advance the maximum number of resources of each type that it will need.
  • The system then allocates resources in such a way that a circular wait condition never exists.
• Safe state: The system is safe if it can allocate resources to each process and avoid deadlock.
• Safe sequence: A sequence of processes (P1, P2, ..., Pn) is safe if the resources required by Pi can be satisfied by the currently available resources plus those held by all Pj, with j < i.

Deadlock Avoidance

• Safe state: The system is safe if it can allocate resources to each process and avoid deadlock.
• Unsafe state: The system might not be able to allocate resources to each process and avoid deadlock.
• An unsafe state is not necessarily deadlocked!

Deadlock Avoidance Example

Suppose a system has 10 tape drives and 3 processes:
At t0, the system is safe. Processes can run in the order P2, P1, P0.
However, suppose that we let process P1 run and it requests and is allocated another tape drive at time t1.
The system state is then:

This state is unsafe, because any process that runs next might request another tape drive, which we would be unable to allocate.

Deadlock Avoidance Strategy

• When a process requests resources, grant the resources only if the system will still be in a safe state.
• Resource utilization may be lower than it would otherwise be.
• If there is only one instance of each resource, we can implement this strategy using a variant of the resource-allocation graph.
• If there are multiple instances of each resource, we can use the "Banker's Algorithm."

Resource-Allocation-Graph Algorithm

• This avoids deadlock if there is only one instance of each resource type.
• Initially, each process must specify which resources it might request in the future.
• In the resource-allocation graph, a dotted arrow from Pi to Rj is a claim edge, indicating that Pi might request Rj in the future.
• If the request occurs, the claim edge is converted to a request edge.
• A request can be granted if and only if converting the request edge to an assignment edge does not result in a cycle in the graph.
• Cycle-detection algorithms are O(n²), where n is the number of vertices.

[Figure: two claim-edge graphs over P1, P2 and R1, R2: one unsafe (granting the request creates a cycle), one safe (no cycle)]

Banker's Algorithm

• This avoids deadlock if there are many instances of each resource type.
• We must maintain the following data structures (n is the number of processes and m is the number of resource types):
  • Available: vector of length m, indicating the number of available resources of each type. Available[j] is the number of instances of resource Rj.
  • Max: n x m matrix, indicating the maximum demand of each process. Max[i][j] is the maximum number of instances of resource Rj that process Pi may request.
  • Allocation: n x m matrix, indicating the resources of each type currently allocated to each process. Allocation[i][j] is the number of instances of resource Rj currently allocated to process Pi.
  • Need: n x m matrix, indicating the remaining resource need of each process. Need[i][j] = Max[i][j] – Allocation[i][j]

Banker's Algorithm

Safety Algorithm: determines whether or not the system is in a safe state; the algorithm is O(mn²)
1. Let Work be a vector of length m, and set Work = Available. Let Finish be a vector of length n, initialized so that each entry is false.
2. Find an index i such that both
   • Finish[i] == false
   • Need[i] ≤ Work
   If no such i exists, go to step 4.
3. Work = Work + Allocation[i]
   Finish[i] = true
   Go to step 2.
4. If Finish[i] == true for all i, then the system is safe. Otherwise, the system is unsafe.

Example

3 resources: A (has 6 instances), B (has 3 instances), and C (has 4 instances)
4 processes, P0, P1, P2, P3, with maximums and allocations:
Is the system in a safe state?

Banker's Algorithm

Resource-Request Algorithm: determines whether resources can be safely granted
Let Request[i] be the request vector for Pi.
1. If Request[i] ≤ Need[i], go to step 2. Otherwise, the request exceeds the process' maximum.
2. If Request[i] ≤ Available[i], go to step 3. Otherwise, Pi must wait.
3. Pretend to grant the request, as follows:
   Available = Available – Request[i]
   Allocation[i] = Allocation[i] + Request[i]
   Need[i] = Need[i] – Request[i]
   Check to see if the system would still be safe, then grant the request. Otherwise, roll back the changes and make Pi wait.

Example

3 resources: A (has 6 instances), B (has 3 instances), and C (has 4 instances)
4 processes, as before:
What happens if the following requests are made (starting from the above state each time)?
P0 requests [1, 0, 0]
P2 requests [4, 0, 1]
P2 requests [2, 0, 1]

Deadlock Detection

• If a system does not prevent deadlocks, it may provide:
  • An algorithm that examines the state of the system to determine whether a deadlock has occurred
  • An algorithm to recover from deadlock
• If all resources have only a single instance, detecting deadlock involves looking for a cycle in the resource-allocation graph.
• In fact, we can collapse the resource-allocation graph to a wait-for graph, which indicates which processes are waiting for which other processes to release resources.

[Figure: a resource-allocation graph over P1-P5 and R1-R5, and its corresponding wait-for graph]

Deadlock Detection Algorithm

• If some resources have multiple instances, we must use the following data structures, similar to those in the Banker's Algorithm:
  • Available: vector of length m, indicating the number of available resources of each type. Available[j] is the number of instances of resource Rj.
  • Allocation: n x m matrix, indicating the resources of each type currently allocated to each process. Allocation[i][j] is the number of instances of resource Rj currently allocated to process Pi.
  • Request: n x m matrix, indicating the current request of each process. Request[i][j] is the number of instances of resource Rj requested by Pi.
• The deadlock detection algorithm is O(mn²).

Deadlock Detection Algorithm

1. Let Work be a vector of length m, and set Work = Available. Let Finish be a vector of length n. If Allocation[i] = 0, set Finish[i] = true; otherwise, set Finish[i] = false.
2. Find an index i such that both
   • Finish[i] == false
   • Request[i] ≤ Work
   If no such i exists, go to step 4.
3. Work = Work + Allocation[i]
   Finish[i] = true
   Go to step 2.
4. If Finish[i] == false for some i, then the system is in a deadlocked state.

Example

3 resources: A (has 7 instances), B (has 4 instances), and C (has 2 instances)
4 processes, as before:
Is the system deadlocked?
What if P1 instead requests [4, 2, 1]?

Deadlock Algorithm Usage

• How often should we run the deadlock detection algorithm?
• Factors to consider:
  • How often is deadlock likely to occur?
  • How many processes will be affected by deadlock if it happens?
• Running the deadlock detection algorithm whenever a process requests a resource would be computationally expensive.
• We could run the algorithm at periodic intervals (e.g. every hour).
• We could run the algorithm when CPU utilization drops below some threshold (e.g. 40%).

Deadlock Recovery

• One way to recover from deadlock is to terminate processes.
• Two strategies:
  • Abort all deadlocked processes: will surely work, but very expensive
  • Abort processes individually until deadlock is eliminated: still expensive, since we have to run the deadlock detection algorithm after each process terminates
• Aborting processes is tricky, because the system could be left in an inconsistent state (e.g. if the process was in the midst of updating a file).
• How do we choose which processes to terminate? Consider:
  • What is the priority of the process?
  • Are the resources held by the process easy to preempt?
  • What is the least number of processes whose termination would resolve the deadlock?
  • How much computation would be repeated if the process is restarted?

Deadlock Recovery

• Another way to recover from deadlock is to preempt resources.
• Three issues:
  • Selecting a victim: Which resources should be preempted from which processes? How expensive will this be?
  • Rollback: If we preempt a resource from a process, what happens to that process? Can we roll the process back to a previous state? Perhaps we will need to abort the process.
  • Starvation: How do we ensure that starvation will not occur? (i.e. we shouldn't always preempt resources from the same process)
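The safety algorithm from the Banker's Algorithm slides translates almost line-for-line into code. The sketch below follows those four steps; note that the allocation tables in the original slides were images and are not reproduced here, so the numbers used below are purely hypothetical, not the chapter's worked example.

```python
def is_safe(available, allocation, need):
    """Banker's safety check: return a safe sequence of process
    indices, or None if the state is unsafe."""
    work = list(available)                 # Step 1: Work = Available
    n = len(allocation)
    finish = [False] * n                   # Step 1: Finish[i] = false
    sequence = []
    progressed = True
    while progressed:
        progressed = False
        # Step 2: find i with Finish[i] == false and Need[i] <= Work
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j]
                                     for j in range(len(work))):
                # Step 3: Work = Work + Allocation[i]; Finish[i] = true
                for j in range(len(work)):
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                progressed = True
    # Step 4: safe iff Finish[i] == true for all i
    return sequence if all(finish) else None

# Hypothetical 3-process, 2-resource state:
available  = [3, 3]
allocation = [[1, 0], [2, 1], [1, 1]]
maximum    = [[4, 2], [3, 2], [5, 3]]
need = [[maximum[i][j] - allocation[i][j] for j in range(2)]
        for i in range(3)]
print(is_safe(available, allocation, need))  # prints a safe sequence
```

For this state the function returns [0, 1, 2], i.e. the processes can finish in that order; an unsafe state returns None, exactly matching steps 2-4 of the slides.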
https://www.slideserve.com/tekla/deadlocks
This afternoon, I trained a three-layer neural network as a regression model to predict house prices in the Boston district with Python and Keras. The example came from the book "Deep Learning with Python". There were two big loops during the run. The first one went through the data 100 times (epochs), while the second one ran 500 epochs. My poor laptop was apparently overloaded in such hot summer weather, and the fan was roaring. It seems a laptop is not the best choice for training deep models. It would be so great if I had a GPU. Suddenly, it occurred to me that it is not necessary to train the model locally. It's the cloud computing age! How about running the code on a cloud GPU to spare my laptop the effort? It reminded me of a video clip posted by Siraj Raval on YouTube recently. He recommended a cloud GPU platform, namely FloydHub, in this video. Actually, I once tried an AWS GPU product in an online deep learning course. The instructor collaborated with AWS and provided all the students with AWS computing power to solve the exercises as well as the homework. However, it was not a very good experience, since he had to make a long video to show the students how to configure the AWS instance. Indeed, compared with some other solutions, AWS was simple enough, yet still not so simple for newbies. The website FloydHub, on the other hand, solves this pain point well. Firstly, it is a wrapper over AWS that hides a lot of complex operations. Secondly, FloydHub comes batteries-included with a lot of mainstream machine learning frameworks. Besides, it is well documented and friendly to new users. The slogan is: Focus on what matters. Let FloydHub handle the grunt work. Honestly, I like all things designed for lazy folks. So I registered immediately and validated my email. Then I got 2 hours of GPU running time for free! To spend the precious GPU running time on something important, I read the Quick Start tutorial eagerly.
Several minutes later, I felt confident enough to use it. I created a new job from my personal control panel on FloydHub and named it "try-keras-boston-house-regression". Then I exported a Python script file from my local Jupyter Notebook. I created a new directory and copied the script file into it. To save the evaluation metrics of the training and evaluation process, I added 3 lines of code at the end of the Python script.

import pickle

with open('data.pickle', 'wb') as f:
    pickle.dump([all_scores, all_mae_histories], f)

In this way, we can save the all_scores and all_mae_histories data into a file named data.pickle with Python's pickle module. Then let's dive into the shell, navigate to this newly created folder with the cd command, and execute the following command:

pip install floyd-cli

The command line interface of FloydHub is now ready to use. We can log in to the FloydHub account with:

floyd login

Then input your FloydHub username and password. When it's ready, run:

floyd init try-keras-boston-house-regression

Please note that the last parameter should be identical to the title you entered when you created the new job from the control panel. Now we can run the Python script with the following command:

floyd run --gpu --env tensorflow-1.8 "python 03-house-price.py"

In this command, --gpu means that we ask FloydHub to run the script in a GPU environment instead of the default CPU one, and --env tensorflow-1.8 means it will use Tensorflow version 1.8, with Keras version 2.1.6 accordingly. If you want to use another framework or choose a different version, please refer to this link. In response, we get the following messages from FloydHub. It's all set. Yes, that easy. And your learning job is already running in the cloud. While the job was running, I drank some tea, read several pages of a book, and browsed some news on social media with my phone. When the running job is done, it will terminate the environment and will not charge you any extra GPU running time.
So you don't need to keep an eye on it. When I came back to my computer, the job had already finished. GPU memory was busy during the whole procedure, as its utilization was above 90% most of the time. The GPU itself, on the other hand, was not busy at all. Maybe my neural network was too simple. Scrolling down the page, we can see the logs. The output was similar to the one you get when you train the model locally. Besides, it shows extra information about GPU resource allocation. To see the saved file, you can open the Files tab. The pickle file is already there. FloydHub handled all the hard computing work, and my laptop stayed much cooler this time. You can download the pickle file and put it back into the original working directory. Let's go back to the Jupyter Lab page on the laptop and open a new ipynb file. The following code checks the running results.

import pickle
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline

with open('data.pickle', 'rb') as f:
    [all_scores, all_mae_histories] = pickle.load(f)

num_epochs = 500
average_mae_history = [
    np.mean([x[i] for x in all_mae_histories]) for i in range(num_epochs)
]

plt.plot(range(1, len(average_mae_history) + 1), average_mae_history)
plt.xlabel('Epochs')
plt.ylabel('Validation MAE')
plt.show()

Please note that this code only draws the chart. Here is the result:

The visualization result is identical to the one in the textbook, which shows the code ran smoothly in the cloud GPU environment. You can easily check the remaining GPU running time. There's still more than 1 hour to play with. Great! Just now, I showed you how to run FloydHub from the command line interface. If you are familiar with bash commands, that works well. However, for new users who do not want to use shell commands, I recommend an easier way. Click the Workspace tab. You will see two existing workspace examples. Try to open the first one and check it out.
Hit the green Resume button at the top right, and the system will provision the environment. When it's done, you'll see the familiar Jupyter Lab interface. Open dog-breed-classification.ipynb from the file list on the left side. It's a complete example of classifying different dog breeds. Hit Run -> Restart Kernel and Run All Cells from the menu. You'll find there is no significant difference from running the code locally. However, this time, you are using a GPU! What if you want to set up a new workspace yourself? You can go back to the Project page. For each project, you can create a new workspace with the Create Workspace button. FloydHub will ask you how to create the new workspace. Let's select Start from scratch on the left side and choose the environment. Let's change the default one into Tensorflow 1.9 and GPU. Hit Create Workspace. Then click on the try-keras-boston-house-regression workspace link. A Jupyter Lab interface is ready. You don't need to install Tensorflow or configure the GPU yourself. Even better, you don't need to run bash commands this time. Just write Python code, and use Keras and Tensorflow freely. That's cool! Start your own deep learning journey with FloydHub. You don't need to buy your own expensive deep learning device if you only need GPU computing power occasionally. It would be a waste, and you won't get a good price when you want to sell it to make an upgrade. In this case, a cloud GPU is a better choice. Have you ever used any other cloud GPUs? What are the pros and cons compared with FloydHub? I would love to hear your feedback.

This article was originally shared on the WeChat public account 玉树芝兰 (nkwangshuyi), author: Wang Shuyi, 2018-08-05.
https://cloud.tencent.com/developer/article/1193351
Monad Technology Blog To develop a mshsnapin, you can use following three simple steps, Following is the sample code for a mshsnapin class. Basically, you just need to fill in information about name, vendor and description of the mshsnapin. namespace XYZ.TestNameSpace{ [RunInstaller(true)] public class MyMshSnapIn : MshSnapIn { public MyMshSnapIn() : base() { } /// <summary> /// Gets name of the mshsnapin. This will be the string to be used for registering this /// mshsnapin. /// </summary> public override string Name { get { return "XYZ.TestNameSpace.MyMshSnapIn"; } } /// <summary> /// Gets vendor of the mshsnapin. /// </summary> public override string Vendor { get { return "XYZ Corporation"; } } /// <summary> /// Gets description of the mshsnapin. /// </summary> public override string Description { get { return "This is a test mshsnapin"; } } } For step 2, you build the mshsnapin code (from step 1), cmdlet code, and provider code (from your normal cmdlet and provider development) into one assembly (for example MyMshSnapin.dll). For step 3, you install the snapin assembly by running following command. (installutil.exe is an standard utility from CLR). installutil.exe -i MyMshSnapin.dll - George
http://blogs.msdn.com/monad/archive/2006/01/11/511415.aspx
crawl-002
refinedweb
168
56.76
I am developing a small framework for my web projects in PHP so that I don't have to do the fundamental work over and over again for each new website. The goal is not to produce a second CakePHP or CodeIgniter, and I am also not going to build my websites with the available frameworks, since I generally prefer to use things I have produced myself. I've had no problems designing and coding the framework when it comes to parts like the core structure, request handling, and so forth, but I am getting stuck on creating the database interface for my modules.

I have already considered using the MVC pattern, but found that it would be a bit of an overkill for my rather small project(s). So the exact problem I am facing is how my framework's modules (viewCustomers could be a module, for instance) should talk to the database.

- Is it (still) smart to put SQL straight into the PHP code? (the "old way": mysql_query('SELECT firstname, lastname ...'))
- How could I abstract a query like the following?
  SELECT firstname, lastname FROM customers WHERE id=X
- Would MySQL "helper" functions like $this->db->customers->getBy('id', $x) be a good idea? I am not really sure, because they often become useless when confronted with queries more complex than the virtually trivial one above.
- Is the "Model" part of MVC my main option for solving this?
- What do you currently use to solve the issues shown above?

Answer: Have you looked into www.doctrine-project.org or other PHP ORM frameworks (Zend_Db comes to mind)?

Answer: As I understand it, you want to access your DB from your modules. I'd stay away from mysql_query in the code. Instead, opting for a simple model with abstracted DB access would be simple and straightforward.

For instance, you'd have a file like models/Customers.php with code like this:

<?php
class Customers
{
    public function getById($id)
    {
        global $DB; // assumes a DB helper object was created at bootstrap
        $sql = "SELECT first_name, last_name FROM customers WHERE id='$id'";
        return $DB->getRow($sql);
    }
}

I'm assuming some kind of DB helper has already been instantiated and is available as $DB (for example, a straightforward one that uses PDO). Now you include this in your module and use it the following way:

<?php
include_once "models/Customers.php";
$customers = new Customers();
$theCustomer = $customers->getById(intval($_REQUEST['cust_id']));
echo "Hello " . $theCustomer['first_name'];

Cheers.

Answer: If you want speed, then use raw queries (however, you should definitely use PDO with prepared queries). If you want something more OOP, you can, as you suggest, design this with helpers. I once designed something similar that had the following concept:

- DB connection/handler classes (handling multiple connections to different databases and different servers, for example MySQL, Oracle, etc.)
- A class per action (i.e. SELECT, DELETE, etc.)
- Filter classes (e.g. RangeFilter)

The code looked something like this:

$select = new Select('field1', 'field2');
$result = $select->from('myTable')
                 ->addFilter(SQLFilter::RangeFilter, 'field2')
                 ->match(array(1, 3, 5))
                 ->unmatch(array(15, 34))
                 ->fetchAll();

It's a simple illustration of how you could construct it. You can go further and implement automated handling of table relations, field type checks (using more introspection on your tables), table and field alias support, etc. It may seem like a long and large effort, but really, it won't take you much time to build all of these features (≈1 month).
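The same model pattern is not PHP-specific; here is a minimal Python analogue using the standard library's sqlite3 module. The table layout and sample row are made up for illustration, and parameterized queries are used instead of string interpolation:

```python
import sqlite3

class Customers:
    """Minimal model class: all SQL for the 'customers' table lives here."""

    def __init__(self, conn):
        self.conn = conn

    def get_by_id(self, customer_id):
        # Parameterized query; never interpolate user input into SQL.
        row = self.conn.execute(
            "SELECT first_name, last_name FROM customers WHERE id = ?",
            (customer_id,),
        ).fetchone()
        return row

# Demo setup with an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE customers (id INTEGER PRIMARY KEY,"
    " first_name TEXT, last_name TEXT)"
)
conn.execute("INSERT INTO customers VALUES (1, 'Ada', 'Lovelace')")

customers = Customers(conn)
print(customers.get_by_id(1))  # ('Ada', 'Lovelace')
```

The module code only ever talks to the model class, so the SQL can later be swapped for an ORM without touching the callers.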
http://codeblow.com/questions/creating-an-over-all-database-interface-in/
Light's Vapoursynth Functions

Project description

lvsfunc: a collection of VapourSynth functions and wrappers written and/or modified by LightArrowsEXE.

Full information on how every function/wrapper works, as well as a list of dependencies and links, can be found in the documentation. For further support, drop by #lvsfunc in the IEW Discord server.

How to install

If you have the old lvsfunc.py module, remove that from your system first. Install lvsfunc with the following command:

$ pip3 install lvsfunc --no-cache-dir -U

Or if you want the latest git version, install it with this command:

$ pip3 install git+ --no-cache-dir -U

Arch Linux

Install the AUR package vapoursynth-plugin-lvsfunc-git with your favorite AUR helper:

$ yay -S vapoursynth-plugin-lvsfunc-git

Note that this may be outdated. It's recommended you grab the git version instead.

Usage

After installation, functions can be loaded and used as follows:

import lvsfunc as lvf

src = lvf.misc.source(...)
aa = lvf.aa.clamp_aa(...)
comp = lvf.comparison.compare(...)
...

Disclaimer

Anything MAY change at any time. The public API SHOULD NOT be considered stable. If you use lvsfunc in any of your projects, consider hardcoding a version requirement.
https://pypi.org/project/lvsfunc/
Nixie Millivolt Meter Clock

The other day, I managed to come across a pile of electronics that were being disposed of, and there was something I was looking for: a Nixie tube based volt meter… er, "volt meter" is a bit misleading; it's actually a millivolt meter. It wouldn't turn on at first, but a good whack to the tube board fixed that… no joke, that works a lot more often than it should.

Now came the interesting part. I wanted to make a clock out of it, but not ruin the normal functionality. Normally with these projects, a voltmeter like this is gutted and replaced with clock innards, or people just take out the tubes and their respective driver boards, but I wanted to still be able to use this tool (never know if it'll come in handy!). So, after a little thought, and knowing I had some free time, I came up with this.

Time to Complete: 2 hours for design, construction, and fine-tuning (3.5 if you count debugging the issue with different power sources for the board, as described below)

Bill of Materials:
- Microcontroller with at least three PWM outputs, more if you need zero-correction control or if you need two pins to properly control minutes. Arduino is a great board for beginners.
- Resistors matched to bring 5V to scale on the meter (while there are part numbers listed here, please see the notes farther down about how to calculate the ideal values and why those calculated values aren't the ones actually used).
- A meter to display it on, unless you can see electrons flowing in wires ; )

Allied Electronics parts list (note the Arduino they list is a Microchip-based kit, rather than the standard Atmel-based one I used). This circuit can be built on a breadboard like I did, or you can get a piece of protoboard.

*NOTE: I did not include a zero-correction resistor in the BOM since it was specific to my meter.
You can find a suitable value for your equipment, or adjust the zero on the hardware (which will have to be re-adjusted before using it to measure voltages, but that's simple enough).

Date of completion of design and testing: 10/21/2011

The clock is driven by an Arduino** that keeps the time using the Time library and outputs variable voltages via four/five (depends on mode) PWM pins. These max out at 5V and cannot operate at the millivolt level the meter needs on their own, so I put each output through a resistor divider to compensate and tune it in. With the resistor dividers, the clock time is output as a voltage anywhere from 0 to 0.0XXXX V, where the X's mark the time. For example, if the time was 12:30, the output voltage would be 0.01230V, or 12.30mV.

** I have used this same Arduino for an absurd number of things; I can always just pop it from one thing to another with a quick reprogram, but I want this running always, so I am choosing a barebones uC and an actual RTC to keep things nice and smooth. Heck, I may even remake the circuit with all TTL gates, since I am doing nothing better in my required-before-taking-anything-else digital logic class right now.

The day I picked up the meter, I went home and did the math for the voltage dividers. After a few tries (due to ADD on my part regarding the scale of the voltages being output, and then again since I switched to the 100mV scale rather than the 1000mV scale to take advantage of the decimal point placement), I had the set of resistors needed. However, some of the values either (a) could not exist (infinite decimals) or (b) were too bizarre. So I rounded them to the closest value that was simple to implement, then used some tweaking to determine the maximum PWM value that produces the maximum output the divider should have, and all clock calculations are done accordingly.
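The digit-to-voltage encoding described above is easy to express as a small helper function. This is just my illustration of the scheme (time digits packed into a millivolt reading on the 100 mV range), not code from the project:

```python
def time_to_mv(hour, minute):
    """Encode a 12-hour HH:MM time as the target meter reading in
    millivolts: 12:30 -> 12.30 mV, i.e. 0.01230 V on the 100 mV scale."""
    return (hour * 100 + minute) / 100.0

print(time_to_mv(12, 30))  # 12.3
```

The Arduino sketch later in the post produces these target readings with PWM duty cycles and resistor dividers rather than computing them directly.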
The values calculated were as follows, using Vout = Vin*(R2/(R1 + R2)) (this only works for no-load circuits, but that's OK for us, since the meter has such a high impedance that it is not considered a significant load):

- Common resistor to ground for each divider: 1 Ohm (just to keep it simple, and fewer resistors)
- Hours left digit: 449 Ohms -> ~490 Ohms with PWM output mapping
- Hours right digit: 554.5 repeating Ohms -> 550 Ohms with PWM mapping
- Minutes left digit: 8332.3 repeating Ohms -> 8.3 kOhms with PWM mapping
- Minutes right digit: 55554.5 repeating Ohms -> 55 kOhms with PWM mapping

However, calculations are great and all, but in the end they only give you a ballpark with things like this. You have to fiddle around with what you have, and you may find better results. I found that I first needed a variable PWM output to compensate the meter, which usually sits at -0.65mV from zero with no outputs, but varies based on whether or not I use a USB bus or a 5V adapter for power, so yes, PWM is needed to compensate in software for both situations. I also realized I had enough accuracy in the PWM output for the left minutes digit to control the entire minutes range without issue, except again when on the 5V external adapter. I think my board's regulator may be damaged at the rate at which I keep bringing it up; time to make a new one 🙂

Here are the resistor values I actually used:

- Common resistor to ground: 1 Ohm
- Hours left digit: 430 Ohms with PWM mapping (the resistor was on hand, and worked better without the 56 Ohm I had added to bring it closer to target)*
- Hours right digit: 510 Ohms (resistor was on hand)*
- Minutes, both digits: 8.2 kOhm (resistor on hand; I realized it had enough resolution even when mapped to work as both digits accurately)*
- Zero correction pin: a 50 kOhm and a 9.7 kOhm resistor in parallel to create 8.124 kOhm*
  - Turns out I didn't need this; the zero is easy enough to adjust when switching from clock to meter modes.
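The divider math above is easy to sanity-check. This short script (my own, not from the article) inverts Vout = Vin*(R2/(R1 + R2)) for the calculated values, assuming the 5 V PWM high level and the 1 Ohm common resistor stated above:

```python
VIN = 5.0  # volts: PWM high level
R2 = 1.0   # ohms: the common resistor to ground

def vout_mv(r1):
    # Vout = Vin * R2 / (R1 + R2), reported in millivolts
    return VIN * R2 / (r1 + R2) * 1000.0

def r1_for(target_mv):
    # The same formula solved for R1, given a desired full-scale output
    return VIN * R2 / (target_mv / 1000.0) - R2

# What full-scale reading does each calculated resistor give?
for name, r1 in [("hours left", 449.0), ("hours right", 554.5),
                 ("minutes left", 8332.3), ("minutes right", 55554.5)]:
    print(f"{name:13s} {r1:8.1f} ohm -> {vout_mv(r1):8.4f} mV full scale")
```

For example, 554.5-repeating Ohms works out to exactly a 9.0 mV maximum, which matches the hours-right digit topping out at 9.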
* I had them on hand, and it worked out (again, trial and error helps, as doing the math for every possible combination of things is either too much work for the thousands+ of combinations, or gives you bizarre results like the calculations shown above).

The Arduino sketch is at the bottom of the page. The build was done on a breadboard for quick and easy tweaking, and since I wanted to get this working for a demo ASAP (ok, in reality because I'm still deciding on a different uC and will make a proper PCB rather than use perfboard, as I haven't opened a bottle of ferric chloride in, quite literally, years).

Photos: the circuit on the breadboard; the basic schematic; and a photo of everything working.

Results: The clock worked after a lot of tweaking, but there are still a couple of quirks. First, there are points at which it will be running fast, and others where it is running slow, which is similar to the design of "Lord Vetinari's Clock", which is meant to drive people mad. This will only drive you mad if you notice the minutes changing either too often or too little, or if the least significant digit flickers between two values due to the presence of voltages below 0.01mV adding up (which happens… A LOT). As for general drift, it seems like I am getting quite a bit, but I shouldn't be losing 9 minutes after an hour! I have noticed that this old meter performs differently over time; maybe I should just let it run for a while and see how far off the tuning values are when re-checked. If they have changed, the meter shifted from temperature or long-term operation. If they are the same, then my clock is drifting or programmed wrong. So far my programming has tested right on everything, but I do know that the Time library is also meant to grab time from serial. I am probably going to set this thing up with my spare network shield and pull from the universal clock for fun; I'll check for more drift then.
There is also the occasional minutes value around 99. This is due to the resolution of the HOURS output, which, if I turned it up one notch higher, would put it several mV above ideal. I figured I'd rather be closest to the right number than show only valid numbers. (For example, assume the actual time is 2:00. If I output only valid values, the error is larger and it would show 2:03/4. If I want more accuracy, then at 2:00 it may output 1:99 on some values. I'd rather have the latter, since once it hits 2:01 it just shows up as 2:00 and isn't as far off.) This only happens about every other hour.

There is also a strange issue with the power supply arrangement. When setting up, programming, and tuning, I used my computer as a power source for the Arduino. When I tried to use a standard 5V adapter, I got a lot of absurd results that I could not successfully tune within the capabilities of my very short attention span. It may be that I damaged the voltage regulator on this board on another project (wouldn't be the first time I've done that). I may evaluate this, but likely not, as I want to try some other uCs I haven't used yet and will build a board around one of those anyway.

Future Modifications:
- A trimpot on each voltage divider for finer tuning
- A different uC with a proper timekeeping circuit
- Tweak the settings to get the device working properly off of a normal 5V power adapter
- A proper PCB for everything
- Input buttons to control it once it is no longer tethered to a computer; right now the time is set by reprogramming it, since I need the computer anyway
- A selector to switch between different voltage dividers so that I can use the circuit on virtually any voltmeter
Other Thoughts: None really, so I'll post some more photos of the Nixie millivolt meter.

Here is the Arduino sketch (it should work on all versions of the software; it was made in 0018 in particular, since the machine I did this on has an old version because some libraries I use broke and I haven't fixed them yet). Please do note that if you try to make this yourself, all the values I have given will need to be tweaked for your setup, as when working in millivolts things get a little more sensitive. For example, I have multiple variable sets: one for power from a computer's USB port, a set for operating on an external 5V adapter, and another that needs to be written for operating on a USB wall wart.

/*
  Arduino PWM Nixie Voltmeter Clock, by Jimmy Hartnett
  This sketch outputs the time in the form of a voltage using PWM,
  for example 12:34 = 0.1234V
*/

#include <Time.h>

int HoursA_pin = 11;
int HoursB_pin = 10;
int MinutesA_pin = 9;
int MinutesB_pin = 6;
int zero_correction_pin = 5;

/* These values are for when powered by the USB bus (not the USB power
   adapters, the actual computer bus... I had discrepancies when testing,
   so that's why, if you're wondering) */
int HoursA_max = 220;   // Tuned? - pretty much
int HoursB_max = 231;   // Tuned? - pretty much in software until a resistor adjustment is made
int MinutesA_max = 239; // Tuned? - pretty much
int MinutesB_max = 255; // Tuned? -
int zero_correction_value = 241; // Tuned? - pretty much - !!! This is the value for use with the
                                 // USB bus, not the 5V external adapter (no idea why it matters)
int HoursB_offsets[] = {1,1,2,3,3,4,4,5,6}; // This is for USB bus power (NOT USB power adapter)
                                            // (just my situation, adjust as needed for you)

/* These values are for when powered by an external 5V power adapter (not USB
   power, a standard power adapter). I had discrepancies when testing, so
   that's why, if you're wondering. */
/*
int HoursA_max = 220;   // Tuned? - pretty much
int HoursB_max = 255;   // Tuned? - pretty much in software until a resistor adjustment is made
int MinutesA_max = 255; // Tuned? - pretty much
int MinutesB_max = 255; // Tuned? -
int zero_correction_value = 103; // Tuned? - pretty much, maybe +1 - !!! This is the value for
                                 // use with the 5V external adapter (no idea why it matters)
int HoursB_offsets[] = {0,0,0,0,0,0,0,0,0}; // This is for the 5V external adapter
                                            // (just my situation, your mileage may vary)
*/

void setup() {
  pinMode(HoursA_pin, OUTPUT);
  pinMode(HoursB_pin, OUTPUT);
  pinMode(MinutesA_pin, OUTPUT);
  pinMode(MinutesB_pin, OUTPUT);
  pinMode(zero_correction_pin, OUTPUT);
  analogWrite(zero_correction_pin, zero_correction_value);
  hourFormat12();
  setTime(03,51,30,21,10,2011);
}

void loop() {
  // These are to test the max outputs during calibration
  // analogWrite(HoursA_pin, HoursA_max);
  // analogWrite(HoursB_pin, HoursB_max);
  // analogWrite(MinutesA_pin, MinutesA_max);
  // analogWrite(MinutesB_pin, MinutesB_max);
  // analogWrite(HoursB_pin, (((HoursB_max/9)*9) + HoursB_offsets[(9 - 1)]));
  // delay(10000);
  // just here to comment and uncomment to avoid constantly (un)commenting the big thing below

  while (true) {
    if (hour() <= 9) {
      analogWrite(HoursB_pin, ((HoursB_max/9)*hour() + HoursB_offsets[(hour() - 1)]));
    }
    if (hour() > 9) {
      analogWrite(HoursB_pin, ((HoursB_max/9)*(hour() - 10) + HoursB_offsets[(hour() - 10 - 1)]));
    }
    if (hour() <= 9) {
      // the "- 1" in a comment is if it is needed to account for an occasional
      // 0.01mV discrepancy due to the resolution of the output
      analogWrite(MinutesA_pin, ((MinutesA_max/59) * (minute() /* - 1 */ )));
    }
    if (hour() > 9) {
      // the "minute() - 3" compensates for the low resolution of the HoursA output,
      // which has a 0.03mV discrepancy, and the HoursB 0.01mV discrepancy
      analogWrite(MinutesA_pin, ((MinutesA_max/59) * (minute() - 3 /* - 1 */)));
    }
    if (hour() >= 10) {
      analogWrite(HoursA_pin, HoursA_max);
    } else {
      analogWrite(HoursA_pin, 0);
    }
    /*
    for (int x = 0; x <= HoursA_max; x += (HoursA_max/12)) {
      int HoursB_counter = 0;
      analogWrite(HoursA_pin, x);
      for (int y = 0; y <= HoursB_max; y += (HoursB_max/9)) {
        analogWrite(HoursB_pin, (y + HoursB_offsets[HoursB_counter]));
        HoursB_counter++;
        for (int z = 0; z <= MinutesA_max; z += (MinutesA_max/59)) {
          analogWrite(MinutesA_pin, z);
          delay(1000); // test clock as if it were a stopwatch, good for testing
https://hackingand.coffee/2016/04/nixie-millivolt-meter-clock/
A Locale. In the C APIs, a Locale is simply a const char string.

You create a Locale with one of the three options listed below. Each of the components is separated by '_' in the locale string:

    newLanguage
    newLanguage + newCountry
    newLanguage + newCountry + newVariant

The first option is a valid ISO Language Code. These codes are the lower-case two-letter codes as defined by ISO-639. You can find a full list of these codes at a number of sites.

The second option includes an additional ISO Country Code. These codes are the upper-case two-letter codes as defined by ISO-3166. You can find a full list of these codes at a number of sites.

The third option requires one more piece of information: the Variant. The Variant codes are vendor and browser-specific. For example, use WIN for Windows, MAC for Macintosh, and POSIX for POSIX. Where there are two variants, separate them with an underscore, and put the most important one first. For example, a Traditional Spanish collation might be referenced with "ES", "ES", "Traditional_WIN".

Because a Locale is just an identifier for a region, no validity check is performed when you specify a Locale. If you want to see whether particular resources are available for the Locale you asked for, you must query those resources. For example, ask the UNumberFormat for the locales it supports using its getAvailable method.

Note: When you ask for a resource for a particular locale, you get back the best available match, not necessarily precisely what you asked for. For more information, look at UResourceBundle.

The Locale provides a number of convenient constants that you can use to specify the commonly used locales.
For example, the following refers to a locale for the United States:

Once you've specified a locale, you can query it for information about itself. Use uloc_getCountry to get the ISO Country Code and uloc_getLanguage to get the ISO Language Code. You can use uloc_getDisplayCountry to get the name of the country suitable for displaying to the user. Similarly, you can use uloc_getDisplayLanguage to get the name of the language suitable for displaying to the user. Interestingly, the uloc_getDisplayXXX methods are themselves locale-sensitive and have two versions: one that uses the default locale and one that takes a locale as an argument and displays the name or country in a language appropriate to that locale.

The ICU provides a number of services that perform locale-sensitive operations. For example, the unum_xxx functions format numbers, currency, or percentages in a locale-sensitive manner. Each of these methods has two variants: one with an explicit locale and one without, the latter using the default locale.

    UErrorCode success = U_ZERO_ERROR;
    UNumberFormat *nf;
    const char* myLocale = "fr_FR";

    nf = unum_open( UNUM_DEFAULT, NULL, success );
    unum_close(nf);
    nf = unum_open( UNUM_CURRENCY, NULL, success );
    unum_close(nf);
    nf = unum_open( UNUM_PERCENT, NULL, success );
    unum_close(nf);

    nf = unum_open( UNUM_DEFAULT, myLocale, success );
    unum_close(nf);
    nf = unum_open( UNUM_CURRENCY, myLocale, success );
    unum_close(nf);
    nf = unum_open( UNUM_PERCENT, myLocale, success );
    unum_close(nf);

A Locale is the mechanism for identifying the kind of services (UNumberFormat) that you would like to get. The locale is just a mechanism for identifying these services.
Each international service implements these three class methods:

    const char* uloc_getAvailable(int32_t index);
    int32_t uloc_countAvailable();
    int32_t uloc_getDisplayName(const char* localeID, const char* inLocaleID,
                                UChar* result, int32_t maxResultSize, UErrorCode* err);

Concerning POSIX/RFC1766 locale IDs, the getLanguage/getCountry/getVariant/getName functions do understand the POSIX-type form language_COUNTRY.ENCODING@VARIANT, and if there is no ICU-style variant, uloc_getVariant() for example will return the one listed after the @ sign. As well, the hyphen "-" is recognized as a country/variant separator, similarly to RFC1766; so for example, "en-us" will be interpreted as en_US. As a result, uloc_getName() is far from a no-op and will have the effect of converting POSIX/RFC1766 IDs into ICU form, although it does NOT map any of the actual codes (i.e. russian -> ru) in any way. Applications should call uloc_getName() at the point where a locale ID is coming from an external source (user entry, OS, web browser) and pass the resulting string to other ICU functions. For example, don't use de-de@EURO as an argument to a resource bundle.

Definition in file uloc.h.

    #include "unicode/utypes.h"
    #include "unicode/uenum.h"
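The normalization rules described above ("en-us" becoming en_US, .ENCODING dropped, @VARIANT split off) can be sketched roughly in a few lines. This is only an illustration of the stated rules, not ICU's actual uloc_getName() implementation, and it deliberately skips the many edge cases ICU handles:

```python
def normalize_locale_id(locale_id):
    """Rough sketch of the POSIX/RFC 1766 handling described above:
    treat '-' like '_', drop .ENCODING, keep @VARIANT, and
    upper-case the country part."""
    variant = ""
    if "@" in locale_id:
        locale_id, variant = locale_id.split("@", 1)
    locale_id = locale_id.split(".", 1)[0]       # drop .ENCODING
    parts = locale_id.replace("-", "_").split("_")
    parts[0] = parts[0].lower()                  # language code
    if len(parts) > 1:
        parts[1] = parts[1].upper()              # country code
    result = "_".join(p for p in parts if p)
    if variant:
        result += "@" + variant
    return result

print(normalize_locale_id("en-us"))            # en_US
print(normalize_locale_id("de_DE.utf8@EURO"))  # de_DE@EURO
```

Note that, as the documentation says, no language/country codes are remapped; only the separators and casing are canonicalized.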
http://icu.sourcearchive.com/documentation/4.2.1~rc1/uloc_8h.html
ProxyActors

A simple, lightweight typed actor framework for Scala. It includes an optional load balancer for transparently load balancing typed actor workloads. The current implementation is only ~160 SLOC in a single file. The goal of the project is to meet some specific use cases of our company (see the 'Why?' section), but we are hoping you will find it useful as well.

Installation

Requires Scala 2.10 (which requires Java 1.6) and CGLib (tested on v2.2.2).

Option 1: SBT

libraryDependencies += "com.api-tech" % "proxyactors_2.10" % "0.2.1"

Option 2: Copy the file into your project (you'll still need CGLib)

Copy the 'package.scala' file into the folder for package api.actor in your project. Optionally, change the package to match your organization.

Option 3: Download the jar (you'll still need CGLib)

Examples

Hello World

First, the obligatory 'hello world'. The async hello is called first, but printed after the synchronous hello due to being delayed and executed in a different thread.

import api.actor._  // Single import line

class HelloWorld {
  def async() { Thread.sleep(100); println("Hello World! (async)") }
}

val hello = singleThreadContext.proxyActor[HelloWorld]()
hello.async()
println("Hello World! (sync)")
actorsFinished(hello)  // Blocks until the async call is complete

The output is:

Hello World! (sync)
Hello World! (async)

Load Balancing

Next, a quick example demonstrating typed actor load balancing. This example doubles all numbers 1 to 1000 in parallel and then adds them up and prints the result.
import scala.concurrent.{Await, Future, Promise}
import scala.concurrent.duration.Duration
import api.actor._

trait Doubler {
  def double(n: Int): Future[Int]
}

class MyDoubler extends Doubler {
  def double(n: Int): Future[Int] = {
    Promise.successful(n + n).future
  }
}

// Create one thread and actor per logical CPU thread
val actors = allCoresContext.proxyActors[MyDoubler](qty = totalCores)
val doubler = proxyRouter[Doubler](actors)

val futures = for (i <- 1 to 1000) yield doubler.double(i)
val total = futures.foldLeft(0) { (sum, fut) =>
  Await.result(fut, Duration.Inf) + sum
}
println(s"1 to 1000 doubled and then summed equals $total")
actorsFinished(actors)

The output is:

1 to 1000 doubled and then summed equals 1001000

Simple Mutual Exclusion

Lastly, we show how we can wrap a callback class in a simple mutual exclusion proxy actor. This still runs in the same thread as the caller, but removes the need for us to lock our mutable class data. NOTE: This example doesn't work/compile; it is just to show what is possible.

import api.actor.{proxyActor, actorsFinished}

// Not thread-safe 'as-is'
class MyCallbackClass extends FictionalCallbackInterface {
  var mutableData: Int = 0
  def callbackMethod(newData: Int) { mutableData += newData }
}

// By default, actors are created in the 'sameThread' context, so the callback
// is executed synchronously within whatever thread(s) 'library' provides
val callback = proxyActor[MyCallbackClass]()
library.addCallbackListener(callback)

// library is used here and invokes your callback via one or more threads

// The library has now signalled us that the work is complete.
// Remember that accessing a 'var' in Scala is a method call. Therefore
// even directly accessing the var is subject to actor mutual exclusion
println(s"Our final value was: ${callback.mutableData}")

// We aren't using a thread pool, so no need to call 'actorsFinished'
// (but you still can, and it is probably a good idea)
actorsFinished(callback)

Features

- Extremely small and lightweight; learn the API in minutes
- Typed actor router/load balancer with the distribution algorithm as a simple function
- Actors extend your classes; no way to leak a non-actor reference
- Utilizes Scala 2.10 Futures/ExecutorContexts
- Thread pools auto shutdown after the last actor signals it is finished
- CGLib is the only dependency (other than the Scala library)
- Ability to wrap callback objects in a mutual exclusion proxy (when used with threaded libraries). By default, this happens without additional threading overhead.

Typed Actors?

There are better sites for understanding all the theory. Definitely google them to get the 'big picture'. We'll just cover how our library works in a few bullet points, and hopefully you can see why something like this makes sense.

- The base of a typed actor is your regular Scala class with no special features
- We create a proxy of that class and instantiate it with the args of your choosing
- This proxy can now be used just like the regular object, with these differences:
  - We lock them for mutual exclusion; they are guaranteed to only execute in one thread at a time. You don't need to lock your class's mutable data anymore.
  - They are executed in a thread pool of your choosing (one pool for all actors, a single thread per actor, or any other combination)
  - Methods that return Unit or a Scala future execute asynchronously in the pool
  - Methods that return values will still operate synchronously, however (with the exception of Scala futures)
- When you are done with an actor, you call 'actorsFinished'. When all actors that were using a thread pool finish, that thread pool is automatically freed.
'actorsFinished' will block as needed if not all actor methods have finished execution.

Why?

The first thing that probably comes to mind for Scala typed actors is Akka. Akka is great. We like Akka. If you need its many features, you should use it too. Our library has probably not even 2% of the features of Akka (and never will!). Clear enough? :-) That said, Akka is a large, diverse collection of parallel compute and synchronization paradigms, and much more. Sometimes this is what we need, and sometimes we just need something very small, simple, and focused. Here are the specific things we needed for our projects:

Typed actors that 'extend' our classes, not 'wrap' them

Yes, we know this seems backwards with all the hype of composition over inheritance, but we have a good reason for wanting this. Due to the Scala/Java implicit 'this' reference, it is very easy to leak 'this' outside the object when registering for callbacks, etc. Additionally, we don't like giving our classes knowledge that they are a typed actor, even if just for identity purposes. Yes, we can abstract that, but it is more convoluted. Extending isn't always great either: you can't extend final classes/methods, etc. (it is possible we'll add wrapping as an alternative in a future release).

Typed actor routing/load balancing

We wanted to be able to take a list of typed actors and load balance to them as a group. We wanted to do it without a lot of work or boilerplate. Lastly, we wanted a solid default load balancing algorithm that would make it easy to load balance parallel workloads.

Easy mutual exclusion without boilerplate

Not every program needs to be parallel. Since Java has had first-class threads since its inception, several libraries have shipped with embedded threading for concurrency purposes. Even if you are writing a simple script that takes callbacks, it is very likely you'll need to think about mutual exclusion.

Using a simple 'proxyActor' function call, you can wrap these callback objects in an actor that is designed for mutual exclusion and, by default, operates in the same thread it was called from. Since it is very coarse locking via proxy, you'll lose some performance/parallelism, but for many programs that just isn't a concern.

Very easy, fast, and small

The goal is to make every feature easy to use without needing to reach for the ScalaDoc. We wanted an API that fit in our heads. We only want things in the library that we will use. Our jar is currently ~40K and includes examples. The code itself is ~160 SLOC in a single file, so no worries about tracking a bug through thousands of lines of code. You can even just copy the single file right into your programs (unfortunately, you'd still need to have the CGLib jar).

Performance

We haven't done any benchmarks yet. No doubt we will eventually, but it isn't high priority for us, as our workloads either (a) do a ton of work per actor call or (b) aren't performance sensitive. Regardless, since we use a proxy that uses reflection, you will want to do as much work as possible per call to offset the overhead in performance-sensitive workloads.

Links

Primary:
Mirror:
ScalaDoc:
Downloads:

License

ProxyActors is.
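The coarse "lock the whole object via a proxy" idea that ProxyActors uses for mutual exclusion is not Scala-specific. Here is a minimal Python sketch of the same concept (my own illustration, not part of ProxyActors): a wrapper that intercepts attribute access and serializes every method call through one lock.

```python
import threading

class MutexProxy:
    """Wrap any object so that all its method calls are mutually exclusive."""

    def __init__(self, target):
        self._target = target
        self._lock = threading.Lock()

    def __getattr__(self, name):
        attr = getattr(self._target, name)
        if not callable(attr):
            return attr  # plain attributes are read through directly
        def locked(*args, **kwargs):
            with self._lock:  # coarse lock: one call at a time
                return attr(*args, **kwargs)
        return locked

class Counter:
    def __init__(self):
        self.value = 0
    def add(self, n):
        for _ in range(n):
            self.value += 1

counter = MutexProxy(Counter())
threads = [threading.Thread(target=counter.add, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value)  # 40000
```

As with ProxyActors, the coarse lock costs parallelism (only one call runs at a time), which is exactly the trade-off the README describes as acceptable for many programs.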
https://bitbucket.org/apitech/proxyactors
Hi, I've created a very simple custom control. But it doesn't show up in the toolbox in the iOS Designer, and I can't figure out why. I've read about custom controls in the documentation, and I've followed the requirements, but it still doesn't show up. I've recompiled, rebuilt, and reloaded the solution, and I've restarted VS, but nothing helps. So I'm actually beginning to doubt that the Visual Studio iOS Designer supports custom controls, but hopefully that's not the case. Here is the code for the custom control. Can anyone see if I'm doing something wrong?

[Register("MyView"), DesignTimeVisible(true)]
public class MyView : UIView
{
    public MyView(IntPtr handle) : base(handle)
    {
    }

    public override void AwakeFromNib()
    {
        base.AwakeFromNib();
        this.Initialize();
    }

    private void Initialize()
    {
        // Gives the view a red, rounded border.
        this.Layer.CornerRadius = 5;
        this.Layer.MasksToBounds = true;
        this.Layer.BorderColor = new MonoTouch.CoreGraphics.CGColor(1.0f, 0.0f, 0.0f);
        this.Layer.BorderWidth = 2.0f;
    }
}

I'm using a storyboard, in case that makes any difference. If I drag a View from the Windows & Bars category in the Toolbox and change its class to MyView in the Properties window, the control is still rendered as a standard UIView in the iOS Designer. But then when I run the application in the simulator, the same control is rendered as my custom control, having a red rounded border. How can I get my custom control to render in the iOS Designer, and how can I get it to show up in the Toolbox?

Just a random question, but does there happen to be any shared asset or PCL project in your solution?

Hi Alex, thank you for replying. No, there's only one project, and it's actually a very simple project that I created just to play around with the iOS Designer. So there's one storyboard with a navigation controller and 2 views, 1 View Controller class, and my custom control.
I just tested my solution in Xamarin Studio on OS X, and my custom control actually shows up in the Custom Components category there. So it seems it's a problem related to the Visual Studio iOS Designer.

Hi @RenGundersen, did you find any solution for this issue? I am experiencing the same issue.

No, not yet. I need Xamarin to fix the pairing issues with Yosemite first, before I can work on this. And even then, there are some problems with iOS 8 in the VS iOS Designer they also need to fix (the error 500 issue). So right now, I've just decided to go with Xamarin Studio on the Mac instead of using Visual Studio. When the pairing and iOS 8 issues have been fixed, I'll come back to this and start a support incident with Xamarin to get help from one of their engineers on this.

Hi @RenGundersen and @michelTol, I am having the same problem. Even the simplest custom control with all the trimmings (based on UIView, implementing IComponent or decorated with DesignTimeVisibleAttribute) does not show up in the Toolbox. Have either of you gotten any further with this?

@henniegottenbos and @michelTol: I was at a Xamarin conference today and asked Michael James from Xamarin about this problem. He told me that Xamarin had pushed a change in how custom controls work with the iOS Designer. You can see a note about this in the Xamarin 3.7 release notes. There's also a note about it in the release notes for Xamarin Studio 5.5. The note says that one should implement System.ComponentModel.IComponent or decorate one's custom control with [DesignTimeVisible(true)]. In my example above, I have the DesignTimeVisible attribute, but I haven't tried the IComponent interface. So I'll try this and see if it works. I'll report back here with my findings.

I'm experiencing the same problem. I'm using the step-by-step example from the Xamarin site, but the created control does not show up in the Toolbox.
I'm at Xamarin 5.5.3 build 6 (on OS X 10.9.5), and I use the following code (and variations of it with/without DesignTimeVisible and/or IComponent):

[Register("ViewClass"), DesignTimeVisible(true)]
public class ViewClass : UIView, System.ComponentModel.IComponent
{
    public event EventHandler Disposed;

Sounds like a bug to me. I added both IComponent and the DesignTime attribute; the control appears in Xamarin Studio but not in Visual Studio's Toolbox. Is this yet another Xamarin-Studio-only feature?

Hi all, sounds like you might have found a bug in the iOS Designer. Can I get you to please file a Bugzilla report (or attach to an existing one) against the iOS Designer? Thanks, Kevin

Just to close the loop for anyone else ending up here, this is the link to the related bug on Bugzilla:

@KMullins Were you guys able to make any progress on this one, and do you have an idea as to when a fix might be released?

I am experiencing it too. On the other hand, I am not able to add a custom control in Xamarin Studio either. I've done exactly the first example on this page; in a brand new project it works, but in my existing project it doesn't. Are there any problems with using it together with PCLs? The custom component is not developed in a PCL; the PCL is only referenced in my main project.

Same thing for me. With a new project and a copy of the controls & storyboard -> it works. Adding a reference to my PCL project -> it works. Copying all files from the older project to the new project -> doesn't work anymore :-( Copying the storyboard from the new project (with the control) to the buggy project -> doesn't work in the designer, but when I start the application, it works in the app! It works perfectly with VS2013! I hope that somebody from the Xamarin team will fix this bug in Xamarin Studio...

Hi, I was going through the following blog and following the steps. When I build the file, 'DesignTimeVisible(true)' doesn't seem to work.
The issue in Xamarin Studio has been fixed and will ship at some stage in the future. I don't know when other than it'll probably be part of the next major release.
https://forums.xamarin.com/discussion/26223/custom-control-not-showing-up-in-toolbox
Arrays in Visual Basic

An array is a set of values that are logically related to each other, such as the number of students in each grade in a grammar school. By using an array, you can refer to these related values by the same name, and you can use a number that's called an index or subscript to tell the values apart. The individual values are called the elements of the array, and they're contiguous from index 0 through the highest index value. In contrast to an array, a variable that contains a single value is a scalar variable.

By creating this array, you can write simpler code than if you declare seven variables. The following illustration shows the array students. Each element of the array has the following characteristics:

The index of the element represents the grade (index 0 represents kindergarten).

The value that's contained in the element represents the number of students in that grade.

The following example shows how to refer to the first, second, and last element of the array students. You can refer to the entire array by using just the array variable name without indexes.

The array students in the preceding example uses one index and is one-dimensional. An array that uses more than one index or subscript is multidimensional. For more information, see the rest of this topic and Array Dimensions in Visual Basic.

You can define the size of an array in several ways. You can supply the size when you declare the array. After you declare the array, you can define its size by using the ReDim Statement (Visual Basic).

The following example declares a one-dimensional array variable by adding a pair of parentheses after the type. You can declare a jagged array variable by adding its index enclosed in parentheses. Commas (,) separate indexes for multi-dimensional arrays. You need one index for each array dimension.

The following example shows some statements that store values in arrays.
The following example shows some statements that get values from arrays. For each array dimension, the GetUpperBound method returns the highest value that the index can have. The lowest index value is always 0.

When you create an array by using an array literal, you can either supply the element type or let the compiler determine it by using type inference. Arrays that you specify at the class level infer the values that are supplied for the array literal as type Object. You can explicitly specify the type of the elements in an array that you create by using an array literal. In this case, the values in the array literal must widen to the type of the elements of the array. The following code example creates an array of type Double from a list of integers.

Nested Array Literals

You can create a multidimensional array by using nested array literals. For more information, see Array Variables in Visual Basic.

An array that holds other arrays as elements is known as an array of arrays or a jagged array. A jagged array and each element in it can have one or more dimensions. You should keep the following points in mind when you deal with the size of an array.

Data Types

Every array has a data type, but it differs from the data type of its elements. No single data type applies to all arrays, and the data types of the individual elements don't influence the array data type. Every array inherits from the System.Array class, and you can declare a variable to be of type Array, but you can't create an array of type Array. Also, the ReDim Statement (Visual Basic) can't operate on a variable that you declare as type Array. For these reasons, and for type safety, you should declare every array as a specific type.

You can use the TypeName function to receive a String that contains the name of the run-time type. You can pass the variable to the VarType function to receive a VariantType value that represents the run-time type.

The .NET Framework provides a variety of classes, interfaces, and structures for general and special collections. The System.Collections and System.Collections.Specialized namespaces contain definitions and implementations that include dictionaries, lists, queues, and stacks.
The System.Collections.Generic namespace provides many of these types in generic versions, which take one or more type arguments. If your collection is to hold elements of only one specific data type, a generic collection has the advantage of enforcing type safety. For more information, see Generic Types in Visual Basic (Visual Basic).

Example

The List collection in the following example specifies that it can contain elements only of type Customer. The declaration also provides for an initial capacity of 200 elements. The AddNewCustomer procedure checks the new element for validity and then adds it to the collection. The PrintCustomers procedure uses a For Each loop to traverse the collection and display its elements.
http://msdn.microsoft.com/en-us/library/wak0wfyt(v=vs.100).aspx
In this article, we will discuss a very popular algorithm that is often used in interview coding questions. The aim is to find the greatest common divisor (GCD) of two numbers. The optimized algorithm is pretty fast compared to the brute-force approach.

The GCD of two integers (a, b), denoted by gcd(a, b), is defined as the largest positive integer d such that d | a and d | b, where x | y means that x divides y.

Example of GCD:

gcd(4, 8) = 4
gcd(10, 5) = 5
gcd(20, 12) = 4

According to Euclid's algorithm:

gcd(A, B) = gcd(B, A % B)  // recurrence for GCD
gcd(A, 0) = A              // base case

Let's discuss the proof of the recurrence.

Proof: any d that divides both A and B also divides A - qB for any integer q, and in particular the remainder A % B. Conversely, any d that divides both B and A % B also divides A = qB + (A % B). So the pairs (A, B) and (B, A % B) have exactly the same set of common divisors, and therefore the same greatest common divisor.

Note that the GCD of more than 2 numbers, e.g., gcd(a, b, c), is equal to gcd(a, gcd(b, c)), and so on.

#include <iostream>
using namespace std;

int gcd(int a, int b)
{
    return (b == 0 ? a : gcd(b, a % b));
}

int main()
{
    int a = 100, b = 20;
    int result = gcd(a, b);
    cout << result;  // prints 20
    return 0;
}

Explanation: gcd() accepts two numbers as input and returns their GCD. As we are using the % operator, the number of recursive calls required to reach the final result is less than n, where n = max(a, b); in fact, it grows only logarithmically in the smaller argument.
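The same recurrence carries over directly to Python. Here is a quick sketch for comparison, written iteratively to avoid recursion-depth limits; note that Python's standard library already ships this as math.gcd:

```python
import math

def gcd(a, b):
    # Euclid's recurrence: gcd(a, b) = gcd(b, a % b), with gcd(a, 0) = a
    while b:
        a, b = b, a % b
    return a

print(gcd(100, 20))  # 20
print(gcd(20, 12))   # 4
print(gcd(100, 20) == math.gcd(100, 20))  # True
```

The loop performs exactly the same sequence of reductions as the recursive C++ version above.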
https://www.educative.io/answers/how-to-find-the-greatest-common-divisor
First, you should make sure that the ssh daemon is installed properly on your RP. You can install it with

sudo apt-get update && sudo apt-get install ssh

Next, you should try to connect to your SSH server locally. That means: open up a terminal directly on your RP and try to connect to your locally running SSH server:

ssh root@localhost

Or even better, use the current IP address of the RP instead of localhost. If you can connect, everything works on this side. Now you can try to connect to your RP from another box inside your local network. It should also work. If it does not, you most probably have a firewall blocking outgoing SSH connections on this box, or your RP itself is blocking incoming connections. Usually, it should not block them, unless you configured something like that.

You probably need to use another connector. MySQL-python seems to support MySQL 3.23+. mysql-connector-python simply can't connect using old authentication.

Find your socket file using

mysqladmin variables | grep socket

And then add the socket file to your database.yml configuration:

development:
  adapter: mysql2
  host: localhost
  username: root
  password: xxxx
  database: xxxx
  socket: <socket location>

You are passing a dictionary object self.configs into mysql.connector.connect though, according to the docs, you should pass it user, password and other arguments. Looks like you need to unpack configs:

self.connection = mysql.connector.connect(**self.configs)

Hope this is it.

The IP address "127.0.0.1" that you use in your connection object refers to localhost, which is the computer the program is running on. You can't remotely connect to the localhost of another computer. You need to replace the host IP address with the IP address of the remote server. Also, you should only need the MySQLdb.connect object, not the ssh object, if you connect correctly.

I have recently had the same problem myself.
I got it working by doing the following:

Edit the MySQL configuration. By default, MySQL is not configured to accept remote connections. You can enable remote connections by modifying the configuration file:

sudo nano /etc/mysql/my.cnf

Find the [mysqld] section. The line you need to alter is bind-address, which is set to the default value of 127.0.0.1. You want to edit this line to instead show the IP of your RPi on the network (which would seem to be 192.168.1.102 from your example). Write the changes.

Restart the MySQL service:

sudo service mysql restart

Set up MySQL permissions. Connect to your MySQL instance as root:

mysql -p -u root

Create a user:

CREATE USER '<username>'@'<ip_address>' IDENTIFIED BY '<password>';

Mm, I have tried with these two simple examples:

/Scripts/test.py

#!/usr/bin/env python
print(str('hello world'))

/var/www/test.php

<?php echo shell_exec('/Scripts/test.py'); ?>

Via a browser, it shows me "hello world".

Perhaps try this:

chmod a+x your_py_file.py

Hope that helps.

I'm absolutely not sure this is the problem, but according to the specs, p. 22: the MMA7660FC is read using its internally stored register address as address pointer, the same way the stored register address is used as address pointer for a write. The pointer generally auto-increments after each data byte is read, using the same rules as for a write (Table 5). Thus, a read is initiated by first configuring the device's register address by performing a write (Figure 11) followed by a repeated start. The master can now read 'n' consecutive bytes from it, with the first data byte being read from the register addressed by the initialized register address.
As far as I understand, to "read" from a register, you have to start by writing the register address and then blindly read a byte.

It seems like a permission issue. Try disabling SELinux by using setenforce 0, then restart the mysql service. Also check whether mysql has the right permissions on the mounted device (assuming you mounted as root or a sudo user that mysql cannot access).

What I can think of right now, if you've already tried the solutions in your link, is that SELinux (Fedora's) is blocking the service. Try sudo setenforce 0 and then sudo service mysqld start. Let me know if it helped, please.

Judging from this, you should be looking at the dataReceived(self, ...) method of your Protocol subclass. Thus:

class USBclient(Protocol):
    def connectionMade(self):
        global serServ
        serServ = self
        print 'Arduino device: ', serServ, ' is connected.'

    def cmdReceived(self, cmd):
        serServ.transport.write(cmd)
        print cmd, ' - sent to Arduino.'

    def dataReceived(self, data):
        print 'USBclient.dataReceived called with:'
        print str(data)

Try this to see if it works.

You need to read up on the difference between Arduino and Raspberry Pi 'serial' pins - in short, their voltage levels are quite different and require a conversion cable between the two of them to adjust these levels.

Got it!!

pi@raspberrypi ~ $ aptitude install bluetooth
pi@raspberrypi ~ $ hcitool dev
Devices: hci0 00:07:80:54:CA:E2
pi@raspberrypi ~ $ hcitool scan
Scanning ... 00:07:80:54:CA:E2 BGWT11i
pi@raspberrypi ~ $ bluez-simple-agent hci0 00:07:80:54:CA:E2
Enter PIN Code: 1234

Now I can connect ...

pi@raspberrypi ~ $ python rfcomm-client.py 00:07:80:54:CA:E2

You can use LoopingCall (howto) to schedule a repeated function call on a certain interval. This probably replaces your polling threads entirely.

So I finally got this guy working. A couple of key things I needed to realize: 1. Even if you're using PulseAudio on your Raspberry Pi, as long as ALSA is still installed you're still able to use it.
(This might seem like a no-brainer to others, but I honestly didn't realize I could still use both of these at the same time.) Hint via (syb0rg). 2. When it comes to sending large amounts of raw audio data (.wav format in my case) to Pocketsphinx via GStreamer, (queues) are your friend. After messing around with gst-launch-0.10 on the command line for a while, I came across something that actually worked:

gst-launch-0.10 alsasrc device=hw:1 ! queue ! audioconvert ! audioresample ! queue ! vader name=vader auto-threshold=true ! pocketsphinx lm=/home/pi/dev/scarlettPi/config/speech/lm/scarlett

Your terminal application doesn't know where to locate the mysql program. For example, if you installed XAMPP in C:/xampp, you open a command prompt, cd to C:/xampp/mysql/bin and then run your command, which should work now. Edit: Just remembered you mentioned OS X; regardless, use the same principle. In your terminal, cd to the XAMPP install subfolder which contains the mysql executable app.
hth, gr~~~ You should run mysql_query($query) 1 time not in loop. Your code makes running mysql_query in every loop. Remove from: while($rows = mysql_fetch_array(mysql_query($query))) //HERE^ Do: <?php $query = "SELECT * FROM servers"; $result = mysql_query($query); while($rows = mysql_fetch_array($result)){ // do more stuff Error is not strictly connected to mysql module. Probably you are behind the proxy, try to follow the instruction how to setup npm behind the proxy here: If you are using a Glassfish-initiated JDBC connection (I think you are since you say you sucessfully pinged the datasource), then you don't want to define the properties as you have them in the persistence.xml. You'll want to just specify a data-source by JNDI name. <?xml version="1.0" encoding="UTF-8"?> <persistence version="2.0" xmlns="" xmlns:xsi="" xsi: <persistence-unit <non-jta-data-source>jdbc/myds</non-jta-data-source> <class>model.HelloWorld</class> <properties> </properties> </persistence-uni Change $db = new PDO( 'mysql:host = '.$config['db']['host'].'; dbname = '.$config['db']['dbname'], $config['db']['username'], $config['db']['password'] ); to $db = new PDO('mysql:host='.$config['db']['host'].';dbname='.$config['db']['dbname'], $config['db']['username'], $config['db']['password'] ); or $db = new PDO( "mysql:host={$config['db']['host']};dbname={$config['db']['dbname']}", $config['db']['username'], $config['db']['password'] ); Apparently MySql PDO driver doesn't like spaces in DSN. mysql:host='.$config['db']['host'].';dbname='.$config['db']['dbname'] ^^ ^^ First locate your XAMPP install C:Program Filesxampp for a full install. Open a new Command Prompt Window: Click Start Type "cmd" Hit enter In Command prompt go to the XAMPP directory do not include the C: part cd path oxampp Start MySql: mysqlinmysql.exe -u pingu -p You should be getting NPE. 
As you are executing your querie on dbCon and not on dbcon // initialize here Connection dbcon = DriverManager.getConnection( "jdbc:mysql://localhost:3306/EMPLOYEE", "root", "root"); String query ="select count(*) from EMPLOYEE4 "; // Null here Connection dbCon = null; // on dbCon which is null stmt = dbCon.prepareStatement(query); EDIT THis how your code suppose to look like. Connection dbcon = DriverManager.getConnection( "jdbc:mysql://localhost:3306/EMPLOYEE", "root", "root"); String query = "select count(*) from EMPLOYEE4 "; Statement stmt = dbcon.createStatement(); ResultSet rs = stmt.executeQuery(query); Try this, $con = mysqli_connect("localhost","root","abcd123","payrolldb001") or die("Error " . mysqli_error($con)); $sql="SELECT substationid,substationcode FROM wms_substation WHERE assemblylineid = '".$q."'"; $result = mysqli_query($con,$sql); ... Or $con= mysqli_connect("localhost","root","abcd123") or die ("could not connect to mysql"); mysqli_select_db($con,"payrolldb001") or die ("no database"); Read mysqli_select_db Please ensure the MySQL is running in default port 3306 else you need change the port number accordingly. Connection connection = DriverManager.getConnection( "jdbc:mysql://", "username", "password"); instead of you can use ip of the domain. run the following in the command line to get the ip cmd prompt> nslookup The Url syntax as follows jdbc:mysql://(host/ip):port/databasename", "username", "password" rags comment (above) wasn't the solution, but it definitely helped track it down. When using a full connection string the error message was different and much more useful, the driver failed to load. The issue seems to be that since I'm on a 64 bit machine it was the 64 bit driver. VB6 can't use the 64 bit driver and the 32 bit driver doesn't show up in the ODBC Connection Administrator on a 64 bit machine. DSN is not an option in my machine. The error indicates that "mysql is not started/running". 
From what you describe, looks like the new location pointed to the alias has no mysql or its mysql is not started. Look my.cnf file with the correct parameters in the new location: /Applications/MAMP/Library/bin/mysql Another trick is to list your running processes and look for mysql. Also, are you sure, "rails c" is not using the sqlite3 vs mysql? Updated: *Courtesy of @bfavaretto MySQL 'my.cnf' location? By default, the OS X installation does not use a my.cnf, and MySQL just uses the default values. To set up your own my.cnf, you could just create a file straight in /etc. OS X provides example configuration files at /usr/local/mysql/support-files/ Update: Take a look at this: Getting "Can't connect...th this may be a DNS cache issue. Try flushing your cache. If you are on windows/osx, look at this: I'm not sure what has to be done on Linux. (Flush on CLIENT side, by the way). I ran in to the same problem after restarting my EC2 server - I couldn't connect to mysql. When I ran: sudo /opt/bitnami/ctlscript.sh status it said that mysql was not running. I had to do sudo service mysql stop sudo /opt/bitnami/ctlscript.sh restart mysql A couple of things can block connections to your machine: a firewall on your WinXP server skip_networking in the MySQL server configuration no access rights for root@some-ip, but only for root@localhost Error messages would really help ;) And you should now, that official support for WinXP will end soon. So I would not recommend installing new software on a Win XP machine. Edit: Ho do you connect to your server? What program do you use? I think you are out of luck on this one. You can either use the ssh extension in your PHP code, or if you have access to the server, you could try to create a ssh tunnel on the command-line. You probably need special permissions to do that, though. It also looks like you don't have ssh access to this hosting account. 
duplicate answered by @jpm Setting up tunneling posted by @Ólafur Waage on Connect to a MySQL server over SSH in PHP And this one for tunneling by @Sosy shell_exec(“ssh -f -L 3307:127.0.0.1:3306 user@remote.rjmetrics.com sleep 60 >> logfile”); $db = mysqli_connect(’127.0.0.1′, ‘sqluser’, ‘sqlpassword’, ‘rjmadmin’, 3307); Well the bottom line is you have really messy code. There are many violations to good coding practice. From the code you included this line never gets executed: con = DriverManager.getConnection( url,"root", ""); So there is no connection to your db. Does your exception indicate that this line Statement st = con.createStatement(); is the problem? SqlConnection is for SQL Server. You need MySqlConnection - This is not part of the .NET Framework so you will have to download it and reference it in your project. You can then create a MySqlConnection object and connect to MySQL in your application: MySqlConnection connection = new MySqlConnection(myConnString); You will also have to use the MySqlCommand object rather than the SqlCommand object.
http://www.w3hello.com/questions/how-to-connect-mysql-with-python-an-raspberry-pi-
Zip Please

A script to help you zip playlists.

Installation

To make this work best you want to have pip () installed, although technically it is possible to install it without it. From a terminal (Terminal.app if you're on a Mac, or whatever turns you on), after installing pip, do:

sudo pip install argparse mutagen zipls

That should do it. If it doesn't, please contact me.

Usage

Users

Graphical Use

After installation there should be a program zipls that you can run. Run it. That is to say that, in general, if you run zipls without any arguments it will give you a GUI. If you run it from a command line with playlist files as arguments, you can give it the -g switch to make it still run in graphical mode. All arguments given on the command line should still apply even if run in graphical mode.

Command Line

Typically:

zipls PLAYLIST.pls

That'll generate a zip file PLAYLIST.zip with a folder PLAYLIST inside of it, with all the songs pointed to by PLAYLIST.pls. And of course:

zipls --help

works. (Did you think I was a jerk?)

Programmers

Basically all you care about is the Songs class from zipls. It takes a path, or list of paths, to a playlist and knows how to zip them:

from zipls import Songs

songs = Songs("path/to/playlist.m3u")
# __init__ just goes through add():
songs.add("path/to/another/playlist.xspf")
# lists of paths also work:
songs.add(['another.pls', 'something/else.m3u'])
songs.zip_em('path/to/zipcollection')
Similarly, if you want to add audio-format reading capabilities, subclass Song (singular) and create a _set_artist_from_EXT, where EXT is the extension of the music format you want to add. You'll also need to initialize Songs with your new song class. So if I wanted to add .spf playlists and .mus audio:

class MusSong(zipls.Song):
    def _set_artist_from_mus(self):
        # and then probably:
        from mutagen.mus import Mus
        self.artist = Mus(self.path)['artist'][0]

class SpfSongs(zipls.Songs):
    def _songs_from_spf(self, playlist):
        # add songs

songs = SpfSongs('path/to/playlist', MusSong)

Works With

Playlist formats:

- .pls
- .xspf
- .m3u

A variety of common audio formats (Ogg Vorbis, MP3/4, FLAC...). Basically everything supported by mutagen should work.

Contact and Copying

My name's Brandon, email me at quodlibetor@gmail.com, and the project home page is . Basically do whatever you want, and if you make something way better based on this, lemme <>.
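The subclassing scheme above works because Songs looks the parser method up by name, derived from the playlist's file extension. Here is a stripped-down, runnable sketch of that dispatch pattern; the class body is illustrative, not zipls's actual code, and the parser stubs only record which parser was chosen:

```python
class Songs:
    def __init__(self, path_or_paths):
        self.tracks = []
        self.add(path_or_paths)

    def add(self, path_or_paths):
        # accept a single path or a list of paths, like zipls does
        if isinstance(path_or_paths, (list, tuple)):
            for p in path_or_paths:
                self.add(p)
            return
        ext = path_or_paths.rsplit(".", 1)[-1].lower()
        # dynamic dispatch: the extension picks the _songs_from_EXT method
        parser = getattr(self, "_songs_from_" + ext, None)
        if parser is None:
            raise ValueError("don't know how to read ." + ext + " playlists")
        parser(path_or_paths)

    def _songs_from_m3u(self, playlist):
        # a real parser would open and read the file; the stub records the call
        self.tracks.append(("m3u", playlist))

class SpfSongs(Songs):
    # adding a new playlist format is just defining one more method
    def _songs_from_spf(self, playlist):
        self.tracks.append(("spf", playlist))

songs = SpfSongs(["mix.m3u", "more.spf"])
print(songs.tracks)  # [('m3u', 'mix.m3u'), ('spf', 'more.spf')]
```

Because getattr resolves through the subclass, SpfSongs handles .spf playlists while still inheriting the .m3u parser, which is the whole point of the README's extension mechanism.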
https://bitbucket.org/quodlibetor/zipls
If you have used arcpy in ArcGIS for some time, you might have noticed that not all of those operations that are accessible to you via the user interface in ArcMap are exposed as functions and methods in arcpy. It was not designed to be a complete replacement for ArcObjects or an attempt at creating a function, method, or property for every conceivable button, dialog box, menu choice, or context item in the ArcMap interface (that is what ArcObjects provides). So, there are no plans to make the whole ArcGIS platform available via arcpy which is why it is sometimes referred to as coarse-grained API into ArcGIS. For those situations when you need to have a finer control over the GIS data management and maintenance, Esri recommends using ArcObjects. It is usually related to advanced data management operations where support in arcpy is very limited such as LIDAR data management, network dataset generation, write-access to the properties of workspaces and data repositories, or metadata management. All of this can be done in ArcObjects, and hence, the name – fine-grained API into ArcGIS. Here are just a few of the things you cannot do in arcpy. - You cannot create a new network dataset with arcpy and neither do you have any geoprocessing tools for that; this can be done solely by using the New Network Dataset wizard. - You cannot create a new empty ArcMap map document with arcpy. This means if your workflows rely on generating map documents and adding layers into it, you need to pre-create an empty map document which will be used as a template. - You cannot create new bookmarks in a map document and neither can you import existing bookmarks into a map document. This can be done manually from the ArcMap user interface only. However, if you do need to automate some of those workflows, either for one-time job when you need to process some data really quickly or when building a script that will be run on a regular basis, the only option you have is to use ArcObjects. 
Learning ArcObjects can be hard due to its complexity. You would also need to learn Java or .NET (C# or VB) if you want to write ArcGIS add-ins or develop stand-alone applications. If you are not comfortable with those languages and have most of the workflows written in Python, I have good news for you. It is possible to access ArcObjects from Python. This means that you can write your Python code using arcpy and other packages and incorporate some of the ArcObjects-based operations right into your code. This comes very handy when you lack just some minor operations in arcpy and need to use ArcObjects without getting out of your existing Python code. To get started, please review this GIS.SE post: How do I access ArcObjects from Python? It has enough information to let you set up everything needed. Then follow these steps: - Install comtypes package (I recommend using pip, see How to install pip on Windows? on how to get it) - Download the snippets.py file to get examples and some helper functions. - Change the “10.2” in the snippets file to “10.3” and installation paths of ArcGIS accordingly. - You are ready to access the ArcObjects from your Python code! Look here for a sample that will create a new map document. There is no need to install the ArcObjects SDK; the only thing you need to have installed is ArcGIS Desktop. You will need to play around with ArcObjects reference to find out what assembly and what interface you need to import. Here is the place you can start exploring the object model diagrams. Here is the section I recommend reviewing, Learning ArcObjects. Skip those parts you find irrelevant for you, though, as it covers nearly all ArcGIS Desktop operations. In the API Reference part of the Help, you will find detailed information about the namespaces used in ArcObjects. Reading through this and visiting this often is an excellent way to lean ArcObjects. Here is an example of the Carto namespace. 
Even though you do not need to know C#, VB, or Java, it is still worthwhile being able to read the code, as there are tons of useful snippets and code samples available in the Help system. Those will help you find out what interface should be used, what data-type casting is needed, and more. To learn more about ArcObjects, listen to a recorded live-training seminar from Esri. Please share in the comments what kind of operations you are missing in arcpy and would need to use ArcObjects to implement.
https://tereshenkov.wordpress.com/2016/01/16/accessing-arcobjects-in-python/
#include <deal.II/lac/solver_minres.h>

Minimal residual method for symmetric matrices. For the requirements on matrices and vectors in order to work with this class, see the documentation of the Solver base class.

Like all other solver classes, this class has a local structure called AdditionalData which is used to pass additional parameters to the solver, like damping parameters or the number of temporary vectors. We use this additional structure instead of passing these values directly to the constructor because this makes the use of the SolverSelector and other classes much easier and guarantees that these will continue to work even if the number or type of the additional parameters for a certain solver changes. However, since the MinRes method does not need additional data, the respective structure is empty and does not offer any functionality. The constructor has a default argument, so you may call it without the additional parameter.

The preconditioner has to be positive definite and symmetric.

The algorithm is taken from the Master's thesis of Astrid Battermann, with some changes. The full text can be found at

The solve() function of this class uses the mechanism described in the Solver base class to determine convergence. This mechanism can also be used to observe the progress of the iteration.

Definition at line 69 of file solver_minres.h.

Constructor.

Constructor. Use an object of type GrowingVectorMemory as a default to allocate memory.

Virtual destructor.

Solve the linear system \(Ax=b\) for x.

Implementation of the computation of the norm of the residual. Interface for derived class.

This function gets the current iteration vector, the residual and the update vector in each step. It can be used for graphical output of the convergence history.

Definition at line 54 of file solver.h.
The function criterion uses this variable to compute the convergence value, which in this class is the norm of the residual vector and thus the square root of the res2 value. Definition at line 143 of file solver_minres.h. Definition at line 71 of file solver.h.
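SolverMinRes is a deal.II class, but the MINRES algorithm itself is generic. As a quick way to see the method at work outside deal.II, here is a hedged sketch using SciPy's implementation (this assumes NumPy and SciPy are installed; the matrix and right-hand side are made up for illustration and are not from the deal.II documentation):

```python
import numpy as np
from scipy.sparse.linalg import minres

# A small symmetric system Ax = b. MINRES only requires symmetry of
# the system matrix, not positive definiteness.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

# Solve with the minimal residual method; info == 0 signals convergence.
x, info = minres(A, b)
print(info)  # 0
```

In deal.II itself, the analogous call is solver.solve(A, x, b, preconditioner) on a SolverMinRes object, with convergence monitored through the SolverControl mechanism described above.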
https://dealii.org/developer/doxygen/deal.II/classSolverMinRes.html
assertEquals man page

assertEquals — an opinionated testing interface for Python

Synopsis

Description

assertEquals is an interface for running tests written with the Python standard library's unittest module.

Options

- -s
- --scripted
  Use the command-line interface. If not set, assertEquals will use the curses(3) interface.
- -f
- --find-only
  assertEquals should find TestCases but not run them. This only obtains in scripted mode, for summary reports.
- -x stopwords
- --stopwords stopwords
  stopwords is a comma-delimited list of strings that, if they appear in a module's full dotted name, will prevent that module from being included in the search for TestCases.
- -t testcase
- --testcase testcase
- --TestCase testcase
  assertEquals should only run the tests found in testcase, which is the name of a Python unittest.TestCase class within the module specified by module. Given this option, assertEquals will output a detail report for the named TestCase; without it, a summary report for all TestCases found at or below module. This option only obtains in scripted mode.

Scripted Mode

If the --testcase option is not given, assertEquals imports module, and then searches sys.modules for all modules at or below module that do not include any stopwords in their full dotted name. assertEquals collects TestCase classes that are defined in these modules, and prints a summary report to the standard output of the format (actually 80 chars wide):

-------------<| assertEquals |>-------------
<header row>
--------------------------------------------
<name> <passing> <failures> <errors> <all>
--------------------------------------------
TOTALS <passing> <failures> <errors> <all>

<name> is the full dotted name of a TestCase (this row is repeated for each TestCase). If the --find flag is set, then no tests are run, and <passing>, <failures>, and <errors> are each set to a single dash (‘-’).
Otherwise, <passing> is given as a percentage, with a terminating percent sign; the other three are given in absolute terms. There will always be at least one space between each field, and data rows will be longer than 80 characters iff the field values exceed the following character lengths:

Note that in order for your TestCases to be found, you must import their containing modules within module. assertEquals sets the PYTHONTESTING environment variable to ‘assertEquals’ so that you can avoid defining TestCases or importing testing modules in a production environment. You can also quarantine your tests in a subpackage, and give module as the dotted name of this subpackage.

If the --testcase flag is set, then only the named TestCase is run (any --find option is ignored), and assertEquals delivers a detail report. This report is the usual output of unittest.TextTestRunner, preceded by the same first banner row as for the summary report. For both summary and detail reports, assertEquals guarantees that no program output will occur after the banner row.

Interactive Mode

Interactive mode is a front end for scripted mode. There are two main screens, representing the summary and detail reports described above. Each is populated by calling assertEquals in scripted mode in a child process, and then parsing and formatting the output. There are two additional screens: one is a primitive pager showing a Python traceback, which is used both for viewing individual test failures and for error handling in both parent and child processes. The other is a primitive terminal for interacting with a Pdb session in a child process. You can send a SIGINT (<ctrl>-C) at any time to exit assertEquals.

Summary Screen

The summary screen shows the summary report as described above, but item names are indented rather than given in full. Modules are shown in gray, and un-run TestCases in white. TestCases with non-passing tests are shown in red, and those that pass in green. You may run any subset of the presented tests. The totals for the most recent test run are shown at the bottom of the screen, in green if all tests pass, red otherwise. TestCases for which there are results but that were not part of the most recent test run are shown in faded red and green.

- <ctrl>-L  Refresh the list of available TestCases without running them.
- enter  Run the selected tests and go to the detail screen if there are non-passing tests.
- left-arrow  Alias for q.
- q  Exit assertEquals.
- right-arrow  Alias for enter.
- space  Alias for enter.
- F5  Alias for enter.

Detail Screen

The detail screen shows a list of non-passing tests on the left side, and the traceback for the currently selected test on the right. Failures are displayed in red, and errors in yellow. Tests are listed in alphabetical order.

- F5  Run the tests again.
- enter  Open the traceback for the selected test in an error screen.
- left-arrow  Alias for q.
- q  Exit back to the summary screen.
- right-arrow  Alias for enter.
- space  Alias for F5.

Error Screen

The error screen provides a primitive pager for viewing tracebacks.

- left-arrow  Alias for q.
- q  Exit back to the previous screen.

Debugging Screen

The debugging screen is a primitive terminal for interacting with a Python debugger session. When a child process includes the string ‘(Pdb) ’ in its output, assertEquals enters the debugging screen. When the debugger exits, assertEquals returns to the previous screen, ignoring any report output that may have followed the debugging session. You can easily start debugging from any point in your program or tests by manually setting a breakpoint:

import pdb; pdb.set_trace()

The Python debugger's command reference is online at:

Implementation Notes

This program is known to work with the following software:

- FreeBSD 4.11
- Python 2.4.2

Examples

Run assertEquals's own tests, displaying a summary report on the standard output:

See Also

python(1) curses(3)

Version

assertEquals <trunk>

Authors

- (c) 2005-2012 Chad Whitacre <>
- This program is beerware. If you like it, buy me a beer someday.
- No warranty is expressed or implied.
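The summary and detail reports described above are generated from ordinary unittest.TestCase classes. A minimal module that a runner like assertEquals could discover might look like the following sketch (the class and test names are illustrative, not taken from the man page):

```python
import unittest

# A small TestCase that test runners can discover and run.
class TestArithmetic(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

    def test_subtraction(self):
        self.assertEqual(5 - 3, 2)

# Load and run the TestCase programmatically, as a runner would.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestArithmetic)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.testsRun, result.wasSuccessful())  # 2 True
```

A scripted-mode runner would report such a TestCase under its full dotted name with 100% in the <passing> column.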
https://www.mankier.com/1/assertEquals
Caching

- 1. What is the in-memory cache in ASP.NET Core?
- 2. What are the pros and cons of the in-memory cache in ASP.NET Core?
- 3. Implement the in-memory cache in ASP.NET Core to cache data.
  - Step 1: Create an ICacheBase interface to define methods that help manipulate the cache.
  - Step 2: Create a CacheMemoryHelper class that implements the ICacheBase interface.
  - Step 3: Use the cache in any business function that wants to cache data.
- 4. Compare the in-memory cache with no cache in ASP.NET Core
- 5. Summary

Caching is an important technique for making web pages load faster, which improves the user experience (UX). To cache data, this article introduces the in-memory cache in ASP.NET Core, which makes caching fast and easy.

1. What is the in-memory cache in ASP.NET Core?

The first time a user requests a web page, we have to connect to the database to get the data before responding to the browser. We then put this data into the cache. The in-memory cache stores data in the server's memory (RAM). Subsequent requests do not go to the database; the data is returned from the server's cache, so responses are faster.

The in-memory cache can be used with:
- .NET Standard 2.0 or later.
- Any .NET implementation that targets .NET Standard 2.0 or later, for example ASP.NET Core 2.0 or later.
- .NET Framework 4.5 or later.

2. What are the pros and cons of the in-memory cache in ASP.NET Core?

Pros
- It responds quickly because data is stored in the server's memory (RAM).
- It is highly reliable.
- It is well suited to small and mid-sized applications.

Cons
- It is difficult to scale out. It suits a single server; with many servers, the cache cannot be shared across all of them.
- It can consume all of the server's memory when traffic is heavy.

3. Implement the in-memory cache in ASP.NET Core to cache data.

To implement the in-memory cache in ASP.NET Core, Microsoft provides the IMemoryCache interface.
Okay, now we will implement IMemoryCache step by step.

Step 1: Create an ICacheBase interface to define methods that help manipulate the cache.

public interface ICacheBase
{
    T Get<T>(string key);
    void Add<T>(T o, string key);
    void Remove(string key);
}

Step 2: Create a CacheMemoryHelper class that implements the ICacheBase interface.

- First, we need to install the Microsoft.Extensions.Caching.Memory package. You can install it from the Package Manager console with the syntax below:

Install-Package Microsoft.Extensions.Caching.Memory -Version 3.1.0

Or you can install it through NuGet, where you can select the version before installing; the version has to match the ASP.NET Core version of your project.

Then we create the CacheMemoryHelper class, whose constructor receives an IMemoryCache instance through dependency injection (DI):

public class CacheMemoryHelper : ICacheBase
{
    private IMemoryCache _cache;

    public CacheMemoryHelper(IMemoryCache cache)
    {
        this._cache = cache;
    }
}

To be able to disable or enable the cache at any time, you can define a flag:

public class CacheMemoryHelper : ICacheBase
{
    private bool IsEnableCache = false;
    private IMemoryCache _cache;

    public CacheMemoryHelper(IMemoryCache cache)
    {
        this._cache = cache;
        this.IsEnableCache = AppSettings.Instance.Get<bool>("AppConfig:EnableCache");
    }
}

Notice that AppSettings.Instance.Get<bool>("AppConfig:EnableCache") reads a setting value from the appsettings.json file. If you don't know how to read a key value from appsettings.json, you can read the article below:

Read configuration value from appsettings.json in ASP.Net Core

- Second, we implement the Add() method to add data to memory:

public void Add<T>(T o, string key)
{
    if (IsEnableCache)
    {
        T cacheEntry;
        // Look for the cache key.
        if (!_cache.TryGetValue(key, out cacheEntry))
        {
            // Key not in cache, so get the data.
            cacheEntry = o;
            // Set cache options.
            var cacheEntryOptions = new MemoryCacheEntryOptions()
                // Keep in cache for this time; reset the time if accessed.
                .SetSlidingExpiration(TimeSpan.FromSeconds(7200)); // 2h

            // Save data in cache.
            _cache.Set(key, cacheEntry, cacheEntryOptions);
        }
    }
}

In that:
- SetSlidingExpiration(): sets an expiration time on the cache entry; after the expiration time, the data stored in the cache is deleted and we have to get the data from the database again.
- Set(): stores the data in the server's memory.

- Third, implement the Get() method to get data from memory and return it to the end user:

public T Get<T>(string key)
{
    return _cache.Get<T>(key);
}

- Fourth and finally, we implement the Remove() method to remove cached data by key:

public void Remove(string key)
{
    _cache.Remove(key);
}

Step 3: Use the cache in any business function that wants to cache data.

- First, because the cache stores entries as key/value pairs, I will create a utility function that generates a key for each function that wants to cache data:

using System.Security.Cryptography;
....

public class KeyCache
{
    public static string GenCacheKey(string cacheName, params object[] args)
    {
        if (args != null && args.Length > 0)
        {
            string separator = "_";
            string cacheKey = cacheName;
            return args.Aggregate(cacheKey, (current, param) =>
                current + (separator + (param.GetType() == typeof(string)
                    ? CalculateMD5Hash(param.ToString())
                    : param)));
        }
        else
            return cacheName;
    }

    private static string CalculateMD5Hash(string input)
    {
        // Step 1: calculate the MD5 hash from the input.
        // (The original listing is truncated here; it creates an MD5
        // instance, e.g. MD5 md5 = MD5.Create(), hashes the input bytes,
        // and returns the hash as a hex string.)
    }
}

- Second, as an example, suppose I want to get the details of an article.
I will create an IArticlesCached interface and an ArticlesCached class that implements it, as below:

public interface IArticlesCached
{
    Task<ArticleDetail> GetArticleDetail(int newsId);
}

public class ArticlesCached : IArticlesCached
{
    private ICacheBase _cache;
    private IArticlesBoFE _articlesBoFE;

    public ArticlesCached(ICacheBase cache, IArticlesBoFE articlesBoFE)
    {
        this._cache = cache;
        this._articlesBoFE = articlesBoFE;
    }

    public async Task<ArticleDetail> GetArticleDetail(int newsId)
    {
        try
        {
            var key = KeyCache.GenCacheKey("GetArticleDetail", newsId); // Generate the key for this function
            var articlesDetailBox = _cache.Get<ArticleDetail>(key);     // Get data by key
            if (articlesDetailBox == null) // If the data is not in the cache, get it from the database
            {
                articlesDetailBox = await _articlesBoFE.GetArticleDetail(newsId);
                _cache.Add<ArticleDetail>(articlesDetailBox, key);      // Add the data to the cache
            }
            return articlesDetailBox;
        }
        catch (Exception ex)
        {
            return null;
        }
    }
}

Look at the GetArticleDetail() method: it creates a key and tries to get the data from the cache by that key; if the data does not exist, it gets the data from the database.

- Third, we need a setting to disable or enable the cache in the appsettings.json file:

....
"AppConfig": {
    "EnableCache": "true" // true or false to enable or disable the cache
}
....

- Finally, we need to register ICacheBase and the other interfaces in the DI container:

services.AddSingleton<ICacheBase, CacheMemoryHelper>();
services.AddSingleton<IArticlesCached, ArticlesCached>();
services.AddSingleton<IArticlesBoFE, ArticlesBoFE>();
services.AddSingleton<IArticlesDalFE, ArticlesDalFE>();

Well done! Now we can get the details of an article, with caching, anywhere we want.

How do you clear the MemoryCache in ASP.NET Core? Because IMemoryCache stores data in the server's memory (RAM), you can clear it in several ways:
- Use the Remove() function to remove a cache entry by key; you can create a URL that accepts a key so you can delete any cached entry you want.
- Deploy new code.
If you use Docker, you can deploy a new image.
- Restart your server.
- Listen to the HTTP request and check the cache header; if it contains a "refresh" key, clear all of the cache.

4. Compare the in-memory cache with no cache in ASP.NET Core

Currently, the Quiz Dev blog uses the in-memory cache in ASP.NET Core 3.1 to cache data. I tested the performance of the home page with and without the cache so you can see the difference. On the home page of the Quiz Dev blog, there are about 8 boxes backed by 8 view components that have to get data from the database. Below is a short comparison of the in-memory cache versus no cache:

Notice that I only compare results from the second page load onward, not the first.

You can see that getting data from memory is very fast. It takes only 28 ms to load the page, while without the cache it takes 2680 ms.

5. Summary

In this article, I wanted to show why we use the in-memory cache in ASP.NET Core 3.1. You can apply this caching technique to ASP.NET Core from version 2.0 upward. You can download the full source code from GitHub here.

List of questions & answers

- 1. What is caching and how does it work?
- 2. How do you cache data in .NET Core? ASP.NET Core supports several ways to cache data, such as the in-memory cache, the distributed cache, and response caching middleware. We can cache by URL, by method or function name, and so on. It depends on your purpose: if you only want to cache data, you can choose the in-memory cache; if you have web APIs and want to cache their responses, you can choose the response caching middleware.
- 3. What is the in-memory cache in ASP.NET Core? With ASP.NET Core, it is possible to cache data within the application. This is known as in-memory caching. The application stores the data in the server instance's memory, which drastically improves the application's performance. This is probably the easiest way to implement caching in your application.
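The SetSlidingExpiration behavior used in the article is not specific to C#: every read pushes an entry's expiry forward, so only entries that sit idle for the full window are evicted. Below is a minimal language-agnostic sketch of that idea in Python (the SlidingCache class and its API are invented for illustration; they are not part of ASP.NET Core):

```python
import time

# Sketch of the sliding-expiration idea behind SetSlidingExpiration:
# a read resets the entry's expiry window, so only idle entries expire.
class SlidingCache:
    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock            # injectable clock, handy for testing
        self._store = {}              # key -> (value, expires_at)

    def add(self, key, value):
        self._store[key] = (value, self.clock() + self.ttl)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        now = self.clock()
        if now >= expires_at:         # idle too long: evict the entry
            del self._store[key]
            return None
        # Sliding part: reading the entry resets its expiry window.
        self._store[key] = (value, now + self.ttl)
        return value

    def remove(self, key):
        self._store.pop(key, None)

# Demonstrate with a fake clock so no real waiting is needed.
now = [0.0]
cache = SlidingCache(ttl_seconds=10, clock=lambda: now[0])
cache.add("article:1", "cached detail")
now[0] = 8
print(cache.get("article:1"))   # cached detail (window slides to t=18)
now[0] = 16
print(cache.get("article:1"))   # cached detail (still inside slid window)
now[0] = 40
print(cache.get("article:1"))   # None (entry expired and was evicted)
```

Injecting the clock keeps the sliding behavior easy to verify without sleeping, which is the same reason ASP.NET Core's cache options are built around TimeSpan values rather than wall-clock reads scattered through the code.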
https://quizdeveloper.com/tips/caching-data-by-using-in-memory-cache-in-aspdotnet-core-3dot1-aid97
The standard Java package java.util comes with a group of useful data structure classes (e.g. the Stack, LinkedList, HashSet, and TreeSet classes). Unfortunately, a Queue class is not implemented within this standard package. In this article we discuss three approaches to implementing a Queue class.

Stack vs. Queue

A stack is a data structure where you can only access the item at the top. With a computer stack, just like a stack of dishes, you add items to the top and remove them from the top. This behavior is known as LIFO (Last In, First Out). In contrast, a queue is a data structure in which elements are removed in the same order they were entered. This is often referred to as FIFO (First In, First Out).

The two basic operations of the stack are:
- Push: pushes a new element onto the top of the stack.
- Pop: removes the element at the top of the stack and returns that element. This operation may throw an exception when the stack is empty.

a. A stack with 4 elements  b. The stack after pop operation
Figure 1: Pop operation for Stack class

Figure 1.a shows a stack with four elements; the brown element is the top of the stack. Figure 1.b shows the same stack after the execution of the pop operation: the brown element has been removed and the green element became the top of the stack.

There are also two basic queue operations:
- Enqueue: inserts a new element at the rear of the queue.
- Dequeue: removes the element at the front of the queue and returns that element. This operation may throw an exception when the queue is empty.

The EmptyQueueException

This exception is thrown by Queue class methods to indicate an empty queue, and it will be used with the three approaches discussed below.

public class EmptyQueueException extends RuntimeException {
    public EmptyQueueException() {
    }
}

First Approach

The first approach is similar to the one used for implementing the java.util.Stack class.
In this implementation the Queue1 class extends the java.util.Vector class with the operations that allow a vector to be treated as a queue.

public class Queue1 extends Vector {
    public Object enqueue(Object element) {
        addElement(element);
        return element;
    }

    public Object dequeue() {
        int len = size();
        if (len == 0)
            throw new EmptyQueueException();
        Object obj = elementAt(0);
        removeElementAt(0);
        return obj;
    }
}

The enqueue method uses the inherited addElement method to add an object to the end of the vector. The dequeue method uses the inherited removeElementAt to remove the first stored object from the vector. In fact, removing the head of the vector requires an additional task handled by the removeElementAt method: the remaining vector elements must be shifted up one position (see figure 2). It should be noted that this implementation suffers from a performance problem, especially with a large number of queue elements. Figure 2 shows that each time the dequeue method is called, the first element of the Vector is removed, causing the remaining Vector elements to be shifted up one step. However, this is not the case with the Stack class, where each call to the pop method removes the last element of the Vector, so there is no need to shift the remaining elements (see figure 1). In other words, although this approach is efficient for implementing the Stack class, it is not efficient for implementing the Queue class.

a. A queue with 4 elements  b. After dequeue
Figure 2: Dequeue operation for first implementation (the remaining three elements had been shifted up one position).
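To see the FIFO behavior of the first approach in action, here is a small self-contained demo combining the two classes described above (the QueueDemo class name is arbitrary; the queue code follows the article's listing):

```java
import java.util.Vector;

// Thrown by dequeue() on an empty queue, as in the article.
class EmptyQueueException extends RuntimeException {
    public EmptyQueueException() {
    }
}

// The Vector-based queue of the first approach.
class Queue1 extends Vector<Object> {
    public Object enqueue(Object element) {
        addElement(element);
        return element;
    }

    public Object dequeue() {
        if (size() == 0)
            throw new EmptyQueueException();
        Object obj = elementAt(0);
        removeElementAt(0);   // shifts the remaining elements up one slot
        return obj;
    }
}

public class QueueDemo {
    public static void main(String[] args) {
        Queue1 q = new Queue1();
        q.enqueue("first");
        q.enqueue("second");
        q.enqueue("third");
        // Elements come out in the order they went in (FIFO).
        System.out.println(q.dequeue()); // first
        System.out.println(q.dequeue()); // second
        System.out.println(q.dequeue()); // third
    }
}
```

Each dequeue call here triggers the element shift discussed above, which is exactly the cost the later approaches in the article try to avoid.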
http://mobile.developer.com/java/ent/article.php/3296821/Queue-A-Missed-javautil-Class.htm
Hi, I have been using GetNext() and GetDown() to perform a depth-first visit of the scene. That works fine. I now wish to perform some operations at a given hierarchy level. Is there some GetNodeListsAtCurrentLevel() type of method? I am using the Python API with R21. Cheers

Hi @nicholas_yue, there is nothing built in, but the most optimized way, I think, will be:

import c4d

def GetNodeListsAtCurrentLevel(bl2D):
    parent = bl2D.GetUp()
    if parent is None:
        parent = bl2D.GetListHead()
    return parent.GetChildren()

def main():
    print GetNodeListsAtCurrentLevel(doc.GetActiveTag())

# Execute main()
if __name__ == '__main__':
    main()

Cheers, Maxime.
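Outside Cinema 4D, the same idea (ask a node's parent for its children, falling back to the tree root for top-level nodes) can be sketched with a plain Python tree. The Node class below is an invented stand-in for c4d.BaseList2D, not part of the c4d API:

```python
# Generic sketch of "get all nodes at the current level": ask the parent
# for its children; a node with no parent falls back to the root's children.
class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.parent = None
        self.children = children or []
        for c in self.children:
            c.parent = self

def nodes_at_level(node, root):
    parent = node.parent
    siblings = parent.children if parent is not None else root.children
    return [n.name for n in siblings]

root = Node("root", [Node("a", [Node("a1"), Node("a2")]), Node("b")])
print(nodes_at_level(root.children[0].children[0], root))  # ['a1', 'a2']
print(nodes_at_level(root.children[1], root))              # ['a', 'b']
```

This mirrors Maxime's answer: GetUp() plays the role of the parent pointer, and GetListHead() is the fallback for nodes at the top level.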
https://plugincafe.maxon.net/topic/12707/getting-a-list-of-objects-at-a-given-hierarchy
This is one of the 100 recipes of the IPython Cookbook, the definitive guide to high-performance scientific computing and data science in Python.

First, let's import a couple of functions:

from IPython.core.magic import (register_line_magic, register_cell_magic)

To create a new line magic, we create a function that accepts the contents of the line (what follows the %-prefixed magic command). The name of this function is the name of the magic. Then, let's decorate this function with @register_line_magic. We're done!

@register_line_magic
def hello(line):
    if line == 'french':
        print("Salut tout le monde!")
    else:
        print("Hello world!")

%hello
%hello french

Now, let's create a cell magic %%csv that parses a CSV string and returns a Pandas DataFrame object. This time, the function takes as arguments the first line (what follows %%csv) and the contents of the cell (everything in the cell except the first line).

import pandas as pd
#from StringIO import StringIO  # Python 2
from io import StringIO  # Python 3

@register_cell_magic
def csv(line, cell):
    # Parse the cell contents as CSV text and return a DataFrame.
    sio = StringIO(cell)
    return pd.read_csv(sio)

Now, let's create a Python extension module (csvmagic.py here) that implements the magic.

%%writefile csvmagic.py
import pandas as pd
#from StringIO import StringIO  # Python 2
from io import StringIO  # Python 3

def csv(line, cell):
    sio = StringIO(cell)
    return pd.read_csv(sio)

def load_ipython_extension(ipython):
    # This function is called when the extension is loaded;
    # it registers our cell magic with the shell.
    ipython.register_magic_function(csv, 'cell')

The %load_ext magic command takes the name of a Python module and imports it, immediately calling load_ipython_extension. Here, loading this extension automatically registers our magic function %%csv. The Python module needs to be importable. Here, it is in the current directory. In other situations, it has to be on the Python path. It can also be stored in ~/.ipython/extensions, which is automatically put on the Python path.

%load_ext csvmagic

%%csv
col1,col2,col3
0,1,2
3,4,5
7,8,9

Finally, to ensure that this magic is automatically defined in our IPython profile, we can instruct IPython to load this extension at startup. To do this, let's open the file ~/.ipython/profile_default/ipython_config.py and put 'csvmagic' in the c.InteractiveShellApp.extensions list. The csvmagic module needs to be importable. It is common to create a Python package implementing an IPython extension, which itself defines custom magic commands.
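The core of the %%csv magic, turning the raw cell text into a DataFrame, can be exercised outside IPython as an ordinary function (this sketch assumes pandas is installed; the csv_to_df helper name is ours, not from the recipe):

```python
from io import StringIO
import pandas as pd

def csv_to_df(cell):
    # Same parsing step the %%csv magic performs on the cell body.
    return pd.read_csv(StringIO(cell))

df = csv_to_df("col1,col2,col3\n0,1,2\n3,4,5\n7,8,9\n")
print(df.shape)  # (3, 3)
```

Keeping the parsing logic in a plain function like this is also why the extension-module version of the magic is easy to write: the magic is just a thin registration wrapper around it.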
You'll find all the explanations, figures, references, and much more in the book (to be released later this summer). IPython Cookbook, by Cyrille Rossant, Packt Publishing, 2014 (500 pages).
http://nbviewer.jupyter.org/github/ipython-books/cookbook-code/blob/master/notebooks/chapter01_basic/04_magic.ipynb
User talk:Mcfletch

From OLPC Volunteers portal

Howdy, Mike. I've been following your blog for a while, but don't think I've introduced myself yet - I'm Mel, an intern at the 1cc office primarily working on Content (but slowly edging towards working on the "wait... volunteers don't have a way to get involved!" problem). Thought you might enjoy some discussions we (primarily Mitchell, Nikki, Ronny, and myself) have been having on the "need better ways to get people to Participate" front - in particular, OLPC volunteers portal and User:Mchua/Volunteers portal, though there's also been discussion on emulation, Jams, and a bit of Summer of Content (as in, having interns tackle the OLPC-volunteer interface as a SoCon project). Would love to talk with you about what you're doing with developer relations, and bounce off some ideas. I'm usually on #olpc and #olpc-content as mchua. Mchua 05:35, 14 August 2007 (EDT)

From convo today

Some places to point interest towards (folks can ping me on all of these atm):
- Jams - if people want to run one at their school/location
- Grassroots groups & country groups - like OLPC Austria, OLPC Philippines, etc. if people want to start a local group, and there's at least 10 of them incl. at least one person who wants to lead
- University program - also covers things like classes, community service projects... see also Classrooms for Free Culture
- Participate - to get refactored this weekend with the contacts you asked for.

Also, Projects and Content projects need an overhaul... I'll see what I can do. Mchua 16:11, 17 August 2007 (EDT)

Emulation overhaul

Moved to Talk:Emulating_the_XO#Emulation_overhaul.

documentation interest

There are some folks who want to dive into documentation; I'm pointing them at you and Todd K. among others for tips and advice. Sj talk 12:23, 24 September 2007 (EDT)

Talk in Taiwan nov 2-4?
Mcfletch, there's a conference on open software for laptops going on in Taipei in a couple of weeks; they want someone from OLPC to talk. Would you be interested? They can cover expenses, and want someone who really knows the software team and development; they'll have the sw lead from Asus's eee project there along with a bunch of their devs hoping to build open source community around their platform. Sj talk 18:23, 16 October 2007 (EDT)

Measure Screenshots

There is an updated screenshot of Measure Activity on Measure. Also I've uploaded more screenshots, though have yet to put them on the main Measure page. --Arjs 13:21, 20 October 2007 (EDT)

Sidebar editing

The core MediaWiki strings are in the MediaWiki: namespace. You can find the one you're looking for by searching through Special:Allmessages -- in this case, MediaWiki:Sidebar. As an admin, you should be able to change this yourself. --Sj talk to me 14:07, 20 December 2007 (EST)

Developers manual naming

The developers manual subpages are looking good. Many people have been linking to them. I moved developers to developers manual and updated the dev section of the participate page to include all the core elements of 'how to get involved as a dev' that the current developers page was serving. I've also been thinking about naming schemes and ways to cut down on subpages / encourage cross linking. I reckon each page should get its own name, not as a subpage of the rest of the manual -- aside from the introductory pages and TOC, the manual is just one more view of the most useful info for devs on the wiki. Moving everything to a subpage can be confusing unless it will only ever be used in the context of a certain book or sequence. You demonstrate the point by adding the nice subviews by right-hand navbox for emulation, sugar hacking, &c. (NB: Even the HIG, which was largely new material when Eben was writing it, would be better served by not using subpages...
we've left it in its current form partly because it is so focused on unified layout across its pages and isn't undergoing active editing). Cheers, --Sj talk to me 00:15, 25 December 2007 (EST)

Hacking your XO

I've seen a forum for XO hacks; and was just talking to christoph about how to hack your sugar colors, update an image on the fly, and run the new airplane mode... the sorts of things that road-warriors will want to know before they know about the latest activities or security features. We should probably poke that forum to see if they can maintain these pages; and perhaps this calls for a Hack your XO page here? 18.85.18.90 18:55, 22 December 2007 (EST)

Smaller version of sleeping.svg

Hi Mike... I noticed the sleeping.svg file in your OLPCGames distribution is larger than it really needs to be. Here is a smaller version that looks about the same: Image:Sleeping.svg (direct link). —Joe 15:51, 25 December 2007 (EST)

Art wanted

SJ has added some things to Art Wanted that would be nice to have done. If you have the time, please check it out and see if there's anything that interests you. --Nikki 12:34, 9 February 2008 (EST)

design gang proposal

I've posted a proposal for image upload file structure and categorization. OLPC:Design_gang/proposed_file_structure AuntiMame 14:45, 14 September 2008 (UTC)
http://wiki.laptop.org/go/User_talk:Mcfletch
Jukka,

Attached is a patch (I'm not sure how you typically like to receive patches) for the automatic registration of namespaces on import of XML or CND nodetype files (JCR-349). A little info about the changes: I didn't want the readers to have knowledge of workspaces or registries, so I basically added additional methods to the NodeTypeManagerImpl that take a registry in which the namespaces should be registered. The code will ignore a namespace if it has already been registered (I can change it to throw an exception if the registered URI is different than what's in the file). I've also included a test for each type of file. The tests will fail if run a second time against the same repository, but not because of the namespace registration; rather, because of the nodetypes.

Let me know what you think and if you'd like patches in a different format.

David

"Jukka Zitting" <jukka.zitting@gmail.com> 04/18/2006 05:06 PM
Please respond to dev@jackrabbit.apache.org
To dev@jackrabbit.apache.org
cc
Subject Re: Can the NodeTypeReader classes be changed to automatically register the namespaces?
http://mail-archives.apache.org/mod_mbox/jackrabbit-dev/200604.mbox/raw/%3COFE2AB9C68.BD1E92E8-ON85257156.00554CCB-85257156.0056A6C0@us.ibm.com%3E/1/2
Latest additions and updates.

Welcome to the FAQ for the CodeProject Visual C++ forum. This FAQ is a compilation of the most-often asked questions in the forum, and covers several C++ programming subjects. It is not a full-fledged C++ or Windows programming FAQ (there are plenty of those already), but rather it's meant to cover the topics that CodeProject readers ask about the most. If you think of any questions that you feel should be covered in this FAQ, email me with the question and the answer.

NOTE: Please do not email me directly to ask individual questions. I can't be everyone's personal consultant. Also, don't post programming questions in the message area of this article. Use the CodeProject forums to ask your questions; that's what they're there for!

Thanks to Tim Deveaux, Anders Molin, and Christian Graus for their contributions to this FAQ, along with all the folks who have posted suggestions in the comments area and via email!

1.1: What's the best way to ask a question about code, so that I get a good answer? (top)

First off, don't just say, "My program doesn't work. What's wrong?" as that will never get an answer. At the very minimum, explain what you want to do, what is going wrong, what compiler or linker errors you are getting, and post the code that isn't working right. That last phrase bears repeating: Post the code that isn't working right. This is usually the most helpful thing you can do for the forum readers. When you post the code, the readers will often be able to tell you exactly what needs to be changed. If you just post with "why doesn't my program work?", the readers then have to play 20 questions until they finally know enough to answer you. Providing all that info up front will make everyone happy and get you an answer sooner.

When you include code in your post, enclose the code in a <pre>... </pre> block so that your indentation is preserved.
If you don't do this, all groups of spaces are reduced down to one space per HTML rules, and the result is impossible to read. Here's an example of a good post:

I'm working on a simple dialog-based app that should show the drives on the computer and each volume label. I have a list control with two columns, the first column for the drive letter and the second for the volume label. But I can't get anything to appear in the second column. Anyone have a clue what I'm doing wrong? Here's how I'm trying to add items to the control:

m_DriveList.InsertItem ( 1, szDriveLetter );
m_DriveList.InsertItem ( 2, szVolumeLabel );

And so on, continuing on with items 3, 4, etc. Nothing ever shows up in the second column.

It is generally OK to ask questions about a homework assignment, but you must at least try on your own first. Don't just post the assignment description; that looks like you want the forum readers to do the assignment for you. Start the program yourself, and when you get stuck, ask a specific question and, again, include the code that isn't working.

1.2: Why don't my #include lines or template parameters show up right in the forum? (top)

The forum allows you to use HTML tags in posts, for example <b> to make text bold. When you write:

#include <iostream>

the "<iostream>" part looks like an HTML tag, so it doesn't appear in the post. You must use the HTML codes for "<" and ">", which are "&lt;" and "&gt;" respectively. So the above include line should be entered as:

#include &lt;iostream&gt;

Note that you have to do this for if, for, and while expressions as well, such as:

for ( i = 0; i < max; i++ ) { ... }

If you write "i < max" the < symbol will cause the same problem, so enter it as "i &lt; max". And by the way, if you want an ampersand (&) in your post, you must write it as "&amp;". Alternatively, you can turn off HTML parsing altogether in your post. Beneath the edit box where you type in your post, there is a check box labeled Display this message as-is (no HTML).
Check that box before submitting your post, and the text will be displayed exactly as typed. But remember that no HTML features will work (such as bold or italic text) when you use this method.

2.1: I'm trying to use a standard C++ library class (like cout, cin, or string) but the compiler gives an undeclared identifier error (C2065) on those names. Why? (top)

The STL (standard template library) classes are separated into the std namespace. When referring to classes and functions in that namespace, you must preface them with std:: For example:

std::string str = "Salut, tout le monde!";
std::cout << str.c_str() << std::endl;

Alternatively, you can put this line at the top of your .CPP file:

using namespace std;

This makes the compiler treat everything in the std namespace as if it weren't in a namespace, which means you don't have to type std:: everywhere. If you are using a book that pre-dates the STL and namespaces, you'll see library classes like cin and cout written without any prefix. The forerunner of STL, the iostream library, contained classes with those names, but since namespaces hadn't been introduced to the language, they were accessible like any other global object. Visual C++ 6 includes the iostream library, so you can use it if necessary, although it's definitely better to go with the STL nowadays.

2.1a: How do I know if my code is using the iostream library or STL? (top)

The iostream library header files have the regular .H extension, whereas STL header files have no extension.

#include <iostream.h>  // old iostream library
#include <iostream>    // STL

Projects generated by Visual C++ AppWizards use STL unless you manually change the #include lines to reference the iostream headers.

2.2: I'm trying to call a Windows API, but the compiler gives an undeclared identifier error (C2065). Why? (top)

The most likely reason is that the header files you are using to build are out-of-date and do not include the features you're trying to use.
The headers that came with Visual C++ 6 are extremely old, and if you are still using them, you will run into this problem often. You can get updated header files by downloading the Platform SDK from Microsoft. Microsoft recently created an online installer, much like Windows Update, called SDK Update. If you do not want to use this page (it requires you to install an ActiveX control), you can download the SDK CAB files and install the SDK from that local copy. You can also order the SDK on CD from the SDK Update site. If you have downloaded the latest header files and are still getting compiler errors, read on.

The Windows header files can be used to build programs for any version of Windows starting with Windows 95 and NT 3.51. Since not all APIs are present in all versions of Windows, there is a system that prevents you from using APIs that aren't available in the Windows version you are targeting. This system uses preprocessor symbols to selectively include API prototypes. The symbols are:

WINVER: Windows version (applies to 9x/Me and NT)
_WIN32_WINDOWS: Windows 9x/Me version
_WIN32_WINNT: Windows NT version
_WIN32_IE: Common controls version

By default, you can only use functions in Windows 95, NT 3.51, and pre-IE3 common controls. To use APIs introduced in later versions of Windows or the common controls, you need to #define the above symbols correctly before including any Windows headers. As of this writing, here is the current list of values you can use for the above symbols.

2.3: I'm trying to call a Windows API, but the linker gives an unresolved external error (LNK2001) on the API name. Why? (top)

When you call a function whose code is not in your program itself, such as any Windows API, you need to tell the linker where the function is so it can store information about the function in your EXE. This is done with an import library. An import library is a LIB file that contains the list of functions exported from its corresponding DLL.
For example, kernel32.lib contains the exports for kernel32.dll. When Windows loads your EXE, it reads this information, loads the correct DLL, and resolves the function calls. The VC AppWizard creates projects with the most commonly-used LIBs (such as kernel32.lib, user32.lib, etc.) already listed in the linker options, but if you call APIs in other DLLs, you'll need to add the corresponding LIB files.

Let's take for example the API PathAddBackslash(). When you get an unresolved external error on this API, you need to find out which LIB file its definition is contained in. Read the MSDN page on PathAddBackslash(), and at the bottom you'll see: "Import Library: Shlwapi.lib". That tells you that you must add shlwapi.lib to your linker options. To add import libraries to the linker options, click Project->Settings and go to the Link tab. Set the Category combo box to General, then add the LIB filenames in the Object/library modules edit box.

2.4: Why do I get an unresolved external error (LNK2001) on main() when I make a release build of my ATL project? (top)

Release builds of ATL projects contain an optimization whereby the project does not link with the C runtime library (CRT) in order to reduce the size of your binary. If you use any functions from the CRT (for example, string manipulation functions) or classes from the C++ library, you need to link with the CRT. In your project options, go to the C/C++ tab and select the Preprocessor category. Remove the symbol _ATL_MIN_CRT from the preprocessor defines, which will turn off this optimization. Search MSDN for "lnk2001 atl" and see KB article Q166480 (question #4) for more details.

2.5: I added some source files I got from someone else into my project and the compiler gives the error "C1010: unexpected end of file while looking for precompiled header directive." Why? (top)

By default, Visual C++ projects use precompiled headers.
This is a system whereby the large Windows headers are compiled only once when you build stdafx.cpp. Every other .CPP file in your project needs to #include "stdafx.h" as the first #include in order to build. The compiler specifically looks for the name "stdafx.h" to know when to insert the precompiled header information. If you received the source for an entire program from someone else, and you want to build it as a Visual C++ project, you can turn off precompiled headers instead. In your project options, go to the C/C++ tab and select the Precompiled headers category. Click the Not using precompiled headers radio button, then click OK.

2.5a: Thanks. Now, what's a precompiled header? (top)

After the headers included by stdafx.cpp are processed, the preprocessor saves a snapshot of its internal state, which includes all the function prototypes and #defines in the Windows headers, along with anything else you added to stdafx.h. This snapshot is saved in a .PCH file. Since processing all those headers takes a long time, when the compiler compiles the other .CPP files, it reloads the preprocessor state from the .PCH file instead of processing all the headers again. The preprocessor looks for an #include "stdafx.h" line or a #pragma hdrstop directive to tell when it should read the .PCH file. If neither of those lines is present, you'll get error C1010.

2.6: Where is the header file atlres.h (or atlapp.h)? Where can I download WTL? (top)

WTL is an extension to ATL that provides many features previously only found in MFC, such as frame windows, UI updating, common control wrappers, and so on. atlres.h and atlapp.h are two WTL header files, and usually the first ones you'll get errors on if you don't have WTL installed. Microsoft has two versions available for download, WTL 3.1 and WTL 7. Version 7 was released in April 2002 and includes lots of XP support. Version 3.1 is still functional, but it does not run out-of-the-box in VC 7. See Also: WTL section on CodeProject.
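Going back to 2.5 and 2.5a, the whole precompiled-header arrangement boils down to three kinds of files. This is only a sketch of the idea (the third file name is made up for illustration; /Yc and /Yu are the VC compiler switches behind the project settings described above):

```cpp
// stdafx.h - the one header every .CPP in the project includes first.
// Put large, rarely-changing headers here so they are processed only once.
#include <windows.h>
// ... other stable headers (MFC, STL, ...) ...

// stdafx.cpp - the only file built with /Yc ("create PCH").
// Compiling it writes the .PCH snapshot of the preprocessor state.
#include "stdafx.h"

// anyother.cpp (illustrative name) - built with /Yu ("use PCH").
// The compiler skips everything up to and including this line and
// reloads the .PCH snapshot instead; omit the line and you get C1010.
#include "stdafx.h"
// ... your code ...
```

The only rule that matters day to day: "stdafx.h" must be the first include in every .CPP file of the project.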
2.7: Why do I get an unresolved external (LNK2001) error on _beginthreadex and _endthreadex? (top)

This happens when you compile a project that uses MFC, but your compiler settings are set to use the single-threaded version of the C runtime library (CRT). Since MFC uses threads, it requires the multithreaded CRT. Since the single-threaded CRT doesn't contain _beginthreadex() and _endthreadex(), you get a linker error on those two functions. To change your CRT setting, click Project->Settings and go to the C/C++ tab. Set the Category combo box to Code Generation. In the Use run-time library combo box, choose one of the multithreaded versions of the CRT. For debug builds, choose Debug Multithreaded or Debug Multithreaded DLL. For release builds, choose Multithreaded or Multithreaded DLL. The versions that say "DLL" use MSVCRT.DLL, while the others do not depend on that DLL.

2.8: Why do I get an unresolved external on nafxcw.lib or uafxcw.lib? (top)

The files nafx*.lib and uafx*.lib are the static LIB versions of MFC. The files beginning with "n" are the ANSI version, and the files beginning with "u" are the Unicode version. By default, only the ANSI files are installed on your hard drive. If the linker cannot find these LIB files, copy them from your Visual C++ CD to your <VCdir>\vc98\mfc\lib directory. If those files are not on your CD, then you have an edition of Visual C++ that does not support static linking to MFC. You will need to change your project settings to use the DLL version of MFC. Click Project->Settings and go to the General tab. In the Microsoft Foundation Classes combo box, select Use MFC in a Shared DLL.

3.1: What does a failed debug assert mean? (top)

Simply put, it means you have a bug in your code. Asserts check for some condition that must always be true, and if the condition is ever false, it indicates a bug in the calling code (yours).
A full description of asserts is beyond the scope of this FAQ, but when a library such as MFC gives an assert failure message, it means the library detected that you are incorrectly using it or calling one of its functions with bad parameters. For example, this MFC code will assert:

BOOL CYourDlg::OnInitDialog()
{
    CListCtrl wndList;
    wndList.InsertColumn ( 0, "abcdef" );
}

CListCtrl::InsertColumn() contains this check:

ASSERT(::IsWindow(m_hWnd));

which fails because the wndList object wasn't attached to a real list view control. Asserts that aren't trivial like the above example will (usually) have comments that can help you understand what the assert was checking for.

4.1: How can I save and load JPGs, PNGs, or other graphics formats? (top)

Use GDI+ or a third-party library like paintlib, ImageMagick, or ImageLibrary. See Also: Christian Graus's GDI+ articles in the .NET section; Peter Hendrix's article "Simple class for drawing pictures."

4.2: How do I change the background color of a dialog, or draw a picture in the background of my window? (top)

Handle the WM_ERASEBKGND message. For dialogs, do not handle WM_PAINT; WM_ERASEBKGND is the message designed for just this purpose. Search MSDN for "WM_ERASEBKGND" and you'll find several pages on this topic.

4.3: I have a dialog that does some lengthy processing, and I need to have a Cancel button so the user can abort the processing. How do I get the Cancel button to work? (top)

First off, the reason the UI doesn't respond to the mouse is that your program is single-threaded, which means it is not pumping messages in the dialog's message queue while the thread is busy doing the processing. You have two choices, either move the processing to a worker thread, or keep the dialog single-threaded and periodically poll your message queue during your processing. Multithreading is beyond the scope of this FAQ, but see the Threads, Processes & Inter-Process Communication section of CodeProject for more info.
As to the second solution, here is MFC code that will pump any messages waiting in your message queue:

void ProcessMessages()
{
    CWinApp* pApp = AfxGetApp();
    MSG msg;

    while ( PeekMessage ( &msg, NULL, 0, 0, PM_NOREMOVE ))
        pApp->PumpMessage();
}

Call ProcessMessages() periodically in the code that does the lengthy processing. See Also: Several articles on threads.

4.4: How do I change the cursor when it's in my window? (top)

Handle the WM_SETCURSOR message, and call the SetCursor() function to change the cursor. Note that your window receives this message every time the mouse is moved, so be sure your WM_SETCURSOR handler executes quickly. (That is, don't do slow operations like file access.)

4.5: How do I show or hide a window? (top)

To show a window:

// MFC:
wndYourWindow.ShowWindow ( SW_SHOW );
// Win32 API:
ShowWindow ( hwndYourWindow, SW_SHOW );

To hide it:

// MFC:
wndYourWindow.ShowWindow ( SW_HIDE );
// Win32 API:
ShowWindow ( hwndYourWindow, SW_HIDE );

There are many more flags that control minimizing and maximizing windows, among other things. See the ShowWindow() documentation in MSDN for more details.

4.6: How do I enable or disable a dialog control (button, edit box, etc.)? (top)

To disable a control:

// MFC:
wndYourControl.EnableWindow ( FALSE );
// Win32 API:
EnableWindow ( hwndYourControl, FALSE );

To enable it:

// MFC:
wndYourControl.EnableWindow ( TRUE );
// Win32 API:
EnableWindow ( hwndYourControl, TRUE );

4.7: How do I keep a window on top of all other windows?
(top)

To make your window topmost:

// MFC:
wndYourWindow.SetWindowPos ( &wndTopMost, 0, 0, 0, 0, SWP_NOMOVE|SWP_NOSIZE );
// Win32 API:
SetWindowPos ( hwndYourWindow, HWND_TOPMOST, 0, 0, 0, 0, SWP_NOMOVE|SWP_NOSIZE );

To revert back to normal:

// MFC:
wndYourWindow.SetWindowPos ( &wndNoTopMost, 0, 0, 0, 0, SWP_NOMOVE|SWP_NOSIZE );
// Win32 API:
SetWindowPos ( hwndYourWindow, HWND_NOTOPMOST, 0, 0, 0, 0, SWP_NOMOVE|SWP_NOSIZE );

4.8: How do I highlight an entire row of a list view control in report mode? (top)

The full-row-select feature is enabled by setting an extended style of the list control.

// MFC:
wndYourList.SetExtendedStyle ( LVS_EX_FULLROWSELECT );
// Win32 API:
ListView_SetExtendedListViewStyle ( hwndYourList, LVS_EX_FULLROWSELECT );

4.9: How do I change the background color of a static control? (top)

The method for doing this is different depending on whether you're using MFC or the Win32 APIs. In Win32, you handle the WM_CTLCOLORSTATIC message, whereas in MFC you handle WM_CTLCOLOR. In your handler, verify that the message is for the static control in question, and then return a brush of the color you want.

// MFC:
HBRUSH CYourDlg::OnCtlColor(CDC* pDC, CWnd* pWnd, UINT nCtlColor)
{
    HBRUSH hbr = CDialog::OnCtlColor(pDC, pWnd, nCtlColor);

    if ( pWnd->GetSafeHwnd() == GetDlgItem(IDC_LABEL1)->GetSafeHwnd() &&
         CTLCOLOR_STATIC == nCtlColor )
    {
        // m_bkbrush is a CBrush member variable; create it only once,
        // since this handler runs on every repaint.
        if ( NULL == m_bkbrush.GetSafeHandle() )
            m_bkbrush.CreateSolidBrush ( RGB(255,0,0) );

        pDC->SetBkMode ( TRANSPARENT );
        return m_bkbrush;
    }

    return hbr;
}

Note the call to CDC::SetBkMode() which makes the control's text draw transparently. Omitting this call will make the text background appear gray, although you can change that color as well by calling CDC::SetBkColor(). In the Win32 version, you handle WM_CTLCOLORSTATIC instead of WM_CTLCOLOR.
// Win32 API:
LRESULT CALLBACK YourDlgProc(HWND hDlg, UINT message, WPARAM wParam, LPARAM lParam)
{
    static HBRUSH hbrBkcolor;

    switch (message)
    {
        case WM_INITDIALOG:
            hbrBkcolor = CreateSolidBrush ( RGB(255,0,0) );
            return TRUE;
        break;

        case WM_CTLCOLORSTATIC:
        {
            HDC hdc = (HDC) wParam;
            HWND hwndStatic = (HWND) lParam;

            if ( hwndStatic == GetDlgItem ( hDlg, IDC_LABEL1 ))
            {
                SetBkMode ( hdc, TRANSPARENT );
                return (LRESULT) hbrBkcolor;
            }
        }
        break;

        // ...
    }

    return FALSE;
}

4.10: How do I programmatically select an item in a list view control? (top)

Setting the selection in a list control is done by changing the state of the item you want to select. Set an item's LVIS_SELECTED state to select it, or clear that state to unselect it. Here is how to select an item whose index is nItemToSelect:

// MFC:
wndYourList.SetItemState ( nItemToSelect, LVIS_SELECTED, LVIS_SELECTED );
// Win32 API:
ListView_SetItemState ( hwndYourList, nItemToSelect, LVIS_SELECTED, LVIS_SELECTED );

If you want the item to have the focus as well, meaning it will be drawn with the focus rectangle around it, set the LVIS_FOCUSED state.

4.11: My list or tree control works fine in debug builds, but not in release builds. Why? (top)

This can usually be fixed by initializing your structures (such as LVITEM and TVINSERTSTRUCT) to zero before using them. You can do this at the same time you declare the structs:

LVITEM lvi = {0};
TVINSERTSTRUCT tvins = {0};

See Also: Joseph M. Newcomer's article "Surviving the Release Build."

4.12: How do I create a newline in a multi-line edit control? (top)

Use "\r\n" to create a newline. If you use "\r" or "\n" or even "\n\r" you'll see little blocks in the control.

4.13: How do I prompt the user to select a directory? (top)

Use the SHBrowseForFolder() API. You can find several examples in MSDN by searching for "SHBrowseForFolder". See Also: Use this canned search to find articles related to SHBrowseForFolder() on CodeProject.
4.14: How do I retrieve the text that the mouse cursor is pointing at? (top)

There is no built-in way to do this. Once the text is on the screen, it is no longer readable as a character string, only as a bitmap. You'll need OCR (optical character recognition) software to transform the bitmap back into a string of characters.

4.15: How do I set the text in the caption of a frame window or dialog? (top)

Use the SetWindowText() API.

// MFC:
wndYourWindow.SetWindowText ( _T("New text here") );
// Win32 API:
SetWindowText ( hwndYourWindow, _T("New text here") );

4.16: How do I set the icon that's displayed in the caption of a frame window or dialog? (top)

You first load the icon from your program's resources, then set it as the window's current icon. You should set both the large (32x32) and small (16x16) icons; the large icon is used in the Alt+Tab window, and the small icon is used in the caption bar and the Taskbar. Note that the code generated by the MFC AppWizard is buggy and does not properly set the small icon. The LoadIcon() function can only load 32x32 icons; to load 16x16 icons, use LoadImage().

// MFC:
HICON hLargeIcon = AfxGetApp()->LoadIcon ( IDI_NEW_ICON );
HICON hSmallIcon = (HICON) ::LoadImage ( AfxGetResourceHandle(),
                       MAKEINTRESOURCE(IDI_NEW_ICON), IMAGE_ICON,
                       16, 16, LR_DEFAULTCOLOR );

wndYourWindow.SetIcon ( hLargeIcon, TRUE );
wndYourWindow.SetIcon ( hSmallIcon, FALSE );

// Win32 API:
HICON hLargeIcon = LoadIcon ( hinstYourModuleInstance, MAKEINTRESOURCE(IDI_NEW_ICON) );
HICON hSmallIcon = (HICON) LoadImage ( hinstYourModuleInstance,
                       MAKEINTRESOURCE(IDI_NEW_ICON), IMAGE_ICON,
                       16, 16, LR_DEFAULTCOLOR );

SendMessage ( hwndYourWindow, WM_SETICON, ICON_BIG, (LPARAM) hLargeIcon );
SendMessage ( hwndYourWindow, WM_SETICON, ICON_SMALL, (LPARAM) hSmallIcon );

4.17: How do I read the text in an edit box in another process? (top)

The GetWindowText() function behaves differently when you use it to retrieve the text in a window that's in another process.
A full description of the problem and the solution is in the first question in this MSDN Magazine article.

4.18: How do I restrict my window so it can't be resized larger or smaller than a certain size? (top)

Handle the WM_GETMINMAXINFO message. Windows sends this message when it is about to resize your window, and the LPARAM points to a MINMAXINFO struct. Fill in the struct's ptMinTrackSize and ptMaxTrackSize members to set the smallest and largest sizes the user can drag the window to. For example (the exact sizes here are illustrative):

case WM_GETMINMAXINFO:
{
    MINMAXINFO* pMMI = (MINMAXINFO*) lParam;
    pMMI->ptMinTrackSize.x = 300;  // minimum width (illustrative)
    pMMI->ptMinTrackSize.y = 150;  // minimum height
    pMMI->ptMaxTrackSize.x = 600;  // maximum width
    pMMI->ptMaxTrackSize.y = 400;  // maximum height (illustrative)
}
break;

In MFC, your OnGetMinMaxInfo() handler is passed a MINMAXINFO* directly, but otherwise the code is the same.

5.1: How do I clear the screen in a console program? (top)

This is covered in the Knowledge Base article Q99261, "HOWTO: Performing Clear Screen (CLS) in a Console Application." Essentially, the procedure is to get information on the size of the console output buffer, then fill it with spaces using FillConsoleOutputCharacter() and FillConsoleOutputAttribute(). Before you can use this method, however, you need to get a HANDLE to the console screen buffer, like so:

HANDLE hConsole = GetStdHandle(STD_OUTPUT_HANDLE);

if ( hConsole != INVALID_HANDLE_VALUE )
{
    // do stuff...
}

Now you are set to call this and several other Win32 console functions. Search MSDN for "console reference" to learn more about the console-related structures and functions.

5.2: With other compilers, I used to use gotoxy() to position the cursor in a console program. How do I do this in Visual C++? (top)

Output to a console is essentially controlled by the console screen buffer's current settings, and each position in the buffer is addressable with a COORD structure. This code uses SetConsoleCursorPosition() to move the current output location to row 11, column 32:

#include <windows.h>
#include <stdio.h>

int main ( int argc, char** argv )
{
    HANDLE hConsole = GetStdHandle ( STD_OUTPUT_HANDLE );

    if ( INVALID_HANDLE_VALUE != hConsole )
    {
        COORD pos = {32, 11};
        SetConsoleCursorPosition ( hConsole, pos );
        printf ( "Hello World!\n" );
    }

    return 0;
}

Also, code that outputs to cout will respect the buffer settings as well.

5.3: How can I output text in different colors in a console program?
(top)

Each location in the console screen buffer has text attributes as well as a character associated with it, and the Win32 console functions can affect these in two ways. SetConsoleTextAttribute() affects subsequent characters written to the buffer, while FillConsoleOutputAttribute() directly changes the attributes of an existing block of text. The following functions might be used for normal, bold, and reverse text (this assumes that the class has a handle to the console, through a call to GetStdHandle()):

void CMyConsoleClass::SetTextNormal()
{
    // white on black - the default
    SetConsoleTextAttribute ( m_hConsole,
        FOREGROUND_RED | FOREGROUND_GREEN | FOREGROUND_BLUE );
}

void CMyConsoleClass::SetTextBold()
{
    // hi-white on black
    SetConsoleTextAttribute ( m_hConsole,
        FOREGROUND_RED | FOREGROUND_GREEN | FOREGROUND_BLUE | FOREGROUND_INTENSITY );
}

void CMyConsoleClass::SetTextReverse()
{
    // black on white
    SetConsoleTextAttribute ( m_hConsole,
        BACKGROUND_RED | BACKGROUND_GREEN | BACKGROUND_BLUE );
}

Note that there are no settings for blink or underline, so you will need to be a bit creative if you try to emulate ANSI or VT100 terminal text modes with this method.

5.4: I've allocated a console window in my GUI program, but if the user closes the console, my program closes too. What to do? (top)

One method that works well is to disable the close menu option. After the console has been allocated with AllocConsole(), you can do this if you can find the console window's handle.

void DisableClose()
{
    char buf[100];

    wsprintf ( buf, _T("some crazy but unique string that will ID ")
                    _T("our window - maybe a GUID and process ID") );
    SetConsoleTitle ( (LPCTSTR) buf );

    // Give this a chance - it may fail the first time through.
    HWND hwnd = NULL;
    while ( NULL == hwnd )
    {
        hwnd = ::FindWindowEx ( NULL, NULL, NULL, (LPCTSTR) buf );
    }

    // Reset old title - we'd normally save it with GetConsoleTitle.
    SetConsoleTitle ( _T("whatever") );

    // Remove the Close menu item. This will also disable the [X] button
    // in the title bar.
    HMENU hmenu = GetSystemMenu ( hwnd, FALSE );
    DeleteMenu ( hmenu, SC_CLOSE, MF_BYCOMMAND );
}

5.5: I've allocated a console in my GUI program. When I try to close the console window, it hangs around for a while. Why? (top)

While only one console can be associated with a process, it's possible for more than one process to share the same console. When you call FreeConsole(), Windows detaches your app from the console, and will close it if no other processes are using it. There is a difference between NT and 9x in when the close is performed. NT seems to check the status immediately and close the console. Windows 9x seems to take more of a "garbage collection" approach, preferring to wait until some action is performed on the console window before checking if it's time to get rid of it. One trick that helps is to hide the window before calling FreeConsole(); check the previous FAQ entry for a tip on getting the window handle needed to do this.

CloseHandle ( hConsole );

// This helps kill the window quickly.
ShowWindow ( console_hwnd, SW_HIDE );
console_hwnd = NULL;

FreeConsole();

Note: Don't call SendMessage(console_hwnd, WM_SYSCOMMAND, SC_CLOSE, 0) as this will terminate your entire process!

5.6: How can I run my console program without the console window popping up? (top)

Use CreateProcess() (as described in FAQ 6.4), and set some members in the STARTUPINFO struct to tell Windows to hide the console window.

STARTUPINFO si = { sizeof(STARTUPINFO) };
PROCESS_INFORMATION pi = {0};
BOOL bSuccess;

si.dwFlags = STARTF_USESHOWWINDOW;
si.wShowWindow = SW_HIDE;

// see FAQ 6.4 for all the parameters
bSuccess = CreateProcess ( ..., &si, &pi );

If the CreateProcess() call succeeds, the console window will be hidden because we specified SW_HIDE in the wShowWindow member. See Also: Steven Szelei's article "Running console applications silently."

6.1: Why can't I use a member function as a callback?
(top)

Member functions receive the this pointer as their first parameter (even though you don't write it that way), so a member function can never match the required prototype of a callback function. There are two solutions. First, you can make the member function static. Doing so removes the this parameter, but then you cannot access non-static data or functions in the class. Second, you can keep the member function as-is, use a global function as the actual callback, and pass the global function the this pointer yourself. The global function then calls the member function using the passed-in this pointer. Windows APIs that use a callback function have a void* parameter that you can use for whatever purpose you like. Windows sends that void* parameter along to the callback function. So, you use that parameter to pass the this pointer. Here is an example:

// Forward declaration so DoWork() can refer to the callback.
BOOL CALLBACK GlobalCallback ( void* pv );

class CCallback
{
public:
    void DoWork();
    BOOL MyCallback();

protected:
    // ...data members that the callback function needs to access...
};

void CCallback::DoWork()
{
    SomeAPIThatHasACallback ( GlobalCallback,  // your callback function
                              (void*) this );  // 'this' pointer
}

BOOL CCallback::MyCallback()
{
    // Do your callback processing here...
}

BOOL CALLBACK GlobalCallback ( void* pv )
{
    // pv is the same as the 2nd parameter to
    // SomeAPIThatHasACallback() above.
    CCallback* pThis = reinterpret_cast<CCallback*>(pv);
    return pThis->MyCallback();
}

int main()
{
    CCallback cbk;
    cbk.DoWork();
}

Note that this technique of passing the this pointer can also be used if you make the callback a static member function. See Also: Daniel Lohmann's article "Use member functions for C-style callbacks and threads - a general solution."

6.2: How do I share a global variable among my .CPP files? (top)

First, in one of your .CPP files (and only one) declare the variable at global scope (that is, outside of all function and class definitions).
For example:

int g_volume;

Then in a header file that is included in all .CPP files - such as stdafx.h - add an extern declaration:

extern int g_volume;

The extern keyword tells the compiler that g_volume is an int declared in some other .CPP file. If you forget the first step, the linker will give an unresolved external error.

6.3: How can I change a number into its string representation, or vice versa? (top)

Number to a string: You can use sprintf() and its clones (wsprintf(), CString::Format(), strstream) to get the string representation of a number.

int num = 12345;
TCHAR c_style_string[32];
CString cstr;
std::strstream strm;
std::string stl_string;

sprintf ( c_style_string, "%d", num );  // CRT sprintf()
cstr.Format ( "%d", num );              // MFC/WTL CString
strm << num << std::ends;               // First put the number into a string stream,
stl_string = strm.str();                // then get the actual string.

There are also C runtime functions that convert one type of number into a string: itoa(), ltoa(), ultoa(), i64toa(), ui64toa().

String to a number: Use the atoi() or atol() functions, or if the number is floating-point, atof():

char* szNumber = "10235";
int iNum = atoi ( szNumber );     // integer
long lNum = atol ( szNumber );    // long integer
double dNum = atof ( szNumber );  // floating-point

If you need more flexibility, especially if your string is a base other than decimal, use the strtol(), strtoul(), or strtod() functions. strtol() and strtoul() accept strings in any base up to 36, and all three return a pointer to the first non-number character found, which is helpful if you're parsing a long input string.

char* szNumber = "F33DF4CE~!";
char* pszStopPoint;

long lNum = strtol ( szNumber, &pszStopPoint, 16 );
unsigned long ulNum = strtoul ( szNumber, &pszStopPoint, 16 );
double dNum = strtod ( "20.5~!", &pszStopPoint );  // note: strtod() takes no base argument

After each of those functions, pszStopPoint points at the '~', which was the first character that isn't part of the number.
You can also use a stringstream:

std::string strWithNumbers = "100 20.5 -93";
std::stringstream strm ( strWithNumbers );
int i, j;
float f;

strm >> i >> f >> j;

There are also three functions in shlwapi.dll, which you can use if you want to avoid using the CRT. StrToInt() works like atoi(). The macro StrToLong() is equivalent to StrToInt(). StrToIntEx() and StrToInt64Ex() accept hexadecimal numbers as well, and return a BOOL indicating whether the string contained any numeric characters to convert.

#include <shlwapi.h>

char* szNumber = "10235";
char* szHexNumber = "b4df00d";
int iNum = StrToInt ( szNumber );  // integer/long
LONGLONG llNum;                    // 64-bit int

StrToIntEx ( szHexNumber, STIF_SUPPORT_HEX, &iNum );
StrToInt64Ex ( szHexNumber, STIF_SUPPORT_HEX, &llNum );

These functions are in a DLL that ships with IE, so keep that in mind if your program depends on the version of IE that's installed. StrToInt() and StrToIntEx() require IE 4; StrToInt64Ex() requires IE 5.

6.4: How do I run another program from my program? (top)

There are several functions that run other programs. The simplest is WinExec():

WinExec ( "C:\\path\\to\\program.exe", SW_SHOWNORMAL );

There is also ShellExecute(), which can run executables as well as files that are associated with a program. For example, you can "run" a text file, as shown here:

ShellExecute ( hwndYourWindow, "open", "C:\\path\\to\\readme.txt",
               NULL, NULL, SW_SHOWNORMAL );

In this example, ShellExecute() looks up the program associated with .TXT files and runs that program. ShellExecute() also lets you set the program's starting directory and additional command line parameters. See the MSDN docs on ShellExecute() for more info. If you want complete control over every aspect of the program launching process, use CreateProcess(). CreateProcess() has a ton of options, so see MSDN for all the details.
Here is a simple example: STARTUPINFO si = { sizeof(STARTUPINFO) }; PROCESS_INFORMATION pi = {0}; BOOL bSuccess; bSuccess = CreateProcess ( NULL, "\"C:\\Program Files\\dir\\program.exe\"", NULL, NULL, FALSE, NORMAL_PRIORITY_CLASS, NULL, NULL, &si, &pi ); Note that the program name should be enclosed in quotes, as shown above, if the path contains spaces. If CreateProcess() succeeds, be sure to close the handles in the PROCESS_INFORMATION structure once you don't need them anymore. CloseHandle ( pi.hThread ); CloseHandle ( pi.hProcess ); Of course, if all you need to do is just run a program, CreateProcess() is probably overkill, and ShellExecute() would be sufficient. 6.5: How do I declare and use a pointer to a class member function? (top) The syntax is similar to a regular function pointer, but you also have to specify the class name. Use the .* and ->* operators to call the function pointed to by the pointer. class CMyClass { public: int AddOne ( unsigned n ) { return n+1; } int AddTwo ( unsigned n ) { return n+2; } }; int main() { CMyClass myclass, *pMyclass = &myclass; int (CMyClass::* pMethod1)(unsigned); // Full declaration syntax pMethod1 = &CMyClass::AddOne; // sets pMethod1 to the address of AddOne (note the &, which standard C++ requires) cout << (myclass.*pMethod1)( 100 ); // calls myclass.AddOne(100); cout << (pMyclass->*pMethod1)( 200 ); // calls pMyclass->AddOne(200); pMethod1 = &CMyClass::AddTwo; // sets pMethod1 to the address of AddTwo cout << (myclass.*pMethod1)( 300 ); // calls myclass.AddTwo(300); cout << (pMyclass->*pMethod1)( 400 ); // calls pMyclass->AddTwo(400); // Typedef a name for the function pointer type. typedef int (CMyClass::* CMyClass_fn_ptr)(unsigned); CMyClass_fn_ptr pMethod2; // Use pMethod2 just like pMethod1 above.... } The line int (CMyClass::* pMethod1)(unsigned); reads: " pMethod1 is a pointer to a function in CMyClass; that function takes an unsigned parameter and returns an int". Note that &CMyClass::AddOne is very different from CMyClass::AddOne().
The first is the address of the AddOne method in CMyClass, while the second actually calls the method. 6.6: Is there a C++ equivalent to the Visual Basic "With" keyword? (top) Nope. Sorry, you have to type a struct's variable name every time you access its members. 7.1: In my MFC program, I'm trying to disable a menu item with EnableMenuItem(), but it doesn't have any effect on the menu. Why? (top) MFC uses its own system for enabling and disabling menu items and toolbar buttons, which overrides any calls you make to EnableMenuItem(). If you look in the message map of your CMainFrame class, you may see some ON_UPDATE_COMMAND_UI() macros. Those macros control when menu items and toolbar buttons are enabled and disabled. To add a macro for the menu item you want to disable, go to ClassWizard and click the Message Maps tab. In the Class name combo box, select the class where you want to add the handler (usually CMainFrame is a good choice, but for items that relate to data stored in your document, pick your CDocument-derived class instead). In the Object IDs list, select the command ID of the menu item, then in the Messages list, double-click UPDATE_COMMAND_UI. In the update handler, call: pCmdUI->Enable ( FALSE ); to disable the menu item. Note that if you have a toolbar button with the same command ID, that button will be disabled as well. See the MSDN pages on CCmdUI and ON_UPDATE_COMMAND_UI for more details. 7.2: I'm trying to change the font of a dialog control, but it's not having any effect. Why? (top) The most likely cause is a logic error that causes the font to be destroyed prematurely. For example, you might try to change the font of a static control like this: BOOL CMyDlg::OnInitDialog() { CFont myfont; CStatic* pStatic = (CStatic*) GetDlgItem ( IDC_SOME_LABEL ); // GetDlgItem() returns a CWnd*, so a cast is needed // ... create a font here using the 'myfont' object ... // Change the static control's font.
pStatic->SetFont ( &myfont ); return TRUE; } The problem with this code is that a font is a GDI resource managed by the CFont class. When myfont goes out of scope, the CFont destructor destroys the GDI font object. Move the CFont object to be a member variable of your dialog class, so that the font stays around until the CMyDlg destructor cleans it up. 7.3: How do I convert a CString to a char*? (top) First, be sure you actually need a char* (non-constant pointer, or LPTSTR). If you need a const char* (or LPCTSTR), then CString has a conversion function that will be called automatically if you pass a CString to a function expecting an LPCTSTR. For example: void f ( LPCTSTR somestring ) { cout << somestring << endl; } main() { CString str = "bonjour"; f ( str ); // OK - calls CString::operator LPCTSTR() to convert } The remainder of this FAQ deals with obtaining a non-constant pointer to the string. Because a CString object manages the character array, you must explicitly tell the CString that you want to get a non-constant pointer to the string. Call GetBuffer() to get a char* to the string, and then call ReleaseBuffer() when you no longer need that pointer. Calling ReleaseBuffer() tells the CString that it can resume managing the character array. CString str = "some string"; LPTSTR pch; pch = str.GetBuffer(0); // use pch here... // When you're done using pch, give the CString control // of the buffer again. str.ReleaseBuffer(); After calling GetBuffer(), you may modify the contents of the string through pch, although you can't make the string longer since that would overrun the array. If you do modify the string, you must not call any CString methods before the call to ReleaseBuffer(), since CString methods may reallocate and move the array, which would render pch invalid. After you call ReleaseBuffer(), you must not use pch any more, again because the CString may reallocate and move the character array. 
If you want to create a larger buffer for the string, for example if you are going to pass it to an API that returns a filename, you can do so by passing the desired length to GetBuffer(): CString sFilename; LPTSTR pch; // Get a non-const pointer and set the buffer size. pch = sFilename.GetBuffer ( MAX_PATH ); // Pass the buffer to an API that writes to it. GetModuleFileName ( NULL, pch, MAX_PATH ); // Return control of the array to the CString object. sFilename.ReleaseBuffer(); See also: "The Complete Guide to C++ Strings, Part II - String Wrapper Classes" 7.4: How do I prevent a dialog from closing when the user presses Enter or Esc? (top) Let's first cover why the dialog closes, even if you remove the OK and Cancel buttons. CDialog has two special virtual functions, OnOK() and OnCancel(), which are called when the user presses the Enter or Esc key respectively. The CDialog implementations call EndDialog(), which is why the dialog closes. Since those are special-purpose functions, they do not appear in the dialog's BEGIN_MESSAGE_MAP/ END_MESSAGE_MAP section, and they need to be overridden differently than normal button click handlers. If you still have buttons with IDs IDOK and IDCANCEL, you can use ClassWizard to add BN_CLICKED handlers for those buttons, and it will do the special handling necessary for OnOK() and OnCancel(). If you do not have buttons with those IDs, then you can override the virtual functions manually. In your dialog class definition: class CMyDialog : public CDialog { // ... // Generated message map functions //{{AFX_MSG(CMyDialog) virtual void OnOK(); virtual void OnCancel(); //}}AFX_MSG DECLARE_MESSAGE_MAP() }; Then in the corresponding .CPP file: void CMyDialog::OnOK() { } void CMyDialog::OnCancel() { } The important part here is that the handlers do not call the base-class implementation, so EndDialog() is not called and the dialog doesn't close. 7.5: How do I remove "Untitled" from the main frame window caption?
(top) In your CMainFrame's PreCreateWindow() function, remove the FWS_ADDTOTITLE style. BOOL CMainFrame::PreCreateWindow ( CREATESTRUCT& cs ) { cs.style &= ~FWS_ADDTOTITLE; // rest of the AppWizard code here... } FWS_ADDTOTITLE is an MFC-specific style, which tells MFC to add the current document's name to the text in the main frame's caption. See the page "Changing the Styles of a Window Created by MFC" in MSDN for more details and examples. 7.6: I have a dialog-based application and want the dialog hidden on startup. How do I do this? (top) Normally, modal dialogs are always shown, because if the dialog were hidden, the parent application would be disabled with no way to re-enable it. Dialog-based apps are different, and hiding the window is safe (as long as you provide a way for the user to get to the dialog later!). Hiding the dialog involves handling WM_WINDOWPOSCHANGING, which is sent when the dialog is moved, shown, hidden, or rearranged in the Z order. The first time the handler is called, it tells Windows not to show the dialog. Add a member variable to your CDialog-derived class that will keep track of whether WM_WINDOWPOSCHANGING has been handled. class CYourDlg : public CDialog { // ... protected: bool m_bFirstShowWindow; }; Initialize this variable to true in the constructor: CYourDlg::CYourDlg (CWnd* pParent /*=NULL*/) : CDialog(CYourDlg::IDD, pParent), m_bFirstShowWindow(true) { // ... } Add a handler for WM_WINDOWPOSCHANGING and code it as shown below. The handler checks if the dialog is being shown, and if this is the first time it's being shown. If so, it turns off the SWP_SHOWWINDOW flag to keep the dialog from being shown. void CYourDlg::OnWindowPosChanging ( WINDOWPOS* lpwndpos ) { CDialog::OnWindowPosChanging(lpwndpos); if ( lpwndpos->flags & SWP_SHOWWINDOW ) { if ( m_bFirstShowWindow ) { m_bFirstShowWindow = false; lpwndpos->flags &= ~SWP_SHOWWINDOW; } } } 7.7: How can I make my main frame window non-resizable?
(top) In your main frame's PreCreateWindow() function, remove the WS_THICKFRAME style. You can also set the window's initial size in that same function. BOOL CMainFrame::PreCreateWindow(CREATESTRUCT& cs) { if( !CFrameWnd::PreCreateWindow(cs) ) return FALSE; cs.style &= ~WS_THICKFRAME; // optional - set initial window size cs.cx = 800; cs.cy = 200; return TRUE; } 8.1: I've written a service and it can't access mapped network drives. Why? (top) Mapped drives are stored on a per-user basis. The default configuration for a service logs it in as the LocalSystem account, so it cannot access the mapped drives you make when you log in using your own account. You can manually change your service's account by viewing its properties in the Services Control Panel applet (NT 4) or the Services branch in Computer Management (Windows 2000/XP, Start->Programs->Administrative Tools->Computer Management). If you are installing your service with the CreateService() function, pass your account name and password in the 12th and 13th parameters, respectively. 8.2: A program I've written doesn't load when it's run on a computer without Visual C++ installed. Why? (top) Your program links with numerous DLLs in order to run, such as kernel32.dll, advapi32.dll, and so on. If your app is using MFC, the other computer needs to have the MFC and CRT (C runtime library) DLLs installed. You may also be using ActiveX controls and/or COM objects, which must be properly installed and registered on the other computer. To determine which DLLs your app is statically linked to (that is, DLLs which are loaded as soon as your app is run), run the Dependency Viewer on your EXE. (Start->Programs->MSVC 6->Tools->Depends) Depends version 2 is more powerful, and can show which DLLs are loaded at runtime, and which ActiveX controls and COM servers your app uses. You can get version 2 from the Platform SDK, or the very latest version from the official Dependency Walker site. 
If you just want to remove the dependency on the MFC DLLs, you can statically link MFC to your app. This means all the MFC and CRT code your app uses is put right in the EXE, instead of being read from the DLLs. To use static linking, click Project->Settings, and click the General tab. In the Microsoft Foundation Classes combo box, select Use MFC in a static library. Note that you normally do this for release builds only, and this will significantly increase the size of your EXE. One final thing to check is that you're distributing the release build of your program, not the debug build. MFC apps built in debug mode use several other, and much larger, DLLs. Microsoft does not allow you to distribute the debug MFC DLLs, but you shouldn't be distributing a debug build anyway, because it will be slower and much larger than the release build. 8.3: How do I find the full path to my program's EXE file? (top) Use the GetModuleFileName() function: TCHAR szEXEPath[MAX_PATH]; GetModuleFileName ( NULL, szEXEPath, MAX_PATH ); Passing a module handle of NULL tells the API to return the path for the EXE. 8.4: How do I read the summary information from an Office file? (top) Microsoft Office files are OLE structured storage documents (also called "docfiles"), and the summary info is contained in a stream in the docfile. See Knowledge Base article Q186898 for a description of how to read this data. 8.5: How do I delete a file that is currently in use? (top) The code involved is different on 9x and NT. On NT, you can use the MoveFileEx() function to mark the file for deletion on the next reboot. MoveFileEx ( szFilename, NULL, MOVEFILE_DELAY_UNTIL_REBOOT ); On 9x, you need to edit the wininit.ini file in the Windows directory and add an entry that instructs Windows to delete the file on the next reboot. 
The file should look like: [rename] NUL=C:\path\to\file.exe Note that this file is processed in real mode before long filenames can be read, so you must use the short name (8.3) of the file. See this C++ Q&A article (from the January 1996 MSJ) for code that modifies wininit.ini. 8.6: How do I send an email using the default mail client? (top) Use the ShellExecute() function to "execute" a mailto: URL: ShellExecute ( hwndYourWindow, "open", "mailto:user@domain.com?subject=Subject Here", NULL, NULL, SW_SHOW ); There are other methods of sending mail and controlling the mail client, including MAPI (mail API), but that subject is beyond the scope of this FAQ. 8.7: How do I tell if the computer is connected to the Internet? (top) Use the InternetGetConnectedState() function. This API returns a BOOL indicating whether the computer is connected to a network. However, a system might be connected but in offline mode. (You toggle offline mode by clicking File->Work Offline in IE.) So you also need to check the flags returned by the API. BOOL bConnected; DWORD dwFlags; bConnected = InternetGetConnectedState ( &dwFlags, 0 ); if ( bConnected ) if ( dwFlags & INTERNET_CONNECTION_OFFLINE ) bConnected = FALSE; Related resources: CListCtrl/CListView FAQ at Celtic Wolf; MFC Tips and Tricks at cui.unige.ch; ATL, COM, MFC FAQs at widgetware.com; List of FAQs in KB articles at widgetware.com; CodeProject article - "Useful Reference Books". Revision history: updated June 28, 2003; March 1, 2003; July 4, 2002; June 1, 2002; May 26, 2002.
http://www.codeproject.com/KB/cpp/cppforumfaq.aspx
There was an error while deserializing intermediate XML. Cannot find type “namespace.class” I have been staring at this error for a few days now. It’s been doing my head in. It all started when I decided to take a break from my utility app that I have been writing for Windows Phone now for quite a while, so I thought I’d start writing a game instead. I wanted to make some of the text handling easier so I decided to try out the new combined Silverlight XNA mode for the Mango release of the Windows Phone SDK. I have never coded any XNA apps before but getting the basics up, importing textures from the content pipeline and such went fairly ok. Then I wanted to add XML files to describe my game objects and use the content pipeline to deserialize those to objects. This is where the headache started… The project ‘SlXnaApp1Lib’ cannot be referenced This error pops up when you create the project from scratch, but it is described in the release notes and said to be ok: The default 3D Graphics Application project template contains the compiler warning “The project <ProjectName> cannot be referenced. The referenced project is targeted to a different framework family (.NETFramework).” This warning does not affect your application and should be left in place. So apparently that’s something we have to live with. This made me assume I could not put code in the Game Library project, so I decided to put the class I was deserializing to in the main project. But this just kept getting me the error that “there was an error deserializing intermediate xml”. After scouring the web for information it seemed clear that the content library is built before the main project is, so it’s not possible to put the class definitions for deserialization in the main project. Yay… Trying to move class definitions into the Game Library Ok, so moving classes into the Game Library seemed the only route. But just moving the class in question into the Game Library still produced the same errors.
It seems the Content Library project is not aware of the Game Library project, and hence can’t get to the class definition. So just adding a reference should be easy and everything should be sorted! That didn’t work at all since it creates a circular dependency. At this point I’m starting to wonder if it’s my installation of the Windows Phone SDK that’s broken, so I uninstalled everything and then reinstalled the Windows Phone SDK 7.1 RC. But this issue is still the same. So maybe it’s the tools that are broken? In the simple examples showing off the Silverlight / XNA template they aren’t deserializing any XML files, so there seems to be no real example from Microsoft available either. The more advanced examples using deserialization aren’t using the Silverlight / XNA template, but the approach of adding the class definitions for deserialization in the Game Library is exactly what they do. Getting things to work As I want to continue coding on my game I felt I wanted to solve this issue even if the tools aren’t ready for it. So I found a way, even though I suspect it’s a hack and will end up counterproductive in the end for the project. What I did was: 1. Add a reference straight to the GameLib dll. Problem is that at this stage the dll isn’t even being built! The XML deserializer is failing the build since it can’t find the class definition it’s looking for. So, removing the XML file seems appropriate. After removing the XML file the project builds. Now it’s possible to add a reference to the dll of the Game Lib to the Content Lib project. 2. Now we can put the XML file back into the Content Lib and build the solution. 3. Putting a debug breakpoint and looking at the deserialized object shows it works finally!!! (Always remember to add an empty constructor to classes that are going to be serialized with .NET!) Conclusion I think this is a messy solution and it shouldn’t work like this.
I’m especially worried that I’ll have to go through this rinse-rebuild-add cycle every time I add new XML definitions and get the class definitions wrong from the get-go. I suspect there is a simpler solution out there that I just didn’t manage to find. So if anyone is sitting on the proper solution, please – oh please let me know of it!
https://mobilemancer.com/2011/09/22/xml-serialization-in-the-silverlight-xna-template-for-windows-phone-7-1/
What is supposed to be missing from the documentation? @roy-orbison is saying that > namespace \Identifier; should either work as a namespace definition or be a parse error. Because of the space. The former is not possible because it's ambiguous with the namespace\ root, and the latter is not great because it assigns significance to the whitespace that did not exist before. So I figure we address the problem by adding a quick note to docs that says not to use a leading slash when defining a namespace. (Of course, anyone who did so would very quickly realize it didn't work.) Automatic comment from SVN on behalf of cmb Revision: Log: Fix #74257: "namespace \X" does not work to declare a namespace Thanks @requinix! Automatic comment on behalf of cmb Revision:;a=commit;h=59d459d265d56bcfc4e6691bfc277a69b383da84 Log: Fix #74257: "namespace \X" does not work to declare a namespace Automatic comment from SVN on behalf of mumumu Revision: Log: Fix #74257: "namespace \X" does not work to declare a namespace Automatic comment on behalf of mumumu Revision:;a=commit;h=118b5c32359378f9b8dac82e0b80ee80203dc23d Log: Fix #74257: "namespace \X" does not work to declare a namespace
https://bugs.php.net/bug.php?id=74257&edit=1
Level: Introductory Scott Davis (scott@aboutgroovy.com), Editor in Chief, AboutGroovy.com 11 Mar. The first two articles in this series introduced you to the basic building blocks of the Grails Web framework. I've told you — repeatedly — that Grails is based on the Model-View-Controller (MVC) architectural pattern (see Resources), and that Grails favors convention over configuration for binding the framework's parts together. Grails uses intuitively named files and directories to replace the older, more error-prone method of manually cataloguing these linkages in an external configuration file. For example, in the first article you saw that controllers have a Controller suffix and are stored in the grails-app/controller directory. In the second article, you learned that domain models can be found in the grails-app/domain directory. This month, I'll round out the MVC triptych with a discussion of Grails views. Views (as you might guess) are stored in the grails-app/views directory. But there's much more to the view story than the intuitively obvious directory name. I'll talk about Groovy Server Pages (GSP) and give you pointers to many alternative view options. You'll learn about the standard Grails tag libraries (TagLibs) and find out how easy it is to create your own TagLib. You'll see how to fight the ongoing battle for DRYness (see Resources) by factoring common fragments of GSP code into their own partial templates. Finally, you'll learn how to tweak the default templates for scaffolded views, thereby balancing the convenience of automatically created views with the desire to move beyond a Grails application's default look-and-feel. Viewing a Grails application Grails uses GSP for the presentation tier.
The Groovy in Groovy Server Pages not only identifies the underlying technology, but also the language you can use if you want to write a quick scriptlet or two. In that sense, it's much like Java™Server Pages (JSP) technology, which allows you to mix a bit of Java code into your Web pages, and RHTML (the core view technology for Ruby on Rails), which lets you sneak a bit of Ruby in between your HTML tags. Of course, scriptlets have long been frowned upon in the Java community. They lead to the lowest form of technology reuse — copy-and-paste — and other crimes of technological moral turpitude. (There's a huge disparity between because you can and because you should.) The G in GSP should remind fine, upstanding Java citizens of the implementation language and nothing more. I mention this because a Cambrian explosion of non-page-centric view technologies has occurred in recent years (see Resources). Component-oriented Web frameworks such as JavaServer Faces (JSF) and Tapestry are gaining mindshare. The Ajax revolution has spawned numerous JavaScript-based solutions such as Dojo and the Yahoo! UI (YUI) libraries. Rich Internet Application (RIA) platforms such as Adobe Flash and the Google Web Toolkit (GWT) promise the convenience of Web deployment with a richer, desktop-like user experience. Luckily, Grails can handle all of these view technologies with ease. The whole point of the MVC separation of concerns is that it lets you easily skin your Web application with any view you'd like. Grails' popular plug-in infrastructure means that many GSP alternatives are just a grails install-plugin away. (See Resources for a link to the complete list of available plug-ins, or type grails list-plugins in a command prompt.) Many of these plug-ins are community-driven, the result of people wanting to use Grails with their presentation-tier technology of choice.
Although Grails has no native, automatic hooks for JSF, nothing precludes you from using the two together. A Grails application is a standard Java EE application, so you can put the proper JARs in the lib directory, add the expected settings in the WEB-INF/web.xml configuration file, and write the application as you normally would. A Grails application is deployed in a standard servlet container, so Grails supports JSPs just as well as GSPs. Grails plug-ins exist for Echo2 and Wicket (both component-oriented Web frameworks), so nothing is standing in the way of JSF or Tapestry plug-ins. Similarly, the steps for adding Ajax frameworks such as Dojo and YUI to Grails are no different from what you would normally do: simply copy their JavaScript libraries to the web-app/js directory. Prototype and Scriptaculous are installed with Grails by default. The RichUI plug-in takes a best-of-breed approach, choosing UI widgets from a variety of Ajax libraries. If you look through the list of plug-ins, you'll see support for such RIA clients as Flex, OpenLaszlo, GWT, and ZK. Obviously, there's no shortage of alternate view solutions for Grails applications. But let's talk more about the native view technology that Grails supports out of the box — GSP. GSP 101 You can spot a GSP page in a couple of ways. A file extension of .gsp is a dead giveaway, as is the copious use of tags that begin with <g:. As a matter of fact, a GSP page is nothing more than standard HTML with some Grails tags mixed in for dynamic content. Some of the alternative view technologies I mentioned in the preceding section are opaque abstraction layers that hide the details of the HTML, CSS, and JavaScript behind a layer of Java, ActionScript, or some other programming language. GSP is a thin Groovy facade over standard HTML, making it easy to shell out of the framework and use native Web technologies whenever you feel the need.
But you'll have a hard time finding GSPs in the Trip Planner application as it stands now. (The first two articles in this series get you started with building the Trip Planner. If you haven't followed along so far, now would be a good time to catch up.) You're currently using dynamic scaffolding for your views, so the trip-planner/grails-app/views directory is empty. Open grails-app/controller/TripController.groovy, shown in Listing 1, in a text editor to see the command used to enable dynamic scaffolding: class TripController{ def scaffold = Trip } The def scaffold = Trip line instructs Grails to generate the GSPs dynamically at run time. This is great for automatically keeping the views in sync as the domain model changes, but it doesn't give you much to look at while you are trying to learn the framework. Type grails generate-all Trip in the root of the trip-planner directory. Answer y when it asks if you want to override the existing controller. (You can also answer a for all to override everything without being prompted repeatedly.) You should now see a full TripController class with closures named create, edit, list, and show (among others). You should also see a grails-app/views/trip directory with four GSPs: create.gsp, edit.gsp, list.gsp, and show.gsp. Convention over configuration is at play here. When you visit, you are asking the TripController to populate a list of Trip domain model objects and pass it on to the trip/list.gsp view. Take a look at TripController.groovy, shown in Listing 2, in a text editor once again: class TripController{ ... def list = { if(!params.max) params.max = 10 [ tripList: Trip.list( params ) ] } ... } This short closure retrieves 10 Trip records from the database, converts them to POGOs, and stores them in an ArrayList named tripList. The list.gsp page then iterates through the list, building an HTML table row by row.
The next section explores many popular Grails tags, including the <g:each> tag used to display each Trip in the Web page. Grails tags <g:each> is a commonly used Grails tag. It iterates over each item in a list. To see it in action, open grails-app/views/trip/list.gsp (shown in Listing 3) in a text editor: <g:each in="${tripList}" status="i" var="trip"> <tr class="${(i % 2) == 0 ? 'even' : 'odd'}"> <td><g:link action="show" id="${trip.id}">${trip.id?.encodeAsHTML()}</g:link></td> <td>${trip.airline?.encodeAsHTML()}</td> <td>${trip.name?.encodeAsHTML()}</td> <td>${trip.city?.encodeAsHTML()}</td> <td>${trip.startDate?.encodeAsHTML()}</td> <td>${trip.endDate?.encodeAsHTML()}</td> </tr> </g:each> The status attribute in the <g:each> tag is a simple counter field. (Notice this value is used on the next line in a ternary statement that sets the CSS style to either even or odd.) The var attribute allows you to name the variable used to hold the current item. If you change the name to foo, you need to change the later lines to ${foo.airline?.encodeAsHTML()} and so on. (The ?. operator is a Groovy way of avoiding NullPointerExceptions. It's shorthand for saying "call the encodeAsHTML() method only if airline is not null; otherwise just return an empty string.") Another common Grails tag is <g:link>. As you might have already guessed, it builds an HTML <a href> link. Nothing is stopping you from using an <a href> tag directly, but this convenience tag accepts attributes such as action, id, and controller. If you'd like to get your hands on just the href value without the surrounding anchor tags, you can use <g:createLink> instead. At the top of list.gsp, you can see a third tag that returns a link: <g:createLinkTo>. This tag accepts dir and file attributes instead of logical controller, action, and id attributes.
Listing 4 shows link and createLinkTo in action: <div class="nav"> <span class="menuButton"><a class="home" href="${createLinkTo(dir:'')}">Home</a></span> <span class="menuButton"><g:link class="create" action="create">New Trip</g:link></span> </div> Notice in Listing 4 that you can call Grails tags in two different forms interchangeably — either as tags inside angle brackets or as method calls inside curly braces. The curly-brace notation (formally known as Expression Language or EL syntax) is better suited for times when the method call is embedded in another tag's attributes. Just a few lines down in list.gsp, you can see another popular Grails tag in action: <g:if>, shown in Listing 5. In this case, it says "if the flash.message attribute is not null, display it." <h1>Trip List</h1> <g:if test="${flash.message}"> <div class="message">${flash.message}</div> </g:if> As you browse through the generated views, you'll come across many other Grails tags in action. The <g:paginate> tag displays "previous" and "next" links if the database contains more Trips than the current 10 on display. The <g:sortableColumn> tag makes the column headers clickable for sorting purposes. Nosing around the other GSP pages shows tags related to HTML forms, such as <g:form> and <g:submit>. The online Grails documentation lists all the available Grails tags and gives examples of their use (see Resources). Custom tag libraries Although the standard Grails tags are quite helpful, you'll eventually run into a situation where you want your own custom tags. Many seasoned Java developers (myself included) say publicly, "Yes, a custom TagLib is the appropriate architectural solution to use here," and then sneak off to write a scriptlet instead when they think that nobody is looking.
Writing a custom JSP TagLib requires so much additional effort that scriptlets often win the day by virtue of being the path of least resistance. They aren't the right way to do things, but they are (unfortunately) the easy way. Scriptlets — ugh. They are hacks in the truest sense of the word. They break HTML's tag-based paradigm and introduce raw code directly into your view. It's not even the code itself that's so bad; what's bad is the lack of encapsulation and reuse potential. The only way to reuse a scriptlet is by copy-and-paste. This leads to bugs, code bloat, and blatant DRY violations. And don't even get me started on scriptlets' lack of testability.

That said, I must confess that I've written more than my fair share of JSP scriptlets when deadlines got tight. The JSP Standard Tag Library (JSTL) went a long way toward helping me get beyond my evil ways, but writing my own custom JSP tags was another issue altogether. By the time I wrote my custom JSP tag in Java code, compiled it, and messed around with getting the Tag Library Descriptor (TLD) in the right format and in the right place, I'd completely forgotten the reason I wrote the tag in the first place. As for writing tests to validate my new JSP tag — let's just say that my intentions were good.

In contrast, writing custom TagLibs in Grails is a breeze. The framework makes it easy to do the right thing, including writing tests. For example, I commonly need a boilerplate copyright notice at the bottom of Web pages. It should read © 2002 - 2008, FakeCo Inc. All Rights Reserved. Here's the thing: I'd like the second year always to be the current year. Listing 6 shows how you'd do this with a scriptlet:

<div id="copyright">
  © 2002 - ${Calendar.getInstance().get(Calendar.YEAR)}, FakeCo Inc. All Rights Reserved.
</div>

Now that you know how to hack in the current year, next you'll create a custom tag that does the same thing. To start, type grails create-tag-lib Date.
This creates two files: grails-app/taglib/DateTagLib.groovy (the TagLib) and grails-app/test/integration/DateTagLibTests.groovy (the test). Add the code in Listing 7 to DateTagLib.groovy:

class DateTagLib {
  def thisYear = {
    out << Calendar.getInstance().get(Calendar.YEAR)
  }
}

Listing 7 creates a <g:thisYear> tag. As you can see, the year is written directly to the output stream. Listing 8 shows the new tag in action:

<div id="copyright">
  © 2002 - <g:thisYear />, FakeCo Inc. All Rights Reserved.
</div>

You might think you're done at this point. I humbly suggest that you're only halfway there.

Testing TagLibs

Even though everything looks good for now, you should write a test to ensure that this tag doesn't break in the future. Michael Feathers, author of Working Effectively with Legacy Code, says that any code without a test is legacy code. To make sure that Mr. Feathers doesn't yell at you, add the code in Listing 9 to DateTagLibTests.groovy:

class DateTagLibTests extends GroovyTestCase {
  def dateTagLib

  void setUp() {
    dateTagLib = new DateTagLib()
  }

  void testThisYear() {
    String expected = Calendar.getInstance().get(Calendar.YEAR)
    assertEquals("the years don't match", expected, dateTagLib.thisYear())
  }
}

A GroovyTestCase is a thin Groovy facade over a JUnit 3.x TestCase. Writing a test for a simple one-line tag might seem excessive, but you'd be surprised how many times the one-liners end up being the source of trouble. Writing the test isn't difficult, and it's better to be safe than sorry. Type grails test-app to run the test. If everything is okay, you should see the message shown in Listing 10:

-------------------------------------------------------
Running 2 Integration Tests...
Running test DateTagLibTests...
testThisYear...SUCCESS
Running test TripTests...
testSomething...SUCCESS
Integration Tests Completed in 506ms
-------------------------------------------------------

If the appearance of TripTests takes you by surprise, don't worry.
When you typed grails create-domain-class Trip, a test was generated on your behalf. As a matter of fact, every Grails create command generates a corresponding test. Yes, testing is that important in modern software development. If you aren't already in the habit of writing tests, allow Grails to nudge you gently in the right direction. You won't regret it.

The grails test-app command, in addition to running the tests, creates a nice HTML report. Open test/reports/html/index.html in a browser to see the standard JUnit test report, as shown in Figure 1. Your simple custom tag is coded and tested. Now you'll build a tag that's a wee bit more sophisticated.

Advanced custom tags

More-sophisticated tags can work with attributes and the tag body. For example, the copyright solution as it stands now still requires too much copy/paste for my taste. I'd like to wrap up all of the current behavior in a truly reusable tag like this one: <g:copyright startYear="2002">FakeCo Inc.</g:copyright>. Listing 11 shows the code:

class DateTagLib {
  def thisYear = {
    out << Calendar.getInstance().get(Calendar.YEAR)
  }

  def copyright = { attrs, body ->
    out << "<div id='copyright'>"
    out << "© ${attrs['startYear']} - ${thisYear()}, ${body()}"
    out << "</div>"
  }
}

Notice that attrs is a HashMap of the tag attributes. I use it here to grab the startYear attribute. I call the thisYear tag as a closure. (This is the same closure call that I could make from the GSP page in curly braces if I felt so inclined.) Similarly, the body is passed into the tag as a closure, so I call it the same way I'd call any other tag. This ensures that my custom tags can be nested to any arbitrary depth in the GSP.

You've probably noticed that custom TagLibs use the same g: namespace as the standard Grails TagLibs.
If you'd like to place your TagLibs in a custom namespace, add static namespace = 'trip' to DateTagLib.groovy. In your GSP, the TagLib should now read <trip:copyright startYear="2002">FakeCo Inc.</trip:copyright>.

Partial templates

Custom tags are a great way to reuse short snippets of code that might otherwise end up in a copy/pasted scriptlet. For larger blocks of GSP markup, you can use a partial template. Partial templates are officially called templates in the Grails documentation. The only problem is that the word template is overloaded to mean a couple of different things in Grails. As you'll see in the next section, you'll install the default templates to change the scaffolded views. Changes to these templates can include the partial templates that I'm about to talk about in this section. To help lessen the confusion, I borrow a bit of nomenclature from the Rails community and call these things partial templates, or often just partials.

A partial template is a chunk of GSP code that can be shared across multiple Web pages. For example, suppose I want a standard footer across all of my pages. To accomplish this, I'll create a partial named _footer.gsp. The leading underscore is a hint to the framework (as well as a visual cue for the developer) that this isn't a complete, well-formed GSP. If I create this file in the grails-app/views/trip directory, it's visible only to the Trip views. I'll store it in the grails-app/views directory so that it can be shared globally by all pages. Listing 12 shows the partial template for the globally shared footer:

<div id="footer">
  <g:copyright startYear="2002">FakeCo, Inc.</g:copyright>
  <div id="powered-by">
    <img src="${createLinkTo(dir:'images', file:'grails-powered.jpg')}" />
  </div>
</div>

As you can see, a partial template allows you to express yourself in HTML/GSP syntax. In contrast, a custom TagLib is written in Groovy.
Another way to keep the two straight in your mind is that TagLibs are generally better for encapsulating microbehavior, whereas partial templates are better for reusing layout elements. For this example to work as written, you need to download the "Powered by Grails" button to the grails-app/web-app/images directory (see Resources). You'll see plenty of branding collateral on the download page, including everything from high-resolution logos to 16x16 favicons. Listing 13 shows how you include your newly created footer in the bottom of your list.gsp page:

<html><body>
...
<g:render template="/footer" />
</body></html>

Notice that you leave the underscore off when rendering a template. If you'd saved _footer.gsp in the trip directory, you'd leave the leading slash off as well. Think of it this way: the grails-app/views directory is the root of the view hierarchy.

Customizing the default scaffolding

Now that you have some good, testable, reusable components in place, you can make them a part of the default scaffolding. Recall that this is what gets dynamically generated when you put def scaffold = Foo in your controller. The default scaffolding also serves as the source for the GSPs that get created when you type grails generate-views Trip or grails generate-all Trip.

To customize the default scaffolding, type grails install-templates. This adds a new grails-app/src/templates directory to the project. You should see three directories named artifacts, scaffolding, and war.

The artifacts directory holds the templates for various Groovy classes: Controller, DomainClass, TagLib, to name just a few. If, for example, you'd like all of your controllers to extend an abstract parent class, you can make the change here. All new controllers will be based on your modified template code. (Some people add def scaffold = @artifact.name@ so that dynamic scaffolding is the default behavior for all of their controllers.)
The war directory contains the web.xml file familiar to all Java EE developers. If you need to add your own parameters, filters, or servlets, this is the place to do it. (JSF enthusiasts: are you paying attention?) When you type grails war, the web.xml file found here is what gets included in the resulting WAR.

The scaffolding directory contains the raw material for the dynamically generated views. Open up list.gsp and add <g:render template="/footer" /> to the bottom of the file. Because these templates are shared across all views, be sure to use global partial templates.

Now that you've tweaked the List view, it's time to verify that your changes are in effect. Modifications to the default templates are one of the few places that require you to restart the server. Once Grails is up and running again, visit the scaffolded list page in a browser. If you are using default scaffolding in AirlineController, your new footer should appear at the bottom of the page.

Conclusion

That wraps up another installment of Mastering Grails. You should now know a bit more about GSP and the alternate view technologies available to Grails. You should have a better understanding of the default tags used in many of the generated pages. You should definitely feel a little bit dirty the next time you write a scriptlet, given how easy it is to do the right thing by writing a custom TagLib instead. You've seen how to create partial templates, and you've seen how easy it is to add them to the default scaffolded views.

Next month, your tour of the Grails Web framework will focus on Ajax. The ability to make "micro" HTTP requests without reloading the entire page is the secret sauce behind Google Maps, Flickr, and many other popular Web sites. You'll put a little of that same magic to use in Grails.
Specifically, you'll create a many-to-many relationship and use Ajax to make the user experience natural and enjoyable. Until then, have fun!
http://www.ibm.com/developerworks/web/library/j-grails03118/index.html
Opened 13 years ago
Closed 13 years ago
Last modified 13 years ago

#5931 closed enhancement (fixed)

[with patch, positive review] Greatly speed up sage.combinat.symmetric_group_algebra.e

Description

The old code essentially reimplemented the multiplication in the group algebra. The new code accumulates the symmetrizers and antisymmetrizers separately, and then does one multiply at the end. This probably results in the same number of operations, but it avoids creating many intermediate objects, so it is about 10x faster. Also update docs for e and e_hat.

Timing on 2.2 GHz Core2Duo running 32-bit Ubuntu 8.04 of

from sage.combinat.symmetric_group_algebra import e
time dummy = e([[1,2,3,4],[5,6,7]])

Before patch: Time: CPU 3.38 s, Wall: 3.73 s
After patch: Time: CPU 0.26 s, Wall: 0.40 s

Attachments (3)

Change History (7)

comment:1 — Replaces both e.patch and doc.patch; relative to 3.4

comment:2 — Milestone set to sage-4.0

comment:3 — Resolution set to fixed; Status changed from new to closed; Summary changed from "[with patch, needs review] Greatly speed up sage.combinat.symmetric_group_algebra.e" to "[with patch, positive review] Greatly speed up sage.combinat.symmetric_group_algebra.e". Looks good to me. Thanks for this! Merged in 4.0.1.rc1.

comment:4 — Merged in set to 4.0.1.rc1; Reviewers set to Mike Hansen

I think the main reason the old code was slow was that it multiplied GAP group elements in the inner loop, while the new code in e.patch uses the combinatorial algebra multiplication, which internally multiplies sage Permutations. Another reason the old code was slower is that it looped over the GAP column_stabilizer group multiple times (probably requiring interaction with GAP) and re-computed v.sign() each time.
However, I did some tests where I avoid just these problems, and still the new code in e.patch is better, almost certainly because it avoids creating lots of intermediate elements of QSn.

We can avoid even more of the intermediate elements of QSn with dict.patch which I will attach below. But it only speeds things up by about 2% in the test I ran, since the runtime is dominated by the antisym*sym multiplication.

If we are willing to assume that the entries in the tableau are distinct, I have another method which is 25% faster, but I don't think we want to make that assumption. Just for the record, the point is that if the entries are distinct, then each of the products v*h is distinct, so we can easily construct a dictionary for the final result whose values are plus or minus 1.

Summary: I recommend the new dict.patch (which includes the documentation change), but it would also be ok to use e.patch and doc.patch if that method is preferred.

PS: Note that these patches seem to reverse the order of multiplication from h*v to v*h. That's because of differing conventions between GAP group elements and permutations.

PPS: My latest test case has been e([[1,2,3,4,5],[6,7,8],[9,10],[11]]), which takes forever with sage 3.4, but takes 20-30 seconds with the above patches.
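The idea behind these patches (accumulate coefficients in plain dicts and do a single product at the end, instead of building many intermediate group-algebra elements) can be sketched in ordinary Python. This is a toy model of my own, not Sage code: an element of the group algebra of S_3 is just a dict mapping permutation tuples to coefficients.

```python
from itertools import permutations

def compose(p, q):
    # composition of permutations written as tuples of 0-based images:
    # (p*q)(i) = p(q(i))
    return tuple(p[i] for i in q)

def multiply(a, b):
    # convolution product of two group-algebra elements {perm: coeff};
    # this is the single multiply done at the end
    out = {}
    for p, cp in a.items():
        for q, cq in b.items():
            r = compose(p, q)
            out[r] = out.get(r, 0) + cp * cq
    return out

# accumulating the "symmetrizer" is pure dict building - no algebra
# elements are created until the final multiply
sym = {p: 1 for p in permutations(range(3))}
identity = {tuple(range(3)): 1}
result = multiply(sym, identity)
```

Building sym costs only dictionary inserts; the one call to multiply walks the cross product, which is the shape of the speedup described in the ticket.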
https://trac.sagemath.org/ticket/5931
Hi everyone, I am a student, quite new to C++. I am working on my thesis right now. Here is my situation: I have an algorithm for my thesis topic and I tried making an example of it in C++, and it works fine as a console application. But I am doing this for a company with whom I am doing my internship, and they already have an application in this field. They want me to implement my algorithm in their application. This application has an option to call a .dll file, which I am planning to use, and it provides some libraries which I can use. I also tried to make a small math-functions example and it works fine, but when I use for loops inside the dll file the program freezes and doesn't run anymore. Below is the skeleton of the .cpp file. I am supposed to write my code in the user program area. Another problem I am facing is that the array data can be accessed only by calling pre-defined functions from the lib file like getdata, setdata, etc.

#include "stdafx.h"
#include <windows.h>
#include "C:\users\include\libpltUsrSafeArray.h"

int ADD(DWORD dwArgumentCount, VARIANT * pvArgument)
{
    HRESULT hr = S_OK;
    int rtn = 0;

    LPSAFEARRAY psfarray01 = NULL; /* Input MTX(MATRIX01) */
    hr = ::SafeArrayCopy(pvArgument[0].parray, &psfarray01);

    double ddata02 = 0; /* Output VALUE01 */
    double ddata03 = 0; /* Output VALUE02 */

    /***** User Program *****/
    /* user program is written here */
    PLTSArrayGetSize(psfarray01, rowsize, colsize); /* pre-defined function to get array size */
    /***** User Program *****/

    if( psfarray01 )
    {
        ::SafeArrayDestroy( psfarray01 );
        psfarray01 = NULL;
    }

    pvArgument[1].dblVal = ddata02;
    pvArgument[1].vt = VT_R8;
    pvArgument[2].dblVal = ddata03;
    pvArgument[2].vt = VT_R8;

    return(rtn);
}

It would be helpful if someone could suggest how to approach this.
https://www.daniweb.com/programming/software-development/threads/434776/problems-with-dll-files
Cultural Question about India - Asked By Robbe Morris on 16-Mar-12 01:03 PM

Just curious... While writing and testing a ton of our code that reads forum threads and analyzes keywords, I noticed that quite a few posts from Indian developers do not put at least one space after the periods at the end of sentences. For other nationalities, this is quite rare. Anyone have any insight on this?

ex: Indian developers participate in our forums.Plus they also write FAQs.

It is virtually impossible to accurately fix this due to the nature of .net namespaces and various language object names. Since this does make the content harder to read and ultimately leads to downgrades in SEO scores due to grammatical errors, I was wanting to try and address it "somehow".

Suchit shah replied to Robbe Morris on 16-Mar-12 01:13 PM

Hi, you are correct in that sometimes people forget to put a space after the dot, but most of the time people from India are simply not used to putting a space after a dot. They directly start to write the next word.

Interesting - Robbe Morris replied to Suchit shah on 16-Mar-12 01:24 PM

Thank you, that is good to know.

dipa ahuja replied to Robbe Morris on 16-Mar-12 03:40 PM

I did not know that could affect SEO. From now on I will try to take care of it. :) Thanks!

Robbe Morris replied to dipa ahuja on 16-Mar-12 03:46 PM

Grammar (including my mistakes in the original post :)) can have a small effect. A pair of words joined by punctuation like my example can confuse the bots into missing keywords that might be included. Is it a huge deal? Not always, but it does have an impact.
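A naive heuristic fix, and the reason it is "virtually impossible" to apply safely, can both be shown with a short regex sketch (my own illustration, not code from the site):

```python
import re

def add_sentence_spaces(text):
    # naive heuristic: insert a space after . ! or ? whenever it is
    # immediately followed by an uppercase letter (a likely new sentence)
    return re.sub(r'([.!?])([A-Z])', r'\1 \2', text)

good = add_sentence_spaces("Indian developers participate in our forums.Plus they also write FAQs.")
# ...but the same rule mangles dotted identifiers, which is exactly
# the .net-namespace problem described in the post:
bad = add_sentence_spaces("We target the .NET runtime.")
```

The first call repairs a real sentence boundary; the second turns .NET into ". NET", showing why a blind fix can downgrade the content instead of improving it.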
http://www.nullskull.com/q/10432468/cultural-question-about-india.aspx
added a simple page on which we can see added urls and add a url. Also, I introduced logging to my application. My github repo: frisor repo. Currently my application looks like this:

Templates in frisor with Django

Django templates are for creating web page html with dynamic content. Django uses its own template language called DTL (Django Template Language), which is quite similar to jinja. Anyway, it's possible to change the template engine, but I decided to use the default one. Using templates is usually pretty straight-forward - at the backend side, create a context dictionary with names and values of variables which should be returned in the response to fill dynamic parts of the template. It's a common practice to create template engines this way - the same method is used in Erlang's ChicagoBoss and Zotonic frameworks or Ruby's jekyll. In Django you can set up the location of your templates in settings.py - the default location is: [app-dir]/templates/[name-of-app]/[name-of-template].html. I decided to keep that as I want to see how defaults work in this framework. My first version of the index.html template was about showing a list of urls. It looked like this:

<!-- index.html -->
<ul>
{% for url in url_list %}
  <li>{{ url.id }}, {{url.publish_date}}, {{url.title}}, {{url.creator}}
    <a target="_blank" href="{{url.url}}">{{url.url}}</a>
  </li>
{% endfor %}
</ul>

It uses a context dictionary in backend views.py which looks like:

# views.py
from django.shortcuts import render
from frisor_urls.models import Url

def index(request):  # the view takes the request object
    url_list = Url.objects.all()  # fetch all stored urls
    context = {
        'url_list': url_list,
    }
    return render(request, 'frisor_urls/index.html', context)

After launching my application I saw an ugly html page, so it was time to make it look better. I'm really bad at designing frontend, so I decided to use bootstrap for styles - it's simple and looks good without much effort. It's really great that such things as forms and simple web pages can be created without any JavaScript. I'm considering using the django-bootstrap library to not care about putting styles into my html code.
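The context-dictionary pattern described above is not Django-specific; the same idea can be sketched with Python's stdlib string.Template (used here purely for illustration - Django renders with its own DTL engine):

```python
from string import Template

# a crude stand-in for a template file: placeholders are filled from
# a context dictionary, just like render(request, template, context)
page = Template("<li>$title - $url</li>")
context = {"title": "frisor", "url": "http://github.com"}
html = page.substitute(context)
```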
Currently frisor is a single page application and I would like to have the same header and footer for other pages. It can be done with the extend tag in DTL.

How to put an external url in Django template

My first try with putting a url into my template was like this - url.url is the value of the url field of a url object. Yeah, I should change the name of my model to something else.

<!-- index.html -->
<a href="{{url.url}}">{{url.url}}</a>

What's the issue here? When I put a url without a protocol prefix and clicked it, I was redirected to my own server's address with that url appended. It appears that default redirecting appends the url to the base server address.

Solution

To solve it I should append the protocol prefix (http://) if it's not provided by the user. I'll do this on the server side.

Using forms in Django

After fixing the bad look of my urls table, the time had come to create a form to be able to add some urls to my application. To do this there are a few things needed:

Form in html file

On the frontend side - form html with POST method (because it's going to create a resource):

<!-- index.html -->
<form action="{% url 'frisor_urls:index' %}" method="post"> {% csrf_token %}
  <div class="form-group">
    <label for="url">URL:</label>
    <input name="url" id="url" type="text" class="form-control"/>
  </div>
  <div class="form-group">
    <label for="title">Title:</label>
    <input name="title" id="title" type="text" class="form-control"/>
  </div>
  <div class="form-group">
    <label for="nick">Nick:</label>
    <input name="nick" id="nick" type="text" class="form-control"/>
  </div>
  <input type="submit" class="btn btn-primary"/>
</form>

Nothing surprising here except the {% csrf_token %} tag. It's for Cross Site Request Forgery protection. CSRF attacks can occur when you have a session in a web application and somebody sends you a malicious website which performs an action using your credentials and session in this application. Currently I don't have any sessions and users, but better to just add this tag now.
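The protocol-prefix fix mentioned under "Solution" above could look like this small server-side helper (the function name is my own):

```python
def normalize_url(url):
    # prepend a scheme so the browser treats the href as absolute
    # instead of appending it to the current server address
    if not url.startswith(("http://", "https://")):
        url = "http://" + url
    return url
```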
Handler for a POST request

On the backend side - a handler for a POST request:

# views.py, in the index() method
if request.method == 'POST':
    nick = request.POST['nick']
    url = request.POST['url']
    title = request.POST['title']
    # TODO: do something about this data

I was wondering how to get to the data sent by the POST request. At first I tried request.body and request.read, but it happens that Django puts all the data into a dictionary variable named after the received request's method. My form sends a POST request, so the dictionary at the backend side is called POST and I can get to a field in the data like this: request.POST['name_of_field'].

Store url in database

In my Url model (in models.py) I had to add a method for creating a url in the database. Of course I could create an Url object using the constructor, but then I would have to worry about setting data like publish_date, which should always be set to the current time when creating this object. My Url.create method:

# models.py
from datetime import datetime
from django.db import models

class Url(models.Model):
    url = models.CharField(max_length=200)
    publish_date = models.DateTimeField('date published')
    title = models.CharField(max_length=200)
    creator = models.CharField(max_length=200)

    @classmethod
    def create(cls, url="", title="", creator=""):
        url = cls(title=title, url=url, creator=creator, publish_date=datetime.now())
        return url

To create an Url object in my view I had to do:

# views.py, in the index() method
url = Url.create(url=url, title=title, creator=nick)
url.save()

It's obvious that the save method is needed to save an object in the database. The create method doesn't perform saving and in my opinion it shouldn't. Sometimes it's needed to create a temporary object, do some actions using it, and afterwards throw it away. Saving logic should be extracted to some service - it shouldn't be done in the view. But as I'm following the KISS principle, this can be done later.

How to display Django template code on my blog?

At my blog I'm using jekyll with liquid as the template language.
Actually liquid is quite similar to jinja, which is similar to DTL. So there was a funny issue with this. I wanted to display some of my template code in this post on this blog. At first it was quite surprising that when I put something like {{url.url}} into my post's markdown, it actually became an empty string. It's because jekyll interprets this as a liquid expression, and url.url is not defined, so it becomes an empty string.

Solution

To be able to use DTL code in my posts I have to wrap it in a special tag: {% raw %} {{ url.url }} {% endraw %}

Logging

The default logging configuration added to Django doesn't log anything to the console except Django's own logs, so when you put something like this into your package:

import logging

logger = logging.getLogger(__name__)  # get an instance of a logger

and use it inside your packages as logger.info("Info log"), it will not display those logs in the console after running $ python manage.py runserver. The solution for this is adding a custom logging configuration in settings.py. My logging configuration (see comments in the code):

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,  # we don't want to remove Django's logging configuration
    'formatters': {  # how to format logs
        'verbose': {
            'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
        },
        'simple': {
            'format': '%(levelname)s %(asctime)s %(module)s %(message)s'
        }
    },
    'handlers': {  # how logs should be handled - they can be written to a file, console, etc.
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'formatter': 'simple'
        }
    },
    'loggers': {
        'django': {
            'handlers': ['console'],
            'propagate': True,
        },
        'django.request': {
            'handlers': ['console'],
            'level': 'ERROR',
            'propagate': False,
        },
        'frisor_urls.views': {  # my custom view
            'handlers': ['console'],
            'level': 'DEBUG',
        }
    }
}

My TODO list:

- Introduce unit tests.
- When creating a url, append http:// if it's not there - someone told me that I can do this also with URLField - probably I should've read the documentation more carefully.
- Introduce django-bootstrap instead of the currently used bootstrap in templates.
- Introduce footers and headers in html pages.
- Introduce validation in the url form.
- Add comments and tags to urls.
- Refactor database logic from views.py to use some service.py.
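The logging configuration shown above also works outside Django; a trimmed-down, runnable version using the stdlib's dictConfig directly keeps just the simple formatter, the console handler and the application logger:

```python
import logging
import logging.config

logging.config.dictConfig({
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "simple": {"format": "%(levelname)s %(asctime)s %(module)s %(message)s"}
    },
    "handlers": {
        "console": {
            "level": "DEBUG",
            "class": "logging.StreamHandler",
            "formatter": "simple",
        }
    },
    "loggers": {
        "frisor_urls.views": {"handlers": ["console"], "level": "DEBUG"}
    },
})

logger = logging.getLogger("frisor_urls.views")
logger.debug("now visible on the console")
```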
https://vevurka.github.io/dsp17/python/adventures_with_templates/
#include "lwip/opt.h"
#include "lwip/arch.h"
#include "lwip/netbuf.h"
#include "lwip/sys.h"
#include "lwip/ip_addr.h"
#include "lwip/err.h"

netconn API (to be used from non-TCPIP threads)

Brief descriptions of the macros and functions in this header:

- Register a Network connection event.
- If a nonblocking write has been rejected before, poll_tcp needs to check if the netconn is writable again.
- Was the last connect action a non-blocking one?
- If this flag is set then only IPv6 communication is allowed on the netconn. As per RFC#3493 this feature defaults to OFF, allowing dual-stack usage by default.
- Should this netconn avoid blocking?
- Get the receive buffer in bytes.
- Get the send timeout in milliseconds.
- Get the blocking status of netconn calls.
- Set the blocking status of netconn calls.
- Set the receive buffer in bytes.
- Set conn->last_err to err but don't overwrite fatal errors.
- Set the send timeout in milliseconds.
- Get the type of a netconn (as enum netconn_type).
- A callback prototype to inform about events for a netconn.
Used for netconn_join_leave_group() Current state of the netconn. Non-TCP netconns are always in state NETCONN_NONE! Get the local or remote IP address and port of a netconn. For RAW netconns, this returns the protocol instead of a port! Create a new netconn (of a specific type) that has a callback function. The corresponding pcb is also created.
http://www.nongnu.org/lwip/2_0_x/api_8h.html
The motivation behind this post is to answer a simple question: What's the difference between Docker and classic virtualization techniques? I set out to research this topic in depth and I will share my findings. I am by no means an expert in either Docker or virtualization, so feel free to comment if you find any inconsistencies. I will start out by briefly talking about Operating Systems and the Kernel, then move on to the Kernel's role in virtualization. Finally I will explain how Docker works and how it differs from classic virtualization.

Operating Systems

This is a broad subject but I will keep this overview very short; there's plenty of literature out there. The Kernel is the component of the OS that provides an abstraction layer between Device Drivers and Software. The Applications running in the OS use the Kernel System API to request access to services from the Kernel (things like storage, memory, network or process management). For example, if you call File.open in Ruby, at some point in the execution the open system call will be executed and the Kernel will abstract away the interaction with the physical hard drive. If you're interested to read more about operating systems I suggest checking out Operating Systems: Three Easy Pieces.

Virtualization

The key component in any virtualization software is the Hypervisor, also known as the virtual machine monitor (VMM). The hypervisor can be thought of as an API that provides access to the hardware level for the virtual machines. There are two types of hypervisors: hosted and bare-metal. Most desktop virtualization software such as VirtualBox or VMware Fusion/Player/Workstation uses a hosted hypervisor. That means the hypervisor runs as an application and lets your Operating System's Kernel deal with hardware drivers and resource management. The bare-metal hypervisor, on the other hand, runs directly on the host machine's hardware.
Think of it as a specialized OS that has extra instructions built in to deal with the virtual machines' access to the actual hardware and resources. The way the Kernel handles System Calls from Virtual Machines is the main difference between virtualization solutions. With paravirtualization, the OS running on the virtual machine has a modified Kernel that accesses system resources using a Hypervisor Call rather than a System Call. This requires a modified OS on the virtual machine because a vanilla OS will not know to use a HyperCall instead of a System Call. Full virtualization simulates the hardware of the host machine completely, and commands are executed as if they were running on dedicated hardware (through a System Call). This has the advantage that we don't need to run a modified OS. The only downside is that the System Call inside the virtual machine needs to be translated and sent to the host machine's Kernel, and this extra step reduces performance. Processor extensions like Intel VT-x and AMD-V fix this problem by providing hardware virtualization instructions and eliminating the System Call translation step.

Docker

Docker doesn't run different virtual machines. Instead it uses built-in Linux Kernel containment features like CGroups, Namespaces, UnionFS and chroot (more on these later) to run applications in virtual environments. Those virtual environments - called Docker containers - have separate user lists, file systems and network devices. Initially Docker was built as an abstraction layer on top of Linux Containers (LXC). LXC itself is just an API for the Linux containment features. Starting with Docker 0.9, LXC is not the default anymore and has been replaced with a custom library (libcontainer) written in Go. Overall libcontainer's advantage is a more consistent interface to the Kernel across various Linux distributions. The only gotcha is that it requires Linux 3.8 or higher. Let's look at some of those Kernel features used by Docker.
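Before looking at those features, it is worth seeing what the ordinary System Call path discussed above looks like from user space. Python's os.open, os.write and os.close are thin wrappers over the open(2), write(2) and close(2) calls that the Kernel (or, inside a VM, the translated or hardware-assisted path) ends up servicing; the file name here is made up for illustration:

```python
import os
import tempfile

# each of these lines ends in a system call that crosses into the kernel
path = os.path.join(tempfile.gettempdir(), "syscall_demo.txt")
fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC)  # open(2)
written = os.write(fd, b"hello kernel")                    # write(2)
os.close(fd)                                               # close(2)
```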
Namespaces and Containers

Namespaces isolate system resources such as user lists, network devices, process lists and filesystems. There are six namespaces implemented to date:

- mnt (mount points, filesystems)
- pid (processes)
- net (network stack)
- ipc (System V IPC)
- uts (hostname)
- user (UIDs)

Namespaces are not a new concept; the first one to be implemented, the mount namespace, was added in Linux 2.4.19 in 2002.

CGroups

CGroups is another Kernel feature heavily used by Docker. While namespaces isolate various interactions with the Kernel, the role of cgroups is to isolate or limit resource usage (CPU, memory, I/O).

Union file systems

This Linux service allows you to mount files and directories from other file systems (i.e. a namespace-isolated file system) and combine them to form a single file system. You can read more about it in this Wikipedia article. When Docker boots a container from an image, it first mounts the root file system as read-only. After that, instead of making the file system read-write, Docker attaches another file system layer to that container using union mounts. This process repeats every time a change to the container's file system happens. You will notice that when you push an image you created to the Docker registry, many images get pushed: some of them already exist there, while others do not and take longer to upload. UnionFS allows Docker to keep a repository of file system changes, and this is a wicked cool feature! It saves space and allows you to diff changes to containers very easily. You can see this hierarchy by running docker images --tree. By the way, this functionality is being removed from the core Docker client and is being worked on as a separate project. Here you can see how my two Ruby images are based on the main rbenv image.
$ docker images --tree
└─a4d37 Virtual Size: 407.1 MB Tags: tzumby/rbenv:latest
  ├─03a7 Virtual Size: 508.4 MB Tags: tzumby/ruby-2.0.0:latest
  └─f6ae Virtual Size: 521.7 MB Tags: tzumby/ruby-2.1.0:latest

Let's inspect the UnionFS layers. If you are running Ubuntu you can cd straight into the Docker lib folder at /var/lib/docker. If you are on OS X your Docker daemon is most likely running in a VirtualBox VM, and you can access it by running:

$ boot2docker ssh

Now you can cd into the Docker lib folder and check out the UnionFS layers it created for all your containers:

$ cd /var/lib/docker/aufs/diff

I will explore this in more depth in future articles, but if you are curious you can sort all the folders by date (ls -ltr) and check their contents as you install packages on your container. For example, after I installed rbenv on the system I could find the folder that had just the rbenv changes to the file system. Pretty neat!

Conclusion

We just quickly went over virtualization and the Docker architecture. Although both Docker and modern virtualization are relatively new, the underlying technologies are not new at all. Before Docker we would run processes using chroot, or jails in FreeBSD, for improved security, for example. So should you use Docker or classic virtualization? In reality, virtualization and Docker can be and are used together in modern dev-ops. Most VPS providers run bare-metal full virtualization technologies like Xen, and Docker usually runs on top of a virtualized Ubuntu instance.
https://www.monkeyvault.net/docker-vs-virtualization/
16.7. mmap — Memory-mapped file support

Memory-mapped file objects behave like both strings and like file objects.

Note: If you want to create a memory-mapping for a writable, buffered file, you should flush() the file first. This is necessary to ensure that local modifications to the buffers are actually available to the mapping.

class mmap.mmap(fileno, length[, tagname[, access[, offset]]])  (Windows version)

class mmap.mmap(fileno, length[, flags[, prot[, access[, offset]]]])  (Unix version)

Maps length bytes from the file specified by the file descriptor fileno, and returns a mmap object. If length is 0, the maximum length of the map will be the current size of the file when mmap is called.

flags specifies the nature of the mapping. MAP_PRIVATE creates a private copy-on-write mapping, so changes to the contents of the mmap object will be private to this process, and MAP_SHARED creates a mapping that's shared with all other processes mapping the same areas of the file. The default value is MAP_SHARED.

prot, if specified, gives the desired memory protection; the two most useful values are PROT_READ and PROT_WRITE, to specify that the pages may be read or written.

To ensure validity of the created memory mapping the file specified by the descriptor fileno is internally automatically synchronized with physical backing store on Mac OS X and OpenVMS.

This example shows a simple way of using mmap:

import mmap

# write a simple example file
with open("hello.txt", "wb") as f:
    f.write("Hello Python!\n")

with open("hello.txt", "r+b") as f:
    # memory-map the file, size 0 means whole file
    mm = mmap.mmap(f.fileno(), 0)
    # read content via standard file methods
    print mm.readline()  # prints "Hello Python!"
    # read content via slice notation
    print mm[:5]  # prints "Hello"
    # update content using slice notation;
    # note that new content must have same size
    mm[6:] = " world!\n"
    # ... and read again using standard file methods
    mm.seek(0)
    print mm.readline()  # prints "Hello world!"
    # close the map
    mm.close()

The next example demonstrates how to create an anonymous map and exchange data between the parent and child processes:

import mmap
import os

mm = mmap.mmap(-1, 13)
mm.write("Hello world!")

pid = os.fork()

if pid == 0:  # In a child process
    print mm.readline()

    mm.close()

find(string[, start[, end]])
    Returns the lowest index in the object where the substring string is found, such that string is contained in the range [start, end].
    Optional arguments start and end are interpreted as in slice notation. Returns -1 on failure.

flush([offset, size])
    Flushes changes made to the in-memory copy of a file back to disk. If offset and size are specified, only changes to the given range of bytes will be flushed to disk; otherwise, the whole extent of the mapping is flushed.

move(dest, src, count)
    Copy the count bytes starting at offset src to the destination index dest. If the mmap was created with ACCESS_READ, then calls to move will raise a TypeError exception.

read(num)
    Return a string containing up to num bytes starting from the current file position; the file position is updated to point after the bytes that were returned.

read_byte()
    Returns a string of length 1 containing the character at the current file position, and advances the file position by 1.

readline()
    Returns a single line, starting at the current file position and up to the next newline.

resize(newsize)
    Resizes the map and the underlying file, if any. If the mmap was created with ACCESS_READ or ACCESS_COPY, resizing the map will raise a TypeError exception.

rfind(string[, start[, end]])
    Returns the highest index in the object where the substring string is found, such that string is contained in the range [start, end]. Optional arguments start and end are interpreted as in slice notation. Returns -1 on failure.

seek(pos[, whence])
    Set the file's current position. The whence argument is optional and defaults to os.SEEK_SET or 0 (absolute file positioning); other values are os.SEEK_CUR or 1 (seek relative to the current position) and os.SEEK_END or 2 (seek relative to the file's end).

write(string)
    Write the bytes in string into memory at the current position of the file pointer; the file position is updated to point after the bytes that were written. If the mmap was created with ACCESS_READ, then writing to it will raise a TypeError exception.
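These methods can be exercised without a real file by using an anonymous mapping (pass -1 as the file descriptor). The following sketch is not from the original docs; bytes literals keep it valid on both Python 2.7 and 3, though the printed representations differ between the two:

```python
import mmap

# Anonymous 16-byte mapping: no file backing, works on Unix and Windows.
mm = mmap.mmap(-1, 16)

# write() advances the file position, just like a file object.
mm.write(b"Hello Python!")

# In CPython, find() defaults its start to the current file position,
# so rewind with seek() before searching from the beginning.
mm.seek(0)
print(mm.find(b"Python"))  # 6
print(mm.find(b"Perl"))    # -1

# Slice notation reads the mapped memory without moving the position.
print(mm[:5])

mm.close()
```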
https://docs.python.org/2.7/library/mmap.html
How do I pass an aggregate LINQ query to my view?

public IEnumerable<Report> GetMyReports(int resellerid)
{
    ReportsAndName repn = new ReportsAndName();
    var reports = from c in entities.Customers
                  from r in entities.Reports
                  where c.CustID == r.CustomerID && c.Reseller_id == resellerid
                  select r; // String.Format("{0} - {1}", r, c.Business_name);
    return reports;
}

One answer is to project the query into a class that holds the fields from both tables, and return that instead:

public IEnumerable<ReportAndCustomerName> GetMyReports(int resellerid)
{
    var reports = from c in entities.Customers
                  from r in entities.Reports
                  where c.CustID == r.CustomerID && c.Reseller_id == resellerid
                  select new ReportAndCustomerName
                  {
                      CustomerID = r.CustomerID,
                      // ... all other fields from Reports and Customers
                      BusinessName = c.BusinessName
                  };
    return reports;
}

// This class is used to get all the needed info from the query
public class ReportAndCustomerName
{
    // List all the fields from the Reports table
    public int CustomerID { get; set; }
    // All the other fields from Reports here

    // The BusinessName from the Customers table
    public string BusinessName { get; set; }
}
https://www.experts-exchange.com/questions/26823569/How-do-I-pass-an-aggregate-Linq-query-to-my-view.html
[solved] QT Creator 5.0.2 static build

So after no one could help me in the German forum, I'm giving it a try in the English one. My problem is simple, and if I believe all the threads here, the solution is easy too.

I made a small program with QT Creator and now I want to have a stand-alone .exe. I installed QT 5.0.2 with the source. I tried to build QT statically and I think I did it, but the built programs are still not static. What I want is simple: in the lower left corner you can select which project should be built and whether it should be release or debug. I would like to have another option there: static release. The main problem I have is that all the paths shown in the tutorials are not the same as mine. Also everybody says to edit the mkspecs config and add -static at QMAKE_LFLAGS = ... just before all the other options, but there are no other options by default. When doing the configuration for building QT I cannot choose -no-exceptions; it simply doesn't exist. I really hope someone can explain to me how to build my programs statically. Ah, I forgot: I use the MinGW compiler, and Windows 8.

Welcome to devnet. In order to build a static application, you need a static build of the Qt libs first. All pre-compiled Qt libs are dynamic. First you need to make your own static Qt libs. However, you need to be aware of the license issues when you would like to distribute your application. Second, you can compile your application as static and link it with your set of static Qt libs. AFAIK it is still better to download the "Qt source from here.": You cannot simply recompile statically the source delivered with another build. [edit: there is also a "wiki entry": for static building of the Qt libs, in German. A bit dusty :-( ]

Thank you so much. I always just found this download page. I will have a look at this at noon. And thank you for the German wiki.

Mh, I have some problems with this again. So I now have installed: QT Creator 5.0.2, 32-bit MinGW, and the files you gave me.
The paths are:

C:\Qt\Qt5.0.2...
C:\Qt\Qt static...

In C:\Qt\Qt static\mkspecs\win32-g++\qmake.conf I changed the following:

QMAKE_CFLAGS_RELEASE = -Os -momit-leaf-frame-pointer | removed the other things
QMAKE_LFLAGS = -static -static-libgcc … | added this, there was nothing by default
DEFINES += QT_STATIC_BUILD | removed the other things

C:\Qt\Qt static\qmake\Makefile.win32-g++:

LFLAGS = -static -static-libgcc … | so now it looks like -static -static-libgcc -s

C:\Qt\Qt static\src\3rdparty\webkit\Source\WebKit.pri:

CONFIG += staticlib | added this at line 2.

Then I started the console from QT Creator 5.0.2 and changed to C:\Qt\Qt static. My inputs:

configure -static -release -platform win32-g++ -no-exceptions
o
y

The last output lines:

mingw32-make: *** No rule to make target 'C:\Qt\Qt', needed by 'project.o'. Stop.
del many .o files
C:\Qt\Qt static\qmake\project.o could not be found
mingw32-make: *** No rule to make target 'C:\Qt\Qt', needed by 'project.o'. Stop
Building qmake failed, return code 2

I hope you can help me.

Hmmm?? I have not built Qt 5.0 myself in any version yet. Unfortunately, the static build I did was under Linux with a Qt 4.6.x version. That was really very easy besides the waiting time. No changes of conf files required, IIRC. You need to make sure that another version is available through the path settings, I guess. However, that should already be the case anyway; otherwise you apparently would not have compiled a lot of stuff already.

Oh, I think it could be the forward/backward slash problem. If you have typed a path somewhere in the files, you have to use '/'; the backward version is often used for continuation lines. The forward slash is standard in Linux, but Windows understands it too. Everything with folder separation in Qt should be done with the forward slash '/'. Saves a lot of trouble. Furthermore, you have decided on the folder name "C:\Qt\Qt static", which contains a space; something without a space, e.g. "C:/Qt/Qt_static", should be fine.
OK, the / \ thing I managed; it was the space in "Qt static". Now it did a lot more things, but now I get error return 3 :D What is the maximum error code?? XD But it seems to be something simple.

Generating Makefiles...
Failure to read QMAKESPEC conf file c:\QT\QT5.0.2\Src\qtbase\mkspecs\win32-g++\qmake.conf.
Error processing project file: C:/QT/QT_static/Projects.pro
Qmake failed, return code 3

So why is the qmake process doing something in the Qt5.0.2 folder?? It shouldn't.

Do not start the command prompt through Creator. That might be the problem. Start the cmd.exe under "Start" or whatever it is in your version. Creator will add its own environment, which you do not want.

WUHU, the first step runs without problems :D I had added QTDIR to the environment variables in the past and it pointed to the wrong path :D Changed it, restarted, and then it worked :D But not the mingw32-make -.-

@C:\Qt\Qt_static>mingw32-make sub-src
cd src\tools\bootstrap\ && mingw32-make -f Makefile
mingw32-make[1]: Entering directory 'C:/Qt/Qt_static/src/tools/bootstrap'
mingw32-make -f Makefile.Release
mingw32-make[2]: Entering directory 'C:/Qt/Qt_static/src/tools/bootstrap'
g++ -c -pipe -Os -momit-leaf-frame-pointer -frtti -fno-exceptions -Wall -Wextra -DQT_STATIC_BUILD -DQT_BOOTSTRAPPED -DQT_LITE_UNICODE -DQT_NO_CAST_FROM_ASCII -DQT_NO_DEPRECATED -DQT_NODLL -I"......\include" -I"......\include\QtCore" -I"......\include\QtXml" -I"......\mkspecs\win32-g++" -o tmp\obj\release_static\qglobal.o ....\corelib\global\qglobal.cpp
....\corelib\global\qglobal.cpp: In function 'QString qt_error_string(int)':
....\corelib\global\qglobal.cpp:2192:27: error: cannot convert 'LPWSTR {aka wchar_t*}' to 'LPSTR {aka char*}' for argument '5' to 'DWORD FormatMessageA(DWORD, LPCVOID, DWORD, DWORD, LPSTR, DWORD, char**)'
Makefile.Release:998: recipe for target 'tmp/obj/release_static/qglobal.o' failed
mingw32-make[2]: *** [tmp/obj/release_static/qglobal.o] Error 1
mingw32-make[2]: Leaving directory
'C:/Qt/Qt_static/src/tools/bootstrap'
Makefile:34: recipe for target 'release' failed
mingw32-make[1]: *** [release] Error 2
mingw32-make[1]: Leaving directory 'C:/Qt/Qt_static/src/tools/bootstrap'
makefile:1365: recipe for target 'sub-tools-bootstrap-sub_src_target_ordered' failed
mingw32-make: *** [sub-tools-bootstrap-sub_src_target_ordered] Error 2
C:\Qt\Qt_static> @

I hope this says something to you. Google said there is something wrong with the unicode (?) but not how to fix it.

Did you do a confclean before the new configure? It could be a problem with some leftover from a previous configuration. Do you have special settings with respect to unicode?

I did not do a clean, because I did the steps from the wiki to clean up, and it seems to me to be clean. The settings are: -static -platform win32-g++ -release -no-exceptions. I don't know if this is enough.

At the end of configure it says something like

@ To reconfigure, run 'make confclean' and 'configure' @

This text I have copied from the Linux configure file, but the Windows version displays something similar. In your case probably

@ To reconfigure, run 'mingw32-make confclean' and 'configure' @

A mixture of settings from your previous trials is my only explanation at the moment. So a confclean would be helpful. I doubt that this is a general problem, because it would be a bug.

Still the same problem. On another PC it worked to build QT statically, but it couldn't build the project.

You are aware of differences between the PCs? Those differences might be the reason for your problem. :-(

Yep, I found it. On one PC I did all the steps I should do before starting building; on the other I didn't. And I now know what the problem was: in qmake.conf the DEFINES line says something about unicode. The last time I deleted it all and replaced it with the thing from the tutorial; this time I added the part from the tutorial, and it has been building for an hour now :D

Good to know. Typically it takes a bit of time. Is there a problem with the German wiki entry?
If so, you should correct it. If you have doubts that it is general, mark it as a difference for building Qt 5. If you want, I can proofread.

Hi, yes, I think I'll make a report of what I did to get it running. But before that I need your help again. I have no problem with building. The exe is 164.431 KB big; looks good, I think. But when I open it, nothing happens. And if I run it with QT I get the message "Programm abgestürzt ..... Rückgabewert -1073741819" (program crashed, return value -1073741819). Does anyone know something about this?

Well, what can I say besides that your app crashed :-( You need to use the debugger and step through to the point where the app is crashing. Alternatively, you have to do it like our ancestors (nach alter Väters Sitte) and put in some output statements where you think the application steps through. With MinGW you can use gdb for debugging. When debugging you can look at the values, see if there is something wrong, and so on. If you haven't done it before, it is a good thing to look at; if you have done it before, sorry for telling you the obvious stuff. Sometimes it is good to take a mixed approach of debugging and output statements. BTW, for output during your debug session "qDebug()": is very helpful. You can do a lot of neat stuff with it; you can use it more or less like cerr and cout in standard C++. By summarizing your findings you can update the wiki entry. Typically you have only a few things to add, so it could be less work than writing a complete entry.
What do you mean with long? I would assume that the problem is with the compile settings. A mixture of the "kit/tool chain". I doubt that just adding "static" to the .pro will do the job. To use qdebug I can really recommend. I used Qt already for a couple of years until I have started with qDebug. It looked cumbersome and why should you use when cout and cerr is available. However, together with Qt creator it makes sense and also you can more easily redirect the output stuff (e.g. to a file or a socket). it's even less than a hello world. its an empty window that should apear. with long time i mean it seams to me to be minuts, but i think in real it was a bout 30 sec. for complex programms i made without static let'a take an Image Filter prgm it was just a few sec. i'll have a look at my test prgm in the afternoon. i have no pc here at the moment. ok i just retest. file>new>qtapplication Kits:Desktop and desktop static Desktop, without static, works fine. Desktop with static, builds fine, but crashes. @#include "mainwindow.h" #include <QApplication> int main(int argc, char *argv[]) { QApplication a(argc, argv); MainWindow w; w.show(); return a.exec(); } @ @#include "mainwindow.h" #include "ui_mainwindow.h" MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent), ui(new Ui::MainWindow) { ui->setupUi(this); } MainWindow::~MainWindow() { delete ui; }@ you see it's a normal clean new Application. Also in the pro file i changed nothing. When debugging i get the error bq. Der Prozess wurde nach Erhalt eines Signals vom Betriebssystem angehalten Name des Signals: SIGSEGV Bedeutung: Segmentation fault Free translated: bq. The Prozess was stopped after receive a signal from operatingsystem Name of the signal: SIGSEGV Meaning: Segmentation fault So a friend got this a while ago, when using a pointer showing on a memory that was not from the Application. But this works without static and is a clean new App. 
I really hope someone can help me The debugger stopps showing me a file named qatomic_i386.h never heard something about this. I toke the time it takes to build: static debugg: 20 sec static release : < 10sec normal debugg: < 5sec normal release: < 5sec I assume you are using Qt creator for creating your app. What do you change when switching from dynamic to static build? The differences there are essential. My guess is that the compilation of the static app doesn't use a consistent memory model. In creator go to "Options"->"Build&Run" under the Tabs "Kits" and "Qt versions" you find what you have installed and accessible. On the left side of creator under "Projects" ->"Build&Run" you can find the actual kits used for your build. With manage kits, you have a link to the options menu. In the "Projects" menu you can switch between different setups. When switching setups, I found it helpful to rerun qmake from creator. This makes sure that you use the settings you see. For your small project it does not matter anyhow and it saves a lot of headaches. Use also shadow build. Probably you should post "part" of your error messages as displayed. PS: I am to lazy to translate the menu entries back to German. I prefer the English terms and have switched the language of creator a long time ago. ;-) Yes. I have the Creator 5.0.2. There i made a new Project. File>new>Qt-Gui-Application Name: test Kit: Desktop Qt 5.0.2 MinGW 32bit Desktop Qt4.8.4 (Qt_static) Options>..>Qt Versions give me a ! at the Qt 4.8.4 (Qt_static) version. The details say Hilfskomponeten: keine verfügbar and Es ist kein qmlviewer installiert Options...>Kits There is also a ! at the static Kit. The Message is Warnung: Der Compiler ... erstellt möglicherweise keinen mit der Qt Version ... kompatiblen Code. So maybe it's because i use Qt 5.0.2 Creator and Qt 4.8.4 for static?? YEAH!!!!! It works thank you a lot. The Problem was i cloned the Kit. And so it toke the minGW from Qt. 
I now deleted it and made a new one and choosed the MinGW in my C:\MinGW direction. I installed a while ago for Eclipse. And now it works :D I have just a small problem. I one Programm i made i use try and catch. and for some reason the compiler give me the error exception handling disabled, use -fexceptions to enable What does it mean? Can't i use try catch for static release?? for dynamic it works. Or do i only need a option for compiling?? Congratulations!! :D Just for clarification. There is no Qt creator version 5... out yet, ;-) What you have installed are the Qt libs version 5.0.2 together with Qt creator version 2.7 something. Qt creator is an IDE using also Qt libs. However, it may be used with different Qt lib versions. With the exceptions I am not sure right away. You need to check MinGW docs if the switch has to be used throughout all libs. IIRC this is not a requirement, but you should better check. If it is an requirement than you need to recompile Qt libs. This implies that Qt libs can be compiled with exceptions switched on. I cannot imagine that there should be a problem, because it would be too common. You would need to set the compile switch in your .pro. Checkout the" qmake documentation": for more details. Checkout the QMAKE_CFLAGS_... and QMAKE_CXXFLAGS... Ok. just saw there is an other problem. When there is a unused variable. it doesn't work eighter. The only problem, when using clickable lables the compiler say unsused, but it's is used. can it be that the option -no-exceptions is the problem?? ==EDIT== ok forget it. it works :D but how can i make the compiler using öäü...?? with the dynamic it wörks, now i get a A with an ~ over it for a Ö. The Umlaute is more a character set problem, I believe. Qt has "QLocale": for this. However, that stuff is completely outside of my experience. With QLocale it works. i made a thread here how i get Qt static working. 
Just noticed, that Qt now didn't find the #include <Q...> for dynamic :D but i don't care i like static more. Using static Qt libs or static libs in general has certainly advantages. However as noted already right at the beginning: [quote author="koahnig" date="1372231046"] First you need to make your own static Qt libs. However, you need to be aware of the license issues when you like to distribute your application. [/quote] "Here is the link to the license options": Jep i know. And at the moment i'm not thinking of doing a non opensource prgm. But the problem was only with one programm. don't know why. an other programm with QApplication header works.
https://forum.qt.io/topic/28706/solved-qt-creator-5-0-2-static-build
- 11. Since both nodes and instances support some common functionality (names and indices), we add a class so that we can access these attributes in a generic way.

- 22 May, 2009 (2 commits)

- 21 May, 2009 (1 commit)
This makes hail compile and get a request parsed via IAlloc, but nothing more.

- 20 May, 2009 (2 commits)
Adding a small request type data structure.

- 18 May, 2009
:)

- 22 Mar, 2009 (2 commits)

- 21 Mar, 2009 (1 commit)
This patch adds a new n_mem field to the node objects, and implements read/save/show support for it. The field is not currently used (except in the node list) but will be used for checking data consistency and instance up/down status.

- 20 Mar, 2009 (1 commit)
The modules are moved from the ‘top’ namespace to ‘Ganeti.HTools’, in compliance with standard practices.

- 14 Feb, 2009 (3 commits)
This patch moves a function to Utils and changes hn1 to be able to take data from RAPI.
This patch changes the tryRapi function so that if the http request succeeded, we don't try https too.
... hopefully this is more clear.

- 13 Feb, 2009 (1 commit)
The patch adds compatibility with RAPI v1, and this required some new JSON functions as valFromObj doesn't behave nicely. Some other unrelated changes were done too.

- 12 Feb, 2009 (1 commit)
This patch changes the RAPI endpoint to return the same data format as the input files. This will allow using it instead of the files.
https://git.minedu.gov.gr/itminedu/snf-ganeti/-/commits/17e7af2b139193511bde20b486dbd6f414fe2ab5/Ganeti/HTools/IAlloc.hs
I’ve created some controls to easily embed Facebook functionality in XPages applications. All controls use the Facebook JavaScript API and Facebook’s HTML extension XFBML. Technically the tricky part was to add a namespace to the HTML tag but that complexity is hidden now in the SDK control. The controls include some of the key Social Plugins that Facebook provides, e.g. the like button, send button and login button and the ability to use Facebook comments. Additionally you can you the Graph API to read from and write to Facebook. The first screenshot shows the new controls in the palette and some of them, e.g. like button, used in an XPage: Here is the runtime UI of that XPage: The next screenshot shows the login button and the result of an API call to get the user’s picture: Watch this short video to see the controls in action. I’d like to make this available soon with other new social controls in the OpenNTF Social Enabler project.
http://heidloff.net/article/05132011020328AMNHE959.htm
I'm using Python (Django framework) to read a CSV file. I pull just 2 lines out of this CSV as you can see. What I have been trying to do is also store in a variable the total number of rows in the CSV. How can I get the total number of rows?

file = object.myfilePath
fileObject = csv.reader(file)
for i in range(2):
    data.append(fileObject.next())

I have tried:

len(fileObject)
fileObject.length

import csv

count = 0
with open('filename.csv', 'rb') as count_file:
    csv_reader = csv.reader(count_file)
    for row in csv_reader:
        count += 1
print count

numline = len(file_read.readlines())

This works for CSV and all files containing strings in Unix-based OSes:

import os
numOfLines = int(os.popen('wc -l < file.csv').read()[:-1])

In case the CSV file contains a header row, you can deduct one from numOfLines above:

numOfLines = numOfLines - 1

Several of the above suggestions count the number of LINES in the csv file. But some CSV files will contain quoted strings which themselves contain newline characters. MS CSV files usually delimit records with \r\n, but use \n alone within quoted strings. For a file like this, counting lines of text (as delimited by newline) in the file will give too large a result. So for an accurate count you need to use csv.reader to read the records.

Thank you for the comments. I tested several kinds of code to get the number of lines in a csv file in terms of speed. The best method is below.

with open(filename) as f:
    sum(1 for line in f)

Here is the code tested.
import timeit
import csv
import pandas as pd

filename = './sample_submission.csv'

def talktime(filename, funcname, func):
    print(f"# {funcname}")
    t = timeit.timeit(f'{funcname}("{filename}")',
                      setup=f'from __main__ import {funcname}',
                      number=100) / 100
    print('Elapsed time : ', t)
    print('n = ', func(filename))
    print('\n')

def sum1forline(filename):
    with open(filename) as f:
        return sum(1 for line in f)

talktime(filename, 'sum1forline', sum1forline)

def lenopenreadlines(filename):
    with open(filename) as f:
        return len(f.readlines())

talktime(filename, 'lenopenreadlines', lenopenreadlines)

def lenpd(filename):
    return len(pd.read_csv(filename)) + 1

talktime(filename, 'lenpd', lenpd)

def csvreaderfor(filename):
    cnt = 0
    with open(filename) as f:
        cr = csv.reader(f)
        for row in cr:
            cnt += 1
    return cnt

talktime(filename, 'csvreaderfor', csvreaderfor)

def openenum(filename):
    cnt = 0
    with open(filename) as f:
        for i, line in enumerate(f, 1):
            cnt += 1
    return cnt

talktime(filename, 'openenum', openenum)

The result was below.

# sum1forline
Elapsed time :  0.6327946722068599
n =  2528244

# lenopenreadlines
Elapsed time :  0.655304473598555
n =  2528244

# lenpd
Elapsed time :  0.7561274056295324
n =  2528244

# csvreaderfor
Elapsed time :  1.5571560935772661
n =  2528244

# openenum
Elapsed time :  0.773000013928679
n =  2528244

In conclusion, sum(1 for line in f) is fastest, but there might not be a significant difference from len(f.readlines()).

sample_submission.csv is 30.2 MB and has 31 million characters.
First you have to open the file with open input_file = open("nameOfFile.csv","r+") Then use the csv.reader for open the csv reader_file = csv.reader(input_file) At the last, you can take the number of row with the instruction 'len' value = len(list(reader_file)) The total code is this: input_file = open("nameOfFile.csv","r+") reader_file = csv.reader(input_file) value = len(list(reader_file)) Remember that if you want to reuse the csv file, you have to make a input_file.fseek(0), because when you use a list for the reader_file, it reads all file, and the pointer in the file change its position row_count = sum(1 for line in open(filename)) worked for me. row_count = sum(1 for line in open(filename)) Note : sum(1 for line in csv.reader(filename)) seems to calculate the length of first line sum(1 for line in csv.reader(filename)) might want to try something as simple as below in the command line: sed -n '$=' filename or wc -l filename sed -n '$=' filename wc -l filename To do it you need to have a bit of code like my example here: file = open("Task1.csv") numline = len(file.readlines()) print (numline) I hope this helps everyone. try data = pd.read_csv("data.csv") data.shape and in the output you can see something like (aa,bb) where aa is the # of rows You need to count the number of rows: row_count = sum(1 for row in fileObject) # fileObject is your csv.reader Using sum() with a generator expression makes for an efficient counter, avoiding storing the whole file in memory. sum() If you already read 2 rows to start with, then you need to add those 2 rows to your total; rows that have already been read are not being counted. Use "list" to fit a more workably object. You can then count, skip, mutate till your heart's desire: list(fileObject) #list values len(list(fileObject)) # get length of file lines list(fileObject)[10:] # skip first 10 lines
http://publicatodo.co/Detalles/3311/Count-how-many-lines-are-in-a-CSV-Python-
This article explains how to find and close a window using the Win API.

The `FindWindow` function retrieves a handle to the top-level window whose class name and window name match the specified strings. It does not search child windows, and the search is not case-sensitive.

```csharp
FindWindow(string lpClassName, string lpWindowName)
```

Spy++ (SPYXX.EXE) is a Win32-based utility that gives you a graphical view of the system's processes, threads, windows, and window messages. With the Window Finder Tool, you can find the properties of a selected window.

Step 1: Arrange your windows so that Spy++ and the subject window are visible.

Step 2: From the Spy menu, choose Find Window to open the Find Window dialog box.

Step 3: Drag the Finder Tool to the desired window. As you drag the tool, window details display in the dialog box (Handle, Caption (Window Name), Class Name).

```csharp
using Microsoft.Win32;

private void closeWindow()
{
    // retrieve the handle of the window
    int iHandle = FindWindow("Notepad", "Untitled - Notepad");
    if (iHandle > 0)
    {
        // close the window using the API
        SendMessage(iHandle, WM_SYSCOMMAND, SC_CLOSE, 0);
    }
}
```
http://www.codeproject.com/Articles/22257/Find-and-Close-the-Window-using-Win-API?msg=2362364
Recently, it occurred to me that one of my websites would probably benefit from an RSS feed. However, I really didn’t understand what RSS feeds were. I understood the basic purpose but really had no clue as to how they worked. With words like “syndication” being tossed around when describing RSS feeds, I had imagined it involved some sort of code that continually sent data to some mystical location. Fortunately, understanding RSS feeds is very easy, and creating your own RSS feed in ASP.NET is a breeze.

RSS stands for Really Simple Syndication. It provides a standard for you to make information available to anyone who wants to request your feed. One of my sites is a shareware site, and I thought a feed would allow users to stay in contact with my site and make it more likely that they would return. Moreover, an RSS feed allows them to do this without signing up or even giving me their email address.

These days, it’s getting easier for users to use feeds because more and more software is starting to support them. For example, when you enter the URL of a feed into Microsoft Internet Explorer, the information is now formatted specifically for feeds. Microsoft Live Mail also has direct support for feeds. There are also a number of websites that can help you to subscribe to and view RSS feeds.

An RSS feed is simply an XML file on your site that conforms to the RSS specification. Of course, since feeds are meant to be constantly updated, you would normally want to generate this file on-the-fly when it is requested. And, of course, ASP.NET makes this very easy to do.

Listing 1 shows my feed file. This is a normal, every day ASPX file and what you see makes up the entire contents of the file. The first thing to notice is the OutputCache declaration on the second line. When you use OutputCache, requests for this file within the given duration will simply return a copy of the previous results.
The duration is in seconds, so if two requests for this file occur within two minutes, the code will not run again for the second request. Instead, ASP.NET will simply return the same data that was returned for the first request. Since the page runs potentially lengthy code and makes a potentially substantial hit on the database, this ensures the site doesn’t get bogged down under heavy traffic.

```csharp
<%@ Page Language="C#" AutoEventWireup="true" %>
<%@ OutputCache Duration="120" VaryByParam="none" %>
<%@ Import Namespace="System.Xml" %>
<%@ Import Namespace="System.Text" %>
<%@ Import Namespace="System.Data" %>
<%@ Import Namespace="System.Data.SqlClient" %>
<%@ Import Namespace="SoftCircuits" %>

<script runat="server">
    /// <summary>
    /// Create RSS Feed of newest submissions
    /// </summary>
    /// <param name="sender"></param>
    /// <param name="e"></param>
    protected void Page_Load(object sender, EventArgs e)
    {
        // Clear any previous response
        Response.Clear();
        Response.ContentType = "text/xml";

        // Create an XML writer on the response output stream
        XmlTextWriter writer = new XmlTextWriter(Response.OutputStream, Encoding.UTF8);
        writer.WriteStartDocument();

        // The mandatory rss tag
        writer.WriteStartElement("rss");
        writer.WriteAttributeString("version", "2.0");

        // The channel tag contains RSS feed details
        writer.WriteStartElement("channel");
        writer.WriteElementString("title", "File Parade's Newest Submissions");
        writer.WriteElementString("link", "");
        writer.WriteElementString("description",
            "The latest freeware and shareware downloads from File Parade.");
        writer.WriteElementString("copyright", "");

        // File Parade image
        writer.WriteStartElement("image");
        writer.WriteElementString("url", "");
        writer.WriteElementString("title", "File Parade Freeware and Trialware Downloads");
        writer.WriteElementString("link", "");
        writer.WriteEndElement();

        // Objects needed for connecting to the SQL database
        using (SqlDataReader reader = DataHelper.ExecProcDataReader("GetRssFeed"))
        {
            // Loop through each item and add them to the RSS feed
            while (reader.Read())
            {
                writer.WriteStartElement("item");
                writer.WriteElementString("title",
                    EncodeString(String.Format("{0} {1} by {2}",
                        reader["Title"], reader["Version"], reader["Company"])));
                writer.WriteElementString("description",
                    EncodeString((string)reader["Description"]));
                writer.WriteElementString("link",
                    String.Format("{0}", reader["ID"]));
                writer.WriteElementString("pubDate",
                    ((DateTime)reader["ReleaseDate"]).ToShortDateString());
                writer.WriteEndElement();
            }
        }

        // Close all tags
        writer.WriteEndElement();
        writer.WriteEndElement();
        writer.WriteEndDocument();
        writer.Flush();
        writer.Close();

        // Terminate response
        Response.End();
    }

    protected string EncodeString(string s)
    {
        s = HttpUtility.HtmlEncode(s);
        return s.Replace("\r\n", "<br />\r\n");
    }
</script>
```

Next are my declarations to import the needed namespaces. Nothing special here—just the declarations needed for database access. Note that this code won’t run for you as listed. It includes my SoftCircuits namespace, which contains some in-house routines for the database. You’ll need to replace this with your own database code. This makes sense since you’ll be returning your own data.

The core of the code is placed in the Page_Load event handler. As you know, this code is called when the page is first requested. The first step is to clear the response of any previously output content. Remember, we are creating an XML file and we don’t want any other content to be returned. Next, we set some headers so that the user agent can see what type of content we are returning.

From here, we go ahead and create an XmlTextWriter and attach it to our output stream, and we can start creating our output. We start with some mandatory RSS tags—these are needed to identify our content as an RSS file. Next, we add some mandatory tags that describe our channel.
This provides additional, descriptive information about our content. Next, I add some optional tags, which specify a small image and related data.

After that, we can finally start to output our actual data. My code uses an in-house method called DataHelper.ExecProcDataReader, which calls a stored procedure to obtain my data. You will need to replace this with your own code to return whatever data you are syndicating. My routine simply returns a SqlDataReader, and I loop through each row in the data it returned.

Note that I perform some modifications to my text fields before writing them. In my case, this text is submitted from various authors and I don’t want them to include their own HTML markup. So I call HtmlEncode, which causes markup to appear as it was written instead of allowing it to modify the layout or formatting, or create links. I then insert my own markup by placing <br /> wherever there is a newline. This ensures newlines will appear for the user. I should point out that WriteElementString() will XML-encode the string being written. This prevents markup from disturbing the XML structure. Note that data will be decoded again when it is read, so you only need to mess with this if you want to tweak the data you are returning.

We then flush the XML writer for good measure, and terminate our response. Again, we are creating an XML file and this last step prevents any other output from accidentally being included in the response.

If you’re like me, you may be a little surprised how easy this really is. To allow someone to check your feed, you simply provide them with the URL to this page. Using software that supports feeds, they can have instant access to your data in a convenient format. And, of course, they are more likely to return to your site when they need more.
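The listing above is ASP.NET-specific. As a language-neutral sketch of the same RSS 2.0 shape (a mandatory `rss` root, a `channel` with descriptive tags, and one `item` per record), here is a hypothetical Python version using the standard library's `xml.etree`; the feed titles and URLs are placeholders, not values from the article.

```python
import xml.etree.ElementTree as ET

def build_feed(items):
    # The mandatory <rss version="2.0"> root, as in the article
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")

    # Mandatory channel description tags (placeholder values)
    ET.SubElement(channel, "title").text = "Example Feed"
    ET.SubElement(channel, "link").text = "https://example.com/"
    ET.SubElement(channel, "description").text = "Newest submissions"

    # One <item> per record, mirroring the article's while loop
    for it in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = it["title"]
        ET.SubElement(item, "link").text = it["link"]
        ET.SubElement(item, "pubDate").text = it["pubDate"]

    # ElementTree escapes markup in text nodes automatically,
    # much like WriteElementString does in the article
    return ET.tostring(rss, encoding="unicode")

feed_xml = build_feed([{"title": "App 1.0",
                        "link": "https://example.com/1",
                        "pubDate": "01/01/2024"}])
print(feed_xml)
```

The structure is the whole trick: any code that emits this XML on request, with some caching in front, is an RSS feed.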
http://www.codeproject.com/script/Articles/View.aspx?aid=53067
GetDragObject always returning exception

On 02/08/2016 at 02:38, xxxxxxxx wrote:

I'm trying to detect what type of object is being dragged into a LINK gizmo, inside a dialog defined by a res file. I'm using the code:

```python
def Message(self, msg, result):
    if msg.GetId() == c4d.MSG_DESCRIPTION_CHECKDRAGANDDROP:
        try:
            obj = self.GetDragObject(msg)
        except:
            return gui.GeDialog.Message(self, msg, result)
        print obj
    return gui.GeDialog.Message(self, msg, result)
```

But the `print obj` is never reached. I'm not even sure if I'm using the `GetDragObject()` command correctly, because it is not documented in the Python SDK.

On 02/08/2016 at 09:54, xxxxxxxx wrote:

Whenever I place questions here, I don't stop trying to search for ways to solve my problems and get results. So, here is what I have come up with:

```python
def Message(self, msg, result):
    if msg.GetId() == c4d.BFM_DRAGRECEIVE:
        # check if the dragging operation is into my LINK gizmo
        if self.CheckDropArea(MY_LINK_GIZMO, msg, True, True) == True:
            # get the drag information
            draginfo = self.GetDragObject(msg)
            # get the list of objects dragged
            obj = draginfo['object']
            # in my case, I just allow one
            if len(obj) == 1:
                # if it is the allowed type, happily return
                if obj[0].GetType() == ALLOWED_ID:
                    return gui.GeDialog.Message(self, msg, result)
            # otherwise, signal that it is not allowed
            return self.SetDragDestination(c4d.MOUSE_FORBIDDEN)
    return gui.GeDialog.Message(self, msg, result)
```

On 03/08/2016 at 05:09, xxxxxxxx wrote:

Hi Rui, just want to acknowledge that reacting to the BFM_DRAGRECEIVE message is the correct approach.
https://plugincafe.maxon.net/topic/9634/12938_getdragobject-always-returning-exception
Reposted from:

I was using `ArrayList.CopyTo(Array)` to convert an ArrayList to a `type[]` array. I thought `ArrayList.ToArray(Type)` was for intrinsic types (int, string, etc.) but forgot that one of the facilities of OOP is that types we define can be treated as intrinsic types on most occasions. Today I came across code that showed the use of `ArrayList.ToArray(Type)` for a type defined by the programmer:

```csharp
return (Contact[])mContacts.ToArray(typeof(Contact));
```

Here `mContacts` is of type `ArrayList`, and `Contact` is a class. I was doing the following till now in C#:

```csharp
Contact[] contacts = new Contact[mContacts.Count];
mContacts.CopyTo(contacts);
return contacts;
```

I have yet to peek into the IL to see the difference/similarity between the two approaches in terms of internals, but the first one at least requires less code and fits in a single line. :-)
http://www.cnblogs.com/avlee/archive/2006/01/05/311447.html
-m, --md5
       Use MD5 as the hash algorithm.

-s, --sha1
       Use SHA1 as the hash algorithm.

-n, --namespace namespace
       Generate the hash with the namespace prefix. The namespace is
       UUID, or '@ns' where "ns" is a well-known predefined UUID
       addressed by namespace name (see above).

-N, --name name
       Generate the hash of the name.

-x, --hex
       Interpret name name as a hexadecimal string.

CONFORMING TO
       OSF DCE 1.1

EXAMPLE
       uuidgen --sha1 --namespace @dns --name ""

AUTHORS
       uuidgen was written by Andreas Dilger for libuuid.

SEE ALSO
       libuuid(3), RFC 4122

June 2011                                                    UUIDGEN(1)

Pages that refer to this page: uuidparse(1), uuid_generate(3), uuid_generate_random(3), uuid_generate_time(3), uuid_generate_time_safe(3), swaplabel(8), uuidd(8)
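The name URL in the example above was lost in extraction. To illustrate the same name-based hashing from code (this uses the Python standard library's `uuid` module, and is not part of the man page), `uuid5` corresponds to `--sha1` and `uuid3` to `--md5`; `python.org` here is just a sample name.

```python
import uuid

# SHA1-based name UUID (RFC 4122 version 5) in the DNS namespace,
# equivalent to: uuidgen --sha1 --namespace @dns --name python.org
u5 = uuid.uuid5(uuid.NAMESPACE_DNS, "python.org")
print(u5)  # 886313e1-3b8a-5372-9b90-0c9aee199e5d

# MD5-based name UUID (version 3), equivalent to --md5
u3 = uuid.uuid3(uuid.NAMESPACE_DNS, "python.org")
print(u3)  # 6fa459ea-ee8a-3ca4-894e-db77e160355e

# Name-based UUIDs are deterministic: same namespace + name -> same UUID
assert u5 == uuid.uuid5(uuid.NAMESPACE_DNS, "python.org")
```

Determinism is the whole point of the name-based variants: two machines hashing the same name in the same namespace get the same UUID, with no coordination.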
https://www.man7.org/linux/man-pages/man1/uuidgen.1.html
So earlier this week, I found out about an interesting way to map components in React. Here's an example of what I most commonly see. A `Todos` component returns a list of `TodoCard`s:

```jsx
export const Todos = () => {
  return (
    <div>
      {todos.map(todo => (
        <TodoCard key={todo.id} todo={todo} />
      ))}
    </div>
  )
}
```

Here we have to explicitly give React the key, or your console will be filled with a nasty error 🤮.

Turns out we can let React handle the key with `React.Children.toArray()`. Let's refactor the above component:

```jsx
export const Todos = () => {
  return (
    <div>
      {React.Children.toArray(todos.map(todo => <TodoCard todo={todo} />))}
    </div>
  )
}
```

And tada 🎉, we no longer have to handle keys!
https://dev.to/rushi444/an-alternative-way-to-map-components-in-react-35ik